Backing up just the user settings in cPanel

One of the companies I work for recently moved one of our cPanel servers to a new colocation facility, still running cPanel. We decided to use a new backup solution called r1soft, which so far has been working spectacularly. I'd love to use it for my personal computers, but the licenses, which are geared toward enterprise businesses, are way too costly.

However, since r1soft only backs up files (incrementally, at the block level, yay), you can't use it to restore a cPanel account; it can only restore things like the user's home directory and SQL databases. Because of this, when we needed to restore an entire account today and found out there is no easy/quick way to do it, we were up a creek. The obvious future solution for this would be to use cPanel's backup (or legacy backup) system, but unfortunately, you can't easily set it to skip the user's databases and home directory, which can be very large and are already taken care of by r1soft. I ended up adding the following script, run nightly via cron, to back up the user account settings.

It saves each account's settings under the backup path in its own directory, uncompressed, named cpmove-USERNAME. It is best to do it this way so r1soft's incremental backups don't have much extra work when anything changes. Make sure to change line 3 of the following script to the path where you want the backups stored.

#!/bin/bash
#Create and move to the backup directory
BACKUPDIR=/backup/userbackup
mkdir -p "$BACKUPDIR" #Make sure the directory exists
cd "$BACKUPDIR" || exit 1 #Bail out if the directory is unavailable so the wrong files aren't removed below

#Remove old backups
rm -rf cpmove-*

#Loop over the accounts on the server
for USER in $(/usr/sbin/whmapi1 listaccts | grep -oP '(?<=user: )\w+$' | sort -u); do
  #Back up the account settings, skipping the data r1soft already handles
  /scripts/pkgacct --nocompress --skipbwdata --skiphomedir --skiplogs --skipmysql --skipmailman "$USER" ./

  #Extract from and remove the tar container file
  tar -xvf "cpmove-$USER.tar"
  rm -f "cpmove-$USER.tar"

  #Save the account's MySQL user settings (grant rows from the mysql system tables)
  mysqldump --compact -fnt -w "User LIKE '${USER}_%'" mysql user db tables_priv columns_priv procs_priv proxies_priv \
  | perl -pe "s~('|NULL)\),\('~\1),\n('~ig" \
  > "cpmove-$USER/mysql-users.sql"
done
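
Since the script is meant to run nightly via cron, a crontab entry along these lines would work; the script path here is hypothetical:
30 2 * * * /root/scripts/backup-user-settings.sh >/dev/null 2>&1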

This script skips a few backup items that are worth noting. Mailman data, logs, the home directory, and bandwidth data should all be easy 1:1 copy-over restores from r1soft; I excluded them because they can take up a lot of room, which we want r1soft to handle. The same goes for MySQL, except that your MySQL users are not backed up with your account, which is why I added the final section of the script.

Do note that the line starting with "| perl" in the final section is optional; it is there to split the inserted rows onto their own lines. One very minor warning: it would also match cases where the last field in MySQL's user table ends in "NULL),(". This would only happen if someone was deliberately being malicious and knew about this script, and even then, it couldn't harm anything.

Bonus note: To restore a MySQL database that does not use a shared tablespace file (like InnoDB does by default), you could actually stop the MySQL server, copy over the binary database files, and start the server back up.
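
A minimal sketch of that process, assuming a MyISAM database named "exampledb", MySQL's default data directory of /var/lib/mysql, and a hypothetical backup location (the service name may be "mysql" or "mysqld" depending on the system):
service mysql stop
cp -a /backup/mysql/exampledb /var/lib/mysql/exampledb #Copy the binary table files (.frm/.MYD/.MYI)
chown -R mysql:mysql /var/lib/mysql/exampledb #MySQL must own its data files
service mysql start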

Let's Encrypt HTTPS Certificates

After a little over a year of waiting, Let's Encrypt has finally opened its doors to the public! Let's Encrypt is a free https certificate authority, with the goal of getting the entire web off of http (unencrypted) and onto https. I consider this a very important undertaking, as encryption is one of the best ways we can fight illegal government surveillance. The more out there that is encrypted, the harder it will be to spy on people.

I went ahead and got it up and running on 2 servers today, which was a bit of a pain in the butt. It [no longer] supports Python 2.6, and it was also very unhappy with my CentOS 6.4 cPanel install. Also, when you first run the letsencrypt-auto executable script as instructed by the site, it opens up your package manager and immediately starts downloading LOTS of packages. I found this quite antisocial, especially as I had not been warned anywhere that it would do this before I started the install, but oh well; it is convenient. The problem on cPanel was that a specific library, libffi, was causing conflicts during the install.


To fix the Python problem on all of my servers, I had to install Python 2.7 as an alternate ("altinstall") Python so it wouldn't interfere with any existing infrastructure using Python 2.6. After that, I also set a shell alias so "python" resolves to "python2.7" and the local shell picks up the correct version.


As root in a clean directory:
wget https://www.python.org/ftp/python/2.7.8/Python-2.7.8.tgz
tar -xzvf Python-2.7.8.tgz
cd Python-2.7.8
./configure --prefix=/usr/local
make
make altinstall
alias python=python2.7

The cPanel library problem was caused by libffi already being installed as version 3.0.9-1.el5.rf, while yum wanted to install its devel package as version 3.0.5-3.2.el6.x86_64 (an older version), and it did not like running conflicting versions. All that was needed to fix the problem was to manually download and install the devel package matching the currently installed version.

wget http://pkgs.repoforge.org/libffi/libffi-devel-3.0.9-1.el5.rf.x86_64.rpm
rpm -ivh libffi-devel-3.0.9-1.el5.rf.x86_64.rpm

Unfortunately, the apache plugin was also not working, so I had to do a manual install with “certonly” and “--webroot”.
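
The command I ran looked something like the following; the domain names and webroot path are hypothetical, so adjust them to your own site:
./letsencrypt-auto certonly --webroot -w /home/USERNAME/public_html -d yourdomain.com -d www.yourdomain.com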


And that was it; letsencrypt was ready to go and start signing my domains! You can check out my current certificate, issued today, which has 13 domains tied to it!

Useful Exim Scripts
For fighting spam

In the course of my Linux administrative duties (on a cPanel server), I have created multiple scripts to help us out with Exim, our mail transfer agent. These are mostly used to help us fight spam and determine who is spamming when it occurs.



This monitors the number of emails in the queue and sends our admins an email when a limit (1000) is reached. It would need to be run on a schedule (via cron).
#!/bin/bash
export AdminEmailList="ADMIN EMAIL LIST SEPARATED BY COMMAS HERE"
export Num=`/usr/sbin/exim -bpc`
if [ $Num -gt 1000 ]; then
        echo "Too many emails! $Num" | /usr/sbin/sendmail -v "$AdminEmailList"
        #Here might be a good place to delete emails with “undeliverable” strings within them
        #Example (See the 3rd script): exim-delete-messages-with 'A message that you sent could not be delivered'
fi

This deletes any emails in the queue from or to a specified email address (first parameter). When matching on the recipient, the sender must be "<>" (the null sender used by bounce messages)
#!/bin/bash
exiqgrep -ir "$1" -f '<>' | xargs exim -Mrm
exiqgrep -if "$1" | xargs exim -Mrm
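
Usage, assuming the script above is saved as a hypothetical delete-address-mail.sh:
./delete-address-mail.sh spammer@example.com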

This deletes any emails in the queue which contain a given string (first parameter)
#!/bin/bash
if [ "$1" == "" ]
then
  echo 'Cannot delete with empty string'
else
  grep -lir "$1" /var/spool/exim/input/ | sed -e 's/^.*\/\([a-zA-Z0-9-]*\)-[DH]$/\1/g' | xargs /usr/sbin/exim -Mrm
fi

Get a count of emails in the queue per sender (the sender email address is supplied by the sender and can be faked)
#!/bin/bash
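#List the queue, extract each envelope sender (the part between < and >), then count duplicates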
exim -bp | grep -oP '<.*?>' | sort | uniq -c | sort -n

Get a count of emails in the queue per account (running this script can take a little while)
#!/bin/bash
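#List the queue, extract each message ID, dump each message's headers, then tally the authenticated (auth_sender) senders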
exim -bp | grep -Po '(?<= )[-\w]+(?= <)' | xargs -n1 exim -Mvh | grep -ioP '(?<=auth_sender ).*$' | sort | uniq -c

Bonus: Force all non-specified accounts in Exim to use a certain IP address for sending. It would need to be run on a schedule (via cron).
#!/bin/bash
export IPAddress="YOUR ADDRESS HERE"
/usr/bin/perl -i -pe 's/\*:.*/*: '"$IPAddress"'/g' /etc/mailips
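
For reference, /etc/mailips maps sending domains to outgoing IP addresses, with * as the catch-all line the script above rewrites; a hypothetical file might look like:
somedomain.com: 192.0.2.10
*: 192.0.2.1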

Selectively skipping data in a cPanel backup
Using a hammer instead of a scalpel

I was having problems on one of our production Linux cPanel servers where the backup drive was not able to hold all the data from our primary drive for both our daily and weekly backups. An easy hack to fix this is to mount any subdirectories you wish to exclude (generally very large ones) as read-only temporary file systems in the appropriate backup folder. With this method, you can selectively exclude individual directories from one or more of the daily/weekly/monthly backup folders.

The only downside to this method is that pkgacct (called by cpbackup) will log a read-only file system error for each file that cannot be copied.


So, to have cPanel discard an individual directory during the backup, you need to do the following:
First, make sure the backup directory to exclude is created and empty by running:
rm -rf PATH;
mkdir -p PATH;
NOTE: BE CAREFUL WITH “rm -rf”, IT IS A DANGEROUS COMMAND

To manually mount the directory, run:
mount tmpfs PATH -t tmpfs -o defaults,ro
To permanently mount the directory (mount on boot), edit /etc/fstab and add the following line:
tmpfs PATH tmpfs defaults,ro 0 0
If you go the permanent route, don't forget to run "mount PATH" to mount it on the live system, since fstab will not mount its listed file systems until the next boot.

An example of a PATH might be: /backup/cpbackup/weekly/dakusan/public_html/uploads

cPanel also recently added (experimental) hard linking for backups, which really helps out with space concerns and makes this workaround less necessary.

Always make sure your envelope sender is correct
Otherwise things may not work as you expect

When you send an email, there may be multiple fields in the email header that specify the address it came from and how to reply back to that address. Some of these are:

  • From: This is the field the user sees in their email client as the "From" address. It is the most easily (and most often) spoofed field, as you can put anything you want in it without changing how the email is received or responded to. Most systems, in my experience, don't try to protect this field either.
  • Envelope sender: This is used internally by email software to see who the email was really from. Different systems (i.e. spam blockers) can use this field for different purposes.
  • Return path: This field specifies the email address that delivery failure (bounce) messages are returned to.

There can be multiple problems if the latter 2 fields are not properly set. Some of these are:
  • Spam blockers may be more likely to identify the email as spam
  • The email might be sent from the wrong IP address. Exim (which cPanel uses by default) might be configured to check /etc/mailips to determine what IP address to send from depending on the domain of the envelope sender.
  • Delivery failure (bounce) messages might be returned to the wrong email address.

When sending an email from PHP via the mail function through Exim, you can only manually set the "From" header field (of the three) through the "additional_headers" (4th) parameter. However, this can be remedied on some systems.

If your server is configured to allow it (it may require a privileged user), you can pass Exim's -f option, which sets the envelope sender and return path, through the "additional_parameters" (5th) parameter of the mail function. For example:

mail('example@gmail.com', 'This is an example', 'Example!', 'From: example@yourdomain.com', '-f example@yourdomain.com');

On a related security note, if you think an email may not be legitimate, don't forget to check the email headers by viewing the original email source. Our servers include many useful headers in emails to help combat fraud including (depending on circumstances) the account the email was sent from, the IP address it was sent from, if it was sent from PHP, and if so, the script it was sent from.

cPanel Hard Linked VirtFS Causing Quota Problems
Quota problems are annoying and expensive to debug :-\

For a really long time I've seen many cPanel servers have improper space reporting and quota problems due to the VirtFS system, which is used for the "jailshell" login (when a user whose shell is set to "/usr/local/cpanel/bin/jailshell" in "/etc/passwd" logs into SSH [or presumably Telnet]). Whenever a user logs into their jailshell, it creates a virtual directory structure to chroot them into and "hard links" their home directory inside this virtfs directory, which doubles the apparent hard disk usage of their home directory (this does not include their SQL files, for example). Sometimes the virtfs directory is not unmounted (presumably when the user does not log out correctly, though I have not confirmed this), so the doubled hard disk usage becomes permanent, causing the account to go over its quota when a full quota update is run.

I had given this up as a lost cause a while back, because all the information I could find on the Internet said to just leave it alone and not worry about it, and that it couldn't be fixed. I picked the problem back up today when one of our servers decided to do a quota check update and a bunch of accounts went over. It seems a script was recently added at "/scripts/clear_orphaned_virtfs_mounts" that fixes the problem :-) (I'm not sure when it was added, but the creation, modification, and access times for the file on all 3 of my cPanel servers show as today...). Surprisingly, I could not find this file documented or mentioned anywhere on the Internet, and there are still no mentions anywhere of how to fix this problem.


So the simplest way to fix the problem is to run
/scripts/clear_orphaned_virtfs_mounts --clearall
I did some digging and found that that script calls the function “clean_user_virtfs” in “/scripts/cPScript/Filesys/Virtfs”, so you can just clear one user directory “properly” by running
cd /scripts
perl -e 'use cPScript::Filesys::Virtfs();cPScript::Filesys::Virtfs::clean_user_virtfs("ACCOUNTNAME");'
All this script really does is unmount the home directory inside the jailed file system (so the "hard link" is done through a "mount" that doesn't show up in "df"... interesting). So the problem can also be fixed by
umount -f /home/virtfs/USERNAME/home/USERNAME

After this is done, a simple
/scripts/fixquotas
can be used to set the users' quotas back to what they should be. Do note that this operation goes through every file on the file system to count each user's disk usage.

Core Dump Files
Not all OSs crash in the same way :-)

If you ever find a file named "core.#" when running Linux, where # is replaced by a number, it means something crashed at some point. Most of the time you will probably just want to delete the file, but sometimes you may wonder what crashed. To find out, you can use gdb (the GNU debugger), a very powerful tool, to analyze the core dump file.

gdb --core=COREFILENAME

Near the very bottom of the blob of outputted text after running this command, you should see a line that says “Core was generated by `...'.”. This tells you the command line of what crashed. To exit gdb, enter “quit”. You can also use gdb to find out what actually happened and troubleshoot/debug the problem, but that’s a very long and complex topic.
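
As a tiny taste of that, if you also pass gdb the path of the executable that crashed (the path here is hypothetical), it can resolve symbols, and the "bt" command will print the call stack at the moment of the crash:
gdb /usr/local/cpanel/whostmgr/bin/dnsadmin --core=core.12345
(gdb) bt
(gdb) quit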


Recently, I started seeing hundreds of core dump files taking up gigabytes of space in "/usr/local/cpanel/whostmgr/docroot/" on several of our web servers. According to several online sources, cPanel (web hosting made easy!) likes to dump many, if not all, of its programs' core files into this directory. In our case, it has been "dnsadmin" doing the crashing. We've been having some pretty major DNS problems lately, this time at the name server level, so I may have to rebuild our DNS cluster in the next few days. Joy.