Setting Up a Home Server with Ubuntu 11.10 – Part 3


I am continuing a series of blog entries documenting how I set up my home server. Part 1 contains a description of what I am trying to accomplish and instructions for doing the initial install. Part 2 explains how to set up MythTV.  In this part, I set up some miscellaneous components.

Automatic Updates

I like to set up the server so that security updates are installed automatically, and so that I get notifications of non-security updates, which I then install manually.

The following guides came in useful:

First, let’s set up the unattended-upgrades package, which will do our automatic updates:

  1. sudo apt-get install unattended-upgrades

Set up the actions that are performed automatically, and set how often they are performed:

  1. sudo $EDITOR /etc/apt/apt.conf.d/10periodic
  2. Set the APT::Periodic::Download-Upgradeable-Packages line to "1". This will download updateable packages daily. Then, when you do manual updates, you will not have to wait for them to download.
  3. Set the APT::Periodic::AutocleanInterval line to "7". This will do an auto-clean every week. This cleans up packages that are no longer being used.
  4. Add a line:
    APT::Periodic::Unattended-Upgrade "1";

    This means automatic updates will be performed daily.
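Once all of these edits are made, the relevant lines of my /etc/apt/apt.conf.d/10periodic look roughly like this (a sketch – your file may contain other lines, and the Update-Package-Lists value shown is the stock default):

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";
```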

Configure the automatic updates:

  1. sudo $EDITOR /etc/apt/apt.conf.d/50unattended-upgrades
  2. The Unattended-Upgrade::Allowed-Origins block enables specific types of updates. Update types that are commented out will not be run automatically (this doesn’t affect manual updates, though). I make sure that this line is the only one enabled:
    "${distro_id} ${distro_codename}-security";

    This enables security updates. If you are adventurous, you can uncomment other lines to enable other updates. These other updates are commented out because they are more likely to break something – they are best done manually so you can batch them together and plan them appropriately.

  3. Uncomment this line so that unattended-upgrades can email you:
    Unattended-Upgrade::Mail "root@localhost";
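Putting steps 2 and 3 together, the relevant pieces of 50unattended-upgrades end up looking something like this (a sketch – the real file lists more commented-out origin lines than shown here):

```
Unattended-Upgrade::Allowed-Origins {
        "${distro_id} ${distro_codename}-security";
//      "${distro_id} ${distro_codename}-updates";
};

Unattended-Upgrade::Mail "root@localhost";
```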

Now let’s set up apticron to send notifications of pending updates. This is how I know which non-security updates I need to apply manually:

  1. sudo apt-get install apticron
  2. sudo $EDITOR /etc/apticron/apticron.conf
  3. By default, apticron uses the results of /bin/hostname --all-fqdns to get the host name. On my server, this returns nothing (I’m guessing because it is trying to do a reverse lookup and I don’t have a valid domain). So I need to uncomment this line:

    Instead of hardcoding a hostname, though, you can use the hostname command by changing the line like this:


By default, you will get daily notifications. I don’t want notifications that frequently. As far as I can tell, this interval is hardcoded in the script itself, so I change the script:

  1. Back up the script:
    sudo cp /usr/sbin/apticron /usr/sbin/apticron.bak
  2. sudo $EDITOR /usr/sbin/apticron
  3. Find this line:
  4. test "x$( find $tsfile -mmin -1381 2>/dev/null )" = "x$tsfile" && exit 0

    And change -mmin -1381 to -mtime -X, where X is the number of days to wait between emails. I set mine to -mtime -7, which means weekly emails.
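If you prefer a one-liner to editing the script by hand, the same change can be made with sed (a sketch – the -mmin value must match what is actually in your copy of apticron):

```shell
# Back up the script, then rewrite the interval test from "within the last
# 1381 minutes" to "within the last 7 days":
sudo cp /usr/sbin/apticron /usr/sbin/apticron.bak
sudo sed -i 's/-mmin -1381/-mtime -7/' /usr/sbin/apticron
```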

The apt-listchanges package gets installed with apticron. It causes the apt-get dist-upgrade command to display all updates, with descriptions. It makes you hit a key after each screenful, which can take a while. Let’s change that:

  1. sudo $EDITOR /etc/apt/listchanges.conf
  2. Comment out this line (put a “#” in front of it):

    And add this line:


    This means that apt-get will still print out the changes, but won’t pause after each screen full. apt-listchanges will still email you the changes that were applied, so you will have a record. More options are listed here.
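In my /etc/apt/listchanges.conf, the edit amounts to switching the frontend setting (a hedged sketch – pager is the stock value; text prints the changes without pausing):

```
#frontend=pager
frontend=text
```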


BOINC

BOINC is a program that lets you take on slices of distributed computing projects. Many projects run on BOINC – I use World Community Grid. I also used to run GIMPS, which has its own client program to install. Keep in mind that these programs will heat up your CPU, drive up its power usage, and possibly shorten its life. The machine will also act like a heater. Although during cold weather this can be a good thing – if you’re going to run a heater, you might as well run one that has additional benefits.

I used this guide during the setup.

  1. sudo apt-get install boinc-client boinc-manager
  2. At this point, I have already registered on the World Community Grid.
  3. Run the setup utility. Note this pops up a graphical window, which means you need to SSH into your server with X11 forwarding enabled (see Part 1). Note that on my server, I get some garbled windows. I’m not sure if this is just my computer, or if everyone has the same problem. But I am able to get through.
  4. Start the manager.
    sudo boincmgr
  5. I select “Add project”.
  6. I select World Community Grid.
  7. I login using my World Community Grid account.
  8. Finish.
  9. I still have a window open, so I close it out (it is garbled, so I can’t tell what it says). I get a bunch of errors on the command line about not being able to load images, so I have to hit Ctrl-C to kill it. But when I start boincmgr back up, the window is no longer garbled, and all seems well.
  10. When you go into the manager, you can monitor progress and change preferences. Even without the manager running, BOINC keeps working in the background. You can see this by running:

    But, don’t worry, it is supposed to throttle back when you are using the CPU for other tasks and isn’t supposed to interfere.

  11. After a while, I can see results by logging on to the World Community Grid webpage.

Hardware Monitoring

I like to be able to check the CPU temperature, especially since I know it will be running hot from BOINC. So, I set up the lm-sensors package, which not only gives CPU information, but gives other hardware information as well. You will be accessing hardware and loading kernel modules, which can be a bit risky. Skip this if you don’t want to take the risk:

  1. sudo apt-get install lm-sensors
  2. To view the current sensor data:
    sensors

    This will only show sensors for which drivers are already installed.

  3. To probe the hardware and find out what additional drivers should be installed (be careful since this probes hardware – it will prompt you before performing tests, and give you an idea of how risky it is):
    sudo sensors-detect

    I answered YES to all questions, except for the one asking if you would like to automatically update /etc/modules. I prefer to do that myself.

  4. You will be given a section between the “cut here” lines that you can copy and paste into your /etc/modules file (reboot once you are done for the modules to take effect). You can also use modprobe to immediately load modules temporarily for testing (replace module with the module name):
    sudo modprobe module
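As an illustration, the pasted section of /etc/modules might end up looking like this (the module names here are placeholders – use whatever sensors-detect actually reports for your hardware):

```
# Chip drivers suggested by sensors-detect
coretemp
it87
```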



Transmission

I use Transmission as my BitTorrent client. Transmission has a web client with which it can be managed, making it ideal for running on a server. Let’s set it up:

  1. sudo apt-get install transmission-daemon
  2. Add your admin user to the debian-transmission group:
    sudo adduser adminuser debian-transmission

    Log out and back in for the group change to take effect.

  3. Stop Transmission. You always need to stop it before changing the settings.json configuration file – when Transmission shuts down, it rewrites the file with the settings it is currently using, discarding any edits made while it was running.
    sudo /etc/init.d/transmission-daemon stop
  4. Edit the configuration file:
    sudo $EDITOR /var/lib/transmission-daemon/info/settings.json
  5. Changing the rpc-whitelist-enabled entry to false will allow you to access the web client from any PC. Alternatively, you can grant access to individual PCs by adding them to the rpc-whitelist entry.
  6. The rpc-username entry contains the userid you use to log into the Transmission web client. It defaults to transmission.
  7. The rpc-password entry contains the hashed password. It also defaults to transmission. To change the password, type a plaintext password in between the quotes – when Transmission starts, it will automatically hash it for you.
  8. The rpc-port entry sets the port on which the web client listens. It defaults to 9091.
  9. Start Transmission back up:
    sudo /etc/init.d/transmission-daemon start
  10. Go to the transmission web client at:
  11. Log in.
  12. You can set some of the preferences from within the web client, which will automatically be applied in the configuration file for you. I suggest that you at least put some limits on the upload and download speeds.
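For reference, the handful of settings.json entries discussed above look something like this (a sketch – the real file has many more keys, and the password is shown before Transmission hashes it):

```
"rpc-enabled": true,
"rpc-password": "mynewpassword",
"rpc-port": 9091,
"rpc-username": "transmission",
"rpc-whitelist": "127.0.0.1",
"rpc-whitelist-enabled": false,
```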

I like to set up a separate partition for the torrent downloads. I assume you haven’t downloaded any torrents into the downloads folder yet:

  1. Create an LVM partition as described in Part 1. I call the partition torrentdownloads.
  2. Make a note of the permissions on the existing Transmission download folder:
    ls -dl /var/lib/transmission-daemon/downloads
  3. sudo $EDITOR /etc/fstab
  4. Add this line to mount the new partition in Transmission’s default location (replace the <tab>’s with actual tabs):
  5. sudo mount /var/lib/transmission-daemon/downloads
  6. Change the permissions on the new partition to match the old one:
    sudo chmod 4775 /var/lib/transmission-daemon/downloads
    sudo chown debian-transmission:debian-transmission /var/lib/transmission-daemon/downloads
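The fstab line from step 4 has this general shape (an assumption on my part – mainvg is the volume group name used in Part 1, and the fields are tab-separated):

```
/dev/mainvg/torrentdownloads	/var/lib/transmission-daemon/downloads	ext4	defaults	0	2
```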

Now go to the web client and try adding a new torrent.

SSL with stunnel

Transmission does not support SSL internally (i.e. you need to use an http URL and cannot use https). So, if you want SSL, you need to use a reverse proxy. Ultimately, it sounds like using the Apache web server would be a good idea. But for now, I will use stunnel. It accepts https requests on a port of your choosing and forwards them to Transmission on its http port (that traffic stays internal to the server, so it is reasonably secure).

I used these guides:

Some guides useful for creating certificates are:

Let’s install stunnel and set it up:

  1. sudo apt-get install stunnel
  2. Edit the configuration file:
    sudo $EDITOR /etc/default/stunnel4
  3. Set ENABLED to 1.
  4. Create a certificate – you need to generate a .pem file. I generate a self-signed certificate (the guides I mentioned earlier explain how to do this). Note that with a self-signed certificate, you will need to set up your web browser to accept the certificate. Here is the command I use to generate the self-signed certificate:
    sudo openssl req -newkey rsa:2048 -x509 -days 365 -nodes -out /etc/stunnel/transmission.pem -keyout /etc/stunnel/transmission.pem
  5. Set the permissions on the new certificate. We need to protect this certificate, especially since it does not have a password:
    sudo chmod 600 /etc/stunnel/transmission.pem
    sudo chown root:root /etc/stunnel/transmission.pem
  6. Create a configuration file for the Transmission SSL tunnel:
    sudo $EDITOR /etc/stunnel/transmission.conf
  7. Add this text inside the file (in this case I make the SSL port 9092):
    cert = /etc/stunnel/transmission.pem
    accept = 9092
    connect = 9091
  8. Start stunnel (it shouldn’t be running yet):
    sudo /etc/init.d/stunnel4 start
  9. If it doesn’t start, you can look at syslog to try and troubleshoot:
    sudo cat /var/log/syslog
  10. Try accessing the new SSL port at:

    With https, transmission is finicky about the URL, so it needs to be exact (see this thread).

Now you can make the 9091 port inaccessible from outside the server:

  1. sudo /etc/init.d/transmission-daemon stop
  2. sudo $EDITOR /var/lib/transmission-daemon/info/settings.json
  3. Change rpc-whitelist-enabled back to true.
  4. rpc-whitelist should be set to
  5. sudo /etc/init.d/transmission-daemon start

Try the http link again – it should now give you a “Forbidden” error.

Web Server

I like to set up my web server so that it has two ports:

  • Default http port 80 – I do port forwarding on my router to expose this externally. I then access it using a domain name from DynDNS Remote Access – this gives me a domain name I can use to reach my router/server. My router is set up to automatically update the DNS entry with the latest IP address. I don’t use the public web server often, but I like to have it ready for when I do need it.
  • A second private http port – only accessible within my network.

Let’s install the web server (if you installed MythTV earlier in Part 2, then this is already installed):

sudo apt-get install apache2

Check it out at this URL:


You should get Apache’s default “It works!” webpage.

But, if you have MythWeb installed, instead it will redirect to the MythWeb web client. Let’s disable it and restore the default site – this will give us a baseline to start from:

  1. Disable the MythWeb site:
    sudo a2dissite default-mythbuntu
    sudo a2dissite mythweb.conf
  2. Enable the default site:
    sudo a2ensite default
  3. Reload the settings:
    sudo service apache2 reload
  4. Check the site again – it should be back to the default. You may need to clear your browser cache first, though.

Now let’s disable the default sites and re-arrange things.

  1. sudo a2dissite default
  2. Make sure no other sites are enabled – /etc/apache2/sites-enabled should not contain anything:
    ls /etc/apache2/sites-enabled

Let’s re-arrange the /var/www folder. I make a different folder underneath /var/www for each port. I got this idea from VirtualHost examples where a directory is created for each virtual host.

  1. Create the public folder:
    sudo mkdir /var/www/public
  2. Put whatever you want in the public folder, or just put the default index.html there like so:
    sudo cp /var/www/index.html /var/www/public

Repeat this for the private site. I name the private folder private.

Create the public site configuration:

  1. Create the config file:
    sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/public
  2. sudo $EDITOR /etc/apache2/sites-available/public
  3. Change all /var/www references to /var/www/public.
  4. Enable the new public site:
    sudo a2ensite public
  5. Reload the settings
    sudo service apache2 reload
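After step 3, the public site file looks roughly like this (a sketch of the stock Ubuntu default site with the paths changed – only the directives relevant here are shown):

```
<VirtualHost *:80>
        DocumentRoot /var/www/public
        <Directory /var/www/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
        </Directory>
</VirtualHost>
```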

Test your website and you should get your public web page.

Now set up your private port:

  1. sudo $EDITOR /etc/apache2/ports.conf
  2. Add these lines (xxx should be replaced with your private port number):
    NameVirtualHost *:xxx
    Listen xxx

Set up your private site:

  1. Follow the same instructions as for the public one, using a site name of private. In addition, when editing the config file, change
    <VirtualHost *:80>

    to

    <VirtualHost *:xxx>
  2. Restart Apache – it looks like this is necessary before it will start listening on the new port:
    sudo service apache2 restart

Test your website using the following link:


and you should get your private web page.

If you have MythWeb installed, re-enable it, but under the private port:

  1. Move the MythWeb folder to its new home:
    sudo mv /var/www/mythweb /var/www/private/mythweb
  2. Create a new version of the mythbuntu configuration file:
    sudo cp /etc/apache2/sites-available/default-mythbuntu /etc/apache2/sites-available/private-mythbuntu
  3. sudo $EDITOR /etc/apache2/sites-available/private-mythbuntu
  4. Change <VirtualHost *:80> to <VirtualHost *:xxx>
  5. Remove this line (this does the redirect):
    DirectoryIndex mythweb
  6. Change all /var/www references to /var/www/private/mythweb.
  7. Create a new version of the mythweb.conf configuration file:
    sudo cp /etc/apache2/sites-available/mythweb.conf /etc/apache2/sites-available/private-mythweb.conf
  8. sudo $EDITOR /etc/apache2/sites-available/private-mythweb.conf

    Change all /var/www/mythweb references to /var/www/private/mythweb

  9. Enable the MythWeb site:
    sudo a2ensite private-mythbuntu
    sudo a2ensite private-mythweb.conf
  10. sudo service apache2 reload

Now test the mythweb site, using the following link:


And you are done with the basic setup.

You may get an error from Apache: “apache2: Could not reliably determine the server’s fully qualified domain name, using for ServerName”. To fix this problem, do the following (as described here):

  1. sudo $EDITOR /etc/apache2/httpd.conf

    Add this line (replace xxx with your server name):

    ServerName xxx
  2. sudo service apache2 restart

You shouldn’t get the error anymore.


Git

I use Git as my source control software. This is probably only something that software developers would be interested in. I set up several pieces of server software used to interact with Git.


Gitosis

Gitosis not only allows you to clone repositories, but also allows you to push your changes back to them. This wiki has a pretty good summary of what it does. I used the Ubuntu Git Community Documentation to set it up.

Let’s install it:

sudo apt-get install git-core gitosis

I like to create a separate partition for the git repositories. I originally tried creating the partition for the /srv/gitosis/repositories folder. But Gitosis was getting confused by the lost+found folder that is created inside a partition. So, I create the partition for the /srv/gitosis folder instead.

  1. Create the LVM partition as described in Part 1.  I name the partition gitosis.
  2. Mount the new partition in a temporary location (we need to copy the Gitosis folder structure in the new partition):
    sudo mkdir /mnt/gitosistmp
    sudo mount -t ext4 /dev/mainvg/gitosis /mnt/gitosistmp
  3. Compare the new and old folder permissions:
    ls -ld /srv/gitosis
    ls -ld /mnt/gitosistmp
  4. Set the permissions on the new folder:
    sudo chown gitosis:gitosis /mnt/gitosistmp
  5. Copy the directory structure from Gitosis’ folder. I set the rsync options to copy pretty much everything.
    sudo rsync -avxHAXS /srv/gitosis/ /mnt/gitosistmp
  6. Now remove the existing folder structure:
    sudo rm -r /srv/gitosis/*
  7. Unmount the new partition from its temporary location:
    sudo umount /mnt/gitosistmp
    sudo rmdir /mnt/gitosistmp
  8. sudo $EDITOR /etc/fstab

    Add this line (replace the <tab>’s with actual tabs):


    And mount it:

    sudo mount /srv/gitosis
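As with the torrent partition earlier, the fstab line has this shape (an assumption on my part – the device path matches the /dev/mainvg/gitosis volume mounted in step 2, with tab-separated fields):

```
/dev/mainvg/gitosis	/srv/gitosis	ext4	defaults	0	2
```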

Now you can initialize your repository. This is pretty well documented in various places. The Ubuntu Git Community Documentation has a pretty good guide. And the Ubuntu Generating RSA Keys Guide explains how to generate the key on the client.

Git Daemon

This is the daemon that allows you to clone a git repository using the git:// protocol.

These were guides I used to set it up:

Let’s install it. I assume you already have Gitosis installed and are using the /srv/gitosis/git folder to store your repositories.

  1. sudo apt-get install git-daemon-run
  2. Edit the script that starts the daemon:
    sudo $EDITOR /etc/sv/git-daemon/run
  3. Change base-path from /var/cache to /srv/gitosis/git.
  4. Remove the /var/cache/git argument at the end
  5. If you want to make all repositories available, add the --export-all option to the git-daemon command line. Otherwise, you need to create a git-daemon-export-ok file in each repository you want to make available:
    sudo touch /srv/gitosis/git/myrepository.git/git-daemon-export-ok
  6. Now kill the daemon and it should respawn with the new settings:
    sudo ps -A | grep git

    Run this with xxx replaced by the pid of the daemon:

    sudo kill xxx

Now you can test it out by cloning a repository.


ViewGit

ViewGit is a web client that allows you to browse your repositories with your web browser. The Ubuntu Git Community Documentation has a section about installing it. My instructions differ somewhat, but they are pretty similar. Let’s install it:

Download the viewgit tar file into your home directory. ViewGit is at version 0.0.6 as of this writing.

  1. Install necessary packages (I assume you already have apache2 installed):
    sudo apt-get install libapache2-mod-php5 php-geshi
  2. cd ~
    tar -xvf viewgit-0.0.6.tar.gz
  3. Create the configuration file:
    cp viewgit/inc/config.php viewgit/inc/localconfig.php
  4. Edit the configuration file:
    $EDITOR viewgit/inc/localconfig.php
  5. Add this line in the projects array (the comma is important):
    'myrepository' => array('repo' => '/srv/gitosis/git/myrepository.git'),
  6. Since you installed the php-geshi package, you can enable GeSHi highlighting. Change the $conf['geshi'] setting to true, and uncomment this line (the bundled inc/geshi/geshi.php path can stay commented out, since we want the Debian package’s copy):
    $conf['geshi_path'] = '/usr/share/php-geshi/geshi.php'; // Path on Debian

    I’m still learning about this highlighting option, so I’m not sure what additional configuration should be done (e.g. the $conf['geshi_line_numbers'] setting).
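Putting steps 5 and 6 together, my localconfig.php ends up with lines something like these (a sketch – myrepository is a placeholder name):

```
$conf['projects'] = array(
        'myrepository' => array('repo' => '/srv/gitosis/git/myrepository.git'),
);

$conf['geshi'] = true;
$conf['geshi_path'] = '/usr/share/php-geshi/geshi.php'; // Path on Debian
```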

Move ViewGit into your web folder and set permissions. I assume here that you have a private web folder set up like I did under the Web Server section.

  1. sudo mv viewgit /var/www/private
  2. sudo chown -R root:root /var/www/private/viewgit
  3. Add the www-data user to the gitosis group – this ensures Apache has access to the gitosis folders:
    sudo adduser www-data gitosis

Now test ViewGit by going to the following URL:


Disk Usage Alerts

I like to set up a job to monitor disk usage and email me if it gets low. I use a Perl script based on the one in this blog. I posted my version in the comments section of the blog. I assume you will be using this script.

  1. Place your script in a file named /etc/cron.daily/diskspacecheck.
  2. In order to use the Perl df function, you need to install the Filesys::DiskSpace package:
    sudo apt-get install libfilesys-diskspace-perl
  3. Make the script executable:
    sudo chmod +x /etc/cron.daily/diskspacecheck

Keep in mind that this Perl script can only check filesystem sizes (i.e. those shown when you run the df command). One way you can test the script is by adjusting the thresholds such that they will trigger an email.
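The Perl script itself is in the linked blog’s comments, but to illustrate the idea, here is a minimal shell equivalent (my own sketch, not the script from the post – cron mails anything a cron.daily script prints to root):

```shell
#!/bin/sh
# Warn about any filesystem whose usage exceeds THRESHOLD percent.
THRESHOLD=90
df -P | awk -v t="$THRESHOLD" 'NR > 1 {
    gsub(/%/, "", $5)                   # strip the % sign from the Use% column
    if ($5 + 0 > t) print $6 " is at " $5 "% (threshold " t "%)"
}'
```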

What’s Next

I’m done for now, but there are still some things I want to set up:

  • Web search capability of my file shares. I have been looking into using Regain.
  • Automatic incremental backups on a daily basis.
  • I want to look into using monitoring packages. The Ubuntu Server Monitoring guide discusses a couple.
  • Need to research security more – for example maybe I should set up a firewall. A coworker suggested using UFW.
  • The Ubuntu Server Guide has a lot more ideas…

2 Responses to Setting Up a Home Server with Ubuntu 11.10 – Part 3

  1. This is a very detailed and nicely explained post. Thanks!

    • Andre Beckus says:

      Thanks a lot! I’m glad you like it – it took a long time to put these together.
      I see that you were writing a similar blog entry at the same time. Some interesting things on there that I wasn’t aware of. And I like your OpenSSH post.
