Setting Up a Home Server with Ubuntu 11.10 – Part 3


I am continuing a series of blog entries documenting how I set up my home server. Part 1 contains a description of what I am trying to accomplish and instructions for doing the initial install. Part 2 explains how to set up MythTV.  In this part, I set up some miscellaneous components.

Automatic Updates

I like to set up the server so that security updates are installed automatically, while I get notifications of non-security updates and install those manually.

The following guides came in useful:

First, let’s set up the unattended-upgrades package, which will do our automatic updates:

  1. sudo apt-get install unattended-upgrades

Set up the actions that are performed automatically, and set how often they are performed:

  1. sudo $EDITOR /etc/apt/apt.conf.d/10periodic
  2. Set the APT::Periodic::Download-Upgradeable-Packages line to "1". This will download updateable packages daily. Then, when you do manual updates, you will not have to wait for them to download.
  3. Set the APT::Periodic::AutocleanInterval line to "7". This will do an auto-clean every week. This cleans up packages that are no longer being used.
  4. Add a line:
    APT::Periodic::Unattended-Upgrade "1";

    This means automatic updates will be performed daily.
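Putting the steps above together, the finished file looks roughly like this (a sketch written to a scratch file here; on the server you would edit /etc/apt/apt.conf.d/10periodic directly, and the Update-Package-Lists line was already present in my copy):

```shell
# Scratch copy of what 10periodic ends up containing after the edits above.
cat > /tmp/10periodic.example <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";
EOF
```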

Configure the automatic updates:

  1. sudo $EDITOR /etc/apt/apt.conf.d/50unattended-upgrades
  2. The Unattended-Upgrade::Allowed-Origins block enables specific types of updates. Update types that are commented out will not be run automatically (this doesn’t affect manual updates, though). I make sure that this line is the only one enabled:
    "${distro_id} ${distro_codename}-security";

    This enables security updates. If you are adventurous, you can uncomment other lines to enable other updates. These other updates are commented out because they are more likely to break something – they are best done manually so you can batch them together and plan them appropriately.

  3. Uncomment this line so that unattended-upgrades can email you:
    Unattended-Upgrade::Mail "root@localhost";
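For reference, the relevant parts of my 50unattended-upgrades end up looking something like this (a sketch written to a scratch file; the real file has more commented-out entries than shown here):

```shell
# Scratch sketch of the Allowed-Origins block and Mail setting after editing.
# The heredoc is quoted so ${distro_id}/${distro_codename} are kept literal.
cat > /tmp/50unattended.example <<'EOF'
Unattended-Upgrade::Allowed-Origins {
        "${distro_id} ${distro_codename}-security";
//      "${distro_id} ${distro_codename}-updates";
//      "${distro_id} ${distro_codename}-proposed";
//      "${distro_id} ${distro_codename}-backports";
};
Unattended-Upgrade::Mail "root@localhost";
EOF
```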

Now let’s set up apticron to send notifications of pending updates. This is how I know which non-security updates I need to apply manually:

  1. sudo apt-get install apticron
  2. sudo $EDITOR /etc/apticron/apticron.conf
  3. By default, apticron uses the results of /bin/hostname --all-fqdns to get the host name. On my server, this returns nothing (I’m guessing because it is trying to do a reverse lookup and I don’t have a valid domain). So I need to uncomment this line:

    Instead of hardcoding a hostname, though, you can use the hostname command by changing the line like this:


By default, you will get daily notifications. I don’t want notifications that frequently. As far as I can tell, this interval is hardcoded in the script itself, so I change the script:

  1. Back up the script:
    sudo cp /usr/sbin/apticron /usr/sbin/apticron.bak
  2. sudo $EDITOR /usr/sbin/apticron
  3. Find this line:
  4. test "x$( find $tsfile -mmin -1381 2>/dev/null )" = "x$tsfile" && exit 0

    And change -mmin -1381 to -mtime -X, where X is the number of days to wait between emails. I set mine to -mtime -7, which means weekly emails.
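To see why this works, here is the gating technique in isolation: find -mtime -7 prints the timestamp file only if it was modified within the last 7 days, and apticron exits early in that case. A standalone sketch using a scratch file:

```shell
tsfile=/tmp/apticron.example.ts
touch "$tsfile"   # pretend apticron just sent an email

# apticron's gate: if the timestamp file is "recent", skip this run.
if [ "x$( find "$tsfile" -mtime -7 2>/dev/null )" = "x$tsfile" ]; then
  echo "mailed recently - skipping"   # the real script does 'exit 0' here
else
  echo "time to send another email"
fi
```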

The apt-listchanges package gets installed with apticron. It causes the apt-get dist-upgrade command to display all updates, with descriptions, and makes you hit a key after each screenful, which can take a while. Let’s change that:

  1. sudo $EDITOR /etc/apt/listchanges.conf
  2. Comment out this line (put a “#” in front of it):

    And add this line:


    This means that apt-get will still print out the changes, but won’t pause after each screen full. apt-listchanges will still email you the changes that were applied, so you will have a record. More options are listed here.
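The exact lines from my listchanges.conf weren’t preserved above, but the setting involved is the frontend option – switching it from the paging viewer to plain text output stops the pausing. A sketch against a scratch copy (the starting contents below are an assumed default, not a verbatim copy of the real file):

```shell
conf=/tmp/listchanges.conf.example
# Assumed default: the pager frontend, which pauses after each screenful.
printf '[apt]\nfrontend=pager\nemail_address=root\n' > "$conf"

# Comment out the pager line and add a non-pausing text frontend.
sed -i 's/^frontend=pager/#frontend=pager/' "$conf"
echo 'frontend=text' >> "$conf"
```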


BOINC

BOINC is a program that lets you take on slices of distributed computing projects. Many projects run on BOINC – I use World Community Grid. I also used to run GIMPS, which has its own client program to install. Keep in mind that these programs will heat up your CPU, drive up its power usage, and possibly shorten its life. Your computer will also act like a heater, although during cold weather this can be a good thing – if you’re going to run a heater, you might as well run one with additional benefits.

I used this guide during the setup.

  1. sudo apt-get install boinc-client boinc-manager
  2. At this point, I have already registered on the World Community Grid.
  3. Run the setup utility. Note this pops up a graphical window, which means you need to SSH into your server with X11 forwarding enabled (see Part 1). Note that on my server, I get some garbled windows. I’m not sure if this is just my computer, or if everyone has the same problem. But I am able to get through.
  4. Start the manager.
    sudo boincmgr
  5. I select “Add project”.
  6. I select World Community Grid.
  7. I login using my World Community Grid account.
  8. Finish.
  9. I still have a window open, so I close it out (it is garbled, so I can’t tell what it says). I get a bunch of errors on the command line about not being able to load images, so I have to Ctrl-C to kill it. But when I start boincmgr back up, the window is no longer garbled, and all seems well.
  10. When you go into the manager, you can monitor progress and change preferences. Even without the manager running, BOINC stays running in the background. You can see this by running:

    But, don’t worry, it is supposed to throttle back when you are using the CPU for other tasks and isn’t supposed to interfere.

  11. After a while, I can see results by logging on to the World Community Grid webpage.
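The command I originally used to check on the background client wasn’t preserved in this write-up; any process listing will do, for example:

```shell
# List any running BOINC processes; the fallback message covers machines
# where the client isn't installed or running.
ps -A | grep -i boinc || echo "no boinc processes found"
```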

Hardware Monitoring

I like to be able to check the CPU temperature, especially since I know it will be running hot from BOINC. So, I set up the lm-sensors package, which not only gives CPU information, but gives other hardware information as well. You will be accessing hardware and loading kernel modules, which can be a bit risky. Skip this if you don’t want to take the risk:

  1. sudo apt-get install lm-sensors
  2. To view the current sensor data:

    This will only show sensors for which drivers are already installed.

  3. To probe the hardware and find out what additional drivers should be installed (be careful since this probes hardware – it will prompt you before performing tests, and give you an idea of how risky it is):
    sudo sensors-detect

    I answered YES to all questions, except for the one asking if you would like to automatically update /etc/modules. I prefer to do that myself.

  4. You will be given a section between the “cut here” lines that you can copy and paste into your /etc/modules file (reboot once you are done for the changes to take effect). You can also use modprobe to immediately load modules temporarily for testing (replace module with the module name):
    sudo modprobe module
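As a sketch of the /etc/modules edit, with the caveat that the module names below are placeholders – use the ones from your own sensors-detect output. Writing to a scratch copy here rather than the real file:

```shell
modules=/tmp/modules.example
# Stand-in for the existing /etc/modules contents.
printf '# /etc/modules: kernel modules to load at boot time.\nlp\n' > "$modules"

# Append the modules sensors-detect suggested (placeholder names).
printf '%s\n' coretemp w83627ehf >> "$modules"
```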



Transmission

I use Transmission as my BitTorrent client. Transmission has a web client with which it can be managed, which makes it ideal for running on a server. Let’s set it up:

  1. sudo apt-get install transmission-daemon
  2. Add your admin user to the debian-transmission group:
    sudo adduser adminuser debian-transmission

    Log out and back in for the group change to take effect.

  3. Stop Transmission. You always need to stop it before changing the settings.json configuration file – when Transmission shuts down, it rewrites the file with the settings it loaded at startup, so edits made while it is running are lost.
    sudo /etc/init.d/transmission-daemon stop
  4. Edit the configuration file:
    sudo $EDITOR /var/lib/transmission-daemon/info/settings.json
  5. Changing the rpc-whitelist-enabled entry to false will allow you to access the web client from any PC. Alternatively, you can grant access to individual PCs by adding them to the rpc-whitelist entry.
  6. The rpc-username entry contains the userid you use to log into the Transmission web client. It defaults to transmission.
  7. The rpc-password entry contains the encrypted password. It also defaults to transmission. To change the password, type a plaintext password between the quotes – when Transmission starts, it will automatically encrypt it for you.
  8. The rpc-port entry sets the port on which the web client listens. It defaults to 9091.
  9. Start Transmission back up:
    sudo /etc/init.d/transmission-daemon start
  10. Go to the transmission web client at:
  11. Log in.
  12. You can set some of the preferences from within the web client, which will automatically be applied in the configuration file for you. I suggest that you at least put some limits on the upload and download speeds.
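Putting steps 5–8 together, the relevant fragment of settings.json looks something like this (example values, written to a scratch file – the real file lives at /var/lib/transmission-daemon/info/settings.json and must only be edited while the daemon is stopped):

```shell
# Sketch of the RPC-related settings.json entries discussed above.
cat > /tmp/settings.json.example <<'EOF'
{
    "rpc-whitelist-enabled": false,
    "rpc-username": "transmission",
    "rpc-password": "mynewplaintextpassword",
    "rpc-port": 9091
}
EOF
```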

I like to set up a separate partition for the torrent downloads. I assume you haven’t downloaded any torrents into the downloads folder yet:

  1. Create an LVM partition as described in Part 1. I call the partition torrentdownloads.
  2. Make a note of the permissions on the existing Transmission download folder:
    ls -dl /var/lib/transmission-daemon/downloads
  3. sudo $EDITOR /etc/fstab
  4. Add this line to mount the new partition in Transmission’s default location (replace the <tab>’s with actual tabs):
  5. sudo mount /var/lib/transmission-daemon/downloads
  6. Change the permissions on the new partition to match the old one:
    sudo chmod 4775 /var/lib/transmission-daemon/downloads
    sudo chown debian-transmission:debian-transmission /var/lib/transmission-daemon/downloads
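The fstab line from step 4 looks something like this – the volume group name mainvg matches my LVM setup from Part 1; adjust the device name to yours. Shown here appended to a scratch file rather than the real /etc/fstab:

```shell
# Mount the new LVM partition at Transmission's default download location.
printf '/dev/mainvg/torrentdownloads\t/var/lib/transmission-daemon/downloads\text4\tdefaults\t0\t2\n' \
  >> /tmp/fstab.example
```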

Now you can go to the web client and try adding a new torrent.

SSL with stunnel

Transmission does not support SSL internally (i.e. you must use an http URL and cannot use https). So, if you want SSL, you need to use a reverse proxy. Ultimately, it sounds like using the Apache web server would be a good idea, but for now I will use stunnel. It accepts https requests on a port of your choosing and forwards them to Transmission on its http port (that traffic stays internal to the server, so it is supposedly secure).

I used these guides:

Some guides useful for creating certificates are:

Let’s install stunnel and set it up:

  1. sudo apt-get install stunnel
  2. Edit the configuration file:
    sudo $EDITOR /etc/default/stunnel4
  3. Set ENABLED to 1.
  4. Create a certificate – you need to generate a .pem file. I generate a self-signed certificate (the guides I mentioned earlier explain how to do this). Note that with a self-signed certificate, you will need to set up your web browser to accept it. Here is the command I use to generate the self-signed certificate:
    sudo openssl req -newkey rsa:2048 -x509 -days 365 -nodes -out /etc/stunnel/transmission.pem -keyout /etc/stunnel/transmission.pem
  5. Set the permissions on the new certificate. We need to protect it, especially since it does not have a password:
    sudo chmod 600 /etc/stunnel/transmission.pem
    sudo chown root:root /etc/stunnel/transmission.pem
  6. Create a configuration file for the Transmission SSL tunnel:
    sudo $EDITOR /etc/stunnel/transmission.conf
  7. Add this text inside the file (in this case I make the SSL port 9092):
    cert = /etc/stunnel/transmission.pem
    accept = 9092
    connect = 9091
  8. Start stunnel (it shouldn’t be running yet):
    sudo /etc/init.d/stunnel4 start
  9. If it doesn’t start, you can look at syslog to try and troubleshoot:
    sudo cat /var/log/syslog
  10. Try accessing the new SSL port at:

    With https, transmission is finicky about the URL, so it needs to be exact (see this thread).

Now you can make the 9091 port inaccessible from outside the server:

  1. sudo /etc/init.d/transmission-daemon stop
  2. sudo $EDITOR /var/lib/transmission-daemon/info/settings.json
  3. Change rpc-whitelist-enabled back to true.
  4. rpc-whitelist should be set to
  5. sudo /etc/init.d/transmission-daemon start

Try the earlier (non-SSL) link again – it should now give you a “Forbidden” error.

Web Server

I like to set up my web server so that it has two ports:

  • Default http port 80 – I do port forwarding on my router to expose this externally. I then access it using a domain name from DynDNS Remote Access, which gives me a domain name I can use to reach my router/server. My router is set up to automatically update the DNS entry with the latest IP address. I don’t use the public web server often, but I like to have it ready for when I do need it.
  • A second private http port – only accessible within my network.

Let’s install the web server (if you installed MythTV earlier in Part 2, then this is already installed):

sudo apt-get install apache2

Check it out at this URL:


You should get Apache’s “It Worked!” webpage.

But if you have MythWeb installed, it will instead redirect to the MythWeb web client. Let’s disable MythWeb and restore the default site – this gives us a baseline to start from:

  1. Disable the MythWeb site:
    sudo a2dissite default-mythbuntu
    sudo a2dissite mythweb.conf
  2. Enable the default site:
    sudo a2ensite default
  3. Reload the settings:
    sudo service apache2 reload
  4. Check the site again – it should be back to the default. You may need to delete your temporary internet files first, though.

Now let’s disable the default sites and re-arrange things.

  1. sudo a2dissite default
  2. Make sure no other sites are enabled – /etc/apache2/sites-enabled should not contain anything:
    ls /etc/apache2/sites-enabled

Let’s re-arrange the /var/www folder. I make a different folder underneath /var/www for each port. I got this idea from VirtualHost examples where a directory is created for each virtual host.

  1. Create the public folder:
    sudo mkdir /var/www/public
  2. Put whatever you want in the public folder, or just put the default index.html there like so:
    sudo cp /var/www/index.html /var/www/public

Repeat this for the private site. I name the private folder private.

Create the public site configuration:

  1. Create the config file:
    sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/public
  2. sudo $EDITOR /etc/apache2/sites-available/public
  3. Change all /var/www references to /var/www/public.
  4. Enable the new public site:
    sudo a2ensite public
  5. Reload the settings
    sudo service apache2 reload

Test your website and you should get your public web page.

Now set up your private port:

  1. sudo $EDITOR /etc/apache2/ports.conf
  2. Add these lines (xxx should be replaced with your private port number):
    NameVirtualHost *:xxx
    Listen xxx

Set up your private site:

  1. Follow the same instructions as for the public one, using a site name of private. In addition, when editing the config file, change
    <VirtualHost *:80>

    to

    <VirtualHost *:xxx>
  2. Restart Apache – it looks like this is necessary before it will start listening on the new port:
    sudo service apache2 restart

Test your website using the following link:


and you should get your private web page.
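Putting the private-site pieces together, the edited config ends up looking roughly like this – 8080 stands in for your private port, and the real file (/etc/apache2/sites-available/private) has more directives copied over from the default site. Written to a scratch file for illustration, using Apache 2.2-era syntax as shipped with Ubuntu 11.10:

```shell
# Sketch of the private VirtualHost after the edits above.
cat > /tmp/private-site.example <<'EOF'
<VirtualHost *:8080>
        DocumentRoot /var/www/private
        <Directory /var/www/private>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                Allow from all
        </Directory>
</VirtualHost>
EOF
```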

If you have MythWeb installed, re-enable it, but under the private port:

  1. Move the MythWeb folder to its new home:
    sudo mv /var/www/mythweb /var/www/private/mythweb
  2. Create a new version of the mythbuntu configuration file:
    sudo cp /etc/apache2/sites-available/default-mythbuntu /etc/apache2/sites-available/private-mythbuntu
  3. sudo $EDITOR /etc/apache2/sites-available/private-mythbuntu
  4. Change <VirtualHost *:80> to <VirtualHost *:xxx>
  5. Remove this line (this does the redirect):
    DirectoryIndex mythweb
  6. Change all /var/www references to /var/www/private/mythweb.
  7. Create a new version of the mythweb.conf configuration file:
    sudo cp /etc/apache2/sites-available/mythweb.conf /etc/apache2/sites-available/private-mythweb.conf
  8. sudo $EDITOR /etc/apache2/sites-available/private-mythweb.conf

    Change all /var/www/mythweb references to /var/www/private/mythweb

  9. Enable the MythWeb site:
    sudo a2ensite private-mythbuntu
    sudo a2ensite private-mythweb.conf
  10. sudo service apache2 reload

Now test the mythweb site, using the following link:


And you are done with the basic setup.

You may get an error from Apache: “apache2: Could not reliably determine the server’s fully qualified domain name, using for ServerName”. To fix this problem, do the following (as described here):

  1. sudo $EDITOR /etc/apache2/httpd.conf

    Add this line (replace xxx with your server name):

    ServerName xxx
  2. sudo service apache2 restart

You shouldn’t get the error anymore.


Git

I use Git as my source control software. This is probably only of interest to software developers. I set up several pieces of server software used to interact with Git.


Gitosis

Gitosis not only allows you to clone repositories, but also allows you to push your changes back to them. This wiki has a pretty good summary of what it does. I used the Ubuntu Git Community Documentation to set it up.

Let’s install it:

sudo apt-get install git-core gitosis

I like to create a separate partition for the Git repositories. I originally tried creating the partition for the /srv/gitosis/repositories folder, but Gitosis was getting confused by the lost+found folder that is created inside a partition. So, I create the partition for the /srv/gitosis folder instead.

  1. Create the LVM partition as described in Part 1.  I name the partition gitosis.
  2. Mount the new partition in a temporary location (we need to copy the Gitosis folder structure into the new partition):
    sudo mkdir /mnt/gitosistmp
    sudo mount -t ext4 /dev/mainvg/gitosis /mnt/gitosistmp
  3. Compare the new and old folder permissions:
    ls -ld /srv/gitosis
    ls -ld /mnt/gitosistmp
  4. Set the permissions on the new folder:
    sudo chown gitosis:gitosis /mnt/gitosistmp
  5. Copy the directory structure from Gitosis’ folder. I set the rsync options to copy pretty much everything.
    sudo rsync -avxHAXS /srv/gitosis/ /mnt/gitosistmp
  6. Now remove the existing folder structure:
    sudo rm -r /srv/gitosis/*
  7. Unmount the new partition from its temporary location:
    sudo umount /mnt/gitosistmp
    sudo rmdir /mnt/gitosistmp
  8. sudo $EDITOR /etc/fstab

    Add this line (replace the <tab>’s with actual tabs):


    And mount it:

    sudo mount /srv/gitosis
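The fstab line from step 8 looks something like this (mainvg is my volume group from Part 1; adjust to yours). Appended to a scratch file here for illustration:

```shell
# Mount the new LVM partition at Gitosis' home folder.
printf '/dev/mainvg/gitosis\t/srv/gitosis\text4\tdefaults\t0\t2\n' >> /tmp/fstab.example
```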

Now you can initialize your repository. This is pretty well documented in various places. The Ubuntu Git Community Documentation has a pretty good guide. And the Ubuntu Generating RSA Keys Guide explains how to generate the key on the client.

Git Daemon

This is the daemon that allows you to clone a Git repository using the git:// protocol.

These were guides I used to set it up:

Let’s install it. I assume you already have Gitosis installed and are using the /srv/gitosis/git folder to store your repositories.

  1. sudo apt-get install git-daemon-run
  2. Edit the script that starts the daemon:
    sudo $EDITOR /etc/sv/git-daemon/run
  3. Change base-path from /var/cache to /srv/gitosis/git.
  4. Remove the /var/cache/git argument at the end.
  5. If you want to make all repositories available, add the --export-all option to the git-daemon command line. Otherwise, you need to create a git-daemon-export-ok file in each repository you want to make available:
    sudo touch /srv/gitosis/git/myrepository.git/git-daemon-export-ok
  6. Now kill the daemon and it should respawn with the new settings:
    sudo ps -A | grep git

    Run this with xxx replaced by the pid of the daemon:

    sudo kill xxx
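After the edits in steps 2–5, the run file ends up looking roughly like this. This is a sketch from memory, not a verbatim copy of the stock script – compare it against your own /etc/sv/git-daemon/run. Written to a scratch file; the --export-all variant is shown:

```shell
# Scratch sketch of the edited git-daemon runit script. The heredoc is
# quoted so $(git --exec-path) is kept literal for the real script to expand.
cat > /tmp/git-daemon-run.example <<'EOF'
#!/bin/sh
exec 2>&1
echo 'git-daemon starting.'
exec chpst -ugitdaemon \
  "$(git --exec-path)"/git-daemon --verbose --export-all \
  --base-path=/srv/gitosis/git
EOF
```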

Now you can test it out by cloning a repository.


ViewGit

ViewGit is a web client that allows you to browse your repositories with your web browser. The Ubuntu Git Community Documentation has a section about installing it. My instructions differ somewhat, but they are pretty similar. Let’s install it:

Download and extract the viewgit tar file into your home directory. ViewGit is at version 0.0.6 as of this writing.

  1. Install the necessary packages (I assume you already have apache2 installed):
    sudo apt-get install libapache2-mod-php5 php-geshi
  2. cd ~
    tar -xvf viewgit-0.0.6.tar.gz
  3. Create the configuration file:
    cp viewgit/inc/config.php viewgit/inc/localconfig.php
  4. Edit the configuration file:
    $EDITOR viewgit/inc/localconfig.php
  5. Add this line in the projects array (the comma is important):
    'myrepository' => array('repo' => '/srv/gitosis/git/myrepository.git'),
  6. Since you installed the php-geshi package, you can enable GeSHI highlighting. Change the $conf['geshi'] setting to true, and uncomment these two lines:
    $conf['geshi_path'] = 'inc/geshi/geshi.php';
    $conf['geshi_path'] = '/usr/share/php-geshi/geshi.php'; // Path on Debian

    I’m still learning about this highlighting option, so I’m not sure what additional configuration should be done (e.g. the $conf['geshi_line_numbers'] setting).

Move ViewGit into your web folder and set permissions. I assume here that you have a private web folder set up like I did under the Web Server section.

  1. sudo mv viewgit /var/www/private
  2. sudo chown -R root:root /var/www/private/viewgit
  3. Add the www-data user to the gitosis group – this ensures Apache has access to the gitosis folders:
    sudo adduser www-data gitosis

Now test ViewGit by going to the following URL:


Disk Usage Alerts

I like to set up a job to monitor disk usage and email me if it gets low. I use a Perl script based on the one in this blog. I posted my version in the comments section of the blog. I assume you will be using this script.

  1. Place your script in a file named /etc/cron.daily/diskspacecheck.
  2. In order to use the Perl df function, you need to install the Perl disk space package:
    sudo apt-get install libfilesys-diskspace-perl
  3. Make the script executable:
    sudo chmod +x /etc/cron.daily/diskspacecheck

Keep in mind that this Perl script can only check filesystem sizes (i.e. those shown when you run the df command). One way you can test the script is by adjusting the thresholds such that they will trigger an email.
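As a minimal illustration of the idea (the actual job is the Perl script described above; the threshold and filesystem here are just examples):

```shell
threshold=90   # alert when the filesystem is this percent full

# -P gives POSIX single-line df output; field 5 is the "Use%" column.
usage=$(df -P / | awk 'NR==2 { gsub(/%/, ""); print $5 }')

if [ "$usage" -ge "$threshold" ]; then
  echo "disk space low: ${usage}% used on /"
fi
```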

What’s Next

I’m done for now, but there are still some things I want to set up:

  • Web search capability for my file shares. I have been looking into using Regain.
  • Automatic incremental backups on a daily basis.
  • I want to look into using monitoring packages. The Ubuntu Server Monitoring guide discusses a couple.
  • Need to research security more – for example maybe I should set up a firewall. A coworker suggested using UFW.
  • The Ubuntu Server Guide has a lot more ideas…

Setting Up a Home Server with Ubuntu 11.10 – Part 2: MythTV


I am continuing a series of blog entries documenting how I set up my home server.  Part 1 contains a description of what I am trying to accomplish and instructions for doing the initial install.  In this part, I set up the MythTV Digital Video Recorder software. I’ve found this part particularly tedious, and the most frustrating part of setting up the server, since it requires hardware setup. This is one of those times where I strongly suggest you do a test install on your PC first, just to get the hang of it. I still don’t have a thorough understanding of all this, but was able to fumble my way through the setup.

My recording situation is probably not common. The first 100 or so cable channels are included with my apartment rent. The broadcast channels (i.e. those you can get with an antenna) are digital, but the rest of the channels are still analog – you need to get a cable box in order to get the digital versions. So, I still need both a digital and an analog tuner. I have two video capture cards installed:

  • pcHDTV HD-5500 – This can tune both digital and analog channels, but not at the same time. It looks like the digital tuner can record more than one program at a time – I’m not sure of the details of how this works.
  • Hauppauge WinTV-HVR-1600 – This can tune digital and analog channels at the same time (it has two cable inputs, one for each). Unfortunately, I had problems with the digital tuner tuning properly, so I am not using it. On the positive side, the card has an MPEG-2 encoder built in for the analog signal, which means less load on my CPU.

I am setting up a headless server, so I will only install the MythTV backend. The videos will be watched on a separate PC, which can either use the MythTV frontend, or stream the video via MythWeb using a standard web browser and video player.

Here are some guides I used during the setup:

Testing the Capture Cards

Before I set up MythTV, I find it worthwhile to test the video capture cards. Some video capture cards have less than perfect Linux support (although I can see the developers have put a huge amount of work into developing drivers, the hardware is often closed source).  Getting MythTV working and getting the hardware working are both frustrating enough when done separately, much less at the same time. Getting the video cards working outside MythTV also helped broaden my understanding of the hardware and drivers.

There are different types of capture cards, which are summarized on MythTV’s capture card wiki page. I imagine that only a small subset of people receive analog broadcasts anymore. But an analog card could still be useful if you are trying to record from a VCR or camcorder, or if you are getting an analog signal from a cable box.

There are many programs out there for playing the video from the capture cards, but I use MPlayer, so let’s install it with this command:

sudo apt-get install mplayer

I also used MPlayer on my client PC to play back recorded files.  Note that MPlayer has many options, and there are many other utility programs out there, so there are different ways to do the test. For example, you can tune your card using MPlayer, but I use an external program.

In order to do the tests via a remote connection, you will need to use SSH with X11 Forwarding enabled, as explained in Part 1. Keep in mind that the video will be slow and you will not get sound (I have read there are ways to get sound via SSH, but I haven’t tried yet).

You will be dealing with devices in the /dev folder. To figure out which device is which, you can use this command to query the specifications for that device (xxx is the device file):

udevadm info -a -p $(udevadm info -q path -n /dev/xxx)

You can also look at your boot log to see if a particular device was loaded:


To look for a particular piece of hardware (in this example, the cx88 chipset):

dmesg | grep -i "cx88"

Generally, you will look for chipsets that are included on your capture card. As with all computer hardware, the chipset’s manufacturer/model number will likely not be the same as the card’s manufacturer/model number. You will need to do some research to figure out which chipsets are used on your particular card.

Digital Tuner Card

I imagine most people recording TV will use a digital card, due to the cutover to digital broadcasts. There are different digital standards internationally – mine is using the United States standard. And there are different standards based on whether you receive your TV via cable (QAM) or via an antenna (ATSC). In my case, I am receiving via cable.

The digital video capture card devices can be found under /dev/dvb/adapterX, where X is an adapter number (you will have multiple adapters if you have multiple cards installed). I use the udevadm command line I gave earlier, running it against the /dev/dvb/adapterX/dvr0 device. I know my pcHDTV card when I see this output:

looking at parent device '/devices/pci0000:00/0000:00:0e.0/0000:02:0a.2':
  DRIVERS=="cx88-mpeg driver manager"

Let’s start testing the card(s):

  1. Install the utilities:
    sudo apt-get install dvb-apps
  2. You first need to scan the channels using this command (replace the X after the -a option with the adapter device number).
    sudo scan -a X /usr/share/dvb/atsc/us-Cable-Standard-center-frequencies-QAM256 > ~/dvb_channels.conf

    This takes a while, and it’s normal to get many “tuning failed” messages.

  3. I have the problem described on this wiki page. I’m not sure why (I imagine it has something to do with my cable provider), but my channel numbers are in brackets, and they repeat. So I need to run the script on the wiki page:
    perl -pe 's/^.{6}/++$i/e;' ~/dvb_channels.conf > ~/dvb_channels_fixed.conf
  4. If you look in the channels file you will see a number of lines like this:

    with the first number being the channel number. In my case, these channel numbers are arbitrary since I ran the Perl script. And I don’t know enough to figure out how to tell which line corresponds to which channel number on my TV.

  5. Now you can start streaming the video from the card:
    sudo mplayer /dev/dvb/adapterX/dvr0
  6. And then you can tune the card while you watch it. In fact, you probably won’t get any video until you tune to a valid channel. I use the azap command (there are other commands you can use depending on the digital tuner type you have and what broadcast standard is used). This tunes in channel 1 from the channels config file (run this in a separate SSH session):
    sudo azap -c ~/dvb_channels_fixed.conf -a X -r 1

    The tuner will continue running, showing signal quality information.  Ctrl-C out when finished.

  7. Many of the channels listed in the conf file did not work for me, so I went through the channels starting at channel 1 and moving up to find a working channel. Note that at one point, I received a message “ERROR: error while parsing modulation (syntax error)” – so I had to remove the line for that channel (I couldn’t tune to any channels after that one until I removed the line). Once I found a channel, I saved the line in its own conf file for future use. It’s a cumbersome way to do things, but I figured once I found a channel, I wouldn’t have to do it again.
  8. Once you find a good channel, you can capture it to a file:
    sudo mplayer /dev/dvb/adapterX/dvr0 -dumpstream -dumpfile mydvbfile.ts

    Again you can tune using a separate session. Ctrl-C when you are finished recording.  And then you can transfer the file to your local PC, and watch it there (you will be able to hear sound).

Analog Hardware Encoder Card

The Hauppauge card I have has a built-in MPEG-2 encoder. This MPEG stream, which contains the audio as well as the video, can be accessed via a /dev/video device. The card has multiple video devices, which represent the different physical inputs on the back of the card. Let’s test the card(s):

  1. Install the utilities:
    sudo apt-get install ivtv-utils
  2. Start playing the video stream in a window (X is the device number):
    sudo mplayer /dev/videoX
  3. And then you can tune the card while you watch it (keep in mind that the channel can take a while to change in your window, since there is a long lag). This will change to channel 10, assuming a US cable frequency table. Run this command in a separate SSH session:
    sudo ivtv-tune --device=/dev/videoX --freqtable=us-cable --channel=10
  4. You can now capture to a file:
    sudo mplayer /dev/videoX -dumpstream -dumpfile ~/myanalogvideo.ts

    Again you can tune using a separate session. Ctrl-C when you are finished recording.  And then you can transfer the file to your local PC, and watch it there (you will be able to hear sound).

Analog Framebuffer Card

My pcHDTV card provides a video stream in unencoded form. You have three devices: one for video (/dev/videoX), one for video blanking (/dev/vbiX), and one for audio. The audio device used to be /dev/dspX, but that device no longer exists now that Ubuntu has stopped supporting the OSS sound drivers. Instead, you need to get the sound via the ALSA framework. Look at the cards registered with ALSA and figure out which one is yours (again, you will need to know which audio chipset your card uses):

cat /proc/asound/cards

I know my pcHDTV card when I see this line:

1 [CX8801         ]: CX88x - Conexant CX8801
Conexant CX8801 at 0xee000000
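Later steps need that card index (the number in the first column). As a sketch, you can pull it out with awk – the CX8801 pattern is just my card's chipset id, so substitute your own:

```shell
# Extract the ALSA card index for a given chipset id.  On the server:
#   awk '/CX8801/ {print $1; exit}' /proc/asound/cards
# Demonstrated here against a saved copy of the line shown above:
sample='1 [CX8801         ]: CX88x - Conexant CX8801'
card_index=$(printf '%s\n' "$sample" | awk '/CX8801/ {print $1; exit}')
echo "$card_index"   # -> 1
```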

Let’s test the card(s):

  1. Install the required utilities:
    sudo apt-get install ivtv-utils mencoder
  2. Start playing the video stream in a window (X is the device number):
    sudo mplayer tv:// -tv driver=v4l2:device=/dev/videoX

    This shows only the video stream, which does not include any sound (not that you could hear it over your SSH session anyway).

  3. Tune the card while you watch it. This will change to channel 10, assuming a US cable frequency table. Run this command in a separate SSH session:
    sudo ivtv-tune --device=/dev/videoX --freqtable=us-cable --channel=10
  4. Let’s record the video and audio. This is more complicated than for a hardware encoder card, since you need to encode the video and audio yourself. In the adevice part, substitute Y with the card number you got from /proc/asound/cards earlier (in my case, this is 1 based on the output I show above):
    sudo mencoder tv:// -tv driver=v4l2:device=/dev/videoX:alsa=1:adevice=hw.Y,0 -oac pcm -ovc lavc -lavcopts vcodec=mpeg4:vpass=1 -o ~/test.avi

    Again you can tune using a separate session.

  5. You can monitor the CPU usage of mencoder (for example with top). Encoding is a CPU-intensive job, and this could be a concern with an older PC.
  6. Ctrl-C when you are finished recording. And then you can transfer the file to your local PC, and watch it there (you will be able to hear sound).
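To watch the CPU usage non-interactively, a one-liner like this works as a sketch (assuming the GNU ps from procps, which Ubuntu installs by default):

```shell
# List the header line plus the five most CPU-hungry processes; while
# encoding, mencoder should be at or near the top.
top_procs=$(ps aux --sort=-%cpu | head -n 6)
printf '%s\n' "$top_procs"
```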

I don’t test the VBI device. As far as I can tell, this is useful because it contains extra data, such as closed captioning.

udev Rules

It is tedious to use device numbers to identify your devices. And there’s also the possibility that the numbers could change. You can use a udev rule file to set up sensibly named symlinks to your device files. For example, instead of using /dev/dvb/adapter1 to get at my pcHDTV card, I create a symlink called /dev/dvb/adapter_pchdtv. Basically, to make a rule, you need to get information from the udevadm command I gave earlier – you need to find information that uniquely identifies a particular card. Then you need to write the rules. Explaining how to write rules is way beyond the scope of this writeup. But there are many guides, including this one.

Here’s how to create the rule file:

  1. sudo $EDITOR /etc/udev/rules.d/10-mythtv.rules
  2. Here is a sample of what I put in my rule file:
    # /etc/udev/rules.d/10-mythtv.rules
    # *** PCHDTV Digital ***
    SUBSYSTEM=="dvb", DRIVERS=="cx88-mpeg driver manager", ATTRS{subsystem_vendor}=="0x7063", ATTRS{subsystem_device}=="0x5500", PROGRAM="/bin/sh -c 'K=%k; K=$${K#dvb}; printf dvb/adapter_pchdtv/%%s $${K#*.}'", SYMLINK+="%c"
    # *** PCHDTV Analog ***
    #Note the brackets in ATTR{name} could not be matched because these
    #signify a character range in the match string.  Had to use * instead.
    #Could not figure out how to escape the brackets.
    SUBSYSTEM=="video4linux", ATTR{name}=="cx88* video (pcHDTV HD5500 HD*", DRIVERS=="cx8800", ATTRS{subsystem_vendor}=="0x7063", ATTRS{subsystem_device}=="0x5500", SYMLINK+="video_pchdtv"
    SUBSYSTEM=="video4linux", ATTR{name}=="cx88* vbi (pcHDTV HD5500 HDTV*", DRIVERS=="cx8800", ATTRS{subsystem_vendor}=="0x7063", ATTRS{subsystem_device}=="0x5500", SYMLINK+="vbi_pchdtv"
    # Note that you cannot match attributes from more than one parent at a time
    # This line is no longer necessary, since we no longer have a dsp device.
    #KERNEL=="dsp*", SUBSYSTEM=="sound", ATTRS{id}=="CX8801", SYMLINK+="dsp_pchdtv"
    # *** Hauppauge Digital ***
    SUBSYSTEM=="dvb", DRIVERS=="cx18", ATTRS{subsystem_vendor}=="0x0070", ATTRS{subsystem_device}=="0x7404", PROGRAM="/bin/sh -c 'K=%k; K=$${K#dvb}; printf dvb/adapter_hvr1600/%%s $${K#*.}'", SYMLINK+="%c"
    # *** Hauppauge Analog ***
    SUBSYSTEM=="video4linux", ATTR{name}=="cx18-0 encoder MPEG*", DRIVERS=="cx18", ATTRS{subsystem_vendor}=="0x0070", ATTRS{subsystem_device}=="0x7404",  SYMLINK+="video_hvr1600"
  3. Once you save the file, you can do a test. You need to find your device(s) under the /sys/class folder structure, which is a virtual filesystem where you can see raw device/driver information. Then run the following commands against those files. I give two examples, one for my analog card and one for my digital card:
    sudo udevadm test /class/video4linux/video0
    sudo udevadm test /class/dvb/dvb0.dvr0

    The command should print out a bunch of information. Then when you look in your /dev folder you will see the new symlinks to your devices, as specified by the rules. And you can test out the new symlinks using MPlayer. If there’s a problem, you’ll have to continue to tweak your udev file. Once you have it working, I suggest you reboot and then recheck to make sure all the symlinks show up.
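The PROGRAM entries in the rule file above look cryptic, but all they do is rewrite the kernel device name (%k, e.g. dvb0.frontend0) into the symlink path. With udev's escaping removed ($$ for $, %% for %), the shell logic is just parameter expansion:

```shell
# The logic inside PROGRAM="/bin/sh -c '...'", with udev escaping removed.
# %k (the kernel device name) would be something like:
K=dvb0.frontend0
K=${K#dvb}                          # strip leading "dvb"       -> 0.frontend0
link="dvb/adapter_pchdtv/${K#*.}"   # keep part after first "."  -> frontend0
echo "$link"   # -> dvb/adapter_pchdtv/frontend0
```

udev then uses the printed string (via %c) as the symlink name, which is how /dev/dvb/adapter_pchdtv/frontend0 gets created.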

Installing MythTV

Finally, we can start installing MythTV. The MythTV wiki’s backend configuration guide came in useful for this part.
I like to install some of the more universally used packages separately.

  1. The first is the MySQL database:
    sudo apt-get install mysql-server mysql-client

    You will need to select a root password during the install.

  2. The second package is the Apache web server:
    sudo apt-get install apache2

Now install MythTV:

  1. sudo apt-get install mythtv-backend-master
  2. When asked “Will other computers run MythTV?”, I answer Yes. I don’t normally use a frontend, but would like to be prepped for one, and I also use the frontend initially for testing. It appears the way a frontend connects is to first log in to the MySQL database running on the backend, from which it gets the connect info for the MythTV daemon itself. Then, it logs into the MythTV daemon. Trying to open up the access later is a big hassle which requires granting more permissions in the database. On the other hand, this is a security risk. There is a way to tighten the MySQL database down so that only certain PCs can get into it, as outlined here.  You can also do some security tightening using a firewall.
  3. I answer No when asked “Would you like to password-protect MythWeb?”. It would be more secure to have a password, but the last time I tried to do this, I found that video streaming clients do not handle passwords well.  Ubuntu’s MythWeb guide has more information about setting up and protecting MythWeb.
  4. I answer No when asked “Will you be using this webserver exclusively with mythweb?”

I like to set up a separate partition for MythTV,  although I still mount it in MythTV’s default folder: /var/lib/mythtv.  I use the XFS filesystem as suggested here.

  1. sudo apt-get install xfsprogs xfsdump
  2. Create the LVM partition as described in Part 1.  I name the partition mythtv.  But instead of using mkfs.ext4, I use mkfs.xfs:
    sudo mkfs.xfs /dev/mainvg/mythtv
  3. We need to mount the partition in a temporary area (we need to copy the MythTV directory structure into the new partition).
    sudo mkdir /mnt/mythtvtmp
    sudo mount -t xfs /dev/mapper/mainvg-mythtv /mnt/mythtvtmp
  4. Make sure the /mnt/mythtvtmp folder has the permissions to match /var/lib/mythtv:
    ls -ld /var/lib/mythtv
    ls -ld /mnt/mythtvtmp
  5. Copy the directory structure from MythTV’s default folder. I set the rsync options to copy pretty much everything.
    sudo rsync -avHAXS /var/lib/mythtv/ /mnt/mythtvtmp

    Then verify the folders are in their new location.

    ls /mnt/mythtvtmp
  6. Now remove the existing folder structure:
    sudo rm -r /var/lib/mythtv/*
  7. Unmount the partition from its temporary location:
    sudo umount /mnt/mythtvtmp
    sudo rmdir /mnt/mythtvtmp
  8. sudo $EDITOR /etc/fstab

    Add this line (replace the <tab>’s with actual tabs):

    /dev/mapper/mainvg-mythtv<tab>/var/lib/mythtv<tab>xfs<tab>defaults<tab>0<tab>2

    And mount it:

    sudo mount /var/lib/mythtv

Configuring MythTV

I am assuming you have an account with Schedules Direct, from which MythTV can get its TV schedule. You have to pay a monthly fee. I’m not sure if there are any other legitimate sites, but I find Schedules Direct well worth the cost.

Start the setup program:

  1. sudo mythtv-setup
  2. Answer Yes when asked “Would you like to automatically be added to the group?”
  3. Answer Yes at the “Save all work and then press OK to restart your session” prompt.
  4. Answer OK at the “Please manually log out of your session for the changes to take effect” prompt.
  5. Log out and back in again.
  6. sudo mythtv-setup
  7. Answer Yes when asked “Is it OK to close any currently running mythbackend processes?”
  8. You should get a graphical set up window.

General Setup

  1. Select “1. General”.
  2. I leave the IP address as the address of the server. If no other PCs will run a frontend, then you can set it to the loopback address (127.0.0.1).
  3. Select a Security PIN.
  4. Next.
  5. Next.
  6. Uncheck “Delete files slowly”. You shouldn’t need this for an XFS filesystem.
  7. Hit Next until you are on the last screen. Check the “Automatically run mythfilldatabase” checkbox. mythfilldatabase is what downloads the listings from Schedules Direct.
  8. Finish.

Capture Card Setup

  1. Select “2. Capture Card”.
  2. Create a “New capture card” for each card you have. Remember to use the symlink devices that you created earlier.

For a DVB card use the following settings:

  • Card type: DVB DTV capture card
  • DVB device number: /dev/dvb/adapterXXX/frontend0

For an analog hardware encoder card use these settings:

  • Card type: IVTV MPEG-2 encoder card
  • Video device: /dev/videoXXX
  • Default input: Select an input – I use Tuner 1

For an analog framebuffer card, use these settings:

  • Video device: /dev/videoXXX
  • VBI device: /dev/vbiYYY
  • Audio device (X is the ALSA device number): ALSA:hw:X,0
  • Force audio sampling rate: I set this to 48000 (this is recommended for the pcHDTV analog card).  Otherwise you can leave it set to None.
  • Default input: Select your input – I use Television

For my pcHDTV card, I cannot record from the digital and the analog tuners at the same time.  When setting up the pcHDTV digital tuner, I need to go into “Recording Options”, and make sure the following are set properly:

  • “Open DVB card on demand” should be checked.
  • “Use DVB card for active EIT scan” should be unchecked.

Otherwise, when I am using the analog tuner, I start getting static.

If you are running in a VM:

  1. Put video files somewhere on your server (I put them in my home folder). I used the .ts files I captured on real hardware during testing.
  2. When adding the capture card, select a card type of Import test recorder.
  3. Enter the file’s path.

Video Source Setup

Before you do this, you will need to have your lineups set up inside Schedules Direct. I have one lineup for the digital channels, and one for the analog channels. Make sure to edit the lineups to remove any bogus channels that you don’t actually have.

  1. Select “3. Video sources”.
  2. Create a “New video source” for each lineup.
  3. I name my video sources “Analog” and “Digital” based on the lineup types.
  4. I keep the Listings grabber set to North America.
  5. Enter your Schedules Direct login information.
  6. “Retrieve Lineups”
  7. In Data Direct Lineup, select the right lineup.
  8. Set the “Channel frequency table”. You don’t need to if it is the same as the one in General settings, but I always set it anyway. For the Digital video source, I set it to us-bcast, and for my Analog video source, I set it to us-cable.

Input Connections

The first time you set up MythTV on your server, it is probably best to set up one input connection at a time. Give one connection a try, then remove that connection, and add the next one. Fortunately, MythTV will remember the fetched/scanned channels when you re-create a connection that you previously removed.

  1. Select “4. Input connections”.
  2. Map each card/input to a video source. Note that analog cards will have multiple inputs to select. If you don’t use a particular input, leave it mapped to None.

To map a card/input, first select it, and then select a video source.

For an analog card/input, I don’t need to scan channels. I just use the channels from the lineup by selecting “Fetch channels from listing source”.

For a digital card:

  1. Select “Scan for channels” (it appears the scan is mandatory).
  2. In my case, I select a frequency table of “Cable” and leave the defaults for the rest.
  3. Select Next.
  4. Now wait while the channels are scanned. This takes a while. It should filter out any channels that are scrambled.
  5. I get multiple prompts regarding non-conflicting channels. I choose to “Insert all” non-conflicting channels.
  6. When I get conflicting channels, I update them (I pick an arbitrary high channel number). I assume these channels will not work and I’ll probably end up deleting these later, but I am more comfortable keeping them for now.
  7. Finish.
  8. I do not select to “Fetch channels from listing source”.
  9. Select a starting channel.
  10. Next.
  11. Finish.

Again, for the pcHDTV card, I need to prevent it from recording from the analog and digital sources at the same time.  I need to do the following:

  1. When inside the Input Connections screen, select “Create a New Input Group”.
  2. For both the digital and analog tuners, specify this group in the “Input group 1” field. With both cards in the same group, MythTV does not try to use them at the same time.

I have two analog tuners, but I always want my Hauppauge hardware encoder tuner to take precedence – I only want to use the pcHDTV framebuffer tuner if the Hauppauge tuner is busy.  This is because the pcHDTV tuner requires more CPU resources, and also because the video that MythTV encodes from the pcHDTV card is less flexible than the MPEG stream from the Hauppauge card (see below for more details).  To do this, I add the Hauppauge card first when I am setting up the Input Connections. This gives it a lower ID in the database, which means MythTV uses it first.  Crude, but it seems to work.

Channel Setup

  1. Select “5. Channel Editor”.
  2. I get a bunch of channels without names when I do a digital scan, so I clean them up. Cleaning these can be tedious. You can wait until you get the frontend set up and then flip through in Live TV mode to see which ones work, but I’ve found that tuning to a bad channel can cause the frontend to crash or hang. I get some good channels that don’t show up in Schedules Direct’s listings, so the schedule isn’t reliable. And the channels that the scan finds don’t exactly match the ones my TV tunes to. I ended up just deleting any digital channels that don’t have a name. I figure if they don’t come up with a name, they are not of interest, and are more than likely “bad” channels.
  3. As far as the analog channels go, I can add any analog channels that are missing from the schedule (since we didn’t do a scan, we will only have channels in the listing).

I had one odd situation: Schedules Direct shows a channel in my analog lineup, but it doesn’t show the channel in my digital lineup.  In reality, it’s the opposite – I have the digital channel, but not the analog version.  To fix this, I manually added the digital channel, and then I set its XMLTV ID to the same value as the analog channel’s (I got the ID from Schedules Direct, and it also shows up in MythTV’s channel setup).  This associates the schedule from the analog channel with the digital channel.  I would like to remove the analog channel, but if I do that it will not download the schedule, which is needed for the digital channel.  I need to look into it some more, but for now I just leave the analog channel and remember that I can’t use it…

Inside the Channel Editor, I also download icons for the channels:

  1. Select “Icon Download”
  2. Select “Download all icons…”
  3. The icons will be downloaded. For some channels, you will need to select which icon is the correct one, or you can skip if none are correct (or none are shown).  This takes a while.

Finishing the Setup

Now you can exit the setup program by hitting the Escape key. Answer Yes when asked “Would you like to run mythfilldatabase?” This will load the schedules to your PC.

Managing MythTV

Later on, if you need to stop or start the MythTV backend, use these commands:

sudo stop mythtv-backend
sudo start mythtv-backend

If you want to manually do a schedule update:

sudo mythfilldatabase

Using MythTV

Like setting up the backend, setting up a frontend to watch the video can be difficult.  You have several different options, which I explain here.

MythTV Frontend

This is MythTV’s official frontend.  Even though I don’t plan on using it over the long term, I still find it useful for testing and troubleshooting my backend setup.  I install the frontend on another Ubuntu PC (the package is mythtv-frontend).  At some point, I may look into setting it up on a Windows PC, which is possible, but appears to be a more difficult process.

Before trying MythTV, I try connecting with MySQL, since this is the first thing that MythTV does.

  1. Get the database login info. On the server, look at the MySQL config file:
    sudo cat /etc/mythtv/mysql.txt
  2. Then run this command on a client machine (I assume you are on an Ubuntu machine and have the mysql-client package installed):
    mysql -u mythtv -h serveripaddress -p mythconverg

    You will need to enter the password, at which time you should be logged in.

Now, try running the frontend. Inside Watch TV, use these keyboard commands. You can switch cards/inputs by pressing “Y” and “C”. But if you have multiple cards of the same type (e.g. if you have two digital input cards), it doesn’t seem to allow you to switch between them so that you can test them both. A trick I’ve used to switch to a different card is to press “R” so that it will start recording, and then change to a different channel, and it will start using your other card. I’ve found the frontend flaky, though, and sometimes I can’t manually stop the recording… I have to cancel it in the recording area, but then sometimes I have to restart the MythTV backend to really make it stop recording.


To access the MythWeb page, go to http://ipaddress/mythweb. I find the web page to be more reliable than the frontend. And you don’t need to install anything on your local PC, as long as it has a media program that can stream video.  But, the page is only useful for managing and watching recordings – you can’t watch live TV with it.

Make sure to check out the Backend Status tab, which can come in handy during troubleshooting.

I have an issue when I select a program to record – it shows an error and takes a long time for the details to come up.  But after my first recording, the issue seems to go away.  The problem is described in this thread.  I haven’t had a chance to look into it yet…

I have trouble playing the streaming video with Windows Media Player.  VLC media player seems to be the most popular option, although I had problems playing the .nuv-formatted video that MythTV encodes from the pcHDTV “analog framebuffer” tuner (the digital tuner and the analog hardware encoder card both create MPEG videos, which seem to be better supported).  MPlayer (I used the SMPlayer frontend) works well with the .nuv videos, but it doesn’t appear to support the streaming video.

MythTV Player

I recently discovered the MythTV Player, which is a lightweight alternative to the MythTV frontend.  And, it works well in Windows (the MythTV frontend is supported on Windows, but the compile and setup process seemed intimidating).  I have used version 0.7.0 of MythTV Player to play recorded videos, and so far it is working well (the user interface is a little glitchy, though).  And, it plays all of my recordings, including the .nuv videos that VLC had trouble with.  But, it appears to be only for watching video – it doesn’t look like you can use it to set up recordings, etc.  So, I still need to use MythWeb or the MythTV frontend for that part.

Setting Up a Home Server with Ubuntu 11.10 – Part 1


I am in the process of re-installing my home Ubuntu server, and I figured I’d post my install steps in case it helps anyone else.  This is a multi-part post – see Part 2 and Part 3 for the rest.  I first set up this server with an older version of Ubuntu a couple of years ago. It has been a ragged server from the start, and has only become more so over time. I’ve learned a lot from my mistakes, and this time I am trying to be more deliberate about the setup process. I have the following primary goals for the server:

  1. To be a file server.
  2. To be a Digital Video Recorder.
  3. To do something useful while it is idling. Even if it isn’t useful to me, it should do something to benefit the community as a whole.

Specifically, this is the functionality I want:

  1. Samba – for file sharing.
  2. (Future) Web search capability of my file shares. I have been looking into using Regain.
  3. MythTV DVR software for recording video. The video can be managed and watched via the MythWeb front-end.
  4. BOINC client, which allows me to donate idle CPU time to distributed computing projects. I used to run GIMPS, but switched to BOINC due to its wide availability of projects. This heats up the CPU and uses more electricity, but I figure it is worth the extra cost to benefit important research projects.
  5. Transmission BitTorrent client – For the few occasions when I need to download something via BitTorrent. It will be left running the rest of the time to act as seed for others to download from.
  6. Git server for software source control.  I use this for my programming projects.
  7. Two primary hard drives in a RAID-1 mirrored configuration, for reliability.
  8. Ability to backup my data. I am currently working on a Perl script to do daily backups, but it is not ready yet… Manual backups will have to do for now. A third hard drive is installed in the server to backup to – I wanted a separate hard drive for extra reliability. And then I occasionally copy the backups from there to offline storage.
  9. Logical Volume Manager partitions on the primary hard drives for easy partition management.
  10. Ability to email me any pertinent information.
  11. S.M.A.R.T. monitoring of all hard drives for reliability.

The server is built out of the following hardware:

  1. Single core AMD 64 bit processor.
  2. 1.5 GB RAM.
  3. Basic video card. The server is meant to be a headless server with no graphics. I can hook it up to a monitor for installation and troubleshooting, but it just sits at a plain text login prompt.
  4. Basic network card built into motherboard.  Plugged directly into wireless router.
  5. Two primary SATA hard drives plugged directly into motherboard.
  6. Third (backup) hard drive plugged into PCI extension card with additional SATA ports (motherboard only has two SATA ports).
  7. Two video capture cards.

I am using Ubuntu Server 11.10 64-bit, although I don’t have enough memory to need 64 bits. In theory, 64-bit operating systems should be faster than 32-bit since there are more registers and a wider datapath, but I’ve also seen benchmarks that say they are slower for typical applications in Linux… Regardless, I figure it’s best for future expandability. And besides, I can’t stand to waste half of my CPU’s bits 🙂

The information I provide here can be found in various sites, blogs, and forums. But my goal was to pull the sprawled information together into a consolidated install script. Everyone will have specific needs and circumstances not covered here, but hopefully this at least helps. Disclaimer: do your own research, as I’m no server expert, especially not in the area of security. I also assume you have at least some understanding of Linux, partitioning, etc., and that you are familiar enough to know if my instructions will get you into trouble. There were a couple of general guides that were helpful to reference.

Before installing on my real server, I set up a test server inside a virtual machine on my PC (I use Oracle’s Virtualbox VM software). This has helped immensely, and I can test pretty much everything on the virtual server, even the RAID setup (I set up multiple virtual hard drives). The only thing I couldn’t test was the video recording, since I can’t emulate the video capture cards. But even then, MythTV has some features that help. For hardware-oriented applications like MythTV, I also suggest you do a test install on real hardware. I did a test on my actual server, by installing an extra hard drive. Once I did my tests, I could easily remove the hard drive or switch BIOS back to boot off my main drive.

Installing Ubuntu

If you have any old hard drives with existing data, I suggest you leave them disconnected until you finish the install. See the Ubuntu Software RAID reference for more info on setting up RAID. Keep in mind when choosing partition sizes that it is generally easier to grow a partition than it is to shrink it.

  1. By default, the installer autoconfig sets up the network using DHCP. I then go back later and manually change it to use a static IP address. You can also leave the network cable disconnected, which causes the network autoconfig to fail and gives you the option to configure the network manually (I’m sure there’s another way, but I didn’t research it too much). But I prefer to leave the cable connected to allow the installer to connect to the Internet when it needs to.
  2. Before you start, you might want to check your hard drives. Various manufacturers have their own check programs, but Seagate’s SeaTools works with any brand. I burn it to a bootable CD.
  3. Boot off of the Ubuntu installation CD and select your language.
  4. Check the disc for defects. Testing the memory might not be a bad idea, either. The memory test will continuously repeat until you reboot – I like to let it run overnight, or at least long enough to get a few passes, to make sure it is thoroughly tested. For some reason, the memory test locked up on my PC when I ran it off the Ubuntu CD, so I used a bootable memtest CD, which worked fine.
  5. Choose “Install Ubuntu Server”.
  6. Go through the language setup.
  7. Choose a hostname.
  8. Select your timezone.
  9. Choose “Manual” partitioning.
  10. You should see a list of existing partitions as well as options for creating new ones. Create partition tables on any drives that don’t have one – you get the option when you select the drive. Also remove any existing partitions on the primary drives. Be careful not to delete anything you want to keep. From this point forward, I assume your primary hard drives do not have any partitions. I had trouble deleting an old RAID/LVM partition on one of my primary drives since the installer said it was active (I assume it was activated by the installer). I rebooted and the partition was gone. It probably would have been better to just boot off a GParted CD, and remove the partition there.
  11. Choose “Configure software RAID” (see the Ubuntu Software RAID installation guide for more info). You will be prompted to save pending changes before you can continue. I happened to have an old swap partition in the system, and it said it would be formatted. I didn’t seem to have any choice in the matter, but I guess I don’t care since it will be going away.
  12. Select “Create MD device”. Choose “RAID1”.
  13. Select 2 for the number of active devices in the RAID array, and 0 for the number of spare devices
  14. Select your two primary hard drives as the RAID partitions. In my case, I chose /dev/sda and /dev/sdb. Note that we are selecting the whole hard drive, and not using numbered partitions.
  15. You will be prompted to save changes before you can continue. After that, select “Finish”.
  16. On the main screen, select “Configure the Logical Volume Manager”. You will need to confirm you are satisfied with existing changes before continuing.
  17. Select “Create volume group”. I named mine mainvg. This was about the most descriptive name I could think of.
  18. Select your RAID partition (mine is /dev/md0). Don’t pick the one that is marked unusable. You will once again need to confirm that you are satisfied with the setup.
  19. Select “Create logical volume”. This is your main operating system partition. Select to create it in your newly created volume group. Name the logical volume. I named mine ubuntu1110 to distinguish it from future versions of Ubuntu I might install in the volume group. I chose a size of 50G.
  20. Create another logical volume. This is the swap partition. I chose to make mine twice the size of my RAM (see the Ubuntu Swap FAQ). I named mine ubuntu1110swap.
  21. Once done, select “Finish”.
  22. On the main screen, select the main operating system partition. Choose to “Use as” Ext4. Set the mount point to /. Select “Done setting up the partition”.
  23. On the main screen, select the swap partition. Choose to “Use as” swap area. Select “Done setting up the partition”.
  24. Back on the main screen, select to “Finish partitioning and write changes to disk”.
  25. Select to boot system if RAID is degraded.
  26. Select to write partition changes to disk.
  27. Wait for the base system to install.
  28. Enter your admin user’s full name, account name, and password.
  29. I did not encrypt my home directory.
  30. Enter your proxy info. I left it blank.
  31. Wait.
  32. Choose “No automatic updates”. We’ll set those up later manually.
  33. Do not install any additional collections of software. We’ll install those as we go.
  34. Wait.
  35. Install GRUB in the master boot record.
  36. Ubuntu will spit out your CD. Select to continue and reboot.
  37. Once the server reboots, you should have a login prompt where you can log in with your admin user account.

Initial Configuration

 Default Text Editor

I use nano as my text editor. But throughout this guide, I will use the $EDITOR environment variable in my commands so that you can choose your editor.  To set the $EDITOR variable permanently:

  1. Edit this file: ~/.bashrc
  2. Add the following line at the end of the bashrc file:
    export EDITOR=nano
  3. The next time you log in, it should be set for you.

While we’re on the subject, when you are editing configuration files, you might want to back them up (copying them and tacking on a .bak extension seems like a popular way to do this).  For the most part, I leave this up to you.
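A quick sketch of that backup step, demonstrated on a throwaway temp file so it can run anywhere (the sshd_config path in the comment is just an example of a file edited later in this guide):

```shell
# In bash, cp file{,.bak} expands to: cp file file.bak
# e.g. on the server:  sudo cp /etc/ssh/sshd_config{,.bak}
# Demonstrated here on a temporary file:
tmpconf=$(mktemp)
echo "Port 22" > "$tmpconf"
cp "$tmpconf" "$tmpconf.bak"
if cmp -s "$tmpconf" "$tmpconf.bak"; then backup_ok=yes; else backup_ok=no; fi
echo "$backup_ok"   # -> yes
rm -f "$tmpconf" "$tmpconf.bak"
```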

Network Settings

Make the server use a static IP address – this way you can set up your router to do port forwarding, and you can reference the server using the IP address. You also need to check the DNS configuration.

  1. sudo $EDITOR /etc/network/interfaces
  2. Change the applicable network interface section to look like this (replacing the xxx‘s as appropriate):
    auto eth0
    iface eth0 inet static
        address xxx.xxx.xxx.xxx
        netmask xxx.xxx.xxx.xxx
        gateway xxx.xxx.xxx.xxx
  3. sudo $EDITOR /etc/resolv.conf
  4. I left the domain and search lines as they were. There should be a nameserver line for each DNS server. Adjust the nameserver lines as you feel appropriate. I use the OpenDNS DNS servers first, followed by my router’s DNS server.
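As a sketch, the resulting resolv.conf might look like this – 208.67.222.222 and 208.67.220.220 are OpenDNS’s public resolvers, while the router address is a placeholder you would replace with your own:

```
# leave your existing domain/search lines as they were
# OpenDNS public resolvers:
nameserver 208.67.222.222
nameserver 208.67.220.220
# your router (placeholder address - adjust to match your network):
nameserver 192.168.1.1
```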


Install an SSH daemon. This will allow you to connect to the server remotely using SSH.

  1. sudo apt-get install openssh-server

I also open another SSH port to listen on. This is so that I can forward the port through my router if I need to SSH in from outside my home (I don’t leave the forwarding enabled by default, though – I only enable it during the time I plan on using it). I could forward the standard port 22, but I like to use a different number to make it less susceptible to attack. Security by obscurity isn’t that great, but I figure it helps a little:

  1. sudo $EDITOR /etc/ssh/sshd_config
  2. Add this line (substituting in the port number you choose):
    Port 12345

Activate Power Button

My server sits in my living room, and I prefer to shut it down by pressing the power button on the front of the computer, like I can with my desktop PC.  I do this by installing the acpid package. I know that ACPI encompasses a lot of other power options, and I’m not sure exactly what else the acpid package activates besides the power button. I need to look into it, but I haven’t had any issues so far.

  1. sudo apt-get install acpid


At this point I like to restart the server using this command:

sudo shutdown -r now

To shut it down, you can use this command:

sudo shutdown -h now

Remote Login

Now that you’ve installed the SSH server, you can SSH into your server from a remote PC. Note that after a while you will see messages saying various filesystems will be checked on the next reboot. But, even after rebooting, the messages still appear, and no check appears to have happened. This appears to be a bug and is discussed in this forum.

Initial RAID Build

You will probably notice a lot of hard drive activity. This is because the server is still building the RAID array, i.e. mirroring the first drive to the second drive. The RAID array is not redundant until this is finished. You can view the progress with this command:

cat /proc/mdstat

Manual Update

Now would be a good time to install the latest updates.

  1. Update the local package index:
    sudo apt-get update

    You don’t have to do this – but if you run into problems with apt-get, this is the first thing to try to fix it.

  2. Do the update:
    sudo apt-get dist-upgrade

    dist-upgrade sounds like it will do a major upgrade, but it won’t. As I understand it, it’s just a more thorough version of the upgrade option. I don’t fully understand the difference, but there are many discussions out there if you want to Google it.

SSH Notes

During the setup, there are a couple of times where you will need graphics. I have a headless server, so I can’t see graphics even if I hook a monitor up to my PC. And it would be inconvenient anyway. Fortunately, you can view graphical windows via SSH using X11 forwarding. To connect to the server, I use the ssh command line client from within a Linux PC (or virtual machine). I just add a -X (that’s an upper case X) parameter to the command line to enable X11 forwarding. And then when I run a command on the server that spawns a window, it shows up on my local PC.

Note: I get this error the first time I SSH into my server with X11 forwarding enabled:

/usr/bin/xauth: file /home/andre/.Xauthority does not exist

But, it goes away on subsequent logins.

Besides SSHing into the server, you can also perform SFTP transfers to and from the server using the same SSH port. I use the FileZilla client to do SFTP.

Setting up Outgoing Email

Many system components will email the root account with statuses and problems. So, I set up the server to email out early on. Here are some references I used:

You need an email account and an SMTP server you can use to send email. I chose to use the email account that comes with my Internet Service Provider (ISP), which is Earthlink. You will need to get the specific information for the SMTP server you use. Some of the security setup may differ for your ISP. Keep in mind that emails are not secure and can be read by anyone who may intercept them, since they are in plain text.  I use the Postfix mail server software, although there are others.

First, install the packages.

  1. sudo apt-get install postfix mailutils
  2. When the config screen comes up, select “Internet Site”.
  3. Enter a system mail name. I use the same name as my server name. I’m still not sure if this is the right thing to do.

Create the generic database file. This tells Postfix what “From” email address should be used for each user account. If you send from an account that is not in this file, the email will probably be rejected by your email provider. After filling in the information, you generate a Postfix .db file, which provides quicker lookups.

  1. sudo $EDITOR /etc/postfix/generic
  2. Enter the following into the file. Substitute with the email address your server will be sending from, and adminuser with your admin user account. This sets the “From” address in your outbound email.
  3. sudo postmap hash:/etc/postfix/generic
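As a sketch, the generic map pairs local accounts with the outbound From address – the addresses below are placeholders, not values from my setup:

```
# local account     From address to use
root                youremail@example.com
adminuser           youremail@example.com
```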

Set up the email login database. Note this file contains a password, so we make sure to lock down the permissions first before entering in the info. Note that initially, we will be using the IP address of the SMTP server. Later, we will go back and use the host name of the SMTP server so that we are not hardcoding the IP address. I found this two-step process makes troubleshooting easier, especially given the way Postfix resolves host names.

  1. sudo touch /etc/postfix/sasl_passwd
  2. sudo chown root:root /etc/postfix/sasl_passwd
  3. sudo chmod 600 /etc/postfix/sasl_passwd
  4. sudo $EDITOR /etc/postfix/sasl_passwd
  5. Enter the following line. Substitute the IP address of the SMTP server you use (587 is the standard mail submission port), and the username and password you need to log into the email account:
    [xxx.xxx.xxx.xxx]:587 username:password
  6. Create the .db file:
    sudo postmap hash:/etc/postfix/sasl_passwd
  7. Verify the sasl_passwd and sasl_passwd.db files are locked down:
    ls -l /etc/postfix

Edit the main Postfix configuration file.

  1. sudo $EDITOR /etc/postfix/main.cf
  2. Edit the relayhost line to include the IP address and port of the SMTP server:
    relayhost = [xxx.xxx.xxx.xxx]:587
  3. Comment out the “TLS parameters” that start with smtpd/smtp (my email provider does not use TLS – you may need to do something different here for your email provider) and add the following lines:
    smtp_generic_maps = hash:/etc/postfix/generic
    # SASL parameters
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

Now let’s restart Postfix and do a test:

  1. sudo /etc/init.d/postfix restart
  2. Send a test message to yourself from the server. Substitute youremail@example.com with the email address where you would like the server to send emails.
    echo "test message" | mail -s "test" youremail@example.com

Did you get your email? If not, you can check a couple of places:

  • Check the log files /var/log/mail.log and /var/log/mail.err.
  • Check your admin user’s mail. Run this while logged in as the admin user:
    mail
  • Check root’s mail. Run:
    sudo mail

Once you’ve got it working, let’s replace the SMTP IP address with a hostname.

  1. sudo $EDITOR /etc/postfix/sasl_passwd
  2. Change the IP address to the host name of the SMTP server, keeping the rest of the line the same (smtp.yourisp.com is a placeholder):
    [smtp.yourisp.com]:587 username:password
  3. sudo postmap hash:/etc/postfix/sasl_passwd
  4. sudo $EDITOR /etc/postfix/
  5. Replace the IP address here too:
    relayhost = [smtp.yourisp.com]:587

Restart Postfix and do another test. If you have problems with resolving the server name, check this file to make sure it is accurate: /var/spool/postfix/etc/resolv.conf. For security reasons, Postfix uses this file instead of the default config file in /etc/resolv.conf. However, the two files appear to be kept in sync automatically.

Add an alias so that email sent to root and to your admin user will be forwarded to your email instead of being stored locally.

  1. sudo $EDITOR /etc/aliases
  2. Add these lines, substituting in your email address and admin user name:
    root: youremail@example.com
    adminuser: youremail@example.com
  3. Process the aliases file:
    sudo newaliases

Restart Postfix and test again. This time, send the email to root and your admin user.

  1. echo "test message to root" | mail -s "test to root" root
  2. echo "test message to admin user" | mail -s "test to admin user" adminuser

I would like to figure out how to specify what computer the email is coming from – this would come in handy if you have multiple computers emailing you (I can’t distinguish between emails from my test VM server and my real server). I also noticed that when I receive the email, the “To” address contains the email account that the email is being sent from (i.e. the “From” address). Not sure why. I need to look into it….

I once had a problem where my cron tasks were getting an error when sending email out, even though email was working fine elsewhere. I don’t remember the exact details of what was wrong, but I remember I had to edit the mydestination list inside the /etc/postfix/main.cf file. To test things out, I make a cron task that will email me every minute.

  1. sudo $EDITOR /etc/cron.d/emailtest
  2. Put in this line (the change should be picked up automatically):
    * * * * * root echo "This is a cron email test"
  3. Look in the system log, and you should eventually see the job run:
    cat /var/log/syslog
  4. Verify you got an email.
  5. Delete the task:
    sudo rm /etc/cron.d/emailtest

Running Jobs

Background Jobs

There are times when you need to run long jobs, such as copying files from one folder to another, or doing backups.  You can use the bg command or add an ampersand at the end of the command line, but I’ve had trouble getting these to work the way I want them to.  And trying to use them with sudo, or redirecting output can get tricky.  I have lately gotten into the habit of using the at command instead. This will run the job as a separate process, so you don’t need to stay logged in, and it will email you the results.

To run a job now (the -m option tells at to send an email to you even if the job does not have any output):

at now -m

To run a job at midnight:

at 0:00 -m

To run a job at midnight with elevated rights:

sudo at 0:00 -m

After entering the above command lines you will get a “>” prompt to enter the actual commands. You can enter multiple commands to run in a row. When you are finished, hit Ctrl+D to finish. at will email you any output from the commands. The at man page has more details and also other commands to manage queued jobs.

Cron Jobs

You can make regularly-scheduled jobs using the cron daemon.  You can do this in one of the following ways:

  1. Add an entry to the /etc/crontab file.
  2. Add a file in the /etc/cron.d folder.
  3. Add an entry to one of the following folders inside the /etc folder: cron.hourly, cron.daily, cron.weekly, or cron.monthly.

I won’t go into details, but there is plenty of information available.
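As an illustration, an entry in /etc/crontab or an /etc/cron.d file uses this format (the schedule and command here are hypothetical – this one runs at 2:30 am every Sunday):

```
# m  h  dom mon dow  user  command
30   2  *   *   0    root  /usr/local/bin/weekly-report
```

Files dropped into the cron.hourly/daily/weekly/monthly folders are plain scripts and don’t need the schedule fields.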

Redirecting Output

When running commands, you might want to redirect to a file by adding this after the command:

> ~/logfile.log 2>&1

Or, you can email root the output by adding this after the command:

2>&1 | mail -s "My Subject" root

I include 2>&1, which redirects the standard error output to standard output, so that errors are included in the file. Note that when writing to a file, the order matters: the redirect to the file must come before the 2>&1. You may not always want the errors captured, in which case you can omit the 2>&1 – then the error output shows up in your command window. For example, you may want to run a command with the at command, redirecting normal output to a file, but letting at email you the errors.
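To see the effect, here is a minimal, self-contained sketch using a throwaway function (not a real backup command):

```shell
#!/bin/sh
# A command that writes one line to stdout and one to stderr.
emit() {
  echo "normal output"
  echo "error output" >&2
}

log=$(mktemp)

# Point stdout at the file first, then point stderr at stdout,
# so both streams land in the log file.
emit > "$log" 2>&1

result=$(cat "$log")
echo "$result"
rm -f "$log"
```

Both lines end up in the log. With the order reversed (2>&1 before the file redirect), the error line would still go to the terminal.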

RAID Failover Test

Now that we have email working, this is a good time to test the RAID failover. This will make sure you get a notification, and also give you practice rebuilding the array. You need to wait until the RAID array has finished its initial build after the install. Unfortunately, doing this test will require the array to be built a second time.

  1. Pick a primary drive to disconnect. Figure out the device of the RAID partition that is on that drive (e.g. /dev/sda1). To list hard drives/partitions, use this command:
    sudo fdisk -l

    While you’re at it, now would be a good time to make a list of your hard drive devices for future reference.

  2. Disconnect the drive. This will be your “failed” hard drive. I do this while the server is off.
  3. Start up the server. It should boot off of your remaining hard drive.
  4. You should get an email, as well as a warning at the login prompt.
  5. Login.
  6. View the RAID status with the following command.
    cat /proc/mdstat

    There should only be one drive in the array.

  7. Now re-attach the “failed” hard drive (again, I do this while the power is off). The hard drive is already partitioned properly. But, if you were attaching a new hard drive, you would need to set up a RAID partition on it (the RAID partition needs to be of type “Linux raid autodetect”).
  8. Add the RAID partition on the “failed” hard drive back into the array (assuming the RAID array is md0 and the re-attached RAID partition is sda1):
    sudo mdadm --add /dev/md0 /dev/sda1
  9. Now, the re-attached RAID partition will be re-built to mirror the RAID partition on your existing hard drive, just like it did after the initial install. To monitor progress, look at mdstat again:
    cat /proc/mdstat
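For reference, a degraded two-disk mirror looks something like this in mdstat – the device name and block count here are hypothetical:

```
md0 : active raid1 sdb1[1]
      976630336 blocks [2/1] [_U]
```

The [2/1] and [_U] indicate that only one of the two members is up; after re-adding the partition you will see a recovery progress line until it returns to [UU].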

You can also verify you have GRUB installed on both hard drives. Try booting off each primary hard drive by selecting it as the boot drive in your BIOS. You should be able to boot off of either one.

Hard Drive SMART Monitoring

Let’s start monitoring the hard drives – this will hopefully give us an early warning before a hard drive fails. I use smartmontools. I used the following guides:

Here we go:

  1. Install the package:
    sudo apt-get install smartmontools
  2. Make sure SMART is enabled for each of your hard drives. Run this for each drive:
    sudo smartctl -s on /dev/sda
  3. Run this command to view SMART information for a given drive:
    sudo smartctl -a /dev/sda
  4. Edit the configuration file:
    sudo $EDITOR /etc/smartd.conf

    We don’t want to use the default “auto” configuration, so comment out the following line (see the comments above it for more info):

    DEVICESCAN -d removable -n standby -m root -M exec /usr/share/smartmontools/smartd-runner

    Add a line for each hard drive. The following line is typical of what I use. I will not explain everything here, but you can use the guides. Unfortunately figuring everything out can get tricky. The temperature limit can take time to adjust properly.

    /dev/sda -a -o on -S on -W 2,0,34 -R 5 -m root -s (S/../../7/02|L/../01/./02)
  5. Restart smartd to pick up the changes (I guess you can also send it a signal, but I just restart it):
    sudo /etc/init.d/smartd restart
  6. To do a test of the email notification, edit the smartd.conf file and add -M test onto each hard drive line. For example:
    /dev/sda -a -o on -S on -W 2,0,34 -R 5 -m root -s (S/../../7/02|L/../01/./02) -M test

    When you restart smartd you will get an email. You can then remove the -M test directive and restart again. Another way you can test is to set the temperature limit to a low value so that the hard drive temperature exceeds the limit.
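For reference, the -s self-test schedule used above is a regex over the fields type/month/day-of-month/day-of-week/hour:

```
# -s (S/../../7/02|L/../01/./02)
#    S/../../7/02   short self-test every Sunday (day-of-week 7) at 02:00
#    L/../01/./02   long self-test on the 1st of each month at 02:00
```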

Creating LVM Partitions

Here are the directions to create a new LVM partition:

  1. Create the logical partition. Adjust the -L parameter to the size you want – in this example it is set to 1 GB.  Remember, it is generally easier to grow a partition than to shrink it.  mypartition is the partition name, and mainvg is the volume group it should be created in:
    sudo lvcreate -L 1G -n mypartition mainvg
  2. Create a file system on the new partition (in this case I use ext4):
    sudo mkfs.ext4 /dev/mainvg/mypartition
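Since growing is the easy direction, you can start small and extend later – a sketch using the hypothetical names above (the file system must be resized after the logical volume):

```
sudo lvextend -L +1G /dev/mainvg/mypartition
sudo resize2fs /dev/mainvg/mypartition
```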

To mount the partition:

  1. Make the mount point (in this example, I make it under the /srv folder):
    sudo mkdir /srv/mymountpoint
  2. sudo $EDITOR /etc/fstab
  3. Add a line (<tab>‘s should be replaced with actual tabs):
    /dev/mainvg/mypartition<tab>/srv/mymountpoint<tab>ext4<tab>defaults<tab>0<tab>2
  4. sudo mount /srv/mymountpoint

Samba File Sharing

Now that you have the basic configuration done, let’s set up file sharing.

I like to create a separate partition for my file share data. This partition will contain multiple folders, one for each share that I want to create.  Create the partition and mount it.  I name my partition usershare and mount it in a folder named /srv/usershare.

Set the permissions on the new share folder:

  1. I like to give owner and group full permissions, but not let anyone else into the shares:
  2. sudo chmod 770 /srv/usershare
  3. I create a group just for my shares:
    sudo groupadd usershare
    sudo chgrp usershare /srv/usershare
  4. Make sure to add any users to the group who need access. I add my admin user:
    sudo adduser adminuser usershare
  5. Log out and back in to let the group permissions take effect.

Now let’s set up a share folder:

  1. sudo mkdir /srv/usershare/myshare
  2. Copy any existing files into the user share. I mount the old partition from which I want to copy under /mnt. Assuming the old partition is mounted under /mnt/source, I use the following command to copy the data over (there are many other ways to do this, though). See the rsync man page for details on the options.
    sudo rsync -av --progress /mnt/source/ /srv/usershare > ~/usershare_sync.log

    You can monitor progress by looking at the log file:

    cat ~/usershare_sync.log
  3. I now set the permissions on all files underneath the share:
    sudo chmod -R 770 /srv/usershare/myshare
    sudo chgrp -R usershare /srv/usershare/myshare

Now let’s set up Samba:

  1. sudo apt-get install samba
  2. sudo $EDITOR /etc/samba/smb.conf
  3. Uncomment this line:
    security = user
  4. Add a section for each share, using this as a template:
    [myshare]
        comment = My share
        path = /srv/usershare/myshare
        browsable = yes
        guest ok = no
        read only = no
        create mask = 0770
        directory mask = 0770
        force group = usershare
  5. I also uncomment all the lines in the homes section. There is a section to make a cdrom share. I have not used this yet, but it seems intriguing. I don’t have a printer hooked to the server, so I don’t do anything with the printer section. I would like to hook my inkjet printer to the server, but I have not had great experiences hooking these printers up to Linux machines. They tend to have limited functionality when not using the proprietary drivers that come with Windows.
  6. Let’s set up Samba to sync its accounts with the system accounts on the server:
    sudo apt-get install libpam-smbpass
  7. sudo restart smbd
  8. At this point I still can’t access the Samba share. So, per this thread, I log out and back onto the server again (using SSH). Once I do that, it appears my account is synced up and I can access the new share via Samba at this path: \\serverip\myshare . Not sure how the syncing works when you have multiple users….


Backup Hard Drive Setup

I have a third hard drive permanently mounted in my server for backups. It’s still physically located with my primary hard drives, and is always online, so it’s not the most secure backup. But, it at least provides a separate drive in case my RAID array gets corrupt, and I also store multiple backups of the same location. So, if I find out I accidentally deleted a file before my last backup, I can still go back further in time to get it from an earlier backup. To mount the backup hard drive:

  1. Run fdisk, passing in the backup hard drive device (I’ll assume the device is /dev/sda and we are creating a partition /dev/sda1):
    sudo fdisk /dev/sda
  2. At this point I assume you are ready to create a new partition.
  3. At the prompt, enter n to create a new partition.
  4. Go through the partition setup. I will be using an ext4 partition, so I leave the default partition type (id=83/system=Linux). When you are done, enter w to write the partition table and exit fdisk.
  5. Create the file system on the new partition:
    sudo mkfs.ext4 /dev/sda1
  6. Find the UUID of the new partition:
    sudo blkid /dev/sda1
  7. Make the mount point:
    sudo mkdir /srv/backup
  8. sudo $EDITOR /etc/fstab
  9. Add a line, using the UUID you just found (<tab>‘s should be replaced with actual tabs):
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx<tab>/srv/backup<tab>ext4<tab>defaults<tab>0<tab>2
  10. sudo mount /srv/backup
  11. Set permissions on the new folder as you like.

Manual Backup

I am working on a Perl script to do routine backups of my server. For now, I just manually run tar. I run the following command to do a backup:

sudo tar -czf "/srv/backup/backupname_`date +%Y-%m-%d_%H%M%S`.tar.gz" -C / srv/usershare/backupfolder

I generally use the at command as described earlier, stringing together multiple backup commands for each folder I want to back up.
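The tar invocation can be exercised end-to-end on scratch directories – a self-contained sketch, with temporary paths standing in for the real /srv layout:

```shell
#!/bin/sh
# Build a timestamped .tar.gz the same way as the backup command above.
src=$(mktemp -d)
dest=$(mktemp -d)
echo "hello" > "$src/file.txt"

stamp=$(date +%Y-%m-%d_%H%M%S)
archive="$dest/backupname_${stamp}.tar.gz"

# -C changes directory before archiving, so the archive stores
# relative paths (mirroring the -C / srv/usershare/backupfolder trick).
tar -czf "$archive" -C "$src" file.txt

# List the archive contents; this prints file.txt.
tar -tzf "$archive"
```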

Take a break

At this point, I take a break. Now that I’ve got the file shares set up, the rest can be set up later. I will be posting additional parts describing the rest of my setup.