Exploring Maximum Power Transfer with 3D Plots in Matlab

I recently took a course in circuit analysis where we learned the maximum power transfer theorem.  While there are many plots of the power transfer in the purely resistive case, I did not see any for the case where the source and load are impedances (with both resistance and reactance).  I played around with 3D plots to get a better understanding of power transfer and to practice using Matlab.  Here is a write-up of what I did.

Introduction

We will analyze the following circuit:

Image: “Source and load circuit Z“. Licensed under CC BY-SA 3.0 via Wikimedia Commons.

The relevant elements are as follows:

  • V_S is the voltage source, which we assume to be a steady-state sinusoidal source.
  • Z_S=R_S+X_Sj is the source or internal impedance (with resistance R_S and reactance X_S).
  • Z_L=R_L+X_Lj is the load impedance (with resistance R_L and reactance X_L).

The idea is to find the amount of power transferred to the load by the voltage source given a fixed source impedance.  We are particularly interested in finding the load impedance that will give us the maximum power transfer to the load.  There are many references that discuss these concepts, so I will not go into more detail here.  I will note however that there is an interesting article “Non-Calculus Derivation of the Maximum Power Transfer Theorem” by Dr. Kenneth V. Cartwright which discusses the theory in a non-traditional manner.

I will show the equations, but will not derive them – I will use the same notation and form as found in the “Maximum power transfer theorem” Wikipedia article.  I will include short Matlab code blocks throughout this document.  These are meant to be run in order (in other words, each code snippet will depend on variables or plots from previous snippets).  I am no Matlab expert, so I am sure there are better ways to write and present the code. For variables, I use the following convention: scalar constants are designated with lower case letters, whereas vectors and matrices are designated with upper case letters. For simplicity, I use the value 1 for all constants.  To get the most detail from the plots I include, you will need to click on the images.

Power Transfer with Pure Resistance

For the purely resistive case (X_L=X_S=0), the average load power as a function of load resistance (we assume the source voltage and source resistance are constants) is:

P_L(R_L)=\frac{1}{2}\frac{|V_S|^2R_L}{(R_S+R_L)^2}

The plot of this power function is found in many places. We can plot it ourselves as follows:

vs = 1;        % Source voltage is normalized to 1V
rs = 1;        % Source resistance is normalized to 1 ohm

RL = 0:0.1:7;  % Use load resistance in the range (0,7) ohms

% Calculate power
P = 0.5 * (vs.^2 .* RL) ./ ((rs + RL).^2);

% Plot the power
figure;
plot(RL,P);
title('Power Transfer (Resistive Case)');
xlabel('R Load (ohms)');
ylabel('Average Power Transfer');

MaxPowerXfer_Resistive

As can be confirmed by the graph, maximum power transfer occurs at R_L=R_S=1, where P=\frac{1}{8}W.
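
We can also confirm this numerically from the vectors computed above (a quick check using Matlab's max function; nothing is assumed beyond the variables already defined):

% Find the maximum power and the load resistance where it occurs
[pmax, idx] = max(P);
fprintf('Maximum power %.4f W at RL = %.1f ohms\n', pmax, RL(idx));
% Should print: Maximum power 0.1250 W at RL = 1.0 ohms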

Power Transfer with Impedance

The load power as a function of load impedance (as before, we assume the source voltage and source impedance are constants) is:

P_L(R_L,X_L)=\frac{1}{2}\frac{|V_S|^2R_L}{(R_S+R_L)^2+(X_S+X_L)^2}

We can plot this function as follows:

xs = 1;        % Source reactance is normalized to 1 ohm

XL = -5:0.1:5; % Use load reactance in the range (-5,5) ohms

% Build a mesh grid for the 3D plot
[RL_mesh,XL_mesh] = meshgrid(RL, XL);

% Calculate power
P = 0.5 * (vs.^2 .* RL_mesh) ./ ((rs + RL_mesh).^2 + (xs + XL_mesh).^2 );

% Plot the power
figure;
surf(RL_mesh, XL_mesh, P);
title('Power Transfer (Reactive Case)');
xlabel('RL (ohms)');
ylabel('XL (ohms)');
zlabel('Average Power Transfer');

MaxPowerXfer_Reactive

Lines of Maximum Power Transfer

For a given R_L we can find the X_L that gives the maximum power by solving the equation \frac{\partial P_L}{\partial X_L}=0 to get:

X_L=-X_S

Note that this reactance turns out to be independent of R_L.  We can plot a line to follow this maximum power:

% Calculate reactance where power is maximum for each resistance (note it is constant)
XLMAX = -xs * ones(size(RL));

% Calculate maximum power along the line
PMAX = 0.5 .* (vs.^2 .* RL) ./ ((rs + RL).^2 + (xs + XLMAX).^2 );

% Plot the line on top of the existing surface
hold on;
plot3(RL, XLMAX, PMAX,'LineWidth',2,'Color',[0 0 0]);

Likewise, for a given X_L we can find the R_L that gives the maximum power by solving the equation \frac{\partial P_L}{\partial R_L}=0 to get:

R_L(X_L)=\sqrt{R_S^2+(X_S+X_L)^2}

We can plot a line to follow this maximum power as well:

% Calculate resistance where power is maximum for each reactance
RLMAX = sqrt(rs^2 + (XL + xs).^2);

% Calculate maximum power along the line
PMAX = 0.5 .* (vs.^2 .* RLMAX) ./ ((rs + RLMAX).^2 + (xs + XL).^2);

% Plot the line on top of the existing surface
hold on;
plot3(RLMAX, XL, PMAX,'LineWidth',2,'Color',[0 0 0]);

MaxPowerXfer_ReactiveWithLines

Maximum power occurs where the lines intersect. As can be confirmed on the graph, this occurs when the source and load impedances are complex conjugates of each other:

Z_L=Z_S^*=R_S-X_Sj=1-j

As before, the maximum power is P=\frac{1}{8}W.
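
To make the intersection easier to see, we can mark the conjugate-match point on the surface. This continues from the previous snippets; the coordinates come directly from the results above:

% Mark the conjugate-match point (RL = rs, XL = -xs) on the surface
rl_match = rs;
xl_match = -xs;
p_match = 0.5 * vs^2 * rl_match / ((rs + rl_match)^2);   % reactances cancel, = 1/8 W here
hold on;
plot3(rl_match, xl_match, p_match, 'k.', 'MarkerSize', 25);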

Efficiency

The efficiency (power dissipated in the load versus total power dissipated) is a function of resistance only:

\eta(R_L)=\frac{R_L}{R_S+R_L}

In the plots so far, the surface is colored automatically to match the magnitude of the power (blue indicates a smaller magnitude and red indicates a larger magnitude). We can also specify our own color scheme. For example we could color the surface to indicate the degree of efficiency:

% Calculate efficiency
EFF = RL_mesh ./ (rs + RL_mesh);

% Plot a new graph which indicates efficiency using color
figure;
surf(RL_mesh, XL_mesh, P, EFF);
title('Power Transfer (with Efficiency)');
xlabel('RL (ohms)');
ylabel('XL (ohms)');
zlabel('Average Power Transfer');

MaxPowerXfer_ReactiveWithEfficiency

Note that the numbers along the vertical axis of the graph still indicate power magnitude – efficiency is only indicated by color.  As expected, the efficiency increases asymptotically to 100% as load resistance increases (dark blue indicates 0% and dark red indicates close to 100%).
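
One consequence worth pointing out: at the point of maximum power transfer (R_L=R_S), the efficiency is only 50%, since half the power is dissipated in the source resistance. A one-line check using the efficiency formula:

% Efficiency at the matched load (RL = rs)
eff_match = rs / (rs + rs)   % = 0.5, i.e. 50% efficiency at maximum power transfer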

Ant Build Script for Android Icons

I have been working on two small Android apps recently.  I did not want to manually create the icons since I do not have a lot of graphic design experience.  Because the icons are simple and do not need to be particularly original, I decided to build them mostly out of open source icons.  Since I was going to release the source code for my apps anyway, I ended up using icons from Wikimedia Commons that were released under the GPL license.

I wanted to automate this process as much as possible, so I created Ant build scripts.  I thought I would share the build scripts here.  I am sure that most Android icons are created manually using graphic design tools, but I still think build scripts would be useful for certain tasks.  For example, developers need to provide several duplicate launcher icons of different sizes (there are websites which provide this automated resizing functionality).  Also, it is recommended that you optimize your icons to make them smaller before packaging.

Although I am writing about Android icons, these build scripts could also be used for other types of icons: Windows ICO files or Linux SVG icons.  In fact, one of the build scripts I made does create these icons (see first icon below).

I still had to do a lot of manual editing.  Fortunately the original icons were in the SVG format, which uses vector graphics encoded in XML, so I was able to edit them using a text editor.  I also had to use a lot of trial and error with the build scripts to get everything sized and arranged properly.

The build scripts are designed to run using Apache Ant on Ubuntu Linux, and require the following tools to be installed (these are the tools used by the build steps below):

  • Apache Ant
  • Inkscape (used to rasterize the SVG files)
  • ImageMagick (used to draw, crop, resize, and compose the graphics)
  • OptiPNG (used to optimize the final PNG files)

If your source graphics are vector SVG files, you can edit them directly using a text editor (I am sure there are dedicated tools for editing them as well).  If you want to manipulate them using ImageMagick, you will need to rasterize them first, which will result in a loss of quality.  The SVG files can be easy to work with since they are basically scripts for drawing an image.

The exact build process will vary widely depending on your circumstances, but the general flow I used is as follows (a rough Ant sketch of a few of these steps appears after the list):

  1. Convert the SVG files into raster/bitmap files (using Inkscape).
  2. Draw any required graphics (using ImageMagick).
  3. Crop/resize the graphics file as required (using ImageMagick).
  4. Compose the graphics into a final icon (using ImageMagick).  Since the final image is a bitmap, make it at least as large as the largest size you need.  In other words, you always want to scale the image smaller when you resize it – scaling it larger will result in a loss of detail.  Make sure launcher icons have a transparent background.
  5. Resize the icon into the various required sizes (using ImageMagick).
  6. For the feature graphic, I used the launcher icon, but resized it (it uses a different aspect ratio than the launcher icons) and filled it in with a white background (using ImageMagick).
  7. Optimize the final icons that will be packaged with the app (using OptiPNG).
  8. The resulting icons can be placed directly into the Android folder hierarchy.
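
Below is a rough Ant sketch of steps 1, 5, and 7.  It is not taken from my actual build scripts – the target names, file names, and sizes are placeholders – and it assumes the inkscape, convert, and optipng commands are installed and on the PATH:

<project name="icons" default="icons" basedir=".">

  <!-- Step 1: rasterize the source SVG at a large size using Inkscape -->
  <target name="rasterize">
    <exec executable="inkscape">
      <arg value="--export-png=build/icon-512.png"/>
      <arg value="--export-width=512"/>
      <arg value="--export-height=512"/>
      <arg value="src/icon.svg"/>
    </exec>
  </target>

  <!-- Step 5: resize into one of the launcher icon sizes using ImageMagick -->
  <target name="resize" depends="rasterize">
    <exec executable="convert">
      <arg value="build/icon-512.png"/>
      <arg value="-resize"/>
      <arg value="48x48"/>
      <arg value="res/drawable-mdpi/ic_launcher.png"/>
    </exec>
  </target>

  <!-- Step 7: optimize the PNG that will be packaged using OptiPNG -->
  <target name="icons" depends="resize">
    <exec executable="optipng">
      <arg value="-o7"/>
      <arg value="res/drawable-mdpi/ic_launcher.png"/>
    </exec>
  </target>

</project>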

Some side notes for non-Android icons:

  • Windows ICO files contain multiple copies of the icon image at various sizes.  The ICO file can be generated using ImageMagick (see the first icon’s build script).
  • The version of Linux I use (Ubuntu 12.04) uses SVG files for icons.  Since these use vector graphics, there is no need to perform many (or any) of the build steps if you use SVG as your source graphics – this is because many of the build steps involve working with bitmap files.  I am not familiar with all the types of Linux icons – I am sure there are other types besides SVG depending on your distribution and window manager.

When the scripts are run, all tasks are re-run even if dependencies have not changed.  It would be possible to only regenerate artifacts that are out of date, but I did not feel it was worth the effort in my case.

First Icon

Final icon created by build script:

WebShortcut_ic_launcher-web

This icon is basically the composite of two original icons.  I manually edited the arrow SVG file to remove its shadow and get this result.  I then manually added the arrow to the globe icon.  The build script was only used to rasterize the icon and perform cropping, resizing, etc.

Note that this build script also creates an ICO file (for Windows) and an SVG file (for Linux), which are there for future use.

The build resources can be viewed here:

Second Icon

Final icon created by build script:

ShareFile_ic_launcher-web

This icon was a little more complex than the first.  The green “share” logo was drawn using ImageMagick and then overlaid onto the “document” icon.  I manually edited the document icon to remove the wrench and get this result.

The build resources can be viewed here:

Use “sdelete -z” when Shrinking a Windows Guest’s Virtual Hard Drive

I was installing Windows 7 in a VirtualBox Virtual Machine, and tried to shrink the dynamic virtual hard drive (vdi format). There are several guides out there explaining how to do this. SDelete is commonly used to zero out the virtual hard drive’s free space before compacting the hard drive file.  As the SDelete manual page states, “-z” is the correct option to use for this purpose.  However, many of the guides I read use the “-c” option.  This is counterproductive since it results in random data being written to the free space, thus causing the virtual hard drive file to expand to its maximum size (in my case, this filled my host PC’s hard drive).  This issue has been discussed quite a bit in forums and blog comments; “-c” at one time was the correct option to use, but the semantics of the “-c” and “-z” options changed with one of the recent SDelete releases (I understand it was with version 1.6).

Conclusion: With the latest version of SDelete, use the “-z” option when you are shrinking a virtual hard drive.
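
For example, run this inside the Windows guest before compacting the drive (just the basic form of the command – substitute the drive letter of the volume you want to zero):

sdelete -z c: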

As a side note, it took a long time for SDelete to finish on my machine – I think it was because I made my virtual hard drive very large.  I chose a size much larger than I actually needed, since I thought there would be no disadvantage to having too much space (since it is a dynamic drive, the extra virtual space takes no physical space).  In hindsight, maybe I should have chosen a smaller size.

 

Update: I found this blog entry, which has a similar post.

VirtualBox – PowerShell Script to Start a Virtual Machine with the Maximum Number of Monitors

With the help of a Plugable UGA-2K-A, I sometimes attach an extra monitor to my PC.  I have a VirtualBox virtual machine that I like to run across all of my physical monitors, and I used to manually change the virtual machine’s Monitor Count when connecting or disconnecting the extra monitor.  I now have a PowerShell script which automatically detects the number of monitors connected to my PC, sets the Monitor Count of the virtual machine, and then starts the machine.  I am using the script on a Windows 8 host. I posted the script in this GitHub Gist.
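
The actual script is in the Gist, but the basic approach is roughly the following sketch (the VM name is a placeholder, and the VBoxManage path shown is just the default install location):

# Count the monitors attached to the host
Add-Type -AssemblyName System.Windows.Forms
$monitorCount = [System.Windows.Forms.Screen]::AllScreens.Count

# Set the VM's Monitor Count and start it ("MyVM" is a placeholder)
$vboxManage = "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe"
& $vboxManage modifyvm "MyVM" --monitorcount $monitorCount
& $vboxManage startvm "MyVM"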

Update: I found this blog entry, which has a similar script (although some of the mechanics are different).

Configuring OpenOCD with an Olimex ARM-USB-TINY-H in Ubuntu 12.04 64 bit

I have an Olimex ARM-USB-TINY-H JTAG programmer which I recently installed in Ubuntu 12.04 64 bit (running inside VirtualBox).  I am using the programmer with OpenOCD.  I had some old instructions I wrote up, which I updated to work with 12.04.  I had some issues (due to some stupid mistakes on my part) and I came across this post while doing research.  My instructions are generally the same, but they differ somewhat in the details.  I decided to post what I have in case it helps anyone out.

References

Configure the Programmer

The programmer uses the FTDI chipset, so we need to install the FTDI drivers – we will use the libFTDI driver:

sudo apt-get install libftdi-dev libftdi1

Now Ubuntu should recognize the programmer when it is plugged in. But, by default it requires root privileges to use. Therefore, we need to set up a udev rule to change the permissions.  This rule assigns the device to the plugdev group – which was introduced in Linux for hot-pluggable devices – and then gives the group read and write access.  Make sure your user is in the plugdev group; my user was in it by default.

  1. Create a file /etc/udev/rules.d/olimex-arm-usb-tiny-h.rules
  2. Put this single line in the file:
    SUBSYSTEM=="usb", ACTION=="add", ATTR{idProduct}=="002a", ATTR{idVendor}=="15ba", MODE="664", GROUP="plugdev"

Install OpenOCD

  1. Change into the directory where you will download and build OpenOCD. I use my home folder:
    cd ~
  2. You can download the distribution from the OpenOCD website, but I prefer to get the source code from its git repository.  If you go down this route, you will need to have some extra packages:
    sudo apt-get install git libtool automake texinfo
  3. Get the source code
    git clone git://openocd.git.sourceforge.net/gitroot/openocd/openocd
    cd openocd
  4. Check the tags to see what versions are available (you can also reference the OpenOCD website):
    git tag -l

    The version distributed with my programmer was 0.3.0, but that is way out of date at this point.  At the time of writing, the latest version is 0.6.0, so that’s what I use:

    git checkout v0.6.0
  5. Do the build – this uses the libFTDI drivers we installed earlier:
    ./bootstrap
    ./configure --enable-maintainer-mode --enable-ft2232_libftdi
    make
    sudo make install

Test OpenOCD

Make sure the programmer is plugged in to the computer and into a board/microcontroller, and then run openocd (use the appropriate target for your microcontroller).  If there is a problem connecting to your programmer, you should quickly get an error message about not being able to open the ftdi device:

openocd -f interface/olimex-arm-usb-tiny-h.cfg -f target/stm32f1x.cfg

Troubleshooting tips

  • If openocd cannot find your FTDI device, run openocd as root (e.g. using sudo).  If that works, then you have a permission issue.
  • Run this command to view attached USB devices:
    lsusb

    You should see something similar to this line:

    Bus 001 Device 002: ID 15ba:002a Olimex Ltd. ARM-USB-TINY-H JTAG interface
  • Devices can be found under /dev/bus/usb/001/004 (in this example 001 is the bus and 004 is the device).  If the udev rule is working, this device file should have permissions as described above.  If the rule is not working, the device file will belong to the root group.
  • A /dev/ttyUSB device is not created by default, and openocd does not need it if it is using libFTDI (the libFTDI driver does not rely on the kernel drivers).
  • Run this command to get detailed attributes for a device – useful for modifying or troubleshooting a udev rule (substitute your bus and device number):
    udevadm info -a -p $(udevadm info -q path -n /dev/bus/usb/001/004)
  • Run this command to test your udev rule against the device and see why it does not work (substitute your bus and device number):
    udevadm test /dev/bus/usb/001/004
  • The OpenOCD interface and target cfg scripts are located under this folder by default: /usr/local/share/openocd/scripts

Problem Using Windows Encryption with Dropbox Folder

I have Dropbox installed on my Windows 7 PC, and decided to encrypt the Dropbox folder for added protection.  While TrueCrypt seems popular, I decided to use Windows’ built-in encryption.  After some time, I noticed that some of the files in my Dropbox folder were not encrypted.  This was odd since the default Windows behavior is to automatically encrypt new files placed inside an encrypted folder.  Eventually, I figured out that when Dropbox creates a file (by syncing it from another PC), the new file is not being encrypted.  I tried the following tests:

  1. I created a new file locally under the Dropbox folder.  It was automatically encrypted.  This is normal behavior.
  2. I copied an existing local file from another folder into the Dropbox folder, it was also automatically encrypted.  Again, this is normal behavior.
  3. I created a file on another machine, and let Dropbox sync it to my Windows 7 PC.  The synced file was not encrypted.  This is not what I expected to happen – I expected it to be encrypted like in tests 1 and 2.

The folder looks like this after the tests (Windows highlights the encrypted folders green):

My interim solution is to occasionally re-encrypt the entire folder at the command line using this command:

cipher /E /S:c:\Users\XYZ\Dropbox

After which I am supposed to run this command for good measure (I understand this command wipes the free space on the whole C: drive):

cipher /W:c:\Users\XYZ\Dropbox

It is recommended that you exit Dropbox during this process to free any file locks it may have (don’t forget to start it back up afterwards).  This is my first experience with the cipher command, but it appears to work.  It would be nice to be able to stop and start Dropbox from the command line as well, but I haven’t found a good way to do it yet.
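
To save retyping, the two cipher commands can be dropped into a small batch file (a sketch only – adjust the path to your own Dropbox folder):

@echo off
rem Re-encrypt anything Dropbox synced in unencrypted
cipher /E /S:c:\Users\XYZ\Dropbox
rem Wipe the free space for good measure
cipher /W:c:\Users\XYZ\Dropbox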

I contacted Dropbox support, and they recommended that I just use volume-based encryption (like TrueCrypt) and avoid Windows encryption altogether.

Car Amplifier Reverse Polarity Repair

I wrote this up a long time ago, but never finished it until now.  I took pictures, but unfortunately it looks like I lost them during a hard drive failure.  But, here it goes anyway…

I recently fixed a broken car amplifier (Pyle brand) for a family member, and I wrote this up to document the results.

Background

He was powering up the car amplifier after doing some re-wiring, and smoke came out of the amplifier.  The amplifier no longer worked after this.  He had earlier blown the fuses (there are two 30 amp fuses in parallel) and had bypassed them by wrapping a wire around the blades of one of the fuses.

Diagnosis

After opening up the case, I saw that a diode had overheated and cracked open.  A fairly wide circuit board trace leading away from the fuse block had also overheated and had broken.  The broken diode was part of a reverse polarity protection circuit.  The circuit is shown here:

When reverse polarity is applied to the amplifier power terminals, the diode protects the amplifier in two ways:

  1. The diode clamps the voltage on the positive bus so that it does not swing too far negative (I would guess that it would not get below -1 or -2 V).
  2. When the battery tries to lower the voltage further, a large current starts flowing through the diode.  This quickly causes the fuse to blow.  The blown fuse prevents damage to the diode or to the rest of the amplifier.

The family member said that he hooked up the amplifier correctly, but the blown diode tells a different story!  To be kind, there is a very small chance that the diode just happened to fail on its own at that particular moment…

I assume the sequence of events went like this:

  1. Since the fuse had been bypassed, the diode continued to conduct longer than designed.  It overheated and failed closed (i.e. it shorted and became like a wire with very low resistance in both directions).  The diode is a 1N5401, which is only rated for 3 amps.
  2. With the diode shorted, a high current continued to flow through the positive terminal of the amplifier, through the fuse block, through the diode, and out through the amplifier’s ground terminal.
  3. The circuit board trace overheated and broke.

In a round-about way, the protection circuit worked.  The diode clamped the voltage and cut off power, and the circuit board trace acted as the fuse.  If the diode had failed open, the amplifier probably would have had more extensive damage.

On a side note: I am impressed with the circuit board design – there is a black mask on the top of the circuit board which mirrors the copper on the bottom.  This makes it easy to trace the circuit just by looking at the top component side.

Repair

The repair was straight-forward.

  1. I soldered in a replacement diode from Radio Shack.
  2. To repair the circuit board, I first cut off the loose trace (which had come off the board after burning) with an exacto knife.  This was probably unnecessary, but I was nervous that the loose copper foil might eventually weaken or break.
  3. I cut one of the blades off one of the blown fuses.  It was about the right width and length to replace the missing trace.
  4. I soldered the blade across the circuit board gap.  I used a high wattage soldering gun for this.
  5. I replaced the fuses.

After re-assembling the amplifier, it tested out OK and is working fine.

Setting Up a Home Server with Ubuntu 11.10 – Part 3

Introduction

I am continuing a series of blog entries documenting how I set up my home server. Part 1 contains a description of what I am trying to accomplish and instructions for doing the initial install. Part 2 explains how to set up MythTV.  In this part, I set up some miscellaneous components.

Automatic Updates

I like to set up the server so that security updates are installed automatically, and so that I get notifications of non-security updates, which I install manually.

The following guides came in useful:

First, let’s set up the unattended-upgrades package, which will do our automatic updates:

  1. sudo apt-get install unattended-upgrades

Set up the actions that are performed automatically, and how often they are performed (a sample of the resulting file appears after these steps):

  1. sudo $EDITOR /etc/apt/apt.conf.d/10periodic
  2. Set the APT::Periodic::Download-Upgradeable-Packages line to "1". This will download updateable packages daily. Then, when you do manual updates, you will not have to wait for them to download.
  3. Set the APT::Periodic::AutocleanInterval line to "7". This will do an auto-clean every week. This cleans up packages that are no longer being used.
  4. Add a line:
    APT::Periodic::Unattended-Upgrade "1";

    This means automatic updates will be performed daily.
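
After these edits, the relevant lines of /etc/apt/apt.conf.d/10periodic should look roughly like this (the Update-Package-Lists line is normally already present in the default file):

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";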

Configure the automatic updates:

  1. sudo $EDITOR /etc/apt/apt.conf.d/50unattended-upgrades
  2. The Unattended-Upgrade::Allowed-Origins block enables specific types of updates. Updates that are commented out will not be automatically run (this doesn’t affect manual updates, though). I make sure that this line is the only one enabled:
    "${distro_id} ${distro_codename}-security";

    This enables security updates. If you are adventurous, you can uncomment other lines to enable other updates. These other updates are commented out because they are more likely to break something – they are best done manually so you can batch them together and plan them appropriately.

  3. Uncomment this line so that unattended-upgrades can email you:
    Unattended-Upgrade::Mail "root@localhost";

Now let’s set up apticron to send notifications of pending updates. This is how I know what non-security updates I need to manually apply:

  1. sudo apt-get install apticron
  2. sudo $EDITOR /etc/apticron/apticron.conf
  3. By default, apticron uses the results of /bin/hostname --all-fqdns to get the host name. On my server, this returns nothing (I’m guessing because it is trying to do a reverse lookup and I don’t have a valid domain). So I need to uncomment this line:
    SYSTEM="foobar.example.com"

    Instead of hardcoding a hostname, though, you can use the hostname command by changing the line like this:

    SYSTEM=`hostname`

By default, you will get daily notifications. I don’t want notifications that frequently. As far as I can tell, this interval is hardcoded in the script itself, so I change the script:

  1. Backup the script:
    sudo cp /usr/sbin/apticron /usr/sbin/apticron.bak
  2. sudo $EDITOR /usr/sbin/apticron
  3. Find this line:
  4. test "x$( find $tsfile -mmin -1381 2>/dev/null )" = "x$tsfile" && exit 0

    And change -mmin -1381 to -mtime -X, where X is the number of days to wait between emails. I set mine to -mtime -7, which means weekly emails.

The apt-listchanges package gets installed with apticron. This causes the apt-get dist-upgrade command to display all updates, with descriptions. It makes you hit a key after each screen, which can take a while. Let’s change it:

  1. sudo $EDITOR /etc/apt/listchanges.conf
  2. Comment out this line (put a “#” in front of it):
    frontend=pager

    And add this line:

    frontend=text

    This means that apt-get will still print out the changes, but won’t pause after each screen full. apt-listchanges will still email you the changes that were applied, so you will have a record. More options are listed here.

BOINC

BOINC is a program that allows you to take on slices of distributed computing projects. There are many projects that run on BOINC – I use World Community Grid. I also used to run GIMPS, which has its own client program to install. Keep in mind that these programs will heat up your CPU, drive up its power usage, and possibly shorten its life. Your PC will also act like a heater – although during cold weather this can be a good thing: if you’re going to run a heater, you might as well run one that has additional benefits.

I used this guide during the setup.

  1. sudo apt-get install boinc-client boinc-manager
  2. At this point, I have already registered on the World Community Grid.
  3. Run the setup utility. Note this pops up a graphical window, which means you need to SSH into your server with X11 forwarding enabled (see Part 1). Note that on my server, I get some garbled windows. I’m not sure if this is just my computer, or if everyone has the same problem. But I am able to get through.
  4. Start the manager.
    sudo boincmgr
  5. I select “Add project”.
  6. I select World Community Grid.
  7. I login using my World Community Grid account.
  8. Finish.
  9. I still have a window open, so I close it out (it is garbled, I can’t tell what it says). I have a bunch of errors on the command line about not being able to load images, so I have to Ctrl-C to kill it. But when I start boincmgr back up, the window is no longer garbled, and all seems well.
  10. When you go into the manager, you can monitor progress and change preferences. Even without the manager running, BOINC stays in the background. You can see by running this:
    top

    But, don’t worry, it is supposed to throttle back when you are using the CPU for other tasks and isn’t supposed to interfere.

  11. After a while, I can see results by logging on to the World Community Grid webpage.

Hardware Monitoring

I like to be able to check the CPU temperature, especially since I know it will be running hot from BOINC. So, I set up the lm-sensors package, which not only gives CPU information, but gives other hardware information as well. You will be accessing hardware and loading kernel modules, which can be a bit risky. Skip this if you don’t want to take the risk:

  1. sudo apt-get install lm-sensors
  2. To view the current sensor data:
    sensors

    This will only show sensors for which drivers are already installed.

  3. To probe the hardware and find out what additional drivers should be installed (be careful since this probes hardware – it will prompt you before performing tests, and give you an idea of how risky it is):
    sudo sensors-detect

    I answered YES to all questions, except for the one asking if you would like to automatically update /etc/modules. I prefer to do that myself.

  4. You will be given a section between the “cut here” lines that you can copy and paste into your /etc/modules file (reboot once you are done for them to take effect). You can also use modprobe to immediately load modules temporarily for testing (replace module with the module name):
    sudo modprobe module

BitTorrent

Transmission

I use Transmission as my BitTorrent client. Transmission has a web client with which it can be managed.  This makes it ideal for running on a server.  Let’s set it up:

  1. sudo apt-get install transmission-daemon
  2. Add your admin user to the debian-transmission group:
    sudo adduser adminuser debian-transmission

    Log out and back in for the group change to take effect.

  3. Stop Transmission. You always need to stop it before changing the settings.json configuration file – when Transmission shuts down, it rewrites the file with the original settings it loaded when starting up.
    sudo /etc/init.d/transmission-daemon stop
  4. Edit the configuration file:
    sudo $EDITOR /var/lib/transmission-daemon/info/settings.json
  5. Changing the rpc-whitelist-enabled entry to false will allow you to access the web client from any PC. Alternatively, you can grant access to individual PCs by adding them to the rpc-whitelist entry. (A sample fragment of these rpc entries appears after this list.)
  6. The rpc-username entry contains the userid you use to log into the Transmission web client. It defaults to transmission.
  7. The rpc-password entry contains the encrypted password. It defaults to transmission also. To change the password, type a plaintext password in between the quotes – when Transmission starts it will automatically encrypt it for you.
  8. The rpc-port entry sets the port on which the web client listens. It defaults to 9091.
  9. Start Transmission back up:
    sudo /etc/init.d/transmission-daemon start
  10. Go to the transmission web client at:
    http://serverip:9091
  11. Log in.
  12. You can set some of the preferences from within the web client, which will automatically be applied in the configuration file for you. I suggest that you at least put some limits on the upload and download speeds.
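
For reference, here is roughly what the rpc-related entries from steps 5–8 look like in settings.json (a fragment only – the real file contains many other entries; the values shown are the defaults discussed above, with the password in plaintext as you would type it before Transmission encrypts it, and the whitelist disabled as in step 5):

"rpc-enabled": true,
"rpc-password": "transmission",
"rpc-port": 9091,
"rpc-username": "transmission",
"rpc-whitelist": "127.0.0.1",
"rpc-whitelist-enabled": false,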

I like to set up a separate partition for the torrent downloads.  I assume you haven’t downloaded any torrents into the downloads folder yet:

  1. Create an LVM partition as described in Part 1. I call the partition torrentdownloads.
  2. Make a note of the permissions on the existing Transmission download folder:
    ls -dl /var/lib/transmission-daemon/downloads
  3. sudo $EDITOR /etc/fstab
  4. Add this line to mount the new partition in Transmission’s default location (replace the <tab>’s with actual tabs):
    /dev/mapper/mainvg-torrentdownloads<tab>/var/lib/transmission-daemon/downloads<tab>ext4<tab>defaults<tab>0<tab>0
  5. sudo mount /var/lib/transmission-daemon/downloads
  6. Change the permissions on the new partition to match the old one:
    sudo chmod 4775 /var/lib/transmission-daemon/downloads
    sudo chown debian-transmission:debian-transmission /var/lib/transmission-daemon/downloads

Now you can go to the web client and try adding a new torrent.

SSL with stunnel

Transmission does not support SSL internally (i.e. you need to use an http URL and cannot use https). So, if you want SSL, you need to use a reverse proxy server. Ultimately, it sounds like using the Apache web server would be a good idea. But for now, I will use stunnel. This accepts https requests on a port of your choosing and forwards them to Transmission on its http port (the traffic is internal to the server, so it is supposedly secure).

I used these guides:

Some guides useful for creating certificates are:

Let’s install stunnel and set it up:

  1. sudo apt-get install stunnel
  2. Edit the configuration file:
    sudo $EDITOR /etc/default/stunnel4
  3. Set ENABLED to 1.
  4. Create a certificate – you need to generate a .pem file. I generate a self-signed certificate (the guides I mentioned earlier explain how to do this). Note that with a self-signed certificate, you will need to set up your web browser to accept the certificate. Here is the command I use to generate the self-signed certificate:
    sudo openssl req -newkey rsa:2048 -x509 -days 365 -nodes -out /etc/stunnel/transmission.pem -keyout /etc/stunnel/transmission.pem
  5. Set the permissions on the new certificate. We need to protect this certificate, especially since it does not have a password:
    sudo chmod 600 /etc/stunnel/transmission.pem
    sudo chown root:root /etc/stunnel/transmission.pem
  6. Create a configuration file for the Transmission SSL tunnel:
    sudo $EDITOR /etc/stunnel/transmission.conf
  7. Add this text inside the file (in this case I make the SSL port 9092):
    [transmission]
    cert = /etc/stunnel/transmission.pem
    accept = 9092
    connect = 9091
  8. Start stunnel (it shouldn’t be running yet):
    sudo /etc/init.d/stunnel4 start
  9. If it doesn’t start, you can look at syslog to try and troubleshoot:
    sudo cat /var/log/syslog
  10. Try accessing the new SSL port at:
    https://serverip:9092/transmission/web/

    With https, transmission is finicky about the URL, so it needs to be exact (see this thread).

Now you can make the 9091 port inaccessible from outside the server:

  1. sudo /etc/init.d/transmission-daemon stop
  2. sudo $EDITOR /var/lib/transmission-daemon/info/settings.json
  3. Change rpc-whitelist-enabled back to true.
  4. rpc-whitelist should be set to 127.0.0.1
  5. sudo /etc/init.d/transmission-daemon start

Try the

http://serverip:9091

link again – it should give you a “Forbidden” error.

Web Server

I like to set up my web server so that it has two ports:

  • Default http port 80 – I do port forwarding on my router to expose this externally. And then I access it using a domain name from DynDNS Remote Access – this gives me a domain name I can use to access my router/server. My router is set up to automatically update the DNS entry with the latest IP address. I don’t use the public web server often, but I like to have it ready for when I do need it.
  • A second private http port – only accessible within my network.

Let’s install the web server (if you installed MythTV earlier in Part 2, then this is already installed):

sudo apt-get install apache2

Check it out at this URL:

http://serverip

You should get Apache’s “It works!” webpage.

But, if you have MythWeb installed, instead it will redirect to the MythWeb web client. Let’s disable it and restore the default site – this will give us a baseline to start from:

  1. Disable the MythWeb site:
    sudo a2dissite default-mythbuntu
    sudo a2dissite mythweb.conf
  2. Enable the default site:
    sudo a2ensite default
  3. Reload the settings:
    sudo service apache2 reload
  4. Check the site again, it should be the default again.  You may need to delete your temporary internet files, though, before it will work.

Now let’s disable the default sites and re-arrange things.

  1. sudo a2dissite default
  2. Make sure no other sites are enabled – /etc/apache2/sites-enabled should not contain anything:
    ls /etc/apache2/sites-enabled

Let’s re-arrange the /var/www folder. I make a different folder underneath /var/www for each port. I got this idea from VirtualHost examples where a directory is created for each virtual host.

  1. Create the public folder:
    sudo mkdir /var/www/public
  2. Put whatever you want in the public folder, or just put the default index.html there like so:
    sudo cp /var/www/index.html /var/www/public

Repeat this for the private site. I name the private folder private.

Create the public site configuration:

  1. Create the config file:
    sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/public
  2. sudo $EDITOR /etc/apache2/sites-available/public
  3. Change all /var/www references to /var/www/public.
  4. Enable the new public site:
    sudo a2ensite public
  5. Reload the settings
    sudo service apache2 reload

Test your website and you should get your public web page.

Now set up your private port:

  1. sudo $EDITOR /etc/apache2/ports.conf
  2. Add these lines (xxx should be replaced with your private port number):
    NameVirtualHost *:xxx
    Listen xxx

Set up your private site:

  1. Follow the same instructions as for the public one, using a site name of private. In addition, when editing the config file, change
    <VirtualHost *:80>

    to

    <VirtualHost *:xxx>
  2. Restart Apache – it looks like this is necessary before it will start listening on the new port:
    sudo service apache2 restart

Test your website using the following link:

http://serverip:xxx

and you should get your private web page.
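
For reference, after the edits the private site file (/etc/apache2/sites-available/private) ends up looking roughly like this stripped-down sketch (xxx is your private port; the real file copied from default contains additional directives such as log settings):

<VirtualHost *:xxx>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/private
    <Directory /var/www/private/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>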

If you have MythWeb installed, re-enable it, but under the private port:

  1. Move the MythWeb folder to its new home:
    sudo mv /var/www/mythweb /var/www/private/mythweb
  2. Create a new version of the mythbuntu configuration file:
    sudo cp /etc/apache2/sites-available/default-mythbuntu /etc/apache2/sites-available/private-mythbuntu
  3. sudo $EDITOR /etc/apache2/sites-available/private-mythbuntu
  4. Change <VirtualHost *:80> to <VirtualHost *:xxx>
  5. Remove this line (this does the redirect):
    DirectoryIndex mythweb
  6. Change all /var/www references to /var/www/private/mythweb.
  7. Create a new version of the mythweb.conf configuration file:
    sudo cp /etc/apache2/sites-available/mythweb.conf /etc/apache2/sites-available/private-mythweb.conf
  8. sudo $EDITOR /etc/apache2/sites-available/private-mythweb.conf

    Change all /var/www/mythweb references to /var/www/private/mythweb

  9. Enable the MythWeb site:
    sudo a2ensite private-mythbuntu
    sudo a2ensite private-mythweb.conf
  10. sudo service apache2 reload

Now test the mythweb site, using the following link:

http://serverip:xxx/mythweb

And you are done with the basic setup.

You may get an error from Apache: “apache2: Could not reliably determine the server’s fully qualified domain name, using 127.0.0.1 for ServerName”. To fix this problem, do the following (as described here):

  1. sudo $EDITOR /etc/apache2/httpd.conf

    Add this line (replace xxx with your server name):

    ServerName xxx
  2. sudo service apache2 restart

You shouldn’t get the error anymore.

Git

I use Git as my source control software. This is probably only something that software developers would be interested in. I set up several pieces of server software used to interact with Git.

Gitosis

Gitosis not only allows you to clone repositories, but also allows you to push your changes back to the repositories. This wiki has a pretty good summary of what it does. I used the Ubuntu Git Community Documentation to set it up.

Let’s install it:

sudo apt-get install git-core gitosis

I like to create a separate partition for the git repositories. I originally tried creating the partition for the /srv/gitosis/repositories folder. But, Gitosis was getting confused with the lost+found folder that is created inside a partition. So, I create the partition for the /srv/gitosis folder instead.

  1. Create the LVM partition as described in Part 1.  I name the partition gitosis.
  2. Mount the new partition in a temporary location (we need to copy the Gitosis folder structure into the new partition):
    sudo mkdir /mnt/gitosistmp
    sudo mount -t ext4 /dev/mainvg/gitosis /mnt/gitosistmp
  3. Compare the new and old folder permissions:
    ls -ld /srv/gitosis
    ls -ld /mnt/gitosistmp
  4. Set the permissions on the new folder:
    sudo chown gitosis:gitosis /mnt/gitosistmp
  5. Copy the directory structure from Gitosis’ folder. I set the rsync options to copy pretty much everything.
    sudo rsync -avxHAXS /srv/gitosis/ /mnt/gitosistmp
  6. Now remove the existing folder structure:
    sudo rm -r /srv/gitosis/*
  7. Unmount the new partition from its temporary location:
    sudo umount /mnt/gitosistmp
    sudo rmdir /mnt/gitosistmp
  8. sudo $EDITOR /etc/fstab

    Add this line (replace the <tab>’s with actual tabs):

    /dev/mapper/mainvg-gitosis<tab>/srv/gitosis<tab>ext4<tab>defaults<tab>0<tab>0

    And mount it:

    sudo mount /srv/gitosis

Now you can initialize your repository. This is pretty well documented in various places. The Ubuntu Git Community Documentation has a pretty good guide. And the Ubuntu Generating RSA Keys Guide explains how to generate the key on the client.

Git Daemon

This is the daemon that allows you to clone a git repository using the git:// protocol.

These were guides I used to set it up:

Let’s install it. I assume you already have Gitosis installed and are using the /srv/gitosis/git folder to store your repositories.

  1. sudo apt-get install git-daemon-run
  2. Edit the script that starts the daemon:
    sudo $EDITOR /etc/sv/git-daemon/run
  3. Change base-path from /var/cache to /srv/gitosis/git.
  4. Remove the /var/cache/git argument at the end
  5. If you want to make all repositories available, add the --export-all option to the git-daemon command line. Otherwise, you need to create a git-daemon-export-ok file in each repository you want to make available:
    sudo touch /srv/gitosis/git/myrepository.git/git-daemon-export-ok
  6. Now kill the daemon and it should respawn with the new settings:
    sudo ps -A | grep git

    Run this with xxx replaced by the pid of the daemon:

    sudo kill xxx

Now you can test it out by cloning a repository.
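
With those edits in place, the command that the run script ends up launching is roughly equivalent to the following (a sketch, shown with the optional --export-all variant):

git daemon --verbose --reuseaddr --export-all --base-path=/srv/gitosis/git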

ViewGit

ViewGit is a web client that allows you to browse your repositories with your web browser. The Ubuntu Git Community Documentation has a section about installing it. My instructions differ somewhat, but they are pretty similar. Let’s install it:

Download and extract the viewgit tar file into your home directory. ViewGit is at version 0.0.6 as of this writing.

  1. Install necessary packages (I assume you already have apache2 installed):
    sudo apt-get install libapache2-mod-php5 php-geshi
  2. cd ~
    wget http://downloads.sourceforge.net/project/viewgit/viewgit/0.0.6/viewgit-0.0.6.tar.gz
    tar -xvf viewgit-0.0.6.tar.gz
  3. Create the configuration file:
    cp viewgit/inc/config.php viewgit/inc/localconfig.php
  4. Edit the configuration file:
    $EDITOR viewgit/inc/localconfig.php
  5. Add this line in the projects array (the comma is important):
    'myrepository' => array('repo' => '/srv/gitosis/git/myrepository.git'),
  6. Since you installed the php-geshi package, you can enable GeSHI highlighting. Change the $conf['geshi'] setting to true, and uncomment these two lines:
    $conf['geshi_path'] = 'inc/geshi/geshi.php';
    $conf['geshi_path'] = '/usr/share/php-geshi/geshi.php'; // Path on Debian

    I’m still learning about this highlighting option, so I’m not sure what additional configuration should be done (e.g. the $conf['geshi_line_numbers'] setting).

Move ViewGit into your web folder and set permissions. I assume here that you have a private web folder set up like I did under the Web Server section.

  1. sudo mv viewgit /var/www/private
  2. sudo chown -R root:root /var/www/private/viewgit
  3. Add the www-data user to the gitosis group – this ensures apache has access to the gitosis folders:
    sudo adduser www-data gitosis

Now test ViewGit by going to the following URL (replace 9000 with your private web port):

http://serverip:9000/viewgit

Disk Usage Alerts

I like to set up a job to monitor disk usage and email me if it gets low. I use a Perl script based on the one in this blog. I posted my version in the comments section of the blog. I assume you will be using this script.

  1. Place your script in a file named /etc/cron.daily/diskspacecheck.
  2. In order to use the Perl df command, you need to install the Perl diskspace package:
    sudo apt-get install libfilesys-diskspace-perl
  3. Make the script executable:
    sudo chmod +x /etc/cron.daily/diskspacecheck

Keep in mind that this Perl script can only check filesystem sizes (i.e. those shown when you run the df command). One way you can test the script is by adjusting the thresholds such that they will trigger an email.

What’s Next

I’m done for now, but there are still some things I want to set up:

  • Web search capability of my file shares. I have been looking into using Regain.
  • Automatic incremental backups on a daily basis.
  • I want to look into using monitoring packages. The Ubuntu Server Monitoring guide discusses a couple.
  • Need to research security more – for example maybe I should set up a firewall. A coworker suggested using UFW.
  • The Ubuntu Server Guide has a lot more ideas…

Setting Up a Home Server with Ubuntu 11.10 – Part 2: MythTV

Introduction

I am continuing a series of blog entries documenting how I set up my home server.  Part 1 contains a description of what I am trying to accomplish and instructions for doing the initial install.  In this part, I set up the MythTV Digital Video Recorder software. I’ve found this part particularly tedious, and the most frustrating part of setting up the server, since it requires hardware setup. This is one of those times where I strongly suggest you do a test install on your PC first, just to get the hang of it. I still don’t have a thorough understanding of all this, but was able to fumble my way through the setup.

My recording situation is probably not common. The first 100 or so cable channels are included with my apartment rent. The broadcast channels (i.e. those you can get with an antenna) are digital, but the rest of the channels are still analog – you need to get a cable box in order to get the digital versions. So, I still need both a digital and an analog tuner. I have two video capture cards installed:

  • pcHDTV HD-5500 – This can tune both digital and analog channels, but not at the same time.  It looks like the digital tuner can record more than one program at a time – I’m not sure the details about how this works.
  • Hauppauge WinTV-HVR-1600 – This can tune digital and analog channels at the same time (it has two cable inputs, one for each). Unfortunately, I had problems with the digital tuner tuning properly, so I am not using it. On the positive side, the card has an MPEG-2 encoder built in for the analog signal, which means less load on my CPU.

I am setting up a headless server, so I will only install the MythTV backend. The videos will be watched on a separate PC, which can either use the MythTV frontend, or stream the video via MythWeb using a standard web browser and video player.

Here are some guides I used during the setup:

Testing the Capture Cards

Before I set up MythTV, I find it worthwhile to test the video capture cards. Some video capture cards have less than perfect Linux support (although I can see the developers have put a huge amount of work into developing drivers, the hardware is often closed source).  Getting MythTV working and getting the hardware working are both frustrating enough when done separately, much less at the same time. Getting the video cards working outside MythTV also helped broaden my understanding of the hardware and drivers.

There are different types of capture cards which are summarized on MythTV’s capture card wiki page. I imagine that only a small subset of people receive analog broadcasts anymore. But an analog card could still be useful if you are trying to record from a VCR or camcorder, or if you are getting an analog signal from a cable box.

There are many programs out there for playing the video from the capture cards, but I use MPlayer, so let’s install it with this command:

sudo apt-get install mplayer

I also used MPlayer on my client PC to play back recorded files.  Note that MPlayer has many options, and there are many other utility programs out there, so there are different ways to do the test. For example, you can tune your card using MPlayer, but I use an external program.

In order to do the tests via a remote connection, you will need to use SSH with X11 Forwarding enabled, as explained in Part 1. Keep in mind that the video will be slow and you will not get sound (I have read there are ways to get sound via SSH, but I haven’t tried yet).

You will be dealing with devices in the /dev folder. To figure out which device is which, you can use this command to query the specifications for that device (xxx is the device file):

udevadm info -a -p $(udevadm info -q path -n /dev/xxx)

You can also look at your boot log to see if a particular device was loaded:

dmesg

To look for a particular piece of hardware (in this example, the cx88 chipset):

dmesg | grep -i "cx88"

Generally, you will look for chipsets that are included on your capture card. As with all computer hardware, the chipset’s manufacturer/model number will likely not be the same as the card’s manufacturer/model number. You will need to do some research to figure out which chipsets are used on your particular card.
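
One quick way to see which chipsets your cards actually use is to list the PCI devices and look for the multimedia entries:

lspci | grep -i multimedia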

Digital Tuner Card

I imagine most people recording TV will use a digital card, due to the cutover to digital broadcasts. There are different digital standards internationally – mine is using the United States standard. And there are different standards based on whether you receive your TV via cable (QAM) or via an antenna (ATSC). In my case, I am receiving via cable.

The digital video capture card devices can be found under /dev/dvb/adapterX, where X is an adapter number (you will have multiple adapters if you have multiple cards installed). I use the udevadm command line I gave earlier, running it against the /dev/dvb/adapterX/dvr0 device. I know my pcHDTV card when I see this output:

looking at parent device '/devices/pci0000:00/0000:00:0e.0/0000:02:0a.2':
  KERNELS=="0000:02:0a.2"
  SUBSYSTEMS=="pci"
  DRIVERS=="cx88-mpeg driver manager"
  ATTRS{vendor}=="0x14f1"
  ATTRS{device}=="0x8802"
  ATTRS{subsystem_vendor}=="0x7063"
  ATTRS{subsystem_device}=="0x5500"

Let’s start testing the card(s):

  1. Install the utilities:
    sudo apt-get install dvb-apps
  2. You first need to scan the channels using this command (replace the X after the -a option with the adapter device number).
    sudo scan -a X /usr/share/dvb/atsc/us-Cable-Standard-center-frequencies-QAM256 > ~/dvb_channels.conf

    This takes a while, and it’s normal to get many “tuning failed” messages.

  3. I have the problem described on this wiki page. I’m not sure why this is (I imagine it has something to do with my cable provider), but my channel numbers are in brackets, and they repeat, so I need to run the script on the wiki page:
    perl -pe 's/^.{6}/++$i/e;' ~/dvb_channels.conf > ~/dvb_channels_fixed.conf
  4. If you look in the channels file you will see a number of lines like this:
    1:189000000:QAM_256:0:441:10

    with the first number being the channel number. In my case, these channel numbers are arbitrary since I ran the Perl script. And I don’t know enough to figure out how to tell which line corresponds to which channel number on my TV.

  5. Now you can start streaming the video from the card:
    sudo mplayer /dev/dvb/adapterX/dvr0
  6. And then you can tune the card while you watch it.  In fact, you probably won’t get any video until you tune to a valid channel.  I use the azap command (there are other commands you can use depending on the digital tuner type you have and what broadcast standard is used). This tunes in channel 1 from the channels config file (run this in a separate SSH session):
    sudo azap -c ~/dvb_channels_fixed.conf -a X -r 1

    The tuner will continue running, showing signal quality information.  Ctrl-C out when finished.

  7. Many of the channels listed in the conf file did not work for me, so I went through the channels starting at channel 1 and moving up to find a working channel. Note that at one point, I received a message “ERROR: error while parsing modulation (syntax error)” – so I had to remove the line for that channel (I couldn’t tune to any channels after that one until I removed the line). Once I found a channel, I saved the line in its own conf file for future use. It’s a cumbersome way to do things, but I figured once I found a channel, I wouldn’t have to do it again.
  8. Once you find a good channel, you can capture it to a file:
    sudo mplayer /dev/dvb/adapterX/dvr0 -dumpstream -dumpfile mydvbfile.ts

    Again you can tune using a separate session. Ctrl-C when you are finished recording.  And then you can transfer the file to your local PC, and watch it there (you will be able to hear sound).

Analog Hardware Encoder Card

The Hauppauge card I have has a built-in MPEG-2 encoder. This MPEG stream, which contains the audio as well as the video, can be accessed via a /dev/video device. The card has multiple video devices, which represent the different physical inputs on the back of the card. Let’s test the card(s):

  1. Install the utilities:
    sudo apt-get install ivtv-utils
  2. Start playing the video stream in a window (X is the device number):
    sudo mplayer /dev/videoX
  3. And then you can tune the card while you watch it (keep in mind that the channel can take a while to change in your window, since there is a long lag). This will change to channel 10, assuming a US cable frequency table. Run this command in a separate SSH session:
    sudo ivtv-tune --device=/dev/videoX --freqtable=us-cable --channel=10
  4. You can now capture to a file:
    sudo mplayer /dev/videoX -dumpstream -dumpfile ~/myanalogvideo.ts

    Again you can tune using a separate session. Ctrl-C when you are finished recording.  And then you can transfer the file to your local PC, and watch it there (you will be able to hear sound).

Analog Framebuffer Card

My pcHDTV card provides a video stream in unencoded form. You have three devices, one for video (/dev/videoX), one for video blanking (/dev/vbiX), and one for audio. The audio device used to be /dev/dspX, but you no longer have that device now that Ubuntu has stopped supporting the OSS sound drivers. Now, you need to get the sound via the ALSA framework. You need to look at the cards registered with ALSA and figure out which one is yours (again, you will need to know what audio chipset your card is using):

cat /proc/asound/cards

I know my pcHDTV card when I see this line:

1 [CX8801         ]: CX88x - Conexant CX8801
Conexant CX8801 at 0xee000000

Let’s test the card(s):

  1. Install the required utilities:
    sudo apt-get install ivtv-utils mencoder
  2. Start playing the video stream in a window (X is the device number):
    sudo mplayer tv:// -tv driver=v4l2:device=/dev/videoX

    This is just showing the video stream, which does not include any sound, even if you could hear sound over your SSH session.

  3. Tune the card while you watch it. This will change to channel 10, assuming a US cable frequency table. Run this command in a separate SSH session:
    sudo ivtv-tune --device=/dev/videoX --freqtable=us-cable --channel=10
  4. Let’s record the video and audio. This is more complicated than for a hardware encoder card, since you need to encode the video and audio yourself. In the adevice part, substitute Y with the card number you got from /proc/asound/cards earlier (in my case, this is 1 based on the output I show above):
    sudo mencoder tv:// -tv driver=v4l2:device=/dev/videoX:alsa=1:adevice=hw,Y.0 -oac pcm -ovc lavc -lavcopts vcodec=mpeg4:vpass=1 -o ~/test.avi

    Again you can tune using a separate session.

  5. You can monitor the CPU usage of mencoder. Encoding is a CPU intensive job, and this could be a concern with an older PC. I use this command to show the CPU usage of the most CPU-intensive processes, of which mencoder is the most intensive:
    top
  6. Ctrl-C when you are finished recording. And then you can transfer the file to your local PC, and watch it there (you will be able to hear sound).

I don’t test the VBI device. As far as I can tell, this is useful because it contains extra data, such as closed captioning.

udev Rules

It is tedious to use device numbers to identify your devices. And there’s also the possibility that the numbers could change. You can use a udev rule file to set up sensibly named symlinks to your device files. For example, instead of using /dev/dvb/adapter1 to get at my pcHDTV card, I create a symlink called /dev/dvb/adapter_pchdtv. Basically, to make a rule, you need to get information from the udevadm command I gave earlier – you need to find information that uniquely identifies a particular card. Then you need to write the rules. Explaining how to write rules is way beyond the scope of this writeup. But there are many guides, including this one.
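
If you need to gather that information again, a command along these lines (the device path is just an example) walks up a device’s parent chain and prints the DRIVERS and ATTRS{} values that the rules below match on:

udevadm info --attribute-walk --name=/dev/video0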

Here’s how to create the rule file:

  1. sudo $EDITOR /etc/udev/rules.d/10-mythtv.rules
  2. Here is a sample of what I put in my rule file:
    # /etc/udev/rules.d/10-mythtv.rules
    
    # *** PCHDTV Digital ***
    SUBSYSTEM=="dvb", DRIVERS=="cx88-mpeg driver manager", ATTRS{subsystem_vendor}=="0x7063", ATTRS{subsystem_device}=="0x5500", PROGRAM="/bin/sh -c 'K=%k; K=$${K#dvb}; printf dvb/adapter_pchdtv/%%s $${K#*.}'", SYMLINK+="%c"
    
    # *** PCHDTV Analog ***
    #Note the brackets in ATTR{name} could not be matched because these
    #signify a character range in the match string.  Had to use * instead.
    #Could not figure out how to escape the brackets.
    SUBSYSTEM=="video4linux", ATTR{name}=="cx88* video (pcHDTV HD5500 HD*", DRIVERS=="cx8800", ATTRS{subsystem_vendor}=="0x7063", ATTRS{subsystem_device}=="0x5500", SYMLINK+="video_pchdtv"
    SUBSYSTEM=="video4linux", ATTR{name}=="cx88* vbi (pcHDTV HD5500 HDTV*", DRIVERS=="cx8800", ATTRS{subsystem_vendor}=="0x7063", ATTRS{subsystem_device}=="0x5500", SYMLINK+="vbi_pchdtv"
    
    # Note that you cannot match attributes from more than one parent at a time
    # This line is no longer necessary, since we no longer have a dsp device.
    #KERNEL=="dsp*", SUBSYSTEM=="sound", ATTRS{id}=="CX8801", SYMLINK+="dsp_pchdtv"
    
    # *** Hauppauge Digital ***
    SUBSYSTEM=="dvb", DRIVERS=="cx18", ATTRS{subsystem_vendor}=="0x0070", ATTRS{subsystem_device}=="0x7404", PROGRAM="/bin/sh -c 'K=%k; K=$${K#dvb}; printf dvb/adapter_hvr1600/%%s $${K#*.}'", SYMLINK+="%c"
    
    # *** Hauppauge Analog ***
    SUBSYSTEM=="video4linux", ATTR{name}=="cx18-0 encoder MPEG*", DRIVERS=="cx18", ATTRS{subsystem_vendor}=="0x0070", ATTRS{subsystem_device}=="0x7404",  SYMLINK+="video_hvr1600"
  3. Once you save the file, you can do a test. You need to find your device(s) under the /sys/class folder structure, which is a virtual filesystem where you can see raw device/driver information. Then run the following commands against those files. I give two examples, one for my analog card and one for my digital card:
    sudo udevadm test /class/video4linux/video0
    sudo udevadm test /class/dvb/dvb0.dvr0

    The command should print out a bunch of information. Then when you look in your /dev folder you will see the new symlinks to your devices, as specified by the rules. And you can test out the new symlinks using MPlayer. If there’s a problem, you’ll have to continue to tweak your udev file. Once you have it working, I suggest you reboot and then recheck to make sure all the symlinks show up.
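
    If you would rather not reboot every time you tweak the rules, you can usually ask udev to re-read them and re-process the devices (behavior can vary between udev versions, so I still do a final check after a reboot):

    sudo udevadm control --reload-rules
    sudo udevadm trigger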

Installing MythTV

Finally, we can start installing MythTV. The MythTV wiki’s backend configuration guide came in useful for this part.
I like to install some of the more universally used packages separately.

  1. The first is the MySQL database:
    sudo apt-get install mysql-server mysql-client

    You will need to select a root password during the install.

  2. The second package is the Apache web server:
    sudo apt-get install apache2

Now install MythTV:

  1. sudo apt-get install mythtv-backend-master
  2. When asked “Will other computers run MythTV?”, I answer Yes. I don’t normally use a frontend, but I would like to be prepped for one, and I also use the frontend initially for testing. It appears that the way a frontend connects is to first log in to the MySQL database running on the backend, from which it gets the connection info for the MythTV daemon itself; then it logs in to the MythTV daemon. Trying to open up the access later is a big hassle, since it requires granting more permissions in the database. On the other hand, this is a security risk. There is a way to tighten the MySQL database down so that only certain PCs can get into it, as outlined here.  You can also do some security tightening using a firewall (a rough sketch follows after this list).
  3. I answer No when asked “Would you like to password-protect MythWeb?”. It would be more secure to have a password, but the last time I tried to do this, I found that video streaming clients do not handle passwords well.  Ubuntu’s MythWeb guide has more information about setting up and protecting MythWeb.
  4. I answer No when asked “Will you be using this webserver exclusively with mythweb?”
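
Here is a rough sketch of the firewall idea mentioned in step 2, using ufw. The LAN range (192.168.1.0/24) is a placeholder, and I am assuming the defaults of port 3306 for MySQL and port 6543 for the MythTV backend; SSH is opened first so you don’t lock yourself out:

sudo ufw allow 22/tcp
sudo ufw allow from 192.168.1.0/24 to any port 3306 proto tcp
sudo ufw allow from 192.168.1.0/24 to any port 6543 proto tcp
sudo ufw enable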

I like to set up a separate partition for MythTV,  although I still mount it in MythTV’s default folder: /var/lib/mythtv.  I use the XFS filesystem as suggested here.

  1. sudo apt-get install xfsprogs xfsdump
  2. Create the LVM partition as described in Part 1.  I name the partition mythtv.  But instead of using mkfs.ext4, I use mkfs.xfs:
    sudo mkfs.xfs /dev/mainvg/mythtv
  3. We need to mount the partition in a temporary area (we need to copy the MythTV directory structure into the new partition).
    sudo mkdir /mnt/mythtvtmp
    sudo mount -t xfs /dev/mapper/mainvg-mythtv /mnt/mythtvtmp
  4. Make sure the /mnt/mythtvtmp folder has the permissions to match /var/lib/mythtv:
    ls -ld /var/lib/mythtv
    ls -ld /mnt/mythtvtmp
  5. Copy the directory structure from MythTV’s default folder. I set the rsync options to copy pretty much everything.
    sudo rsync -avHAXS /var/lib/mythtv/ /mnt/mythtvtmp

    Then verify the folders are in their new location.

    ls /mnt/mythtvtmp
  6. Now remove the existing folder structure:
    sudo rm -r /var/lib/mythtv/*
  7. Unmount the partition from its temporary location:
    sudo umount /mnt/mythtvtmp
    sudo rmdir /mnt/mythtvtmp
  8. sudo $EDITOR /etc/fstab

    Add this line (replace the <tab>’s with actual tabs):

    /dev/mapper/mainvg-mythtv<tab>/var/lib/mythtv<tab>xfs<tab>defaults<tab>0<tab>0

    And mount it:

    sudo mount /var/lib/mythtv
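
    A quick check that the new partition is actually mounted where MythTV expects it:

    df -h /var/lib/mythtv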

Configuring MythTV

I am assuming you have an account with Schedules Direct, from which MythTV can get its TV schedule. It is a paid subscription service. I’m not sure if there are any other legitimate sites, but I find Schedules Direct well worth the cost.

Start the setup program:

  1. sudo mythtv-setup
  2. Answer Yes when asked “Would you like to automatically be added to the group?”
  3. Answer Yes at the “Save all work and then press OK to restart your session” prompt.
  4. Answer OK at the “Please manually log out of your session for the changes to take effect” prompt.
  5. Log out and back in again.
  6. sudo mythtv-setup
  7. Answer Yes when asked “Is it OK to close any currently running mythbackend processes?”
  8. You should get a graphical set up window.

General Setup

  1. Select “1. General”.
  2. I leave the IP address as the address of the server. If no other PCs will run a frontend, you can set it to 127.0.0.1.
  3. Select a Security PIN.
  4. Next.
  5. Next.
  6. Uncheck “Delete files slowly”. You shouldn’t need this for an XFS filesystem.
  7. Hit Next until you are on the last screen. Check the “Automatically run mythfilldatabase” checkbox. mythfilldatabase is what downloads the listings from Schedules Direct.
  8. Finish.

Capture Card Setup

  1. Select “2. Capture Card”.
  2. Create a “New capture card” for each card you have. Remember to use the symlink devices that you created earlier.

For a DVB card use the following settings:

  • Card type: DVB DTV capture card
  • DVB device number: /dev/dvb/adapterXXX/frontend0

For an analog hardware encoder card use these settings:

  • Card type: IVTV MPEG-2 encoder card
  • Video device: /dev/videoXXX
  • Default input: Select an input – I use Tuner 1

For an analog framebuffer card, use these settings:

  • Video device: /dev/videoXXX
  • VBI device: /dev/vbiYYY
  • Audio device (X is the ALSA device number): ALSA:hw:X,0
  • Force audio sampling rate: I set this to 48000 (this is recommended for the pcHDTV analog card).  Otherwise you can leave it set to None.
  • Default input: Select your input – I use Television

For my pcHDTV card, I cannot record from the digital and the analog tuners at the same time.  When setting up the pcHDTV digital tuner, I need to go into “Recording Options”, and make sure the following are set properly:

  • “Open DVB card on demand” should be checked.
  • “Use DVB card for active EIT scan” should be unchecked.

Otherwise, when I am using the analog tuner, I start getting static.

If you are running in a VM:

  1. Put video files somewhere on your server (I put them in my home folder). I used the .ts files I captured on real hardware during testing.
  2. When adding the capture card, select a card type of Import test recorder.
  3. Enter the file’s path.

Video Source Setup

Before you do this, you will need to have your lineups set up inside Schedules Direct. I have one lineup for the digital channels, and one for the analog channels. Make sure to edit the lineups to remove any bogus channels that you don’t actually have.

  1. Select “3. Video sources”.
  2. Create a “New video source” for each lineup.
  3. I name my video sources “Analog” and “Digital” based on the lineup types.
  4. I keep the Listings grabber set to North America (SchedulesDirect.org).
  5. Enter your Schedules Direct login information.
  6. “Retrieve Lineups”
  7. In Data Direct Lineup, select the right lineup.
  8. Set the “Channel frequency table”. You don’t need to if it is the same as the one in General settings, but I always set it anyway. For the Digital video source, I set it to us-bcast, and for my Analog video source, I set it to us-cable.

Input Connections

The first time you set up MythTV on your server, it is probably best to set up one input connection at a time. Give one connection a try, then remove that connection, and add the next one. Fortunately, MythTV will remember the fetched/scanned channels when you re-create a connection that you previously removed.

  1. Select “4. Input connections”.
  2. Map each card/input to a video source. Note that analog cards will have multiple inputs to select. If you don’t use a particular input, leave it mapped to None.

To map a card/input, first select it, and then select a video source.

For an analog card/input, I don’t need to scan channels. I just use the channels from the lineup by selecting “Fetch channels from listing source”.

For a digital card:

  1. Select “Scan for channels” (it appears the scan is mandatory).
  2. In my case, I select a frequency table of “Cable” and leave the defaults for the rest.
  3. Select Next.
  4. Now wait while the channels are scanned. This takes a while. It should filter out any channels that are scrambled.
  5. I get multiple prompts regarding non-conflicting channels. I choose to “Insert all” non-conflicting channels.
  6. When I get conflicting channels, I update them (I pick an arbitrary high channel number). I assume these channels will not work and I’ll probably end up deleting these later, but I am more comfortable keeping them for now.
  7. Finish.
  8. I do not select to “Fetch channels from listing source”.
  9. Select a starting channel.
  10. Next.
  11. Finish.

Again, for the pcHDTV card, I need to prevent it from recording from the analog and digital sources at the same time.  I need to do the following:

  1. When inside the Input Connections screen, select “Create a New Input Group”.
  2. For both the digital and analog tuners, specify this group in the “Input group 1” field. With both cards in the same group, MythTV does not try to use them at the same time.

I have two analog tuners, but I always want my Hauppauge hardware encoder tuner to take precedence – I only want to use the pcHDTV framebuffer tuner if the Hauppauge tuner is busy.  This is because the pcHDTV tuner requires more CPU resources, and also because the video that MythTV encodes from the pcHDTV card is less flexible than the MPEG stream from the Hauppauge card (see below for more details).  To do this, I add the Hauppauge card first when I am setting up the Input Connections. This gives it a lower ID in the database, which means MythTV uses it first.  Crude, but it seems to work.

Channel Setup

  1. Select “5. Channel Editor”.
  2. I get a bunch of channels without names when I do a digital scan, so I clean them up. Cleaning these can be tedious. You can wait until you get the frontend set up and then flip through in Live TV mode to see which ones work, but I’ve found that tuning to a bad channel can cause the frontend to crash or hang. I get some good channels that don’t show up in Schedules Direct’s listings, so the schedule isn’t reliable. And the channels that the scan finds don’t exactly match the ones my TV tunes to. I ended up just deleting any digital channels that don’t have a name. I figure if they don’t come up with a name, they are not of interest and are more than likely “bad” channels.
  3. As far as the analog channels go, I can add any analog channels that are missing from the schedule (since we didn’t do a scan, we will only have channels in the listing).

I had one odd situation: Schedules Direct shows a channel in my analog lineup, but it doesn’t show the channel in my digital lineup.  In reality, the opposite is true – I have the digital channel, but not the analog version.  To fix this, I manually added the digital channel and then set its XMLTV ID to the same value as the analog channel’s (I got the ID from Schedules Direct, and it also shows up in MythTV’s channel setup).  This associates the schedule from the analog channel with the digital channel.  I would like to remove the analog channel, but if I do that, MythTV will not download the schedule, which is needed for the digital channel.  I need to look into it some more, but for now I just leave the analog channel and remember that I can’t use it…

Inside the Channel Editor, I also download icons for the channels:

  1. Select “Icon Download”
  2. Select “Download all icons…”
  3. The icons will be downloaded. For some channels, you will need to select which icon is the correct one, or you can skip the channel if none are correct (or none are shown).  This takes a while.

Finishing the Setup

Now you can exit the setup program by hitting the Escape key. Answer Yes when asked “Would you like to run mythfilldatabase?” This will load the schedules to your PC.

Managing MythTV

Later on, if you need to stop or start the MythTV backend, use these commands:

sudo stop mythtv-backend
sudo start mythtv-backend

If you want to manually do a schedule update:

sudo mythfilldatabase
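
And to check whether the backend is currently running (assuming the same Upstart-style service commands as the stop/start commands above):

sudo status mythtv-backend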

Using MythTV

Like setting up the backend, setting up a frontend to watch the video can be difficult.  You have several different options, which I explain here.

MythTV Frontend

This is MythTV’s official frontend.  Even though I don’t plan on using it over the long term, I still find it useful for testing and troubleshooting my backend setup.  I install the frontend on another Ubuntu PC (the package is mythtv-frontend).  At some point, I may look into setting it up on a Windows PC, which is possible, but appears to be a more difficult process.

Before trying the frontend, I try connecting to MySQL directly, since this is the first thing that MythTV does.

  1. Get the database login info. On the server, look at the MySQL config file:
    sudo cat /etc/mythtv/mysql.txt
  2. Then run this command on a client machine (I assume you are on an Ubuntu machine and have the mysql-client package installed):
    mysql -u mythtv -h serveripaddress -p mythconverg

    You will need to enter the password, at which time you should be logged in.
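
    Once you are at the mysql> prompt, a quick sanity check is to list the tables in the mythconverg database; if you get a long list back, the frontend should be able to read the schema as well:

    SHOW TABLES;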

Now, try running the frontend. Inside Watch TV, use these keyboard commands. You can switch cards/inputs by pressing “Y” and “C”. But if you have multiple cards of the same type (e.g. if you have two digital input cards), it doesn’t seem to allow you to switch between the two of them so that you can test them both. A trick I’ve used to switch to a different card is to press “R” so that it will start recording, and then change to a different channel, and it will start using your other card. I’ve found the frontend flaky, though, and sometimes I can’t manually stop the recording… I have to cancel it in the recording area, but then sometimes I have to restart MythTV backend to really make it stop recording.

MythWeb

To access the MythWeb page, go to http://ipaddress/mythweb. I find the web page to be more reliable than the frontend. And you don’t need to install anything on your local PC, as long as it has a media program that can stream video.  But the page is only useful for managing and watching recordings – you can’t watch live TV with it.

Make sure to check out the Backend Status tab, which can come in handy during troubleshooting.

I have an issue when I select a program to record – it shows an error and takes a long time for the details to come up.  But after my first recording, the issue seems to go away.  The problem is described in this thread.  I haven’t had a chance to look into it yet…

I have trouble playing the streaming video with Windows Media Player.  VLC media player seems like the most popular, although I had problems playing the .nuv formatted video that MythTV encodes from the pcHDTV “analog framebuffer” tuner (the digital tuner and the analog hardware encoder card both create MPEG videos, which seem to be better supported).  Mplayer (I used the SMplayer frontend) works well with the .nuv videos, but it doesn’t appear to support the streaming video.

MythTV Player

I recently discovered the MythTV Player, which is a lightweight alternative to the MythTV frontend.  And, it works well in Windows (the MythTV frontend is supported on Windows, but the compile and setup process seemed intimidating).  I have used version 0.7.0 of MythTV Player to play recorded videos, and so far it is working well (the user interface is a little glitchy, though).  And, it plays all of my recordings, including the .nuv videos that VLC had trouble with.  But it appears to be only for watching video – it doesn’t look like you can use it to set up recordings, etc.  So, I still need to use MythWeb or the MythTV frontend for that part.