Friday, November 13, 2009

Scratching the Linux Itch, pt. II

Finally, we get started building the system in Chapter 5. As mentioned in the last important point here, for every package built in Chapter 5 you need to run the following commands as the 'lfs' user:



$ cd $LFS/sources
lfs:/mnt/lfs/sources$ tar xjvf binutils-2.20.tar.bz2
lfs:/mnt/lfs/sources$ cd binutils-2.20


If the downloaded package is a .gz file, then instead of 'xjvf' you use 'xzvf' to untar it: 'j' uncompresses .bz2 files while 'z' uncompresses .gz files. The untarring command is implied before you start each sub-chapter. I also finally figured out how to time multiple commands using the GNU time command (which has a nicer, seconds-only format) rather than the bash builtin 'time' command:



$ \time -f "\\n================\\nTotal time: %e seconds\\n================" bash -c "../binutils-2.20/configure \
--target=$LFS_TGT --prefix=/tools \
--disable-nls --disable-werror && make && make install"

[.... many many commands later ....]

================
Total time: 146.24 seconds
================
$ \time -f "\\n================\\nTotal time: %e seconds\\n================" bash -c " ../gcc-4.4.2/configure \
--target=$LFS_TGT --prefix=/tools \
--disable-nls --disable-shared --disable-multilib \
--disable-decimal-float --disable-threads \
--disable-libmudflap --disable-libssp \
--disable-libgomp --enable-languages=c && make && make install"

[....]
================
Total time: 678.12 seconds
================


I use \time to make sure I'm not picking up any builtin or alias - this way I get GNU's /usr/bin/time (read more about finding the correct time in an earlier post). The -f option lets you specify a printf-like format string for the output, so in this case I want it to show just seconds (%e). The really tricky part was how to get multiple commands to run as one, since time only times a single command. By invoking bash with its -c option and quoting the entire rest of the line, we get it all timed together as one command.
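Just to show the pattern in isolation (the sleep commands here are only stand-ins for the real configure/make chain, and the timings will vary a hair):

$ \time -f "%e seconds" sleep 1 && sleep 1
1.00 seconds
$ \time -f "%e seconds" bash -c "sleep 1 && sleep 1"
2.00 seconds

In the first line the shell splits on the &&, so only the first sleep gets timed; wrapping the whole thing in bash -c gets both counted.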



Oh, I also forgot to remove the source and build directories as specified in the book, so I needed to go back and do that for gcc and binutils. And for the API headers, I understood it to mean that I needed to untar the linux-2.6.31.5.tar.bz2 file. And actually, that's one of the gotchas mentioned in the Linux Format article:



Kernel API headers


A common mistake is to expect the kernel API headers to be in their own package. This is not the case - you will need to extract the kernel source package (usually of the form linux-2.6.x.tar.bz2) and then move into the extracted directory to follow the steps in the ebook.
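For the record, here is roughly what that section ended up looking like for me (going from memory - double-check the exact commands and the dest staging directory against the book):

lfs:/mnt/lfs/sources$ tar xjvf linux-2.6.31.5.tar.bz2
lfs:/mnt/lfs/sources$ cd linux-2.6.31.5
lfs:/mnt/lfs/sources/linux-2.6.31.5$ make mrproper
lfs:/mnt/lfs/sources/linux-2.6.31.5$ make headers_check
lfs:/mnt/lfs/sources/linux-2.6.31.5$ make INSTALL_HDR_PATH=dest headers_install
lfs:/mnt/lfs/sources/linux-2.6.31.5$ cp -rv dest/include/* /tools/include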



For the binutils Pass 2 build, I needed to tweak the line a bit because of quoting problems:



 $ \time -f "\\n================\\nTotal time: %e seconds\\n================" bash -c "CC=\"$LFS_TGT-gcc -B/tools/lib/\" \
AR=$LFS_TGT-ar RANLIB=$LFS_TGT-ranlib \
../binutils-2.20/configure --prefix=/tools \
--disable-nls --with-lib-path=/tools/lib && make && make install"

[.....]
================
Total time: 135.00 seconds
================


Note the \" ...\" for the CC= line. This is so that it gets passed along correctly to the bash shell that gets run. I do wonder if the SBU time used should include the last 2 make commands specified on the binutils, Pass 2 page though. To get the current status of the SBUs for my system, you can see it here.



And I now know what LXF means when one of their caveats warns against getting lulled into assuming it is always just ./configure --prefix=/tools! I have changed my sample timing command to:



\time -f "\\n================\\nTotal time: %e seconds\\n================" bash -c "./configure --prefix=/tools && make && make install"


I keep this in a file and just copy/paste it while working in an Emacs shell. But every now and then, a configure slips in that wants just a little more...



I finished up Chapter 5: Constructing a Temporary System without too many problems. I've really been enjoying the in-depth exploration of exactly what gets built and why. I do wonder about some of the unapplied patches, though. Perhaps because this is just the "bootstrap" system, they aren't important at this point. I also thought it quaint that the book worries about disk space and gives advice on how to save a whole 95 MB by stripping the apps and removing some doc files :)



Edit: it turns out the manual, not surprisingly, does talk about the unapplied patches here:

Several of the packages are patched before compilation, but only when the patch is needed to circumvent a problem. A patch is often needed in both this and the next chapter, but sometimes in only one or the other. Therefore, do not be concerned if instructions for a downloaded patch seem to be missing



The series so far:



  1. Scratching the Linux Itch, pt. I

  2. Scratching the Linux Itch, pt. II




Thursday, November 12, 2009

Scratching the Linux Itch, pt. I

So I'm going to try a Linux From Scratch build and installation, just because I need another pointless project to write about :)



Here is the result of my version-check.sh script on my openSUSE 11.0 system:




bash, version 3.2.39(1)-release
/bin/sh -> /bin/bash
Binutils: (GNU Binutils; openSUSE 11.0) 2.18.50.20080409-11.1
bison (GNU Bison) 2.3
/usr/bin/yacc -> /usr/bin/yacc
bzip2, Version 1.0.5, 10-Dec-2007.
Coreutils: 6.11
diff (GNU diffutils) 2.8.7-cvs
find (GNU findutils) 4.4.0
GNU Awk 3.1.5h
/usr/bin/awk -> /bin/gawk
gcc (SUSE Linux) 4.3.1 20080507 (prerelease) [gcc-4_3-branch revision 135036]
GNU C Library stable release version 2.8
GNU grep 2.5.2
gzip 1.3.12
Linux version 2.6.25.20-0.5-pae (geeko@buildhost) (gcc version 4.3.1 20080507 (prerelease) [gcc-4_3-branch revision
135036] (SUSE Linux) ) #1 SMP 2009-08-14 01:48:11 +0200
m4 (GNU M4) 1.4.11
GNU Make 3.81
patch 2.5.9
Perl version='5.10.0';
GNU sed version 4.1.5
tar (GNU tar) 1.19
makeinfo (GNU texinfo) 4.11
Compilation OK


So it looks pretty good so far. It's always scary reformatting an existing partition. I have 3 primary 50 GB partitions on a 250 GB hard drive that I use to play with Linux distros. The fourth partition is an extended one with 2 logical partitions: a 90 GB /home partition and a 2 GB swap. One of the primary partitions houses my regular distro, openSUSE 11.0, while the other 2 currently hold an openSUSE 11.1 and an Ubuntu 9.04, neither of which I've used recently. So I'll just use the Ubuntu one, which is /dev/sdb3 and already created, so I only need to reformat it:



# mke2fs -jv /dev/sdb3
mke2fs 1.40.8 (13-Mar-2008)
Warning: 256-byte inodes not usable on older systems
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
2883584 inodes, 11520613 blocks
576030 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
352 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.


So far so good. Mount it on /mnt/lfs and we're good to go. On to Chapter 3.



One small confusion about getting the matching packages: the manual says an easy way to download all the packages is "by using wget-list as an input to wget", but it doesn't explain what that means. So here's what I did:



# cd $LFS/sources
# wget -i wget-list

[... 77 downloads later ...]

FINISHED --2009-11-12 09:22:42--
Downloaded: 77 files, 257M in 39m 57s (110 KB/s)


Actually, it turns out there is a slight clarification in the FAQ here, although it still doesn't clearly mention that you should cd into the $LFS/sources folder before running the wget command. Maybe I'm being a little too anal, but I think every little bit helps. So it took about 40 minutes to download all 77 packages. Now, on to Chapter 4, Final Preparations.



My next bit of anal retentiveness comes in section 4.4, Setting Up the Environment. The book suggests a way to create a .bash_profile that restarts the bash shell in a "good" (i.e., clean) working environment. But that doesn't quite work on my openSUSE distro, because there is a system-wide /etc/bash.bashrc that gets executed, "cluttering" up the environment even when the system-wide /etc/profile isn't. So I modified the .bash_profile to be:



exec env -i HOME=$HOME TERM=$TERM PS1='\u:\w\$ ' /bin/bash --norc ./bash-startup


The --norc prevents both the system-wide and the user .bashrc files from executing. And I copied the .bashrc the book says to create to bash-startup and appended a couple of lines, leaving me with:



set +h
umask 022
LFS=/mnt/lfs
LC_ALL=POSIX
LFS_TGT=$(uname -m)-lfs-linux-gnu
PATH=/tools/bin:/bin:/usr/bin
export LFS LC_ALL LFS_TGT PATH

# my added lines to create a brand new, empty shell:
export PS1='\u:\w\$ '
exec bash --norc --noprofile


So I added the PS1 export and then re-exec bash, this time with neither the rc files nor the profile files getting executed, just to be safe. Even then, there are a bunch of environment variables left, but I think they are just ones that bash sets up for itself:



$ su - lfs
Password:
lfs:~$ set
BASH=/bin/bash
BASH_ARGC=()
BASH_ARGV=()
BASH_LINENO=()
BASH_SOURCE=()
BASH_VERSINFO=([0]="3" [1]="2" [2]="39" [3]="1" [4]="release" [5]="i586-suse-linux-gnu")
BASH_VERSION='3.2.39(1)-release'
COLUMNS=80
DIRSTACK=()
EUID=1003
GROUPS=()
HISTFILE=/home/lfs/.bash_history
HISTFILESIZE=500
HISTSIZE=500
HOME=/home/lfs
HOSTNAME=touch
HOSTTYPE=i586
IFS=$' \t\n'
LC_ALL=POSIX
LFS=/mnt/lfs
LFS_TGT=i686-lfs-linux-gnu
LINES=24
MACHTYPE=i586-suse-linux-gnu
MAILCHECK=60
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
PATH=/tools/bin:/bin:/usr/bin
PPID=16656
PS1='\u:\w\$ '
PS2='> '
PS4='+ '
PWD=/home/lfs
SHELL=/bin/bash
SHELLOPTS=braceexpand:emacs:hashall:histexpand:history:interactive-comments:monitor
SHLVL=1
TERM=vt100
UID=1003
_=bash
lfs:~$


Now that the environment is set, time to move on to Chapter 5.



The series so far:



  1. Scratching the Linux Itch, pt. I

  2. Scratching the Linux Itch, pt. II




Tuesday, November 10, 2009

Linux Format 125

Linux Format Dec 2009 Cover

My favorite Linux magazine, by far, is Linux Format, a UK based magazine that is just chock-a-block full of great Linux info. They also run the informative Tux Radar blog. So I thought I would just give a rundown each month of the highlights from the most recent issue.


In issue 125, dated December 2009, the cover story is Remix Linux, and there's some great stuff in there about creating your own personal distro. They start with apps that let you create your own mix, working from the Ubuntu Customization Kit (UCK for short) through SUSE Studio, and include quick shoutouts for Revisor (the Fedora UCK equivalent) and InstalLinux.com, a cool web site for building a distro.


The next step is to build your own Arch Linux version. Looks involved but you get pretty much complete control. And speaking of hard core, the final step is a project I've always had on my back burner - Linux From Scratch. Here you build it all from sources, beginning with GCC. Follow along with the ebook and see what happens! Better than a video game, even.


My favorite sections, being the "app-aholic" I am, are the Reviews and the Hotpicks, along with the Roundup, where they examine a bunch of apps dedicated to one purpose. This month they looked at collection managers, and, being the packrat I am, I'm hoping to give this month's winner, GCStar, a whirl, especially after finding it in the openSUSE 11.0 KDE4 repository. I also tried to compile Choqok, a Twitter client, directly from the development snapshot, but my current KDE3 installation just doesn't seem to like compiling this KDE4 app. Other interesting apps this month include Bilbo (a blogging client now called Blogilo), Booh (generates static web albums ready to upload straight to your website), Jampa (yet another music player), Rednotebook (a diary, journal and notebook), and KMyMoney (a personal finance package). I played a bit with KMyMoney (again) but got stumped when I couldn't figure out how to import OFX files. It's also disappointing that there's no kind of web connect, but maybe I'll get back to it.


There are also several great tutorials, including writing a Python script to pull Google data and getting your backups straightened out using BackupPC (although I've always leaned towards Bacula myself).


All in all, a great deal, even for US$99 (go here for that deal). You get 13 issues a year, plus a jam-packed DVD in each issue, plus access to all the back issues in PDF form. The best deal in Linux!



Thursday, July 23, 2009

Mysterious Icewind Port

This doesn't really have anything to do with Linux, but I needed to put this down somewhere so maybe it would help someone else. My friend and I like to play "hardcore" computer RPGs, cooperative style, across the Internet. We have a weekly session, playing for a couple of hours. It's really the only way I would ever complete a game, as I just don't have the persistence to do it myself. I play a bit, and pretty much regardless of how good or bad the game is, I'm always flying off to check out the next game.



Anyway, we finished Neverwinter Nights (we don't mind playing lagging-edge games) and went on to play Sacred, which we picked up cheap at the wonderful GOG (Good Old Games) site. It was fun for the first few weeks, but then there started to be problems with the lobby server, as it isn't a direct-connect game. It had become a bit of a grind anyway, so we started looking around for a new game to play.



Unfortunately, classic, hard-core, cooperative RPGs are few and far between these days. The definitive site for all things co-op, Co-optimus, has almost no entries at all for even an action RPG to play co-op. Not sure why the well has dried up. So we decided to go really olde school and try the classic Baldur's Gate. Many years ago, in a galaxy far far away, I actually played this a bit across the 'net, but I've never really played it all that much. As I had two copies of it, I gave one to my buddy and we sat down to play.



But we played the "Figure Out The Router/Firewall" game instead. For the next 2 hours, we tried to figure out why I couldn't connect to his server. This had worked fine for both NWN and Sacred, but whenever I typed in his IP address, BG would just pause for a long time and then come back with "Cannot Connect To Game Session". As my router is a virtually impossible-to-configure Juniper NetScreen router, he was hosting the games on his side. He made sure all the ports mentioned were opened, all to no avail. As well as the "official" ports (2300-2400 & 6073), someone also mentioned ports 1470, 15000, and, most importantly, ports 47624-49672. Not sure about those, but port 47624 is the DirectPlay port, so I'm not sure why it doesn't show up in the official docs.



But despite all these efforts, we just couldn't get our Baldur's Gate games to connect. So last night we tried Icewind Dale. Still a wonderful RPG (in fact, I like it more than Baldur's Gate, despite, or perhaps because of, its more linear and hack-n-slash nature), but a slightly newer engine, especially when updated with the Heart of Winter expansion and its Trials of the Luremaster patch/expansion. So we were hoping the updated engine would be more cooperative.



But, alas, we were wrong. I still got the dreaded "Cannot Connect" error. So we then tried Hamachi, a free VPN package. Still no go. As he's a network wonk, I was sure the firewall and router were correct on his end. And I don't run a firewall at all (you don't really need one if you're behind a NAT router), so it had to be my "user-unfriendly, pain-in-the-butt-to-configure" Juniper NetScreen NS5GT router. I fired up Wireshark (née Ethereal) and began sniffing packets.



I hosted the game on another internal computer and sniffed the traffic, comparing it to the packets I was sending to and receiving from my friend's machine. That's when I noticed something odd - my machine would send out a DirectPlay packet on port 47624 and get back the response on port 2300 on my internal connection; but, of course, it wouldn't get it back from my friend's computer, because routers will allow a response on the same port but not on a different one.



So I rolled up my sleeves, used some Black Belt Google Fu and found this page that describes the dozen or so steps(!) you need to go through to get it to work. I added 2300 & 47624 to my "Infinity Engine" Custom Service Object and set the policy to forward them to my game-playing machine. This got me connected, but not for long. It turns out I needed to add 2350 as well. I tried to add 2300-2400, as recommended somewhere, but the Juniper NetScreen complained about not enough ports in the pool, whatever the hell that means.



And now it works. So, the basic fix is to make sure that both players port forward UDP and TCP traffic for the ports mentioned here (2300-2400, 6073) and 47624 (the DirectPlay port). I'm not sure what 6073 is for, as I don't think either of us has it set up - maybe that's an old DirectPlay service port? Yup, it is according to this (DirectPlay 8), while it says 47624 is "directplaysrvr". Not sure what the difference is, unless one is a fallback for the other. At the very least, forward ports 2300 & 2350 - I think it uses the second one by default and will only use a different one if it finds that port is busy.



Thursday, July 2, 2009

Creator and emacsclient

Some notes on using emacsclient on KDE, as I'm trying to integrate it with Qt Creator, because Qt Creator doesn't have Emacs key bindings (so far, my biggest gripe). There's an (albeit painful) keystroke to pass the current file off to an external editor, so I'm trying to get that to work with emacsclient. So far, here's my external editor command:



emacsclient -n +%l:%c %f



The -n tells it not to wait for the server to relinquish control of the file - I just want to edit it. +%l:%c says to set the cursor at the line and column specified. The Info and man pages for emacsclient don't document this correctly, as they are both missing the colon.
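For this to do anything, Emacs has to be running its server - M-x server-start, or (server-start) in your ~/.emacs - and then you can test it from a shell (the file name here is just an example):

$ emacsclient -n +120:5 mainwindow.cpp

That jumps the running Emacs to line 120, column 5 of the file and returns the shell prompt immediately.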



Now I'm trying to figure out how to bring it to the front. Burying Emacs behind a bunch of windows isn't much help. I did come across a pretty cool KDE keyboard shortcut - Ctrl-Alt-A - which brings to the front the window "demanding attention", but that's not all that much help. Unfortunately, it doesn't look like a problem that has been solved for window managers that don't allow "focus stealing".





Tuesday, June 23, 2009

Information Please

An article on Lifehacker featured an interesting utility called iotop, which gives a bird's-eye view of what your hard drive is up to. The commenters mentioned a couple of other interesting tools, although many seemed confused about what iotop measures as opposed to some of the other utilities. I thought I'd do a quick write-up about a few of these command-line machine status reporters.



The most basic status tool is, of course, ps. In number of options, it competes with ls for the most in the Linux manual! This is because it has 3 different command "modes" - UNIX, BSD and GNU versions - which are largely incompatible with each other. ps gives you a basic snapshot of what programs are running.
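To give a feel for the mode split (these are just the common idioms, not an exhaustive list):

$ ps aux                        # BSD-style: all processes, user-oriented columns
$ ps -ef                        # UNIX-style: all processes, full format
$ ps aux --sort=-%mem | head    # GNU long option mixed in: biggest memory users first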



While ps is always around, the rest of these usually need special installation, although the next one, top, is almost always installed too. top adds more information but, most importantly, runs in a curses display, so it stays around and updates itself, giving you a constant view of what your CPU is up to. You can see it change dynamically as programs work your CPU harder. Use the 'h' key to display a help screen with more sorting and display options.



View top in action
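If you want a one-shot snapshot you can capture in a script rather than the interactive display, top has a batch mode (the head is just to trim the output):

$ top -b -n 1 | head -15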


The most flexible version of top is htop. This gives you even more information, more sorting options and an even better curses display. Again, use the 'h' key to get a full help display. I especially like htop for how it displays the complete command line of each process.



View htop in action


iftop uses a top-like display to show you what is going on with your network interface. It displays the various network connections, who is making them and how much traffic is coming and going on each. This is an especially useful command on a server, as you can see what your web server is working on, and it keeps useful totals.


View iftop in action
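iftop usually needs root, and I find the -n flag (skip DNS lookups) keeps the display readable; eth0 here is just whichever interface you want to watch:

$ sudo iftop -n -i eth0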



iotop again uses a top-like display, this time to give you a real-time, in-depth view of disk I/O. This is an especially useful diagnostic tool if you notice your hard disk is "thrashing" - i.e., the red access light is doing a disco-like strobe effect and you're wondering who is doing all that hard drive dancing.



View iotop in action
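iotop also wants root, and its -o flag trims the list down to only the processes actually doing I/O right now:

$ sudo iotop -o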


saidar is another monitoring tool that shows a nice overview of everything going on in your computer: useful information like CPU load, swap usage, disk space, and network traffic. A good, quick, clean overview.


View saidar in action


Next time, perhaps I'll take a look at some of the graphical system monitoring tools available for KDE and GNOME. If you have any other favorite terminal-based monitors, please mention them in the comments.




Wednesday, June 10, 2009

Screencasting the Creator

I'm trying out Nokia's fancy new Qt Creator IDE for developing Qt 4.5 applications. Looks pretty nice, even if Qt itself is a little fugly, given all the weird macros it uses to get its work done. Maybe this IDE will shield me from the worst of it?



Anyway, I'm going through the recommended book (C++ GUI Programming with Qt 4, Second Edition) on Safari and am finding it a bit of a chore. It works with the older GUI design tool, Qt Designer, not Qt Creator. As far as I can tell, there isn't much introductory material for Qt Creator, so I'll try to puzzle out the answers to some of the oddities and document them here.


So what I've done is create my very first screencast! I am using recordMyDesktop via its GUI front-end, qt-recordmydesktop, which works pretty well. I had the usual audio hassles, trying to figure out how to set up the mic for recording. I could never get it to use my USB headset directly, but luckily this lovely Plantronics headset comes with regular 3.5mm jacks that plug into a USB adapter, so by plugging the jacks into the back of the audio board I could get it to record, after trying various options in KMixer.
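For what it's worth, recordMyDesktop can also be driven straight from the command line; something along these lines should work, though the --device value is just an example - it has to match whatever ALSA name your mic shows up as:

$ recordmydesktop -o screencast.ogv --fps 15 --device plughw:0,0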


Coincidentally enough, the most recent issue of Linux Format magazine has a Roundup of screencasting tools. Their pick is the closed-source DemoRecorder, but it doesn't seem to offer anything over the free recordMyDesktop besides the ability to save in multiple formats. recordMyDesktop only outputs Ogg Theora video, but all the upload sites convert it to Flash video anyway, so I don't really care.



I had a few problems getting the screencast to work. First off is a bug in Qt Creator where, if you rename the main widget, the compiler gets all weirded out. Took me a while to figure that one out. The workaround is to rename it from the Project creation dialog.



Then I was having problems getting the post-recording conversion to work. It would only convert some of the video, or not the complete audio track. I'm not sure what the workaround was for that; either I finally got lucky, or just leaving the computer alone until the conversion was done made it work. Finally, I got the 6-minute screencast to work.



I am using blip.tv to host the video. I've heard it does the best job of conversion and has the fewest restrictions on formats, as well as giving the most options for display. So, without further ado, here is my very first screencast: