Monday, December 14, 2009

The Linux Command Line - First Edition Is Now Available!

After a two-year effort, I have released the first edition of "The Linux Command Line" in all of its 522-page glory!

I want to give special thanks to Mark Polesky for his extraordinary review and test of the book, as well as the rest of the review team: Jesse Becker, Tomasz Chrzczonowicz, Michael Levin, and Spence Miner.

Special thanks also go out to Karen Shotts for editing all of my so-called English.

To download the free PDF version of the book, go to https://sourceforge.net/projects/linuxcommand/files/ and get the file TLCL-09.12.pdf.

You may also purchase a printed version of the book in large, easy-to-read format (which is, in fact, very handy) by going to: http://www.lulu.com/content/paperback-book/the-linux-command-line/7594184

Enjoy!

Thursday, November 19, 2009

The Linux Command Line - Fourth Draft Now Available

The fourth draft of the book is now available. This version incorporates almost all of the review feedback received so far and has been edited through the final chapter. This draft does not yet include an index, but is otherwise close to being finished.

The new draft, named TLCL-09.11.pdf, is available here.

Enjoy!

Saturday, October 3, 2009

The Linux Command Line - Third Draft Now Available

The third draft is now available. This version features reformatted and captioned tables, some of the changes suggested by the review team (more to follow), and a number of small additions. It also includes the edited versions of the first 18 chapters.

If you are working on the review, please switch to this version. The new draft, named TLCL-09.10.pdf, is available here.

Thanks for your help!

Friday, August 14, 2009

The Linux Command Line - Second Draft Now Available

Hi Everyone,

I just posted the second draft of my book. This incorporates items from my "to do" list. It does not yet include any changes from the review team. Reviewers may switch to this version if they wish (just indicate that you are reviewing version 09.08 of the book) but may also continue with the first draft. The changes in this version are not extensive but it should read a little better. A few new items were added, and the table of contents now provides links to the individual chapters making navigation somewhat easier. Enjoy!

The new PDF is named TLCL-09.08.pdf and is available here.

Wednesday, July 22, 2009

I'm Looking For Reviewers

Sorry about my long absence, but as you will see, I have an excuse:

Hello All,

I have just finished the first draft of a book I'm writing titled "The Linux Command Line," to be released under a Creative Commons license. With the initial writing completed, it's time for some editing and review, so I'm looking for volunteers. In particular, I need technical reviewers who can gently point out my technical and historical errors, and I need less experienced users who can find areas where my explanations are unclear. Don't worry about grammar, spelling, and such; I have a "real" editor for that. At this stage I need gurus and sample users to test this thing.

The book is fairly long (about 475 pages) so a good review will take some time and effort on the part of any volunteers. If you make a serious contribution, I will add your name to the list of contributors in the "Acknowledgments" section of the first chapter. Don't laugh, that's all I got for doing a technical review on O'Reilly's "Bash Cookbook."

Unlike the Bash Cookbook, however, my book will be freely distributable in PDF format, though I am reserving the right to sell printed versions.

Please feel free to take a look at the draft. It can be downloaded from my Sourceforge site at:

http://downloads.sourceforge.net/sourceforge/linuxcommand/TLCL-09.07.pdf?use_mirror=master

If you decide that you'd like to help out with the review, let me know and I will provide further details.

Many thanks!

Thursday, April 9, 2009

"We're Linux" Video Finalists

The Linux Foundation has announced the finalists in their "We're Linux" competition. While I consider all the entries rather weak, my favorite is this one:

Project: Building An All-Text Linux Workstation - Part 7

Today, we will finish up with printing by taking a look at the command line tools provided by CUPS.

CUPS supports two different families of printer tools. The first, Berkeley (or LPD), comes from the Berkeley Software Distribution (BSD) version of Unix; the second, SysV, comes from the System V version of Unix. Both families include comparable functionality, so choosing one over the other is really a matter of personal taste.

Setting A Default Printer
A printer can be set as the default printer for the system. This will make using the command line print tools easier. To do this, we can either use the web-based interface to CUPS at http://localhost:631 or we can use the following command:

lpadmin -d printer_name

where printer_name is the name of a print queue we defined in Part 6 of the series.

Sending A Job To The Printer (Berkeley-Style)
The lpr program is used to send a job to the printer. It can accept standard input or file name arguments. One of the neat things about CUPS is that it can accept many kinds of data formats and can (within reason) figure out how to print them. Typical formats include PostScript, PDF, text, and images such as JPEG.

Here we will print a directory listing in three column format to the default printer:

ls /usr/bin | pr -3 | lpr

To use a different printer, append -P printer_name to the lpr command. To see a list of available printers:

lpstat -a
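Since pr does the column formatting and lpr merely delivers the result, we can preview a job before committing it to paper. This is just a sketch: head stands in for lpr at the end of the pipeline, and swapping lpr back in would actually print the listing.

```shell
# Preview the first few lines of the three-column listing;
# replace "head -n 5" with "lpr" to send it to the printer instead
ls /usr/bin | pr -3 | head -n 5
```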

Sending A Job To The Printer (SysV-Style)
The SysV print system uses the lp command to send jobs to the printer. It can be used just like lpr in our earlier example:

ls /usr/bin | pr -3 | lp

however, lp has a different set of options. For example, to specify a printer, the -d (for "destination") option is used. lp also supports many options for page formatting and printer control not found in the lpr command.

Examining Print Job Status
While a print job is being printed, you may monitor its progress with the lpq command. This will display a list of all the jobs queued for printing. Each print job is assigned a job number that can be used to control the job.

Terminating Print Jobs
Either the lprm (Berkeley) or cancel (SysV) command can be used to remove a job from the printer queue. While the two commands have different option sets, either one followed by a print job number will terminate the specified job.

Getting Help
The following man pages cover the CUPS printing commands:

lp lpr lpq lprm lpstat lpoptions lpadmin cancel lpmove

In addition, CUPS provides excellent documentation of the printing system commands in the help section of the online interface to the CUPS server at:

http://localhost:631/help

Select the "Getting Started" link and the "Command Line Printing And Options" topic.

A Follow Up On Part 4
Midnight Commander allows direct access to its file viewer and built-in text editor. The mcview command can be used to view files, and the mcedit command can be used to invoke the editor.

Further Reading

Other installments in this series: 1 2 3 4 5 6 7 8 9 10 11 12 13 14

Saturday, April 4, 2009

LinuxCommand.org (And Others) Under DDoS Attack

Since Thursday my domain registrar, register.com, has been under heavy distributed denial of service (DDoS) attack. This has, at times, made all or part of LinuxCommand.org unavailable. Here is the latest news:
Dear William,

Earlier today we communicated to you we were experiencing intermittent
service disruptions as a result of a distributed denial of service
(DDoS) attack – an intentionally malicious flooding of our systems
from various points across the internet.

We want to update you on where things stand.

Services have been restored for most of our customers including hosting
and email. However for some of our customers, services are not fully
restored. We know this is unacceptable.

We are using all available means to restore services to every one of
our customers and halt this criminal attack on our business and our
customers’ business. We are working round the clock to make that happen.

We are committed to updating you in as timely manner as possible,
please check your inbox or our website for additional updates.


Thank you for your patience.



Larry Kutscher
Chief Executive Officer
Register.com

Friday, April 3, 2009

Another NYT Story On Netbooks

Light and Cheap, Netbooks Are Poised to Reshape PC Industry

Tip: Redirecting Multiple Command Outputs

Let's imagine a simple script:
#!/bin/bash

echo 1
echo 2
echo 3
Simple enough. It produces three lines of output:
1
2
3
Now let's say we wanted to redirect the output of the commands to a file named foo.txt. We could change the script as follows:
#!/bin/bash

F=foo.txt

echo 1 >> $F
echo 2 >> $F
echo 3 >> $F
Again, pretty straightforward, but what if we wanted to pipe the output of all three echo commands into less? We would soon discover that this won't work:
#!/bin/bash

F=foo.txt

echo 1 | less
echo 2 | less
echo 3 | less
This causes less to be executed three times. Not what we want. We want a single instance of less to receive the output of all three echo commands. There are four approaches to this:

Make A Separate Script
script1:
#!/bin/bash

echo 1
echo 2
echo 3
script2:
#!/bin/bash

script1 | less
By running script2, script1 is also executed and its output is piped into less. This works but it's a little clumsy.

Write A Shell Function
We could take the basic idea of the separate script and incorporate it into a single script by making script1 into a shell function:
#!/bin/bash

# shell function
run_echoes () {
echo 1
echo 2
echo 3
}

# call shell function and redirect
run_echoes | less
This works too, but it's not the simplest way to do it.

Make A List
We could construct a compound command using {} characters to enclose a list of commands:
#!/bin/bash

{ echo 1; echo 2; echo 3; } | less
The {} characters allow us to group the three commands so that their output forms a single stream. Note that the spaces between the {} characters and the commands, as well as the trailing semicolon after the third echo, are required.
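One refinement worth knowing: when the list is written across multiple lines, newlines take the place of the semicolons, which reads more naturally for longer lists. The brace placement is still significant:

```shell
#!/bin/bash

# The same grouped list, one command per line; no semicolons needed
{
    echo 1
    echo 2
    echo 3
} > foo.txt

cat foo.txt
```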

Launch A Subshell
Finally, we could do this:
#!/bin/bash

(echo 1; echo 2; echo 3) | less
Placing the list inside () creates a subshell (another copy of bash) that executes the commands. This has the same result as enclosing the list of commands within {}, but with the added overhead of a new process. The real reason we would want to do this is if, instead of just redirecting the output, we wanted to put all three commands in the background:
#!/bin/bash

(echo 1; echo 2; echo 3) > foo.txt &
This doesn't make much sense for our echo commands (they execute too quickly to bother with), but if we have commands that take a long time to run, this technique can come in handy.
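Here is a sketch of where that pays off: group the slow commands in a background subshell, keep working in the foreground, then use wait to collect the results. sleep stands in for a genuinely long-running command.

```shell
#!/bin/bash

# Start the grouped commands in a background subshell...
(sleep 1; echo "step 1 done"; echo "step 2 done") > results.txt &

# ...carry on with other work in the foreground...
echo "foreground work continues"

# ...then block until the background group finishes before using its output
wait
cat results.txt
```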

Enjoy!

Friday, March 27, 2009

Project: Building An All-Text Linux Workstation - Part 6

In this installment, we will tackle printing. Recall from the second installment, when we installed Debian on the system, that we selected the "print server" group of packages to be included with our installation. This installed CUPS (the Common Unix Printing System) and related programs. This set of packages allows the system to print local print jobs and (if configured to do so) act as a print server for other systems on the local network.

Installing cups-pdf

One of the cool things we can do is install the cups-pdf package which supports a virtual printer that creates PDF files in lieu of actual printed output. After installation, we will take advantage of the package to demonstrate how to configure CUPS. Using apt-get we can install the package this way:

sudo apt-get install cups-pdf

Configuring CUPS

CUPS, by default, makes available a web-based configuration system. To access it, we will use w3m:

w3m http://localhost:631

which will open the web page on the local network interface using port 631. After the page opens, we will get a screen like this:



With this interface, you can configure:
  • Local USB, parallel port and virtual printers.
  • Remote IPP (Internet Printing Protocol) printers. CUPS will find them automatically if your network allows it.
  • Remote SMB (Windows shared) printers.
To add the PDF virtual printer, follow this procedure. Note that at some point you may be prompted to enter a user name and password for CUPS, since regular users are not allowed to make system-wide changes to the printing system. When prompted, enter the user name "root" and the root password to continue.

  1. Using the tab key, move the cursor to the link labeled "Add Printer" and press enter.
  2. We will see a new page with some input fields. They are delimited with square bracket characters ([]). To enter data into the fields, move the cursor to the field and press enter. A text prompt will appear at the bottom of the screen.
  3. In the first field, "Name:" we will enter the name of the printer to be added. This name is like a file name and should be one word with no spaces. We will call our new printer, PDF. Type the letters PDF at the text prompt and press enter.
  4. Press the tab key to move to the next field, "Location:" and enter "localhost" using the text prompt.
  5. Press the tab key to move to the next field, "Description:" and enter "CUPS-PDF Virtual Printer."
  6. Press the tab key to move to the link labeled "Continue" and press enter. After a few seconds, we will see a new screen with a pull-down box of printer devices. Using the tab key move the cursor to the field labeled "Device:" and press enter. The contents of the list will appear.
  7. Using the arrow keys, select the entry "CUPS-PDF (Virtual PDF Printer)" and press enter.
  8. Press the tab key to move to the "Continue" link. Press enter.
  9. After a few seconds, a new screen will appear, again with a pull-down box. This box is for selecting drivers. The CUPS-PDF driver does not really need this, so open the box labeled "Make:", select "Generic," and press enter.
  10. Move to the "Continue" link and press enter.
  11. The next screen will appear, and we can skip over its fields (they should already be correct). Move the cursor to the link labeled "Add Printer" and press enter.
  12. You will briefly see a screen announcing that the printer has been successfully added. It will be followed by a page of printer options. If you need to change any, such as default paper size, do so now. When done, move to the link labeled "Set Printer Options" and press enter.
  13. The final screen contains a summary of the printer setup and contains a list of links for controlling the printer including printing a test page.

At this point we are done. You may move to the "Home" link at the top of the page to repeat the process to add more printers.

That's all for this installment. Next time we will look at some printer commands that we can use with our new printing capability.

Further Reading

Other installments in this series: 1 2 3 4 5 6 7 8 9 10 11 12 13 14

Saturday, March 21, 2009

Project: Building An All-Text Linux Workstation - Part 5

In today's episode, we'll look at sudo and some text editing stuff. So fire up your text boxes and let's get going!

sudo
If you're an Ubuntu user, you probably already know a little about sudo. It's the command you use to get temporary administrative privileges for doing such things as editing configuration files and installing software. sudo allows users to enter their own password instead of the root password to perform privileged tasks. Further, sudo can be configured to allow specific users specific privileges. For example, a user can be allowed to only execute a specific command as the superuser.

sudo is invoked this way:

sudo command

where command is the command to be executed with escalated privileges.

One of the inconveniences of our all-text system is that we must either log in as root or use the su command to shut the system down, since only the superuser is allowed to do so. This is a sensible precaution, considering that Linux systems are designed to support multiple users at the same time.

To fix this problem, we'll install sudo on our system. We can do this with either aptitude or apt-get. As root, enter the following command:

apt-get install sudo

and the sudo package will be installed.

sudo is controlled by a configuration file named /etc/sudoers. The sudoers file is unusual in that it is meant to be edited only with the visudo program. You can actually edit it with any text editor, but visudo checks the syntax of the file to help prevent errors. This is important for any file with the security implications of sudoers.

To edit the file, we simply (as root again) enter the command:

visudo

and the following screen will appear:



Despite the name, the visudo program uses the nano text editor and not vi. This makes it much easier for new users. Our change to the sudoers file is very simple. Add the following line in the section labeled "User privilege specification", then save the file and exit the editor by typing ctrl-o followed by ctrl-x:

me ALL=(ALL) ALL

Substitute your user name for the name "me" and this configuration will allow you to execute any command with root privileges by entering your own password.

To allow a user of the system to run only the poweroff command (which performs the same function as shutdown but does not require a time to be specified), we could add this line to the file:

username ALL=(ALL) /sbin/poweroff

where username is the name of the user.
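Putting the two examples together, the relevant section of /etc/sudoers might look something like this. This is only a sketch: the user names are illustrative, and the NOPASSWD: tag (which skips the password prompt) is an optional extra.

```
# User privilege specification
root    ALL=(ALL) ALL
me      ALL=(ALL) ALL

# 'helper' may only power the system off, and needs no password to do it
helper  ALL=(ALL) NOPASSWD: /sbin/poweroff
```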

nano
As we have seen, our Debian system has the nano text editor installed by default. nano is a clone of an earlier editor named pico, which is included with the pine email program. pine has license issues that prevent it from being included with some Linux distributions, so a free replacement for its editor was developed.

To use nano, type the following:

nano textfile

where textfile is the name of a file to edit.

As text-based editors go, it's pretty easy to use. It provides a small list of commands at the bottom of the screen. Pressing the ctrl-g key brings up the help screen which displays all of its commands. nano does support the mouse. See the nano man page for details.

nano uses a global configuration file named /etc/nanorc and, if available, a local configuration file named ~/.nanorc. To make an initial copy of the local configuration file, copy the global file to your home directory:

me@linuxbox:~$ cp /etc/nanorc ~/.nanorc

One useful change we can make to the configuration file is the activation of syntax highlighting. This can be done by removing comments from several lines at the end of the file. The last section of the configuration file, the "Color setup" section, has a number of "include" statements which load syntax definitions. To enable syntax highlighting support, remove the comment symbol from the beginning of the line. Change lines like this:

## Bourne shell scripts
#include "/usr/share/nano/sh.nanorc"


to:

## Bourne shell scripts
include "/usr/share/nano/sh.nanorc"


for as many languages as you wish to support.

Syntax highlighting is automatically activated based on the file name extension of the file being edited. For example, if we edit a file named foo.html and the HTML syntax definition has been included in the .nanorc file, the foo.html file will be displayed with syntax highlighting when loaded. The colors can be toggled on and off by pressing alt-y. This presents a problem for shell scripts, however, as most do not have a file name extension. It can be solved in one of two ways: first, add the extension ".sh" to the file name, or second, invoke nano with the -Y sh option, which forces the selection of sh highlighting. It might be a good idea to add an alias such as:

alias nano-sh='nano -Y sh'

to your .bashrc file to make this more convenient.

vim
Real Linux users, of course, don't use nano. They use vim, the enhanced replacement for the traditional Unix editor, vi. The version of vim installed by default on our system is called vim-tiny, a subset of the full package. I recommend installing the full vim package. This can be done with apt-get like so:

apt-get install vim

I'm not going to cover vim in this posting as it would take too much space (it consumed a rather large chapter in my upcoming book), but there is a pretty good on-line book (in PDF format) that describes it in detail. See The Vim Book. Seriously, to be a real Linux person you should take the time to learn it. I know it's not easy, but you'll really enjoy lording it over the other Linux newbies once you do.

Well, that's all for this time. See you again soon!

Further Reading

Other installments in this series: 1 2 3 4 5 6 7 8 9 10 11 12 13 14

Wednesday, March 11, 2009

Project: Building An All-Text Linux Workstation - Part 4

In this installment, we will explore web browsing and install a couple of packages to help with file management.

A Text Web Browser?

Yes, there are such things. In fact, there are several available for Linux. Our Debian workstation has one installed by default. Called w3m, it is a full-featured browser that operates in text mode. So what can you do with it? Well, we won't be watching YouTube with it, that's for sure, but many well-written web sites (especially those designed to follow acceptable standards of accessibility) will render just fine. We can try it out:

me@linuxbox:~$ w3m linuxcommand.org

and after a few seconds we will see this:

The arrow keys will navigate, and the tab key will advance from link to link. Shift-h will bring up the help screens and shift-b will perform a "back" function (this will get you out of help too). Press the q key to quit w3m. The program can do tabbed browsing, render tables and has a number of command line tricks. This blog renders fine too, so you can now follow along directly on our all-text system.

Unfortunately, there is a bug that prevents w3m from using the mouse on the console to help with navigation. Fortunately, there are other browsers that you can install. See the link at the end of this article.

Automatically Mounting USB Devices

If we insert a USB flash drive into our system, we will see a kernel message appear on the screen. This is because the kernel sends its messages to the console in the hope that an ever-vigilant operator (that's you) is paying attention. However, this message means very little, as it only announces that the kernel has detected a device attached to the USB port. It does not mean that the system has done anything useful, like mounting the device. We could mount the device manually, but that is a nuisance.

To solve this problem we will install a package called usbmount that can automatically mount USB devices. We can do this using aptitude. Just search for the "usbmount" package and install it. We described the process in installment 3.

After the package is installed, we must modify its configuration file to allow support for VFAT file systems, the type most often used on USB drives. As a precaution, usbmount does not enable VFAT by default, since the kernel does not fully support the "sync" mount option on VFAT file systems. Normally, usbmount allows USB devices to be removed without unmounting. It does this by keeping file systems "synchronized"; that is, it immediately writes changes to the device rather than waiting to consolidate multiple writes for improved performance. With VFAT enabled, the user must issue a sync command before removing a USB VFAT device.

After the package is installed, we need to (as root) edit the /etc/usbmount/usbmount.conf file and change the following two lines:

FILESYSTEMS="ext2 ext3"

to:

FILESYSTEMS="ext2 ext3 vfat"

and:

FS_MOUNTOPTIONS=""

to:

FS_MOUNTOPTIONS="-fstype=vfat,gid=floppy,dmask=0007,fmask=0117"

After the configuration file is modified, usbmount will automatically mount VFAT devices. You will find the mount point in the /media directory. When a flash drive is inserted, we can verify the mount using df:

me@linuxbox:~$ df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hda1             18856292    980140  16918280   6% /
tmpfs                   160096         0    160096   0% /lib/init/rw
udev                     10240        88     10152   1% /dev
tmpfs                   160096         0    160096   0% /dev/shm
/dev/sda1                15560      5944      9616  39% /media/usb0
and we see that the drive has been mounted on /media/usb0. Just remember to use the sync command before removing the device, or really bad things may happen to the drive:

me@linuxbox:~$ sync

Midnight Commander

The last package we will install in this episode is Midnight Commander, a text-based file manager. Using aptitude, install the mc package and the following additional recommended packages: arj, bzip2, odt2txt, unzip, and zip. If you prefer, you can use apt-get to install the packages, as the mc package is a little hard to find using aptitude:

linuxbox:~# apt-get install mc arj bzip2 odt2txt unzip zip

After mc is installed we can fire it up:

me@linuxbox:~$ mc

and the following screen will appear:


The numbered blocks along the bottom of the screen correspond to the function keys F1-F10 and permit access to many of the program's functions, and it has a lot of them! Unlike w3m, the mouse is well supported by mc.

That's all for this installment. While you're waiting for Part 5, study Midnight Commander. It has a help function and a man page. That should keep you busy for a while! Also, if you are interested in other text-based web browsers, check out the Text Mode Browser Roundup from Linux Journal.

Further Reading

Other installments in this series: 1 2 3 4 5 6 7 8 9 10 11 12 13 14

Monday, March 9, 2009

Dvorak Likes Linux

I never thought that I would live long enough to see it, but John C. Dvorak, professional curmudgeon likes Ubuntu!

Thursday, March 5, 2009

Interview With Steve Bourne, Creator Of sh

Computerworld has a very interesting interview with Steve Bourne, the author of sh (the ancestor of bash) where he talks about the history of the Unix shell. A good read for you Unix history buffs out there.

Wednesday, March 4, 2009

Project: Building An All-Text Linux Workstation - Part 3

In this installment we'll learn how to navigate the text environment and install our first packages.

If you have gotten through parts 1 and 2 of our series, you now have a sparkling new Linux system that displays -- a prompt! But if you think it only displays one prompt, you'd be wrong, as we shall soon see.

Let's fire up our system and log in again as the root user.

One of the commands I often use is locate, which rapidly searches a small database of files installed on the system. locate is installed on our system but when we try to use it we get the following message:

linuxbox:~# locate foo
locate: can not open `/var/lib/mlocate/mlocate.db': No such file or directory

This message appears when you try to use the locate command before the database is created. Normally the database is rebuilt each night by a cron job, but unless you let the machine run overnight, the database will never get built. We'll fix this problem in a little bit, but to solve the problem in the meantime, we'll run the database update program manually:

linuxbox:~# updatedb

After it completes, locate should start to work. If you are unfamiliar with this wonderful utility, now is a good time to look at its man page.

If we enter the command:

linuxbox:~# locate foo

We will get back a long list of filenames containing the string "foo." Go ahead and do this. Notice how the list scrolls off the top of the screen. Next, type shift-PageUp and notice how we are able to scroll up the list. Shift-PageDown scrolls downward.

The terminal screen is only 80 characters wide but sometimes a command will output lines wider than 80 characters. This will usually cause the text to wrap but in some rare instances it will actually go off the edge of the screen. In these cases, you can scroll the screen sideways by using the shift-right arrow and shift-left arrow.

To see the next keyboard trick, type Alt-F2 and you will see a new login prompt. Actually you are seeing another virtual terminal you can log into. Go ahead and log in using your ordinary user name and password. Now try the who command:

me@linuxbox ~$ who
root tty1         2009-03-04 13:49
me   tty2         2009-03-04 14:28


As we can see, we have two users logged in. Type Alt-F3 and we'll see another virtual terminal. In fact, on our system, Alt-F1 through F6 provide separate terminals we can use. You can use Alt-right and left arrows to rapidly cycle through them.

Wouldn't It Be Great If The Mouse Worked?

Let's go back to the first terminal session where root is logged in by typing Alt-F1. Then start up the aptitude program:

linuxbox:~# aptitude

aptitude is a fancy, character-based way to install packages on Debian and other Debian-based distributions. It has many features and is a handy way to manage packages on our system. The apt-get program is also available. aptitude features a multi-pane screen:



It took me a few minutes to figure out the user interface, but after installing a few packages with it, I got the hang of it. We're going to use aptitude to install a couple of packages, and we'll use its search feature to help us out. Type / and a search prompt should appear. Enter "anacron" and it should find the package in its database.



You can use the tab key to toggle between the two display panes. With the package name highlighted, press the + key to mark the package for installation. Next, let's do another search, this time for "gpm". If it does not find it the first time, type "n" to search for the next occurrence. Repeat until it finds a package named simply gpm. This use of / (search) and n (next) is the same as the less program. Again, type + to mark the package for installation.

With our two packages selected, it's time to install them. We do this by typing "g" (for "go"). aptitude will display a summary of the actions it is about to take, and by pressing "g" a second time, the installation will commence. When installing gpm for the first time, you will see an error message about being unable to shut down the daemon. This may be safely ignored.

When the aptitude screen returns, type "q" to quit.

We have now installed anacron, which will make sure that periodic tasks (like running updatedb) take place even if the machine is not run continuously. We also installed gpm which should make the mouse work. Some of the programs that we will install in upcoming installments can use a mouse, but its most useful feature is that we can now use the mouse to copy and paste text, just like we can on the X display. The only difference is that the right button is used to paste rather than the middle button.

You can terminate your extra terminal sessions by typing "exit" and root can shut down the machine by:

linuxbox:~# shutdown -h now

That's all for today. See you again soon!

Further Reading

Other installments in this series: 1 2 3 4 5 6 7 8 9 10 11 12 13 14

Monday, March 2, 2009

Linuxcommand.org Terminal Screenshot Colour Scheme

I recently received this email from inquisitive reader Dermot:
Hi

I was going through your website http://linuxcommand.org tonight and find it really easy to follow how the commands and descriptions are laid out. I have a question about the screenshots you use.

The terminal screenshots have a black background, with "[me@llinuxbox me}$" in green and the commands to be run in white. How did you get this colour scheme? Ive been asking on Ubuntuforums and other forums, as it would really make the lines on which you enter commands stand out against the output results of those commands.

Please get back to me if you get a chance.

Thanks

Dermot
Ireland

Thanks Dermot for taking the time to write. The "screen shots" you refer to aren't actually screen shots at all. They're implemented in hand-coded HTML and CSS, but you can create the same effect on your own system. Here's how:

The contents of your prompt are defined in a shell variable called PS1. You can examine its contents like this:

me@linuxbox ~$ echo $PS1
${debian_chroot:+($debian_chroot)}\u@\h:\w\$

This example is from an Ubuntu system. Other distributions will be different.

To set the colors, first use the Edit -> Current Profile dialog in gnome-terminal to set the color scheme to "White on black":

Next, you need to add some ANSI color codes to the prompt string contained in PS1. To change the prompt to green and then back to its original colors (so that subsequent text will remain white), you need to change the prompt string to this (note the \[ and \] markers around each escape sequence; they tell bash that the enclosed codes take up no space on the screen, which keeps line editing from misjudging the prompt's length):

\[\033[0;32m\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$\[\033[0m\]

You can test your prompt by setting the PS1 variable this way:

me@linuxbox ~$ PS1="\[\033[0;32m\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$\[\033[0m\] "

After you are satisfied with the new prompt design, you can make it permanent by adding these two lines to your .bashrc file:

PS1="\[\033[0;32m\]${debian_chroot:+($debian_chroot)}\u@\h:\w\$\[\033[0m\] "
export PS1
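
Other colors follow the same pattern; only the SGR color number changes. As a sketch (the prompt_color helper is hypothetical, but the color codes are standard ANSI):

```shell
# A hypothetical helper: wrap an ANSI SGR color code in the \[ ... \]
# markers that tell bash the enclosed characters take up no screen space.
prompt_color() {
  printf '\\[\\033[0;%sm\\]' "$1"
}

# Standard SGR foreground colors: 31=red, 32=green, 34=blue, 36=cyan
PS1="$(prompt_color 34)\u@\h:\w\\$ $(printf '\\[\\033[0m\\]')"
```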

Hope this helps!

You can read more about configuring the prompt at the Linux Documentation Project. They have an excellent HOWTO document entitled:

The Bash-Prompt HOWTO

Friday, February 27, 2009

Project: Building An All-Text Linux Workstation - Part 2

In this episode, we will choose the Linux distribution and perform the installation.

Choosing A Distribution

After some consideration, I selected Debian 5.0 for our workstation project for three reasons:
  1. Debian is fairly simple on a conceptual level, with a straightforward design and good documentation. It's also capable of being installed on very small machines; its minimum memory requirement is only 44 MB.
  2. Its package repository is huge. With over 22,000 packages, we are likely to find a good selection of text-based applications for our project.
  3. It just came out and I wanted to play with it ;-)
Creating Installation Media

With that decision out of the way, we next decide on the installation media. I chose the "netinst" (also called the "minimal CD") option. This is a downloadable CD image that contains a minimal base system; additional packages are installed from the Internet. The install image is 150 MB, much smaller than the full install CDs. To use this option, you must have a working network connection. If you need other installation options, take a look at the Debian Installation Guide for guidance.

You can download the installation image from here.

The last step is creating the installation CD. I will assume that your present system, Linux or otherwise, is capable of that and that you know how to do it.

Installation

With our machine ready, it's time to boot up with our install CD. After the CD boots, you will see an attractive Debian splash screen listing various install options. For our purposes, select "Install", not "Graphical Install".

The installer is pretty easy to use, and in most cases the default selections are fine. The arrow keys move from selection to selection, as do the tab/shift-tab keys. The space bar is used to toggle check boxes.

  • When the prompt appears for disk partitioning, select "Guided - Use entire disk" and "All files in one partition".
  • At the "Set up users and passwords" screen, you will be prompted to set the password for the root account. If you have been using Ubuntu up to this point, I should explain that Debian, like most Linux distributions, has a discrete root account rather than using sudo for everything as Ubuntu does. Much of the early work we will perform on the system will require root access. Choose a root password that is both strong and easy to remember.
  • Next, you will be prompted to create your personal account. In keeping with LinuxCommand tradition, I named my machine "Linuxbox" and created a user account named "me". You, of course, can use any name you like.
  • At the "Select and install software" screen, you will be presented with a group of check boxes for different sets of packages to install. Using the space bar, select the "Print server" group and deselect the "Desktop environment" group.
  • If you have installed the system using the entire disk as suggested above, answer "yes" to the prompt on the "Install the GRUB boot loader on the hard disk" screen to install GRUB in the disk's master boot record (MBR).
The install should complete and prompt you to remove the install CD. Next, the machine will reboot and we should see the fruits of our labors.

A lot of boot messages will scroll by after the reboot, and at the very bottom you will see a login prompt like this:

Debian GNU/Linux 5.0 linuxbox tty1

linuxbox login:

Enter the name root and then the root password and we will see a prompt like this:

linuxbox: ~#

Enter the command shutdown -h now and the machine will shut down.

We're done for today.

Further Reading

Other installments in this series: 1 2 3 4 5 6 7 8 9 10 11 12 13 14

Wednesday, February 25, 2009

Project: Building An All-Text Linux Workstation - Part 1

When I see an old PC in the trash, I have a strong urge to rescue and adopt it like a stray puppy. My wife, of course, has very effective ways of constraining my behavior in this regard, so I don't have nearly as many computers as I want. I hate seeing computers go to waste. I figure if the processor and the power supply are still working, the computer should be doing something. So what if it can't run the latest version of Windows? It can still run Linux!

Over the next few weeks, I will show you how to take an old, slow computer and make it into a text-only Linux workstation with surprising capabilities, including document production, email, instant messaging, audio playback, USENET news, calendaring, and, yes, even web browsing.

Why would anyone want to build a text-only workstation? I don't know. Because we can! And besides, it's a great way to learn a bunch of command line stuff and that's why you're here, right?

So if you want to play along, find yourself a computer with at least the following:
  • Pentium processor or above
  • 64 MB or more of RAM
  • 2 GB or larger hard disk
  • PS/2 or USB mouse
  • A PCI Network card (no ISA or wireless please)
We're going to format over the existing software, so don't use a valuable machine for this project. Many machines from the Windows 98 era should be good candidates. I will be using an HP Pavilion (circa 1996) with a 600 MHz Celeron, 320 MB of RAM, and a 20 GB disk.

Good luck and I will see you again soon!

Further Reading

Other installments in this series: 1 2 3 4 5 6 7 8 9 10 11 12 13 14

Monday, February 23, 2009

Bash 4.0 Released Today

Version 4.0 of everyone's favorite shell program was released today. I imagine this will be showing up in new Linux distributions soon. A copy of the release announcement is here.

Friday, February 20, 2009

The Unix (Linux) Frame Of Mind

If you are migrating from Windows to Linux, one of the barriers you will encounter (besides the myriad superficial differences in the user interface) is the whole "culture thing."

Using a Unix-like operating system involves more than learning where the buttons are located on the desktop. It requires a different way of thinking about your computer. On top of that, Linux requires that you learn about your relationship with the Free and open source software communities. That will be the subject, no doubt, of many future postings, but for now we'll talk about the Unix frame of mind.

To understand Unix, you have to understand its historical context. Unix certainly has its faults, but if you understand its history, you can see that many of its "features" are due to the state of computing during its formative years. But it's also important to realize that just because something is old does not mean that it is necessarily bad. Take, for instance, the command line interface. If you listen to the pundits and the PC magazine writers, you would think that the command line is the equivalent of waterboarding to the modern computer user. I liken using a graphical user interface to watching television, and the command line to reading a book. They are very different, but both are valid (and valuable) if used in the proper way.

So what is "the Unix frame of mind?" It is based on some of the following principles:

The computer is multi-user. Unix was designed for "big" computers: big, expensive computers that supported many users at the same time. This has a number of implications. For one, it meant a different security model. On a multi-user computer, it is essential that one user cannot trash another user's work, or the entire system. It also implies that communities can form among the computer's users. If you have Fedora or any of the other Red Hat family of Linux distributions, you may have noticed that it runs sendmail by default. This is part of the Unix tradition. Users on a system send email to each other. The first form of instant messaging was the write command, which sent a text message to another user's terminal session.

Automate everything. Back in the 1960s, IBM used a marketing slogan: "Machines should work. People should think." Yet today, we have millions of office workers sitting in front of PCs pumping their mice all day. Why? It's because the computer is not doing the work; the human is. The Unix way is to instruct the computer how to do the work and then forget about it. Back in the 1990s, I marveled at the cron daemon, which could perform tasks on a scheduled basis. I remember leaving my PC on all night, letting it do the work I needed, and finding a summary of the results each morning. It did the work so I didn't have to.
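
The cron daemon is still the workhorse for this sort of automation. As a sketch (the 2:00 AM schedule and the script path are hypothetical), a crontab entry is five time fields followed by a command:

```
# minute hour day-of-month month day-of-week  command
0 2 * * * /home/me/bin/nightly-report.sh
```

Any output the job produces is mailed to the local user, in keeping with the tradition described above.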

Everyone programs. The computer is a tool, or rather, it is a huge collection of tools that can be arranged to perform work. To do this, you must program. You have to break your tasks into simple steps and arrange the tools to perform them. Unix actually makes this fairly simple. You can create pipelines of commands or write shell scripts. Have you noticed how, over the years, Windows has removed all the programming tools from its system? This keeps users helpless. If they want a solution, they have to go out and buy it. By contrast, every major Linux distribution includes the shell, perl, python, text editors, and thousands of tools for building solutions. If you need more, just about every programming tool under the sun is available for immediate download.
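
As a tiny illustration of arranging tools into a pipeline (the sample file and its contents are made up for the example), counting the most frequent words in a file takes one line:

```shell
# A sample file to work on (hypothetical content).
printf 'the cat and the dog and the bird\n' > /tmp/report.txt

# One word per line, lowercased, sorted, duplicates counted,
# then sorted by count with the most frequent word first.
tr -cs '[:alpha:]' '\n' < /tmp/report.txt |
  tr '[:upper:]' '[:lower:]' |
  sort | uniq -c | sort -rn | head -n 5
```

Each program does one small job; the pipe arranges them into a solution.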

A computer is not "easy to use." That is a marketing myth. A computer is no easier to use than a violin, but with much study and practice, both can produce beautiful music.

It all depends on your frame of mind.

Every Linux user should learn something about the history of Unix. Wikipedia is a good place to start:

http://en.wikipedia.org/wiki/Unix

Also check out Eric Raymond's The Art Of Unix Programming:

http://www.faqs.org/docs/artu/

Wednesday, February 18, 2009

A Brief History Of Printing

The following is an excerpt from my upcoming book, "The Linux Command Line" due out later this year.

To fully understand the printing features found in Unix-like operating systems, we first have to learn some history. Printing on Unix-like systems goes back to the beginning of the operating system itself. In those days, printers, and the way they were used, were much different from today.

Printing In The Dim Times
Like the computers themselves, printers in the pre-PC era tended to be large, expensive and centralized. The typical computer user of 1980 worked at a terminal connected to a computer some distance away. The printer was located near the computer and was under the watchful eyes of the computer’s operators.

When printers were expensive and centralized, as they often were in the early days of Unix, it was common practice for many users to share a printer. To identify print jobs belonging to a particular user, a banner page was often printed at the beginning of each print job displaying the name of the user. The computer support staff would then load up a cart containing the day’s print jobs and deliver them to the individual users.

Character-based Printers
The printer technology of that period was very different in two respects. First, printers of the time were almost always impact printers. An impact printer uses a mechanism that strikes a ribbon against the paper to form character impressions on the page. Two of the popular technologies of the day were daisy-wheel printing and dot-matrix printing.

The second, and more important, characteristic of early printers was that they used a fixed set of characters intrinsic to the device itself. For example, a daisy-wheel printer could only print the characters actually molded into the petals of the daisy wheel. This made printers of the period much like high-speed typewriters. As with most typewriters, they printed using monospaced (fixed-width) fonts. This means that each character has the same width, printing was done at fixed positions on the page, and the printable area of a page contained a fixed number of characters. Printers of the period typically printed ten characters per inch (CPI) horizontally and six lines per inch (LPI) vertically. Using this scheme, a US letter sheet of paper is eighty-five characters wide and sixty-six lines high. Taking into account a small margin on each side, eighty characters was considered the maximum width of a print line. This explains why terminal displays (and our terminal emulators) are normally eighty characters wide. It’s to provide a WYSIWYG (What You See Is What You Get) view of printed output using a monospaced font.
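
The page-geometry arithmetic is easy to check. A quick sketch using awk (any calculator would do):

```shell
# US letter: 8.5 x 11 inches at 10 CPI across and 6 LPI down.
awk 'BEGIN {
  printf "columns: %d\n", 8.5 * 10   # 85 characters wide
  printf "lines:   %d\n", 11 * 6     # 66 lines high
}'
```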

Data was sent to a typewriter-like printer in a simple stream of bytes containing the characters to be printed. For example, to print an “a”, the ASCII character code 97 was sent. In addition, the low-numbered ASCII control codes provided a means of moving the printer’s carriage and paper, using codes for carriage return, line feed, form feed, and the like. Using the control codes, it was possible to achieve some limited font effects, such as boldface, by having the printer print a character, backspace, and print the character again to get a darker impression on the page.
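
The overstrike trick can be sketched in the shell. This hypothetical overstrike function emits each character, a backspace (ASCII 8), then the character again, which is exactly the byte stream such a printer would receive:

```shell
# Emit "bold" text the line-printer way: character, backspace, character.
# On paper the double impression prints darker; a modern terminal just
# overwrites the same cell, so the effect is invisible on screen.
overstrike() {
  s=$1
  while [ -n "$s" ]; do
    ch=${s%"${s#?}"}     # first character of s
    printf '%s\b%s' "$ch" "$ch"
    s=${s#?}             # drop the first character
  done
  printf '\n'
}

overstrike "BOLD"
```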

Graphical Printers
The development of GUIs led to major changes in printer technology. As computers moved to more picture-based displays, so too did printing move from character-based to graphical techniques. This was facilitated by the advent of the low-cost laser printer which, instead of printing fixed characters, could print tiny dots anywhere in the printable area of the page. This made it possible to print proportional fonts (like those used by typesetters), and even photographs and high-quality diagrams.

However, moving from a character-based scheme to a graphical scheme presented a formidable technical challenge. Here’s why: the number of bytes needed to fill a page using a character-based printer can be calculated this way (assuming sixty lines per page, each containing eighty characters):

60 X 80 = 4800 bytes

whereas a three-hundred-dot-per-inch (DPI) laser printer (assuming an eight-by-ten-inch print area per page) requires:

(8 X 300) X (10 X 300) / 8 = 900000 bytes

The need to send nearly one megabyte of data per page to fully utilize a laser printer was more than many of the slow PC networks could handle, so it was clear that a clever invention was needed.
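
The two figures above can be reproduced with shell arithmetic:

```shell
# Bytes per page: character printer vs. a 300 DPI laser bitmap
# (8 x 10 inch printable area, 1 bit per dot, 8 bits per byte).
echo $(( 60 * 80 ))                     # character-based: 4800
echo $(( (8 * 300) * (10 * 300) / 8 ))  # laser bitmap: 900000
```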

That invention turned out to be the page description language (PDL). A page description language is a programming language that describes the contents of a page. Basically, it says, “go to this position, draw the character ‘a’ in ten-point Helvetica, go to this position...” until everything on the page is described. The first major PDL was PostScript from Adobe Systems, which is still in wide use today. The PostScript language is a complete programming language tailored for typography and other kinds of graphics and imaging. It includes built-in support for thirty-five standard high-quality fonts and the ability to accept additional font definitions at run time. At first, support for PostScript was built into the printers themselves. This solved the data transmission problem. While the typical PostScript program was very verbose compared to the simple byte stream of character-based printers, it was much smaller than the number of bytes required to represent the entire printed page.

A PostScript printer accepted a PostScript program as input. The printer contained its own processor and memory (oftentimes making the printer a more powerful computer than the computer it was attached to) and executed a special program called a PostScript interpreter, which read the incoming PostScript program and rendered the results into the printer’s internal memory, thus forming the pattern of bits (dots) that would be transferred to the paper. The generic name for this process of rendering something into a large bit pattern (called a bitmap) is raster image processing, or RIP.

As the years went by, both computers and networks became much faster. This allowed the RIP to move from the printer to the host computer, which permitted high-quality printers to be much less expensive.

Many printers today still accept character-based streams, but many low-cost printers do not. They rely on the host computer’s RIP to provide a stream of bits to print as dots. There are still some PostScript printers too.

Today's Site Updates

  • Thanks to sharp-eyed readers Dmitry Zhuravlev-Nevsky and Alexander Wireen, I have corrected a bunch of typos in the "Writing Shell Scripts" section of the tutorials.

Saturday, February 14, 2009

Peanuts And Software

I really enjoy factory tours. I've always had a fascination with how things work and how things are made. Over the years, I've been on some great tours, Syracuse China, Corning Glass, Ben & Jerry's Ice Cream, and best of all, the (now defunct) General Motors assembly plant in Baltimore.

In the past, it seemed like many companies offered tours of their factories to the public, and why not? The great companies were truly proud of what they were doing and were happy to show you. This even applied to products whose production processes are better left unseen.

In recent years, factory tours have gotten harder to find. I don't know the exact reason for this. Maybe it's the fear of litigation, or a desire to keep everything proprietary, I just don't know.

The alert among you may have noticed the story in yesterday's New York Times announcing that the Peanut Corporation of America has filed for bankruptcy and will be going out of business. As you may know, Peanut Corporation of America recently gained notoriety for knowingly shipping salmonella-contaminated peanut butter to its customers, resulting in over 600 reported illnesses and 9 deaths. As the scandal unfolded, reports surfaced of unsanitary conditions in its plants, including cockroaches, leaking roofs, and a dead rat in one of its peanut roasters.

Somehow, I don't think they offered a factory tour.

What does this have to do with software? Plenty. When you make something in the full light of day, it's a lot less likely that something horrible is going on under cover of secrecy. Open source code is like a factory tour for software. You can wander around and feel the pride of those who created it.

Proprietary software is different. What are they hiding? Does their program have "rats in the roasters?"

You'll never know.

Friday, February 13, 2009

Followup On My Dell Inspiron 530N

Had a problem with the monitor that came with my recently purchased Dell 530N with Ubuntu. After a couple of hours, the monitor would get brighter and brighter until it reached maximum brightness, and then, after a few minutes of that, the lamp in the monitor would go out, leaving me with a black screen. I could turn the monitor off for a few minutes (and presumably let it cool) and it would work for another couple of hours.

I got tired of that (Hey! I'm trying to write a book here!) so I called Dell's Ubuntu support number (1-866-622-1947) and got connected to an overseas call center. The representative had me perform a simple test: reboot the computer and press F2 to get to the setup screen, then just let it sit and see if the monitor would fail again. This test would eliminate the possibility of a video driver issue. Sure enough, after about 30 minutes, it went out again. I called back and talked to another representative, who was able to see what had transpired during the first call. She arranged for a replacement to be shipped overnight to me. A refurbished monitor arrived the next afternoon.

I'm pleased with the service. Their support staff was polite and professional, but I didn't have a chance to test their Linux skills.

Maybe next time.

Thursday, January 29, 2009

Adding Ubuntu-style sudo To Fedora 10

One of the neat things about Ubuntu is the absence of a discrete user account for root. This was an unusual idea when it was introduced a couple of years ago, and I myself had some initial doubts about it. But that was then and this is now. I have come to really enjoy this feature.

I recently installed Fedora 10 on one of my test systems and decided to see what was involved in getting a similar feature in Fedora.

The sudo command is governed by a configuration file named /etc/sudoers. This file defines the users who are allowed to use sudo and precisely which commands they are allowed to execute with elevated privileges. I have written definitions for this before, so I took a look at the Fedora 10 sudoers file to see how to add myself to the list of users that can execute any command as root.

What I discovered was that the Fedora 10 sudoers file thoughtfully provides the following lines:

## Allows people in group wheel to run all commands
# %wheel ALL=(ALL) ALL

This definition states that any user belonging to the group wheel will have full access to all commands. Just what the doctor ordered!

So this leaves me with two things to do:

  1. Uncomment the line in the sudoers file to enable the definition.
  2. Add myself to the wheel group.

The first task is pretty easy. I just fire up vi (as root) and edit /etc/sudoers (the visudo command, which checks the file's syntax before saving, is the safer way to do this). I only need to remove the leading pound symbol to make the change:

## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
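
For comparison, sudoers rules can be far narrower than the wheel line above. A sketch of the general form, user host=(run-as) commands (the user names and command here are hypothetical):

```
# user   host = (run-as)  commands
me       ALL  = (ALL)     ALL              # everything, like %wheel
backup   ALL  = (root)    /usr/bin/rsync   # a single command only
```

This is how sites grant specific users just the privileges they need rather than full root access.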


The second task is not very hard either. I bring up the graphical (yes, I'm cheating) user configuration tool, select my user account, press the "Properties" button, then select the "Groups" tab on the properties dialog. Scrolling to the bottom of the list, I check the box labeled "wheel":


Press OK and we're done!

Here are a couple of questions for all you geniuses out there:

  1. How would you add yourself to a group using only the command line tools?
  2. What is the history and meaning of the "wheel" group?

Have fun!

Monday, January 26, 2009

A "Command Line To The Rescue" Story

As you may recall from my earlier review of my new Dell desktop, I bought it to replace my previous Dell desktop, a Dimension 2400. Over the last couple of weeks I have installed a couple of operating systems on my old system: first the Windows 7 Beta (the first installation of Windows in my house since 1996), a sad story that I hope to write about soon, and over the weekend I formatted over that horror and installed Fedora 10.

I've used a lot of Red Hat products over the years and I favor them for administration; however, I have a growing fondness for Ubuntu because it seems to deliver the best desktop user experience. When it comes to Red Hat stuff, though, I'm pretty expert at installing it.

Now I have to digress a bit and talk about a phenomenon I have discovered about Linux. It is often useful to think about Linux as though it were a living thing. It's been said that Linux is based on evolution, not "intelligent design." This means that Linux evolves naturally in a series of fits and starts, rather than being the result of some grand plan. This is the opposite of proprietary software, which (if done properly) is the result of careful planning.

What this means is that sometimes a subsystem in Linux goes away and is replaced by a rewrite that addresses a need. And, of course, programmers prefer to rewrite code rather than maintain old code. Anyway, this periodic "churn" of code is natural to the process.

The problem is that sometimes things that worked in previous versions stop working in the new versions. This is fairly common in Linux, and it's perhaps its most off-putting characteristic.

One case in point is the display driver for Intel integrated graphics chips. Admittedly, some of these chips really suck, especially the earlier ones. The new ones are better and also get official driver support from Intel. Over the last year or so, Linux distributions have been deploying the new "intel" driver replacing the previous "i810" driver. The new driver is in every way much better than the old driver, except it doesn't like the older chips very much.

This brings us back to my Dimension 2400, which, as you may have guessed by now, uses one of the old chips.

So I pop the Fedora 10 install DVD into the drive and reboot the machine. The installer starts and gets to the first graphical install screen and completely hangs. Dead, inert. It doesn't look like I'll be doing a graphical install this time.

I power-cycle the machine and, just as the DVD starts to boot, I press the Escape key and get:

boot:

Wonderful! A boot prompt. Now I know what to do. I enter:

boot: linux text

which instructs the boot loader to boot the kernel called "linux" and pass the argument "text" to the system. On Red Hat-style systems this will invoke the text-mode installer. I answer all the prompts and the install gets underway. While I have some time to kill, I Google for a way to get the graphics working again. From my search, I determine that the problem isn't so much the new driver but rather that, by default, the new driver uses a new "acceleration method" called "EXA" rather than the previous method called "XAA". Making a one-line change to the /etc/X11/xorg.conf file will cause X to revert to the old method and solve the problem.

After the installation is complete, I reboot the machine and it comes up in text mode. This is caused by a configuration file called /etc/inittab, which contains a setting for the default run level: 3 for text mode and 5 for graphical mode on Red Hat-style systems. In text mode you get a login prompt:

Login:

I log in as root and create a personal account for myself:

useradd bshotts

passwd bshotts

After I create the account and set the password, I press Alt-F2 to get to the second virtual console. There I log in as bshotts and try to launch X. I do this by entering the command:

startx

The X server starts and almost gets to the desktop before it chokes again. Looks like it's time to fix that xorg.conf file. I restart the machine and log in as root again. I go looking around in the /etc/X11 directory and notice that there is no xorg.conf file. This is because with modern X servers the file is no longer required; all the configuration is done dynamically at runtime, which, in concept, is great when it works.

Bummer.

I think to myself, "there must be some kind of configuration program for X," so I start digging around in the man pages. I discover that the Xorg program has an option called -configure that will create an xorg.conf file based on the server's best guess of what the correct configuration should be. You must be root to use this option so I give it a try:

Xorg -configure

and it creates a file named xorg.conf.new in root's home directory. I copy this file to /etc/X11 which is where X expects to find it:

cp /root/xorg.conf.new /etc/X11/xorg.conf

Next, a little editing with vi and I add one line to the "Device" section of the file:

Option "AccelMethod" "XAA"
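
In context, the edited "Device" section of xorg.conf would look something like this (the Identifier string is whatever Xorg -configure generated; "Videocard0" here is illustrative):

```
Section "Device"
    Identifier  "Videocard0"
    Driver      "intel"
    Option      "AccelMethod" "XAA"
EndSection
```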

After saving the edited file, I log in as bshotts and try the startx command again and, lo and behold, I have a working desktop! I confirm that the graphics seem correct and log out of GNOME. This returns me to the text console. Since I want the system to work in graphical mode by default, I need to change the default run level. I do this by returning to my root console session and editing the /etc/inittab file. I change the last line to read:

id:5:initdefault:

changing the "3" to a "5" to set the run level to graphical. After saving the file I reboot the machine and everything works. New installation successful.
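
For the record, that edit can also be done non-interactively with sed. A sketch against a scratch copy of the line (on the real system the file is /etc/inittab and editing it requires root):

```shell
# Demonstrate the runlevel change on a throwaway copy of the line.
printf 'id:3:initdefault:\n' > /tmp/inittab.demo

# Flip the default runlevel from 3 (text) to 5 (graphical), in place.
sed -i 's/^id:3:initdefault:$/id:5:initdefault:/' /tmp/inittab.demo

cat /tmp/inittab.demo
```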

So what does all this teach us? It teaches us that sometimes there is no substitute for a good command line; a command line can work wonders. It teaches us that, with enough digging around, most problems are solvable. And most of all, it teaches us that you should keep learning and never give up.

Hope you had a good weekend too!

Interesting Story In The NYT

In my earlier post about the effect of the Asus Eee PC, I explained that it showed how Linux can be used to create "disruptive technologies." The New York Times seems to agree.

Sunday, January 4, 2009

Why use Linux?

I recently received this thoughtful question from a concerned reader:


Hi,

My name is Allan. I'm from Costa Rica. First of all, let me congratulate you because it's a little bit hard to find a good Linux site that explains everything about the shell (and how to start using commands) and how Linux works in a friendly clean way. I have always been interested in Linux (since I was 12 years old more or less, now I'm 25). But I'm still a rookie using it. Unfortunately most of us have been some kind of forced to depend (exclusively) on Windows because of all the software available, the GUI and the easiness of using it. We all complain about how bad Windows works, that's why I'm trying to be more involved on Linux. Yesterday, I just installed Ubuntu 8.10 Intrepid. Great OS so far.

I do not want to bother you, you should be very busy, but I have one big question. How can I stop using Windows if I need it for work and personal use (software like CS3, Paint Shop Pro, Ulead Video Studio, among many others). Maybe any Linux tech would tell me to look for an alternative GPL software. For example for Office, the option would be Openoffice (very nice soft!) but its not perfect or good enough recognizing many characters or text formats. For CS3 or Paint Shop, the option would be Gimp (nice one too) but it lacks of better user friendly funcionality or editing options. Maybe you can do almost the same thing with it but it's going to take you a lot of time work. I could give you many examples like those, but I think you already got the point good enough.

Please do not think that I'm saying that Linux sucks, that's not it at all, what I'm trying to say is that I realize that you all are working to make it better and better each day, more user friendly, more attractive for other branded companies (like Nero, Skype, Kaspersky, etc) but what does it take to let Linux runs everything what Windows runs. I have heard about Wine, but I also heard that only works fine on some specific softwares.

I just wanted to share my point of view. Maybe you can share yours with me so that I can understand better Linux.


First off, thank you, Allan, for taking the time to write. You raise an interesting issue that I'm sure confronts many people looking to migrate from Windows to Linux.

Changing computer platforms is a challenging problem for anyone, not just those seeking to move from Windows to Linux. It really boils down to a question of cost versus benefit.

So what are the costs of switching? Obviously, they include having to learn new applications and, perhaps worse, migrating your data to work with your new applications. We all experience this when switching applications, even if no platform change is involved. Using an application involves a certain amount of investment on the part of the user: an investment of time needed to learn the application and of time to reshape his or her "world" of data to fit the confines of the application's needs. If you have been using a particular platform and its applications for a long time, you probably have a lot of investment in it. So for many people the costs are high.

But what about the benefits? Those are a little harder to quantify. First, there is the economics, which should be rather cut and dried, but, for many personal computer users (as opposed to business users), it is not. If you actually paid for Photoshop, Office, and the other software you mention, you're talking about a lot of money (potentially thousands of dollars) spent on your application set. However, many personal computer users don't bother with the formality of license compliance and simply use unauthorized copies, which can be had at no cost. For those people, Linux offers no economic advantage.

The second benefit is this nebulous thing called "freedom." Some people express this in somewhat abstract terms, saying that it is a virtue in its own right. I tend to be a bit more pragmatic. I think that free software offers distinct practical advantages over proprietary software. I like the fact, for example, that I can install Linux distributions all day long and never worry about having to call a vendor and ask for permission to do so. I like the fact that, since the source code is freely available, many people can offer technical support. I like the fact that I never encounter timed demos, "crippleware," or "lite" versions of products. Any time I want to install something, I can just install the full version, no strings attached.

Then there are the technical benefits of using a Unix-like operating system: virus and malware resistance, file systems that don't require periodic defragmentation, a powerful command line interface, and the potential for almost limitless customization.

While these benefits are clear, many people do not have a clear picture of what a migration means. For many people who want to move from Windows to another platform, what they really seem to want is a "Windows" that does not have the problems they have been experiencing. So many times you hear, "I'd change to Linux, but I tried it once and it was different from what I'm used to."

Yes, Linux is different. Linux is very different, and to successfully move to it (or to any other platform, for that matter) you have to be willing to accept change. Some people really cannot do this. Their minds are not built that way. They learn just enough about "the computer" to do their jobs by rote. Platforms just don't matter to them.

Platforms matter to people who enjoy and care about computing. I use Linux as opposed to Windows because I really like computers. I enjoy using them and learning about them. I find that Linux helps me enjoy my computer much more than any version of Windows ever did.

As to your concerns about OpenOffice.org and GIMP versus Office and Photoshop, yes, they are different too. I'm currently writing a book with OpenOffice.org and I have found it very satisfactory. I used to write a lot with Word, and it was fine too. I don't see a lot of practical differences in what each program does, but there certainly are surface differences which may be difficult for some people. I suppose the same may be true of GIMP and Photoshop. I've never used Photoshop so I can't really comment, but from reading the comments of others, I sense that many people reject GIMP out of hand because it's not just like Photoshop. Both are very capable programs, and the field of digital image processing is an extremely technical one, which makes any truly capable application dauntingly complex. But with the exception of deep color support and some pre-press functions, GIMP is very comparable to Photoshop for many tasks. I use it routinely in my photographic work.

To sum up, change requires, well, change. Forward progress sometimes means giving up old ways of doing things and learning new ones. For anyone attempting it, the question remains, "Is it worth it?" Only you can know the answer to that.

Hope this helps.