Profile
Engels started playing with Linux® in 1991 and obtained his Red Hat Certified Engineer (RHCE), Red Hat Certified Instructor (RHCI), and Red Hat Certified Examiner (RHCX) certifications in 2002. He is in charge of Bluepoint's Total Linux®, Linux Kernel Internals®, Perl & Python Programming, and Extreme PHP curriculum and instruction development.
/* Conveniently yanked from the Bluepoint Institute profile page */
mortega.net
In loving memory of CPT Mario B. Mortega Sr., USAFFE, VET (1920-2004)
OpenVZ
Tuesday, Jul 4, 2006, 7:31 PM
OpenVZ is an operating system-level server virtualization solution built on Linux. It creates isolated, secure virtual environments (VEs), otherwise known as virtual private servers (VPSs), on a single physical server, enabling better server utilization and ensuring that applications do not conflict. Each VPS performs and executes exactly like a stand-alone server; VPSs can be rebooted independently and have their own root access, users, IP addresses, memory, processes, files, applications, system libraries, and configuration files.


INGREDIENTS

SRC
patch-026test015-combined.gz
http://download.openvz.org/kernel/devel/026test015.2/patches/patch-026test015-combined.gz

linux-2.6.16.tar.bz2
http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.16.tar.bz2


RPM
ovzkernel-2.6.16-026test015.2.i686.rpm
http://download.openvz.org/kernel/devel/026test015.2/ovzkernel-2.6.16-026test015.2.i686.rpm


RPM (FC5)
kernel-2.6.16-1.2111_FC5.026test012.i686.rpm
http://download.openvz.org/kernel/fc5/2.6.16-1.2111_FC5.026test012/kernel-2.6.16-1.2111_FC5.026test012.i686.rpm


SRC/RPM/RPM (FC5)
vzctl-3.0.10-1.i386.rpm
http://download.openvz.org/utils/vzctl/3.0.10/vzctl-3.0.10-1.i386.rpm

vzctl-lib-3.0.10-1.i386.rpm
http://download.openvz.org/utils/vzctl/3.0.10/vzctl-lib-3.0.10-1.i386.rpm

vzquota-3.0.8-1.i386.rpm
http://download.openvz.org/utils/vzquota/3.0.8/vzquota-3.0.8-1.i386.rpm


TEMPLATES
fedora-core-5-i386-minimal.tar.gz
http://download.openvz.org/template/precreated/fedora-core-5-i386-minimal.tar.gz

centos-4-i386-minimal.tar.gz
http://download.openvz.org/template/precreated/centos-4-i386-minimal.tar.gz

debian-3.1-i386-minimal.tar.gz
http://download.openvz.org/template/precreated/debian-3.1-i386-minimal.tar.gz

gentoo-20060317-i686-stage3.tar.gz
http://download.openvz.org/template/precreated/gentoo-20060317-i686-stage3.tar.gz

opensuse-10-i386-default.tar.gz
http://download.openvz.org/template/precreated/opensuse-10-i386-default.tar.gz

slackware-10.2-i386-minimal.tar.gz
http://download.openvz.org/template/precreated/contrib/slackware-10.2-i386-minimal.tar.gz

ubuntu-6.06-i386-minimal.tar.gz
http://download.openvz.org/template/precreated/contrib/ubuntu-6.06-i386-minimal.tar.gz


PROCEDURE

SRC
# cd /var/tmp/
# wget http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.16.tar.bz2
# tar xjpf linux-2.6.16.tar.bz2
# cd linux-2.6.16/
# wget http://download.openvz.org/kernel/devel/026test015.2/patches/patch-026test015-combined.gz
# gzip -dc patch-026test015-combined.gz | patch -p1

# vi Makefile

Makefile

LOOK FOR:
EXTRAVERSION =

CHANGE TO:
EXTRAVERSION = -openvz-026test015

# make menuconfig
# make bzImage
# cp arch/i386/boot/bzImage /boot/vmlinuz-2.6.16-openvz-026test015
# cp System.map /boot/System.map-2.6.16-openvz-026test015

MODULAR
# make modules
# make modules_install
# mkinitrd /boot/initrd-2.6.16-openvz-026test015.img 2.6.16-openvz-026test015
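
Before rebooting, it does not hurt to verify that the modules and the initrd image ended up where expected (just a sanity check; the paths follow from the EXTRAVERSION set above):

# ls /lib/modules/2.6.16-openvz-026test015/
# ls -lh /boot/initrd-2.6.16-openvz-026test015.img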

CLEAN-UP
# cd /var/tmp/
# rm -rf linux-2.6.16/ linux-2.6.16.tar.bz2

GRUB
# vi /etc/grub.conf

grub.conf

ADD:
title OpenVZ (2.6.16-openvz-026test015)
root (hd0,0)
kernel /vmlinuz-2.6.16-openvz-026test015
# Uncomment the next line if kernel is "modular"
# initrd /initrd-2.6.16-openvz-026test015.img

# reboot


RPM
# wget http://download.openvz.org/kernel/devel/026test015.2/ovzkernel-2.6.16-026test015.2.i686.rpm
# rpm -ivh ovzkernel-2.6.16-026test015.2.i686.rpm
# rm -f ovzkernel-2.6.16-026test015.2.i686.rpm
# reboot


RPM (FC5)
# wget http://download.openvz.org/kernel/fc5/2.6.16-1.2111_FC5.026test012/kernel-2.6.16-1.2111_FC5.026test012.i686.rpm
# rpm -ivh kernel-2.6.16-1.2111_FC5.026test012.i686.rpm
# rm -f kernel-2.6.16-1.2111_FC5.026test012.i686.rpm
# reboot


SRC/RPM/RPM (FC5)
# wget http://download.openvz.org/utils/vzctl/3.0.10/vzctl-3.0.10-1.i386.rpm
# wget http://download.openvz.org/utils/vzctl/3.0.10/vzctl-lib-3.0.10-1.i386.rpm
# wget http://download.openvz.org/utils/vzquota/3.0.8/vzquota-3.0.8-1.i386.rpm
# rpm -Uvh vzctl-3.0.10-1.i386.rpm vzctl-lib-3.0.10-1.i386.rpm vzquota-3.0.8-1.i386.rpm
# rm -f vzctl-3.0.10-1.i386.rpm vzctl-lib-3.0.10-1.i386.rpm vzquota-3.0.8-1.i386.rpm

# vi /etc/sysctl.conf

sysctl.conf

LOOK FOR:
net.ipv4.ip_forward = 0

CHANGE to:
net.ipv4.ip_forward = 1

# sysctl -p
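
To confirm that the change took effect (either command should now report 1):

# sysctl net.ipv4.ip_forward
# cat /proc/sys/net/ipv4/ip_forward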


TEMPLATES
# wget http://download.openvz.org/template/precreated/fedora-core-5-i386-minimal.tar.gz
# mv fedora-core-5-i386-minimal.tar.gz /vz/template/cache/
# vzctl create 101 --ostemplate fedora-core-5-i386-minimal
# vzctl set 101 --ipadd 1.2.3.4 --save
# vzctl set 101 --nameserver 1.2.3.53 --save
# vzctl set 101 --hostname host101.domain.com --save
# vzctl set 101 --onboot yes --save
# vzctl start 101
# vzctl exec 101 passwd


ACCESS
# ssh 1.2.3.4
# vzctl enter 101
# cd /vz/private/101/
# chroot /vz/private/101/


MANAGEMENT
# vzlist -a
# vzctl status 101
# vzctl stop 101
# vzctl restart 101
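
vzctl can also adjust a VE's resource limits on the fly; a rough sketch, with placeholder numbers (--diskspace takes a soft:hard pair of limits in 1K blocks):

# vzctl set 101 --diskspace 1048576:1153024 --save
# vzctl set 101 --numproc 200:220 --save
# vzctl exec 101 df -h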


MORE
# man vzctl
# man vzlist
# man vzquota
Nessus Closes Source
Wednesday, Oct 5, 2005, 2:44 PM
Just got the following heads-up from Fyodor via the nmap-hackers mailing list:

Date: Wed, 5 Oct 2005 14:43:55 -0700
From: Fyodor <fyodor at insecure.org>
To: nmap-hackers at insecure.org
Subject: Nessus closes source => How to help open source projects

Nmap Hackers,

In the last Insecure.Org Security Tools survey, you guys proudly voted Nessus #1. It complements the functionality of Nmap by going further to detect application-level vulnerabilities. Then in February of this year, Tenable changed the Nessus license to further restrict the plugins and require that you fax them a permission request form before you use Nessus for any consulting engagements. Renaud wrote to this list on Feb 8 (http://seclists.org/lists/nmap-hackers/2005/Jan-Mar/0001.html), explaining that their new slogan ("the open-source vulnerability scanner") was accurate because the engine was still open source.

Today, their slogan has changed to "the network vulnerability scanner", and you can probably guess what that means. In the announcement below, Renaud announces that Nessus 3 (due in a couple weeks) will be binary only and forbid redistribution. They say it will be free, for now, if you use the delayed plugin feed. They have also announced that Nessus 3 will be faster and contain various other improvements. They promise to maintain GPL Nessus 2 for a while, but I wouldn't count on that lasting long.

I am not taking a position on this move, but I do feel it is worth noting for the many Nessus users on this list. Tenable argues that this move is necessary to further improve Nessus and/or make more money. Perhaps so, but the Nmap Project has no plans to follow suit. Nmap has been GPL since its creation more than 8 years ago and I am happy with that license.

When asked why they are making this change, Renaud replied to the Nessus list today that open source hasn't really worked for Nessus because "virtually nobody has ever contributed anything to improve the scanning _engine_ over the last 6 years." This may be the most important and useful point we can take from this change. Open source really is a two-way street. The only way we (open source projects) can seriously compete with projects staffed by dozens or hundreds of paid full time developers is by having hundreds or thousands of volunteers each contributing a little bit part time. So if you are a heavy user of open source software, please think about how you can help out. Here are some ideas:

o If you are feeling ambitious, write and distribute your own little program to solve a problem you are having or that otherwise makes your life easier. It doesn't have to be anything big or fancy at first. Nmap started out as a little 2,000-line utility published in Phrack magazine. Post your creation to Freshmeat, or to nmap-dev if it relates to Nmap in some way. Hmm, I think there is a current vacuum in the open source vulnerability scanner field.

o Or take a more active coding role for an existing open source project. In the Nmap world, former Google SoC students are developing three promising projects: NmapGUI and UMIT are new GUIs and results viewers for large Nmap scans, and Ncat is a powerful reinterpretation of the venerable Netcat. Working code for all three of these is available if you join the Nmap-dev list (http://cgi.insecure.org/mailman/listinfo/nmap-dev) and I'm sure the respective authors (Ole Morten Grodaas, Adriano Monteiro, and Chris Gibson) would appreciate help, feedback, and testing.

o Find a bug in some open source software? Try to reproduce it with the latest version of the software and do some web searching to see if it is already known/fixed. If not, report it with full details about how to reproduce it and the platform and software version of the software you are running. It is even better if you can submit a patch which fixes the problem.

o Join the relevant mailing lists for the project and help out new users. Maybe you can write or translate some documentation, such as a tutorial for using the product or a HOWTO for using it to solve a common need.

o The Nmap Project does not accept financial donations, but many other projects do. If some little project does exactly what you need and saves you half a day of work or makes it into your regular-usage arsenal of tools, consider kicking the author back $5 or $10. Not only will it help defray costs of the project, but it shows the author that users really appreciate his/her work and thus makes a newer version more likely. Similarly, if you see an ad on the project web site that interests you, click on it and spend a couple minutes checking the product out.

o Spread the word! Commercial software houses pay to spread the word about their product in magazines, web sites, TV, conferences, etc. Open source projects such as Nmap can't. So if you find a project useful, don't hesitate to post a link on your web page and mention it (including the URL) on mailing list, newsgroup, and web forum posts.

Those are a few ideas, and I'm sure you can think of more based on your experience, expertise, and available resources. Rather than mope over the loss of open source Nessus, we can treat this as a call to action and a reminder not to take valuable open source software such as Ethereal, DSniff, Ettercap, gcc, emacs, apache, OpenBSD, and Linux for granted.

Cheers,
Fyodor

PS: Here is the Nessus announcement:

----- Forwarded message from Renaud Deraison <rderaison at tenablesecurity.com> -----

Date: Wed, 5 Oct 2005 12:16:45 -0400
X-Mailer: Apple Mail (2.734)
From: Renaud Deraison <rderaison at tenablesecurity.com>
To: nessus at list.nessus.org, nessus-announce at list.nessus.org
Cc:
Subject: [Nessus-announce] Nessus Roadmap / Nessus 3.0.0rc1 testers wanted

Hi everyone,

We are a few weeks away from releasing Nessus 3.0.0, and I'd like to take some time to explain our roadmap in this regard.

Nessus 3 / Nessus 2 Roadmap

Nessus 3 is a major enhancement of the key components of the Nessus engine - the NASL3 interpreter has been rewritten from scratch, the process management has changed to reduce the overhead of executing a plugin (instead of creating NxM processes, nessusd now only creates N processes), the way plugins are stored has been improved to reduce disk usage, etc...

Nessus 3 also contains a lot of built-in features and checks to debug crashes and mis-behaving plugins more easily, and to catch inconsistencies early.

As a result, Nessus 3 is much faster than Nessus 2 and less resource intensive. Your mileage may vary, but when scanning a local network, Nessus 3 is on average twice as fast as Nessus 2, with spikes going as high as 5 times faster when scanning desktop windows systems.

Nessus 3 will be available free of charge, including on the Windows platform, but will not be released under the GPL.

Nessus 3 will be available for many platforms, but do understand that we won't be able to support every distribution / operating system available. I also understand that some free software advocates won't want to use a binary-only Nessus 3. This is why Nessus 2 will continue to be maintained and will stay under the GPL.

To make things simple:

- Nessus 2 : GPL, will have regular releases containing bug fixes
- Nessus 3 : free of charge, contains major improvements

The two versions can share most of their plugins -- we intend to maintain backward compatibility whenever possible for most vulnerability checks. Some checks will only work on Nessus 3 (i.e., we are about to release a set of plugins to determine policy compliance), but the huge majority will work on either platform likewise.

Finally, the Nessus GUI has been split in a separate project (NessusClient) which is released under the GPL. The 'nessus' client in Nessus3 is CLI only, as it will be in Nessus 2.4.x. For a GUI, use NessusClient.

Testers needed

That being said, we are looking for experienced Nessus users who would want to try Nessus 3.0.0rc1. For the sake of simplicity, we would like users running on Red Hat ES3 or ES4 platforms or compatible. We are looking for people scanning big networks, mostly to collect performance information. Keep in mind that Nessus3 is CLI only, so you'll have to use NessusWX or be familiar with the CLI.

If you are interested in testing Nessus 3.0.0rc1, please drop me a line at (no @gmail/@hotmail/@anonymous accounts please).

Thanks,
-- Renaud
So There You Are!
Wednesday, May 25, 2005, 11:00 PM
And it took a year!

Linux a 'waste of money', says top MS exec
The latest approach in a campaign of misinformation in Asia.

Geoffrey P. Ramos, Computerworld and Kieren McCarthy, Techworld
25 May 2004

http://www.techworld.com/news/index.cfm?newsID=1615

Microsoft has come out fighting in Asia, telling governments in the region that adopting open-source software will damage their own economies and is a "waste of money".

Chris Sharp, director for platform strategy for Microsoft in Asia-Pacific, said governments that standardise on open-source software are hurting their local software vendors as they can't make the money needed to invest in their own software products.

Sharp, who used to work for Red Hat before joining Microsoft, said building open-source software is a "waste of money" and that a company was in effect giving away its intellectual property, preventing it from getting future benefits. "If you are compelled to give back to the community, then you don't have the opportunity to benefit from that knowledge," he stressed.

Sharp added that there are several myths surrounding open source. People tend to believe it is free, he said, but even companies that support open source are just as motivated by commercial interests as any other commercial software vendor. Apparently undermining his initial assertion about open-source ruining local software efforts, he pointed out that open source giants such as Red Hat and IBM are still after a return on their investments. "They are not for the greater good of the community; they are also after the money," he said.

He then contradicted himself again, adding that without getting back any commercial returns, a software company will find it difficult to invest in developing new software products. Intellectual property rights fuel sustained innovation, was his point. "With open source, there is no way to make more software."

This aggressive if confused approach comes after months of determined effort by the software giant to prevent Linux taking over as the de facto operating system in the world's largest expanding software market.

Two months ago, Microsoft's Asia-Pacific CTO Paul Moore told a bemused audience that no Asian governments were going with Linux. "I believe there is no government that has a policy going beyond recommending open source," he said, immediately contradicting himself with, "If so, I believe that is not to their best interest."

Sharp tried the same thing, saying that announcements that certain governments are deploying open source software are untrue. In many cases, he said, it's just one branch or agency of the government making the announcement, and it is not a government-wide purchasing policy.

An earlier, also failed, tactic in September, saw Microsoft's director of government affairs in Asia, Tom Robertson, explain that a deal by the Japanese, Chinese and South Korean governments to build their own open-source software was anti-competitive. Bizarre in the extreme you would think for a company investigated across the globe for abusing its monopoly.

So, Microsoft is not at all worried about open-source software destroying its market control because they are damaging themselves and no one is using open source anyway. It is so unconcerned in fact that the software giant has produced two cut-down versions of XP in very specific local languages and attempted to charge less than Linux, and failed to do so.

It is so unconcerned that it has rewritten its cardinal rule about one price for Windows across the globe. And is so unconcerned that its execs have been touring the region's governments non-stop for three months telling them why they should buy Microsoft. Yep, Microsoft is very relaxed about open source in Asia.
Thanks So Much Sir Engels!
Saturday, Mar 19, 2005, 1:17 AM
I was pleasantly surprised to receive the following e-mail from Chivas Sicam today:

Hi Sir,

I'd like to thank you because I really learned a lot about Linux because of your quality training at Bluepoint. I took the RH300 today and I passed. I got a 100%. :p. I'm an RHCE now. Thanks sir!

--
chivas

This is one of the reasons why I enjoy teaching. A student who remembers never fails to brighten up my day.
Trained by the Best
Thursday, Jan 20, 2005, 2:00 PM
Red Hat Asia-Pacific Chief Instructor produces first Filipino Red Hat Certified Examiner (RHCX)


Three years ago today, Richard Keech, RHCE, RHCX conducted my RHCE and RHCX training & certification in his capacity as Red Hat Asia-Pacific's Chief Instructor. Before we parted ways, he congratulated me for becoming the first Filipino RHCX. I told him that I was trained by the best.

http://bluepoint.com.ph/index.php?entry=20020120140031
From Source to Binary: The Inner Workings of GCC
Friday, Dec 17, 2004, 9:39 AM
Here's a well-written article from Red Hat Magazine.

Red Hat Magazine
Issue #2 December 2004
http://www.redhat.com/magazine/002dec04/features/gcc/

From Source to Binary: The Inner Workings of GCC
by Diego Novillo

* Introduction
* The Compilation Process
* Internal Organization of GCC
* Overcoming Architectural Limitations
* Tree SSA
* Conclusions
* About the Author
* Bibliography

Introduction
Computers are very complex and, at the same time, very dumb objects. They need to be told in excruciating detail exactly what to do. Even worse, they only understand sequences of bits (1s and 0s), so for instance, simple operations like adding two integer numbers may require a sequence of about 48 bits on a modern Pentium processor. Imagine what it would take to implement an Operating System.

Needless to say, we don't usually program computers using strings of bits. Instead, we write all the instructions in a language we can understand and then have it translated into the appropriate sequence of 1s and 0s that the computer can understand. The concept of translation in the computer world is pervasive. Everything you tell the computer to do needs to go through a translation process. Sometimes this translation is done on-the-fly (for instance, Perl scripts and spreadsheets) in a process called interpretation. Other times the translation is done once before the program starts (for instance, C or C++ applications) in a process called compilation. There are also hybrid approaches that are becoming popular which mix interpretation and compilation in a process called just-in-time compilation (JIT).

In this article, we explore how compilers work internally by examining GCC, the GNU Compiler Collection, which has become one of the most popular compilers in the Unix world. We will begin by describing the anatomy of a generic compiler and continue with an overview of GCC's internal organization. Finally, we will describe recent architectural advances in GCC's core infrastructure that are paving the way for more powerful features that were difficult or impossible to implement in earlier versions of GCC.

Note that this is intended to be a very high-level overview. Compilers are fairly complex software systems (GCC consists of about 2.1 million lines of code and has been in development for over 15 years) and we will gloss over many of the gory details. If you are interested in GCC or compilers in general, please refer to the links listed in the section called “Bibliography”.

The Compilation Process
A compiler takes a program written in one language (source code) and transforms it into a semantically equivalent program in another language (object code). The best way of thinking about a compiler is that of a pipeline that gradually converts the program into different intermediate shapes until it reaches the final object code.

There are three major components to this pipeline: the Front End, which reads and validates the syntax of the input program; the Middle End, which analyzes and transforms the program into a more efficient form; and the Back End, which is responsible for the final mapping into object code.
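
This pipeline can be watched from the shell with GCC itself; the following is only a sketch (toy.c is a made-up file name) using the standard driver options: -E stops after preprocessing, -S after generating target assembly, -c after assembling an object file, and the last invocation links the final executable.

gcc -E toy.c -o toy.i
gcc -S toy.c -o toy.s
gcc -c toy.c -o toy.o
gcc toy.o -o toy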

Front End
Computers are not very good at dealing with free form text, so the front end takes away all the spacing, comments and other formatting and converts the original program into a more concise form called intermediate representation (IR). These representations are nothing more than internal data structures that are much easier to manipulate than the original stream of text.

In general, compilers manipulate more than a single representation for the program. Initially, the Front End parses the input program and builds a data structure called Abstract Syntax Trees (AST), which represent each statement in a hierarchical structure. For instance, given the statement x = y - z * 3, its corresponding AST is shown in Figure 1, “Abstract Syntax Tree for x = y - z * 3”.


Figure 1. Abstract Syntax Tree for x = y - z * 3

The Front End is responsible for validating the syntactical structure of the input program, emitting most diagnostics about language conformance, creating internal data structures for data types and variables declared in the program, generating debugging information such as file names and line numbers, and building the initial AST representation.

Middle End
With the intermediate representation built by the front end, the middle end proceeds to analyze and transform the program. All the transformations done here and in the back end usually have two goals: (a) make the object code run as fast as possible (performance optimizations), or (b) make the object code take as little space as possible (space optimizations). These optimizations are typically machine and target independent.

The Front End knows very little about what the program actually does. Optimization is possible when the compiler understands the flow of control in the program (control-flow analysis) and how the data is transformed as the program executes (data-flow analysis). Analysis of the control and data flow of the program allows the compiler to improve the runtime performance of the code. Many different optimizations are possible once the compiler understands the control and data flow of the program. The following are a few of the most popular optimization techniques used in standard optimizing compilers:

Algebraic simplifications
Expressions are simplified using algebraic properties of their operators and operands. For instance, i + 1 - i is converted to 1. Other properties like associativity, commutativity, and distributivity are also used to simplify expressions.

Constant folding
Expressions for which all operands are constant can be evaluated at compile time and replaced with their values. For instance, the expression a = 4 + 3 - 8 can be replaced with a = -1. This optimization yields best results when combined with constant propagation (see below).

Redundancy elimination
There are several techniques that deal with the elimination of redundant computations. Some of the more common ones include:

Loop-invariant code motion
Computations inside loops that produce the same result for every iteration are moved outside the loop.

Common sub-expression elimination
If an expression is computed more than once on a specific execution path and its operands are never modified, the repeated computations are replaced with the result computed in the first one.

Partial redundancy elimination
A computation is partially redundant if some execution path computes the expression more than once. This optimization adds and removes computations from execution paths to minimize the number of redundant computations in the program. It encompasses the effects of loop-invariant code motion and common sub-expression elimination.

Back End
A final translation phase produces machine code for the target architecture. At this stage, the compiler needs to have very detailed knowledge about the hardware where the program will run. This means that the intermediate representation of the program is usually converted into a form resembling machine language. In this form, it is possible to apply transformations that take advantage of the target architecture. For instance:

Register allocation
Tries to maximize the number of program variables that are assigned to hardware registers instead of memory (registers are orders of magnitude faster than main memory).

Code scheduling
Takes advantage of the super-scalar features of modern processors to rearrange instructions so that multiple instructions are in different stages of execution simultaneously.

Even after final code generation, the resulting code is typically not ready to run as-is. Some compilers (GCC among them) generate assembly code which is then fed into the assembler for object code generation. After object code has been generated, the linker is responsible for collecting all the different object files and libraries needed to build the final executable.

Internal Organization of GCC
GCC is designed around two different IRs. One, called tree, is essentially the abstract syntax trees we discussed previously. The other, called RTL (Register Transfer Language), is used by GCC to do optimizations and code generation. The following diagram shows an overview of the compilation process in GCC.


Figure 2. Existing Compilation Process in GCC

Notice how the Front and Middle ends are sort of fused together. This is because GCC does not operate on trees all that much. Trees are mostly used as a stepping stone in the generation of RTL. Until recently, the only operation done on trees was function inlining. Almost everything else was relegated to the RTL optimizer. In a way, this makes sense: as the diagram shows, each language Front End generates its own version of trees. The C parser generates C trees, the C++ parser generates C++ trees, etc. Each version is different in its own way, so analyses and optimizations on trees would require N different implementations, one for each front end. That is hardly scalable, so all the optimization work is done in RTL.

RTL can be thought of as an assembly language for a machine with an infinite number of registers. Being a low-level representation, RTL works well for optimizations that are close to the target (for example, register allocation, delay slot optimizations, peepholes, etc). Furthermore, GCC offers a fairly flexible way for developers to describe their target machine when porting GCC to a new architecture. In fact, when it was initially implemented, GCC would generate RTL almost immediately, so it's only natural that most of the compiler is implemented around it.

However, some analyses and transformations need higher-level information about the program that is impossible (or very difficult) to obtain from RTL (for example, array references, data types, references to program variables, control flow structures). In general, too many target features are exposed in RTL. For instance, if the target cannot handle more than 32-bit quantities in a register, the RTL representation splits values bigger than 32 bits into multiple values. Also, things like function calling conventions are exposed in detail in RTL, so a single function call may span multiple RTL instructions. All these details make it difficult to implement analyses and optimizations that don't really need to concern themselves with target details. Over time, some of these transformations have been implemented in RTL, but the end result is code that is excessively convoluted, hard to maintain, and error prone.

Overcoming Architectural Limitations
Since the existing optimization framework in GCC was not flexible enough to implement several improvements we had in mind, Red Hat sponsored a project to overhaul GCC's architecture. We considered several approaches and settled on the idea of starting with the tree representation. Trees contain detailed information about data types, variables, and control structures of the original program. Optimizing the tree representation in GCC is appealing because (a) it facilitates the implementation of new analyses and optimizations closer to the source and (b) it simplifies the work of the RTL optimizers, potentially speeding up the compilation process or improving the generated code.

This new infrastructure represents one of the biggest changes GCC has experienced in recent years. It took approximately 3 years to complete the basic functionality with the contributions of more than 30 developers from the GCC community. The framework is based on a compiler formalism called Static Single Assignment (SSA), which is a representation used internally by the compiler to keep track of how data flows through the program. This information is essential for the compiler to guide its optimization decisions. Given that the framework was implemented on top of the Tree data structure and uses SSA for data flow analysis, we named it Tree SSA.

Tree SSA
Although GCC parse trees provide very detailed information about the original program, they are not suitable for optimization for two main reasons:

Lack of a common representation
There is no single tree representation shared by all the front ends. Notice in Figure 2, “Existing Compilation Process in GCC” how every front end generates its own flavor of trees. A tree optimizer would have to be implemented once for every Front End supported by GCC. This would be a maintenance nightmare and would create a significant burden on every new Front End.

Structural complexity
The parse trees generated by the Front End are an almost exact representation of the original input program. As such, expressions can be combined in almost infinite ways (if you've ever seen an obfuscated C program, you know what I mean). This flexibility is very convenient for humans because of its expressive power. However, compilers, being the simple-minded automata they are, cannot easily cope with it.

To overcome these limitations, we have introduced two new tree-based intermediate representations called GENERIC and GIMPLE. GENERIC addresses the lack of a common tree representation among the various front ends. GIMPLE addresses the structural complexity, making it easier to discover the data and control flow in the program.


Figure 3. GCC's Compilation Process with Tree SSA

GENERIC and GIMPLE
Although some Front Ends share the same tree representation, there is no single representation used by all GCC Front Ends. Every Front End is responsible for translating its own parse trees directly into RTL. To address this problem, we have introduced a new representation, named GENERIC, that is merely a superset of all the existing tree representations in GCC. Instead of generating RTL, every Front End is responsible for converting its own parse trees into GENERIC trees. Since GENERIC trees are language-independent, all the semantics of the input language must be explicitly described by each Front End. Some of these semantics are translated in the "genericize" pass, while others are translated in the "gimplification" pass.

The conversion to GENERIC removes language dependencies from the program representation. However, GENERIC trees are still structurally complex, so the next step is to break them down into more manageable pieces. This new representation, called GIMPLE, is lexically identical to GENERIC but it imposes structural restrictions on statements. For instance, no expression can contain more than 3 operands, arguments to a function call can only be variables (i.e., function calls cannot be nested), the predicate of an if statement can only contain a single comparison, etc. This form was derived from the SIMPLE representation used by McGill University's McCAT compiler [2].

The program in Figure 4, "GENERIC and GIMPLE Forms of the Same Code", illustrates the differences between GENERIC and GIMPLE. Notice how the two versions are the same program, but the GIMPLE form is structurally simpler (albeit longer). This is the key property that makes things easier for the optimizer, because it can make a variety of simplifying assumptions that, in the end, make for a compiler that is easier to maintain and improve.

GENERIC

if (foo (a + b, c))
  c = b++ / a
endif
return c

GIMPLE

t1 = a + b
t2 = foo (t1, c)
if (t2 != 0)
  t3 = b
  b = b + 1
  c = t3 / a
endif
return c


Figure 4. GENERIC and GIMPLE Forms of the Same Code
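
Once a Tree SSA-based GCC (the 4.0 series or later) is installed, the GIMPLE form of one's own code can be inspected directly; a small sketch (example.c is a made-up file name, and the exact name of the dump file varies between releases):

gcc -fdump-tree-gimple -c example.c
ls example.c.*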

Static Single Assignment (SSA)
The Static Single Assignment form is a representation that is laid on top of GIMPLE and that exposes very explicitly the flow of data within the program. The central idea is that of versioning. Every time a variable X is assigned a new value, the compiler creates a new version of X, and the next time that variable X is used, the compiler looks up the latest version of X and uses that. Notice that this representation is completely internal to the compiler; it does not show up in the generated code, nor can it be observed by the debugger.

Take, for instance, the program in Figure 5, “Static Single Assignment Form of a Program”. The left side is the original GIMPLE program and the right side shows the same program in SSA form.

     GIMPLE program              SSA form

1    a = 3                       a_1 = 3
2    b = 9                       b_2 = 9
3    c = a + b                   c_3 = a_1 + b_2
4    a = b + 21                  a_4 = b_2 + 21
5    d = a + c                   d_5 = a_4 + c_3
6    return d                    return d_5



Figure 5. Static Single Assignment Form of a Program

Notice that every assignment generates a new version number for the variable being modified. And every time a variable is used inside an expression, it always uses the latest version. So, the use of variable a in line 3 is modified to use a_1, while the use of a in line 5 uses a_4, instead.

So, why is this advantageous to the optimizer? Consider, for instance, a very common optimization called constant propagation. In this optimization, the compiler tries to compute at compile time as many expressions as possible using constant values. Since the program is in SSA form and all the variables have version numbers, the compiler can simply build arrays indexed by version number to keep track of constant values. Suppose we had an array CST for this purpose. In this case, CST[1] would hold 3 and CST[2] would hold 9. So, when the compiler examines the statement in line 3, c_3 = a_1 + b_2, it can deduce that c_3 must always be 12 (that is, it stores 12 in CST[3]). After visiting all the statements in this fashion, we end up with

1 a_1 = 3
2 b_2 = 9
3 c_3 = 12
4 a_4 = 30
5 d_5 = 42

6 return 42


And now, after having propagated all the constants, we find out that none of the statements in lines 1 through 5 are really useful at all. This program merely computes and returns the value 42.

Now, what happens when it is not possible to determine what the latest version of a variable is? Real programs are seldom straight-line code; there are loops and conditionals that alter the flow of control. For instance, in the following program it may be impossible for the compiler to determine which branch of the conditional will be taken at runtime:

x = foo ()
if (x > 10)
  a = 92
else
  a = 23
endif
return a


When the compiler tries to convert the program above into SSA form, it does not know which of the two versions for variable a to use. To solve this ambiguity, the compiler creates a third version for variable a that is the merge of the two conflicting versions. This merge operation is known as a PHI function:

x_1 = foo ()
if (x_1 > 10)
  a_2 = 92
else
  a_3 = 23
endif
a_4 = PHI (a_2, a_3)
return a_4


Since the compiler does not know in advance which of a_2 or a_3 will be returned, it creates a_4 as the merge of the two. This indicates to the optimizers that a_4 can be one of the other two versions, but we don't know which.

Conclusions
Though largely invisible to the end user, a compiler is one of the most important components in the system. As the popularity of Linux grows, so does the need for a strong compiler. Recognizing this importance, we started the Tree SSA project to modernize GCC's internals.

Tree SSA provides a new optimization framework that makes it possible for GCC to implement high-level analyses and optimizations. The initial implementation will be available with the release of GCC 4.0. Though the framework is still in active development, the next release will include several new features including pointer and array bound checking (-fmudflap), support for Fortran 90, auto-vectorization, various high-level loop transformations, and most traditional scalar optimization passes.

By basing all the analyses and transformations on widely known published algorithms, we aim to improve our ability to maintain and add new features to GCC. Furthermore, the use of standard techniques will encourage external participation from groups in the compiler community that are not necessarily familiar with GCC.

About the Author
Diego Novillo was born in Córdoba, Argentina. In 1993 he moved to Canada to obtain a PhD degree at the University of Alberta, Edmonton. He graduated with a PhD in Computing Science in 2000. He is a member of the compiler group at Red Hat Canada. One of the main missions of this group is to improve GCC both in developing new ports and implementing new analyses and optimizations. He is one of the main architects of Tree SSA, a new global optimization framework for GCC.

Bibliography
[1] R. Cytron, J. Ferrante, B. Rosen, M. Wegman, and K. Zadeck. Efficiently Computing Static Single Assignment Form and the Control Dependence Graph. ACM Transactions on Programming Languages and Systems, 13(4): 451-490, October 1991.

[2] L. Hendren, C. Donawa, M. Emami, G. Gao, Justiani, B. Sridharan, "Designing the McCAT Compiler Based on a Family of Structured Intermediate Representations", Proceedings of the 5th International Workshop on Languages and Compilers for Parallel Computing, Lecture Notes in Computer Science, no. 457, Springer-Verlag.

[3] Robert Morgan. Building an Optimizing Compiler, Butterworth-Heinemann, 1998.
Writing Custom udev Rules
Friday, Dec 17, 2004, 9:14 AM
Here's an interesting article from Red Hat Magazine.

Red Hat Magazine
Issue #2 December 2004
http://www.redhat.com/magazine/002dec04/features/udev/

Writing Custom udev Rules
by Greg Kroah-Hartman

* Introduction
* Configuration Files
* Rules
* Calling External Programs
* Conclusion
* About the Author

Introduction
As more and more users become aware of the many advantages Linux has over other operating systems, such as security and flexibility, its ease of use must be improved to increase its adoption rate. One significant improvement is the introduction of udev. udev allows a Linux system to use consistent device names for devices such as removable drives and printers, which in turn allows users to experience predictable behavior when devices are added to or removed from the system.

Starting with Fedora Core 3 and the upcoming release of Red Hat Enterprise Linux 4, all system device nodes in the /dev directory are managed by the udev program. In all previous Red Hat releases, the /dev directory was a static list of all possible devices that might be connected to the system. This meant that there were over 18,000 different device nodes in that directory. Now, with udev, only the devices that are present and available to use in the system appear in the /dev directory.

udev manages the /dev directory by monitoring the /sys directory. The /sys directory is where the sysfs file system is mounted. Through the information present in sysfs, udev knows what devices need to have a /dev entry created and what to name the entry.
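
A quick way to see both halves of this on a running Fedora Core 3 box (purely illustrative, read-only commands): the /dev listing contains only nodes for hardware that is actually present, and the /sys hierarchy is what udev watches to decide which nodes to create.

ls /dev | wc -l
ls /sys/block/ /sys/class/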

Configuration Files
udev has different configuration files to control how it works and how it creates the different /dev nodes. There are three basic types of udev configuration files: the main udev configuration file, udev permission files, and udev rules files.

The main udev configuration file, /etc/udev/udev.conf, controls what directory contains the udev permission and rules files, where to put the udev database, and where udev creates the device nodes. Example 1, “udev.conf Example” shows an example udev.conf file.

# /etc/udev/udev.conf: main config file for udev
#
# This file can be used to override some of udev's default values
# for where it looks for files, and where it places device nodes.

# udev_root - where in the filesystem to place the device nodes
udev_root="/dev/"

# udev_db - The name and location of the udev database.
# NOTE: this should be somewhere that is writable before
# / is mounted rw, like /dev ...
udev_db="/dev/.udevdb"

# udev_rules - The name and location of the udev rules file
udev_rules="/etc/udev/rules.d/"

# udev_permissions - The name and location of the udev permission file
udev_permissions="/etc/udev/permissions.d/"

# default_mode - set the default mode for all nodes that have no
# explicit match in the permissions file
# NOTE: do not set to anything else if you are not
# sure you have everything needed for normal
# user to function in the permissions file!
default_mode="0660"

# udev_log - set to "yes" if you want logging
udev_log="no"


Example 1. udev.conf Example

udev permission files specify what the permissions should be set to for specific device nodes. For details on how this file can be modified, refer to the udev documentation. For almost all users, the default permissions that Red Hat has specified are adequate.

udev rules files are used by udev to determine the device names used for devices present in the system. Every line in the rules files defines how a specific device attribute is mapped to a device file. If all keys that are specified in a rule match the device that was found, the specified device file is created.

The default udev rule file in Fedora Core 3 and Red Hat Enterprise Linux 4 is located at /etc/udev/rules.d/50-udev.rules. This file should not be modified by a user. To create new rules, create a new file in the /etc/udev/rules.d/ directory. All rules files must have a filename that ends with the .rules extension. Files are read in sorted order, so to use a custom rule, create a file for it that is read before the default rules files, such as /etc/udev/rules.d/10-my.rules.
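
For instance, a file named /etc/udev/rules.d/10-my.rules would be read before the default 50-udev.rules, and its contents are simply rule lines of the kind described in the next section; as an illustration (this reuses the printer rule shown later in this article):

BUS="usb", SYSFS{serial}="W09090207101241330", NAME="lp_color"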

Rules
The basic form of a udev rule is

key, [key, ...] name [, symlink]

Every key is a field that must match with the specific device. These fields are specified in the form of type=value. The different types of fields are:

BUS
This matches the bus type of the device. Common values for this field are usb, scsi, and ide.

KERNEL
This matches the default device name assigned to the device by the kernel. Examples include hda, ttyUSB0, and lp0.

SUBSYSTEM
This matches the kernel subsystem name for the device. This is usually the same as the BUS value but can sometimes be different. Examples of this are usb and scsi.

DRIVER
This matches the name of the kernel driver that is controlling the device. Examples of this are orinoco and usb-storage.

ID
This matches the device number on the bus to which it is attached. Examples include the PCI bus id or USB id for the device.

PLACE
This matches the device's topological position on the bus to which the device is attached. This is usually only valid for USB devices, as they encode their topology within the USB bus id value.

SYSFS{filename}
udev reads the specified sysfs filename in the directory for the device and tries to match the value read from it. This is useful in determining the vendor, label, serial number, UUID, or system label of a device. Anything that is in the sysfs tree can be matched against. Up to five different sysfs files can be checked in the same rule, and any trailing space characters in the sysfs value are ignored, as long as the key itself does not specify trailing spaces.

PROGRAM
This allows udev to call any external program and use the output of the program in naming a device. This key is valid if the external program returns successfully. When the program is called, all environment variables that were passed to udev are also available for the program to evaluate. The string returned by the program can also be matched with the RESULT key.

RESULT
This is used to match the return string of the last PROGRAM call. This key may be used in any following rule, after a PROGRAM call, enabling different rules to match a previous PROGRAM call.

If all of the specified keys are valid in a rule, then the following values are used to specify what udev does with the device:

NAME
This is the name of the device node that udev should create.

SYMLINK
This is the name of a symlink created to point back to the specified NAME. Multiple symlinks may be specified by separating them by a space character. If there is a SYMLINK field in a rule without a NAME field, the rule is not applied immediately. Instead, the symlink value is added to the symlink list for any future rule that might create a device node. This is useful for adding a user-specified symlink to a device node that is later specified in a different rules file.

Example Rules
Now that all of the different ways a rule can be written have been described, let us look at some sample rules to better understand how to put the different fields together.

A simple rule to create the device node for the CPU microcode driver might look like:

KERNEL="microcode", NAME="cpu/microcode"

This rule is valid if the kernel name is microcode. If so, it creates the device node cpu/microcode, placing the node in a separate subdirectory.

Another simple rule is the one to match up the sound control device:

KERNEL="controlC[0-9]*", NAME="snd/%k"

This rule matches any device whose name starts with controlC, followed by a single digit, which is then followed by any number of characters. This type of rule uses a simple form of regular expression to match the name of the device. Refer to the udev man page for more details on how to specify a regular expression that udev understands. The NAME field for this rule specifies that the node should be placed in the snd subdirectory and use the kernel name. The %k modifier is replaced with the kernel name. For the full list of modifiers that udev supports, consult the udev man page.

Now for a more advanced rule. Most digital cameras are USB devices and use the usb-storage kernel driver to talk to the system. To always name a specific digital camera the same device node no matter when it is plugged into the computer, use the following rule:

BUS="scsi", SYSFS{vendor}="FUJIFILM", NAME="camera"

This rule states that if the BUS value is scsi and the sysfs file vendor in the device directory contains the value FUJIFILM, then the device name should be set to camera.

Using the SYSFS{filename} type of key, we can solve a common problem for anyone who has more than one USB printer. Depending on the order in which the printers are plugged into the computer, or the order in which they power on when rebooting the computer, they could get different device nodes at different times. This makes determining which printer is connected to which print queue quite difficult at times. If we can specify a way for a printer to always have the same name, no matter the order in which it was plugged in, that would solve the problem.

Here are two rules that users can use to always bind a specific printer to a specific name:

BUS="usb", SYSFS{serial}="W09090207101241330", NAME="lp_color"
BUS="usb", SYSFS{serial}="HXOLL0012202323480", NAME="lp_bw"


These rules cause the printer with the serial number W09090207101241330 to always be accessed through the device node lp_color and the printer with the serial number HXOLL0012202323480 to always be accessed through the device node lp_bw. These serial numbers can be found in sysfs in the directory for the USB device, as described in the section called "Writing New Rule".

Writing New Rule
The previous examples demonstrate simple rules, but how does the user know which fields are best to specify within a rule to uniquely identify the device? Using the udevinfo program that comes bundled with udev, a wide range of information can be determined about a device and used to create a new udev rule.

udevinfo can be used to list all of the sysfs information about a specific device. For example, to name your green USB mouse mouse_green, refer to the sysfs information about the USB mouse using udevinfo:

udevinfo -a -p /sys/class/input/mouse0/

The output of this command is shown in Example 2, “Example udevinfo Output”.

udevinfo starts with the device the node belongs to and then walks up the
device chain, to print for every device found, all possibly useful attributes
in the udev key format.

Only attributes within one device section may be used together in one rule,
to match the device for which the node will be created.

device '/sys/class/input/mouse0' has major:minor 13:32
looking at class device '/sys/class/input/mouse0':
SYSFS{dev}="13:33"

follow the class device's "device"
looking at the device chain at '/sys/devices/pci0000:00/0000:00:09.0/usb2/2-1/2-1:1.0':
BUS="usb"
ID="2-1:1.0"
SYSFS{bAlternateSetting}=" 0"
SYSFS{bInterfaceClass}="03"
SYSFS{bInterfaceNumber}="00"
SYSFS{bInterfaceProtocol}="02"
SYSFS{bInterfaceSubClass}="01"
SYSFS{bNumEndpoints}="01"
SYSFS{detach_state}="0"

looking at the device chain at '/sys/devices/pci0000:00/0000:00:09.0/usb2/2-1':
BUS="usb"
ID="2-1"
SYSFS{bConfigurationValue}="1"
SYSFS{bDeviceClass}="00"
SYSFS{bDeviceProtocol}="00"
SYSFS{bDeviceSubClass}="00"
SYSFS{bMaxPower}="100mA"
SYSFS{bNumConfigurations}="1"
SYSFS{bNumInterfaces}=" 1"
SYSFS{bcdDevice}="0100"
SYSFS{bmAttributes}="a0"
SYSFS{configuration}="HID-Mouse"
SYSFS{detach_state}="0"
SYSFS{devnum}="2"
SYSFS{idProduct}="1035"
SYSFS{idVendor}="047d"
SYSFS{manufacturer}="Kensington"
SYSFS{maxchild}="0"
SYSFS{product}="Kensington USB Mouse"
SYSFS{speed}="1.5"
SYSFS{version}=" 1.10"

looking at the device chain at '/sys/devices/pci0000:00/0000:00:09.0/usb2':
BUS="usb"
ID="usb2"
SYSFS{bConfigurationValue}="1"
SYSFS{bDeviceClass}="09"
SYSFS{bDeviceProtocol}="00"
SYSFS{bDeviceSubClass}="00"
SYSFS{bMaxPower}=" 0mA"
SYSFS{bNumConfigurations}="1"
SYSFS{bNumInterfaces}=" 1"
SYSFS{bcdDevice}="0206"
SYSFS{bmAttributes}="c0"
SYSFS{detach_state}="0"
SYSFS{devnum}="1"
SYSFS{idProduct}="0000"
SYSFS{idVendor}="0000"
SYSFS{manufacturer}="Linux 2.6.10-rc2 ohci_hcd"
SYSFS{maxchild}="1"
SYSFS{product}="NEC Corporation USB"
SYSFS{serial}="0000:00:09.0"
SYSFS{speed}="12"
SYSFS{version}=" 1.10"

looking at the device chain at '/sys/devices/pci0000:00/0000:00:09.0':
BUS="pci"
ID="0000:00:09.0"
SYSFS{class}="0x0c0310"
SYSFS{detach_state}="0"
SYSFS{device}="0x0035"
SYSFS{irq}="9"
SYSFS{subsystem_device}="0x11a3"
SYSFS{subsystem_vendor}="0x10cf"
SYSFS{vendor}="0x1033"

looking at the device chain at '/sys/devices/pci0000:00':
BUS=""
ID="pci0000:00"
SYSFS{detach_state}="0"


Example 2. Example udevinfo Output

So, from the output of the udevinfo program, all of the sysfs files and other udev fields are described. To write the best type of rule for a device, determine the most specific way the device can be described from that output. For example, from Example 2, "Example udevinfo Output", the following line shows a unique field that distinguishes it from all other mice that might be connected to the system:

SYSFS{product}="Kensington USB Mouse"

However, this is not true if a user has more than one of the same kind of mouse. In that case, the ID value should be used since it is unique:

ID="2-1:1.0"

Using the product sysfs file, a udev rule to always name this mouse mouse_green is as follows:

BUS="usb", SYSFS{product}="Kensington USB Mouse", NAME="mouse_green"

Calling External Programs
One of the most powerful features of udev is its ability to call other programs to determine what to name a device or if a device should be matched. This is useful if the built-in keys and rule matching sequences are not flexible enough.

As an example, what if you want to use the artist name from the currently inserted CD as the CD-ROM device name? There is no way that sysfs will be extended to provide this kind of information about the data within the device, so an external program must be used. The following rule causes udev to call the program name_cdrom.pl whenever a block device is found by the kernel:

KERNEL="[hs]d[a-z]", PROGRAM="name_cdrom.pl %M %m", NAME="%c{1}", SYMLINK="cdrom"

This rule matches any device whose name starts with h or s, followed by a d, and then followed by any lowercase alphabetic character. If the name matches, the Perl script name_cdrom.pl is called. This script is shown in Example 3, "name_cdrom.pl Script". Two arguments are passed to this script: the major and minor numbers of the device being matched by udev (as specified by the %M and %m modifiers). If this program returns success, then the first return value of the script is used to name the device (that is the %c{1} modifier). A symlink named cdrom is also created so that other programs are still able to find the CD-ROM device in the system.

#!/usr/bin/perl

# a horribly funny script that shows how flexible udev can really be
# This is to be executed by udev with the following rules:
# KERNEL="[hs]d[a-z]", PROGRAM="name_cdrom.pl %M %m", NAME="%c{1}", SYMLINK="cdrom"

use strict;
use warnings;

use CDDB_get qw( get_cddb );

my $dev_node = "/tmp/cd_foo";

# following variables just need to be declared if different from defaults
my %config;
$config{CDDB_HOST}="freedb.freedb.org"; # set cddb host
$config{CDDB_PORT}=8880; # set cddb port
$config{CDDB_MODE}="cddb"; # set cddb mode: cddb or http
$config{CD_DEVICE}="$dev_node"; # set cd device

# No user interaction, this is a automated script!
$config{input}=0;

my $major = $ARGV[0];
my $minor = $ARGV[1];

# create our temp device node to read the cd info from
unlink($dev_node);
if (system("mknod $dev_node b $major $minor")) {
die "bad mknod failed";
}

# get it on
my %cd=get_cddb(%config);

# remove the dev node we just created
unlink($dev_node);

# print out our cd name if we have found it or skip rule by nonzero exit
if (defined $cd{title}) {
        $cd{artist} =~ s/ /_/g;
        $cd{title} =~ s/ /_/g;
        print "$cd{artist}-$cd{title}\n";
} else {
        exit -1;
}


Example 3. name_cdrom.pl Script

The name_cdrom.pl script queries the freedb database (assuming the system has an Internet connection) to determine the artist name of the currently inserted CD. Such complex functionality would never be built into udev itself, but because of the PROGRAM feature, udev can rely on helper programs to create complex rules when desired.

Conclusion
There are many ways to create new udev rules, thanks to udev's wide range of key-matching abilities and its ability to call external programs. To learn more about how to create custom udev rules, refer to the udev documentation that comes with the program. In it are full descriptions of all of the different keys that can be used, an explanation of the regular expressions that are supported by udev, and a tutorial on how to create udev rules based on many different types of devices.

The author would like to thank Kay Sievers for all of his udev development and documentation work and Daniel Drake who created the udev tutorial provided with the program.

About the Author
Greg Kroah-Hartman is currently the Linux kernel maintainer for a variety of different driver subsystems. He works for IBM, doing Linux kernel-related things, and can be reached at .
Pilot-Link
Tuesday, Nov 9, 2004, 11:14 PM
I needed to install a soft copy of the King James Bible on Magie's Visor Prism for Psylocke's school work. However, Palm Desktop was no longer installed on our laptops and virtual machines - we stopped using it when Magie got a Handspring backup module. Luckily, we had the Linux pilot-link suite at our disposal. It allowed me to back up, restore, and manage the PDA's contents, all from the convenience of a command-line shell.

BACKUP
Before installing anything on the handheld, I decided to back up its contents. Better safe than sorry.

mkdir -p ~/pda/backup

After pressing the hotsync button, the visor module was automatically inserted into the kernel, connecting the PDA to /dev/ttyUSB1. I performed a complete backup.

pilot-xfer -p /dev/ttyUSB1 -b ~/pda/backup/

Everything was quickly copied to the backup directory as PDB and PRC files, 7.8MB worth. To restore from this backup, I simply have to press the hotsync button and issue:

pilot-xfer -p /dev/ttyUSB1 -r ~/pda/backup/

I noticed that backup and restore were faster with pilot-link on Linux than with Palm Desktop on Windows.
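
Incidentally, if the Visor ever lands on a port other than /dev/ttyUSB1, the kernel log and the device nodes give it away (just a quick look; the port number depends on what else is plugged in):

dmesg | tail
ls -l /dev/ttyUSB*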

INSTALL
Having backed up the handheld, I downloaded the King James Bible archive from http://www.itb.net/handheld/free/, extracted Bible-Kjv.pdb from it to ~/pda/, pressed the hotsync button, and executed the following command:

pilot-xfer -p /dev/ttyUSB1 -i ~/pda/Bible-Kjv.pdb

It only took a few seconds to transfer the 1.47MB PDB file from laptop to PDA. When I opened Bible Reader on the Visor, the King James version was there.

PURGE
I was almost done, but I saw an interesting entry about the purge option in the pilot-xfer man page:

Purge any deleted data that hasn't been cleaned up by a sync (this is a safe operation, and will prevent your Palm from accumulating crud if you delete records but aren't syncing with a PIM). The only information you can lose is archived but unsynced records.

It seemed like a pretty good option, considering the fact that the Visor hadn't been synced for months. So I pressed the hotsync button and issued:

pilot-xfer -p /dev/ttyUSB1 -P

I got 86KB of additional free memory without missing any files afterwards. Cool!

This pleasant experience with pilot-link made me realize that I'm too much into Linux server applications. Otherwise, I wouldn't have missed such a nifty "desktop" utility.