Engels started playing with Linux® in 1991 and obtained his Red Hat Certified Engineer (RHCE), Red Hat Certified Instructor (RHCI), and Red Hat Certified Examiner (RHCX) certifications in 2002. He is in charge of Bluepoint's Total Linux®, Linux Kernel Internals®, Perl & Python Programming, and Extreme PHP curriculum and instruction development.
/* Conveniently yanked from the Bluepoint Institute profile page */
Elvin Joseph Sanico was one of the best professors I was privileged to have at the UP National Institute of Physics in Diliman. His use of the continuity equation for steady one-dimensional flow to prove the "silent waters run deep" axiom was really cool!
In loving memory of CPT Mario B. Mortega Sr., USAFFE, VET (1920-2004)
SSL Version Control
Saturday, Dec 20, 2014, 3:30 PM
Firefox 35.0 displayed this error while accessing Google today:

Secure Connection Failed

An error occurred during the connection. The server rejected the handshake because the client downgraded to a lower TLS version than the server supports. (Error code: ssl_error_inappropriate_fallback_alert)

SSL Version Control to the rescue:

SSLv3 is now insecure, and is soon going to be disabled by default.

In the meantime, you can use this extension to turn off SSLv3 in your copy of Firefox. When you install the add-on, it will set the minimum TLS version to TLS 1.0 (disabling SSLv3). If you want to change that setting later, like if you really need to access an SSLv3 site, just go to Tools / Add-ons and click the "Preferences" button next to the add-on. That will give you a drop-down menu to select the minimum TLS version you want to allow.
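Under the hood this appears to be a single Firefox preference; if you'd rather not install an add-on, the same effect can be had by hand in about:config (the values below are from Firefox of this era):

```
// about:config — minimum SSL/TLS version Firefox will negotiate
// 0 = SSLv3, 1 = TLS 1.0, 2 = TLS 1.1, 3 = TLS 1.2
security.tls.version.min = 1
```

Setting it back to 0 re-enables SSLv3 if you really need to reach a legacy site.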
Thursday, Sep 5, 2013, 8:41 AM

Flutter is a $20 wireless ARM development board with over 1 km (more than half a mile) of range, secured using 256-bit AES encryption and built with Open Source hardware and Open Source software. Exciting times ahead!
Tuesday, Oct 4, 2011, 5:00 PM
I was a reluctant panelist at today's public hearing for HB 1011 aka FOSS Act of 2010. Some folks did not do their homework and there was a lot of classic FUD going around, which I thought went out of fashion ten years ago.
Thursday, Aug 11, 2011, 8:11 AM
Live migrated 20 virtual machines from US to UK in under 5 minutes. Yeah!
Linux Day
Monday, Aug 1, 2011, 1:08 AM
Linux® Day is a global celebration of Linux's 20th anniversary. Join the network and start making plans for August 27, 2011. Check out for more information!
KGPU - Augmenting Linux With GPUs
Wednesday, May 25, 2011, 12:35 AM
Psylocke designed a GPU computer with almost 200 effective cores for her ECE 12 (Computer-Aided Design) subject a few days ago. Her teacher thinks it's impossible to build such a machine.

For the record, we have an actual unit at home.

ASUS, Dell, and Microway (among others) have been offering similar products abroad for a while now.

Aside from the usual GPGPU applications (Psylocke used TeraChem), we also explored KGPU:

"KGPU is a GPU computing framework for the Linux kernel. It allows Linux kernel to call CUDA programs running on GPUs directly. The motivation is to augment operating systems with GPUs so that not only userspace applications but also the operating system itself can benefit from GPU acceleration. It can also free the CPU from some computation intensive work by enabling the GPU as an extra computing device.

Modern GPUs can be used for more than just graphics processing; they can run general-purpose programs as well. While not well-suited to all types of programs, they excel on code that can make use of their high degree of parallelism. Most uses of so-called "General Purpose GPU" computation have been outside the realm of systems software. However, recent work on software routers and encrypted network connections has given examples of how GPGPUs can be applied to tasks more traditionally within the realm of operating systems. These uses are only scratching the surface. Other examples of system-level tasks that can take advantage of GPUs include general cryptography, pattern matching, program analysis, and acceleration of basic commonly-used algorithms."

Really cool stuff! Visit for more information and a white paper on KGPU.

And oh yes, impossible is nothing.
Recalled to Active Duty
Monday, May 23, 2011, 5:23 PM
I'll be handling Total Linux 41 this June:

Lock and load!
Sunday, Jan 9, 2011, 11:30 AM
After using Gitosis for almost 3 years, I have made the switch to Gitolite ... and it über rocks!
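For anyone curious what the fuss is about: Gitolite keeps all access control in a plain text file inside a special gitolite-admin repo, so granting access is just a commit and a push. A minimal conf might look like this (group, user, and repo names are made up for illustration):

```
# conf/gitolite.conf inside the gitolite-admin repo
@devs = alice bob

repo gitolite-admin
    RW+ = admin

repo buhawi
    RW+ = @devs
    R   = @all
```

Push the change and Gitolite regenerates the SSH authorized_keys plumbing for you; no shell accounts needed.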
Tuesday, May 25, 2010, 11:13 PM
Fedora 13 has been released:

Congratulations to the Fedora Project!
Tuesday, Nov 17, 2009, 11:14 PM
Fedora 12 has been released:

Congratulations to the Fedora Project!
Mel Chua
Monday, Nov 16, 2009, 11:32 PM

Magie, Herson, and I met with fellow Fedora Ambassador (and Red Hat Community Architecture team member) Mel Chua tonight at the University of the Philippines Techno Hub in Quezon City.

After dinner at Razon's (thanks Mel!), we chanced upon Herson's fraternity brod (and Mozilla Philippines Community organizer) Regnard Raquedan at the Coffee Bean (thanks Maj!).

Thanks for coordinating the meet up Herson! It was great to have some face time with comrades in Open Source.
Google Chrome OS
Thursday, Jul 9, 2009, 10:24 AM
I can't wait to see how this pans out:

Introducing the Google Chrome OS
7/07/2009 09:37:00 PM

It's been an exciting nine months since we launched the Google Chrome browser. Already, over 30 million people use it regularly. We designed Google Chrome for people who live on the web — searching for information, checking email, catching up on the news, shopping or just staying in touch with friends. However, the operating systems that browsers run on were designed in an era where there was no web. So today, we're announcing a new project that's a natural extension of Google Chrome — the Google Chrome Operating System. It's our attempt to re-think what operating systems should be.

Google Chrome OS is an open source, lightweight operating system that will initially be targeted at netbooks. Later this year we will open-source its code, and netbooks running Google Chrome OS will be available for consumers in the second half of 2010. Because we're already talking to partners about the project, and we'll soon be working with the open source community, we wanted to share our vision now so everyone understands what we are trying to achieve.

Speed, simplicity and security are the key aspects of Google Chrome OS. We're designing the OS to be fast and lightweight, to start up and get you onto the web in a few seconds. The user interface is minimal to stay out of your way, and most of the user experience takes place on the web. And as we did for the Google Chrome browser, we are going back to the basics and completely redesigning the underlying security architecture of the OS so that users don't have to deal with viruses, malware and security updates. It should just work.

Google Chrome OS will run on both x86 as well as ARM chips and we are working with multiple OEMs to bring a number of netbooks to market next year. The software architecture is simple — Google Chrome running within a new windowing system on top of a Linux kernel. For application developers, the web is the platform. All web-based applications will automatically work and new applications can be written using your favorite web technologies. And of course, these apps will run not only on Google Chrome OS, but on any standards-based browser on Windows, Mac and Linux thereby giving developers the largest user base of any platform.

Google Chrome OS is a new project, separate from Android. Android was designed from the beginning to work across a variety of devices from phones to set-top boxes to netbooks. Google Chrome OS is being created for people who spend most of their time on the web, and is being designed to power computers ranging from small netbooks to full-size desktop systems. While there are areas where Google Chrome OS and Android overlap, we believe choice will drive innovation for the benefit of everyone, including Google.

We hear a lot from our users and their message is clear — computers need to get better. People want to get to their email instantly, without wasting time waiting for their computers to boot and browsers to start up. They want their computers to always run as fast as when they first bought them. They want their data to be accessible to them wherever they are and not have to worry about losing their computer or forgetting to back up files. Even more importantly, they don't want to spend hours configuring their computers to work with every new piece of hardware, or have to worry about constant software updates. And any time our users have a better computing experience, Google benefits as well by having happier users who are more likely to spend time on the Internet.

We have a lot of work to do, and we're definitely going to need a lot of help from the open source community to accomplish this vision. We're excited for what's to come and we hope you are too. Stay tuned for more updates in the fall and have a great summer.

Update on 7/8/2009: We have posted an FAQ on the Google Chrome Blog.

Posted by Sundar Pichai, VP Product Management and Linus Upson, Engineering Director
Fedora Community
Tuesday, Mar 17, 2009, 11:00 AM
What started out as an initiative to get last January 15 resulted in being assigned to Fedora Philippines today.

Fedora Philippines is being hosted by Bluepoint Foundation on a Xen VM using as interim domain.

My original request was for the delegation of to Bluepoint's DNS servers via a couple of glue and NS records.

But hey, I'll take a single A record anytime.
Fedora Ambassadors Meeting
Sunday, Jun 8, 2008, 8:00 PM
It's a Sunday, but Magie and I went online for a meeting of APAC Fedora Ambassadors at freenode earlier today.

The meeting was announced 3 weeks in advance, but there were only 5-6 attendees. I cannot help but echo Susmit Shannigrahi's sentiments: "In APAC there is a lack of people seriously trying to contribute. A lot of people join but a few continues."

At the local level, Magie tried to initiate a meeting of the 7 Fedora Ambassadors in the Philippines at least 3 times during the last 6 months. Only one bothered to reply and actually show up every time. Me.

I hope things get better.
Friday, Nov 16, 2007, 6:00 PM
This is a very timely reminder!

[Devel] [PATCH][DOCUMENTATION] The namespaces compatibility list doc

---------------------------- Original Message ----------------------------
Subject: [Devel] [PATCH][DOCUMENTATION] The namespaces compatibility list doc
From: "Pavel Emelyanov" <>
Date: Fri, November 16, 2007 5:34 pm
To: "Andrew Morton" <>
Cc: "Linux Containers" <>
"Cedric Le Goater" <>
"Theodore Tso" <>
"Linux Kernel Mailing List" <>

From time to time people begin discussions about how the namespaces are working/going-to-work together.

Ted T'so proposed to create some document that describes what problems user may have when he/she creates some new namespace, but keeps others shared. I liked this idea, so here's the initial version of such a document with the problems I currently have in mind and can describe somewhat audibly - the "namespaces compatibility list".

The Documentation/namespaces/ directory is about to contain more docs about the namespaces stuff.

Thanks to Cedirc for notes and spell checks on the doc.

Signed-off-by: Pavel Emelyanov <>


commit 83061c56e1c4dcd54d48a62b108d219a7f5279a0
Author: Pavel <>
Date: Fri Nov 16 12:25:53 2007 +0300

Namespaces compatibility list

diff --git a/Documentation/00-INDEX b/Documentation/00-INDEX
index 910e511..3ead06b 100644
--- a/Documentation/00-INDEX
+++ b/Documentation/00-INDEX
@@ -262,6 +262,8 @@ mtrr.txt
 	- how to use PPro Memory Type Range Registers to increase performance.
 mutex-design.txt
 	- info on the generic mutex subsystem.
+namespaces/
+	- directory with various information about namespaces
 nbd.txt
 	- info on a TCP implementation of a network block device.
diff --git a/Documentation/namespaces/compatibility-list.txt b/Documentation/namespaces/compatibility-list.txt
new file mode 100644
index 0000000..9c9e5c1
--- /dev/null
+++ b/Documentation/namespaces/compatibility-list.txt
@@ -0,0 +1,33 @@
+ Namespaces compatibility list
+This document contains the information about the problems user
+may have when creating tasks living in different namespaces.
+Here's the summary. This matrix shows the known problems, that
+occur when tasks share some namespace (the columns) while living
+in different other namespaces (the rows):
+	UTS	IPC	VFS	PID	User	Net
+UTS	 X
+IPC	 	 X	 1
+VFS	 	 	 X
+PID	 	 1	 1	 X
+User	 	 2	 	 	 X
+Net	 	 	 	 	 	 X
+1. Both the IPC and the PID namespaces provide IDs to address
+ object inside the kernel. E.g. semaphore with ipcid or
+ process group with pid.
+ In both cases, tasks shouldn't try exposing this id to some
+ other task living in a different namespace via a shared filesystem
+ or IPC shmem/message. The fact is that this ID is only valid
+ within the namespace it was obtained in and may refer to some
+ other object in another namespace.
+2. Intentionnaly, two equal user ids in different user namespaces
+ should not be equal from the VFS point of view. In other
+ words, user 10 in one user namespace shouldn't have the same
+ access permissions to files, beloging to user 10 in another
+ namespace. But currently this is not so.

Containers mailing list

Devel mailing list
Saturday, Sep 8, 2007, 7:24 PM
Some of my former students asked me about the status of Buhawi after our SFD07 planning session earlier today. I still don't know when the development of Bluepoint's meta-distro will resume, but the next release will definitely be for x86_64 and will showcase the following:

Tuesday, Aug 7, 2007, 6:33 PM
I'm now playing with Varnish, a state-of-the-art, high-performance HTTP accelerator. Looks good so far!

Notes from the Architect

Once you start working with the Varnish source code, you will notice that Varnish is not your average run of the mill application.

That is not a coincidence.

I have spent many years working on the FreeBSD kernel, and only rarely did I venture into userland programming, but when I had occasion to do so, I invariably found that people programmed like it was still 1975.

So when I was approached about the Varnish project I wasn't really interested until I realized that this would be a good opportunity to try to put some of all my knowledge of how hardware and kernels work to good use, and now that we have reached alpha stage, I can say I have really enjoyed it.

So what's wrong with 1975 programming?

The really short answer is that computers do not have two kinds of storage any more.

It used to be that you had the primary store, and it was anything from acoustic delay lines filled with mercury, via small magnetic doughnuts and transistor flip-flops, to dynamic RAM.

And then there was the secondary store: paper tape, magnetic tape, disk drives the size of houses, then the size of washing machines, and these days so small that girls get disappointed if they think they got hold of something other than the MP3 player you had in your pocket.

And people program this way.

They have variables in "memory" and move data to and from "disk".

Take Squid for instance, a 1975 program if I ever saw one: you tell it how much RAM it can use and how much disk it can use. It will then spend inordinate amounts of time keeping track of which HTTP objects are in RAM and which are on disk, and it will move them back and forth depending on traffic patterns.

Well, today computers really only have one kind of storage, and it is usually some sort of disk; the operating system and the virtual memory management hardware have converted the RAM to a cache for the disk storage.

So what happens with Squid's elaborate memory management is that it gets into fights with the kernel's elaborate memory management, and like any civil war, that never gets anything done.

What happens is this: Squid creates an HTTP object in "RAM" and it gets used a few times rapidly after creation. Then after some time it gets no more hits and the kernel notices this. Then somebody tries to get memory from the kernel for something, and the kernel decides to push those unused pages of memory out to swap space and use the (cache) RAM more sensibly for some data which is actually used by a program. This, however, is done without Squid knowing about it. Squid still thinks that these HTTP objects are in RAM, and they will be, the very second it tries to access them, but until then the RAM is used for something productive.

This is what Virtual Memory is all about.

If squid did nothing else, things would be fine, but this is where the 1975 programming kicks in.

After some time, Squid will also notice that these objects are unused, and it decides to move them to disk so the RAM can be used for busier data. So Squid goes out, creates a file, and then writes the HTTP objects to the file.

Here we switch to the high-speed camera: Squid calls write(2); the address it gives is a "virtual address" and the kernel has it marked as "not at home".

So the CPU hardware's paging unit will raise a trap, a sort of interrupt to the operating system, telling it "fix the memory please".

The kernel tries to find a free page; if there are none, it will take a little-used page from somewhere, likely another little-used Squid object, and write it to the paging pool space on the disk (the "swap area"). When that write completes, it will read from another place in the paging pool the data it "paged out" into the now unused RAM page, fix up the paging tables, and retry the instruction which failed.

Squid knows nothing about this; for Squid it was just a single normal memory access.

So now Squid has the object in a page in RAM and written to the disk in two places: one copy in the operating system's paging space and one copy in the filesystem.

Squid now uses this RAM for something else, but after some time the HTTP object gets a hit, so Squid needs it back.

First Squid needs some RAM, so it may decide to push another HTTP object out to disk (repeat above); then it reads the filesystem file back into RAM, and then it sends the data on the network connection's socket.

Did any of that sound like wasted work to you?

Here is how Varnish does it:

Varnish allocates some virtual memory and tells the operating system to back this memory with space from a disk file. When it needs to send the object to a client, it simply refers to that piece of virtual memory and leaves the rest to the kernel.

If/when the kernel decides it needs to use RAM for something else, the page will get written to the backing file and the RAM page reused elsewhere.

When Varnish next time refers to the virtual memory, the operating system will find a RAM page, possibly freeing one, and read the contents in from the backing file.

And that's it. Varnish doesn't really try to control what is cached in RAM and what is not, the kernel has code and hardware support to do a good job at that, and it does a good job.

Varnish also has only a single file on the disk, whereas Squid puts each object in its own separate file. The HTTP objects are not needed as filesystem objects, so there is no point in wasting time in the filesystem name space (directories, filenames, and all that) for each object; all we need in Varnish is a pointer into virtual memory and a length, and the kernel does the rest.
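The "back this memory with space from a disk file" trick described above is, in essence, a file-backed mmap(2). A minimal sketch under that assumption (the path, size, and function name here are mine, not Varnish's):

```c
/* Sketch of the approach described above: back a region of virtual
 * memory with a disk file via mmap(2), then let the kernel's VM
 * system decide which pages stay in RAM. */
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Create (or grow) a backing file and map it read/write.
 * Returns NULL on failure. */
static char *map_store(const char *path, size_t size)
{
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, (off_t)size) != 0) {
        close(fd);
        return NULL;
    }
    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                  /* the mapping keeps the file alive */
    return p == MAP_FAILED ? NULL : p;
}
```

Storing an object is then an ordinary memory write into the returned pointer, and "sending" it is an ordinary read; the kernel pages the region in and out of the single backing file on its own schedule.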

Virtual memory was meant to make it easier to program when data was larger than the physical memory, but people have still not caught on.

More caches.

But there are more caches around, the silicon mafia has more or less stalled at 4GHz CPU clock and to get even that far they have had to put level 1, 2 and sometimes 3 caches between the CPU and the RAM (which is the level 4 cache), there are also things like write buffers, pipeline and page-mode fetches involved, all to make it a tad less slow to pick up something from memory.

And since they have hit the 4GHz limit, but decreasing silicon feature sizes give them more and more transistors to work with, multi-cpu designs have become the fancy of the world, despite the fact that they suck as a programming model.

Multi-CPU systems are nothing new, but writing programs that use more than one CPU at a time has always been tricky, and it still is.

Writing programs that perform well on multi-CPU systems is even trickier.

Imagine I have two statistics counters:

unsigned n_foo;
unsigned n_bar;

So one CPU is chugging along and has to execute n_foo++

To do that, it reads n_foo and then writes n_foo back. It may or may not involve a load into a CPU register, but that is not important.

To read a memory location means to check if we have it in the CPU's level 1 cache. It is unlikely to be there unless it is very frequently used. Next we check the level 2 cache, and let us assume that is a miss as well.

If this is a single CPU system, the game ends here, we pick it out of RAM and move on.

On a multi-CPU system, and it doesn't matter if the CPUs share a socket or have their own, we first have to check if any of the other CPUs have a modified copy of n_foo stored in their caches, so a special bus transaction goes out to find this out. If some CPU comes back and says "yeah, I have it", that CPU gets to write it to RAM. On good hardware designs, our CPU will listen in on the bus during that write operation; on bad designs it will have to do a memory read afterwards.

Now the CPU can increment the value of n_foo, and write it back. But it is unlikely to go directly back to memory, we might need it again quickly, so the modified value gets stored in our own L1 cache and then at some point, it will end up in RAM.

Now imagine that another CPU wants to do n_bar++ at the same time; can it do that? No. Caches operate not on bytes but on some "line size" of bytes, typically from 8 to 128 bytes in each line. So since the first CPU was busy dealing with n_foo, the second CPU will be trying to grab the same cache line, so it will have to wait, even though it is a different variable.

Starting to get the idea?

Yes, it's ugly.

How do we cope?

Avoid memory operations if at all possible.

Here are some ways Varnish tries to do that:

When we need to handle an HTTP request or response, we have an array of pointers and a workspace. We do not call malloc(3) for each header; we call it once for the entire workspace and then pick space for the headers from there. The nice thing about this is that we usually free the entire header in one go, and we can do that simply by resetting a pointer to the start of the workspace.
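That workspace scheme is easy to sketch as a toy bump allocator; the ws_* names below are mine, not Varnish's actual API:

```c
/* Toy "workspace" allocator: one malloc up front, headers carved out
 * by bumping a pointer, everything freed by a single reset. */
#include <stddef.h>
#include <stdlib.h>

struct workspace {
    char *base;   /* start of the single allocation */
    char *free;   /* next unused byte */
    size_t size;
};

static int ws_init(struct workspace *ws, size_t size)
{
    ws->base = malloc(size);
    if (ws->base == NULL)
        return -1;
    ws->free = ws->base;
    ws->size = size;
    return 0;
}

/* Carve space for one header; no per-header malloc. */
static char *ws_alloc(struct workspace *ws, size_t len)
{
    if ((size_t)(ws->free - ws->base) + len > ws->size)
        return NULL;              /* workspace exhausted */
    char *p = ws->free;
    ws->free += len;
    return p;
}

/* "Freeing" every header at once is just resetting one pointer. */
static void ws_reset(struct workspace *ws)
{
    ws->free = ws->base;
}
```

Allocation is a pointer bump, and releasing all the headers of a request is one pointer reset; nothing goes back to malloc until the whole workspace does.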

When we need to copy an HTTP header from one request to another (or from a response to another), we don't copy the string; we just copy the pointer to it. Provided we do not change or free the source headers, this is perfectly safe; a good example is copying from the client request to the request we will send to the backend.

When the new header has a longer lifetime than the source, then we have to copy it. For instance when we store headers in a cached object. But in that case we build the new header in a workspace, and once we know how big it will be, we do a single malloc(3) to get the space and then we put the entire header in that space.

We also try to reuse memory which is likely to be in the caches.

The worker threads are used in "most recently busy" fashion: when a worker thread becomes free, it goes to the front of the queue where it is most likely to get the next request, so that all the memory it already has cached (stack space, variables, etc.) can be reused while still in the cache, instead of requiring expensive fetches from RAM.

We also give each worker thread a private set of variables it is likely to need, all allocated on the stack of the thread. That way we are certain that they occupy a page in RAM which none of the other CPUs will ever think about touching as long as this thread runs on its own CPU, so they will not fight over the cache lines.
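Keeping per-thread data on separate cache lines is the standard cure for the n_foo/n_bar fight described earlier. A sketch of the idea (64 bytes is a common line size, not a universal one, and the names are mine):

```c
/* Keep each thread's counter on its own cache line so two CPUs
 * incrementing different counters never contend for the same line. */
#include <stddef.h>

#define CACHE_LINE 64
#define NWORKERS   4

struct padded_counter {
    unsigned n;
    char pad[CACHE_LINE - sizeof(unsigned)];  /* fill out the line */
};

/* one counter per worker thread, each in its own cache line */
static struct padded_counter counters[NWORKERS];

/* A reader sums the per-thread slots to get the global figure. */
static unsigned total(void)
{
    unsigned sum = 0;
    for (size_t i = 0; i < NWORKERS; i++)
        sum += counters[i].n;
    return sum;
}
```

Each thread increments only its own slot, so the cache line it writes is never touched by another CPU; the occasional reader pays the cross-line cost instead.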

If all this sounds foreign to you, let me just assure you that it works: we spend less than 18 system calls on serving a cache hit, and even many of those are calls to get timestamps for statistics.

These techniques are also nothing new, we have used them in the kernel for more than a decade, now it's your turn to learn them :-)

So Welcome to Varnish, a 2006 architecture program.

Poul-Henning Kamp, Varnish architect and coder.
Monday, May 21, 2007, 2:04 PM
I've been using OpenVZ since its first release. It's time to join devel.

So far, I'm learning a lot from the community, especially from Linux luminaries like Ingo Molnar, Rusty Russell, Andrew Morton, and even Linus Torvalds!
Friday, May 4, 2007, 7:04 PM
Gaim is now Pidgin! Following a legal settlement with AOL, Gaim has been renamed Pidgin and its 2.0.0 release is now available.
Joining Sun
Monday, Mar 19, 2007, 12:00 PM
News from Debian Master Ian Murdock:

I saw my first Sun workstation about 15 years ago, in 1992. I was a business student at Purdue University, and a childhood love for computers had just been reawakened. I was spending countless hours in the basement of the Math building, basking in the green phosphorescent glow of a Z29 and happily exploring every nook and cranny of the Sequent Symmetry upstairs. It didn’t take too long to discover, though, just a short walk away in the computer science building, several labs full of Sun workstations. Suddenly, the Z29 didn’t have quite the same allure. A few months later, I walked over to the registrar’s office and changed my major to computer science. (OK, advanced tax accounting had something to do with it too.)

Everything I know about computing I learned on those Sun workstations, as did so many other early Linux developers; I even had my own for a while, after I joined the University of Arizona computer science department in 1997. But within a year, the Suns were starting to disappear, replaced by Pentiums running Red Hat Linux. More and more people coming through university computer science programs were cutting their teeth on Linux, much as I had on Sun. Pretty soon, Sun was increasingly seen by this new generation as the vendor who didn’t “get it”, and Sun’s rivals did a masterful job running with that and painting the company literally built on open standards as “closed”. To those of us who knew better, it was a sad thing to watch.

The last several years have been hard for Sun, but the corner has been turned. As an outsider, I’ve watched as Sun has successfully embraced x86, pioneered energy efficiency as an essential computing feature, open sourced its software portfolio to maximize the network effects, championed transparency in corporate communications, and so many other great things. Now, I’m going to be a part of it.

And, so, I’m excited to announce that, as of today, I’m joining Sun to head up operating system platform strategy. I’m not saying much about what I’ll be doing yet, but you can probably guess from my background and earlier writings that I’ll be advocating that Solaris needs to close the usability gap with Linux to be competitive; that while I believe Solaris needs to change in some ways, I also believe deeply in the importance of backward compatibility; and that even with Solaris front and center, I’m pretty strongly of the opinion that Linux needs to play a clearer role in the platform strategy.

It is with regrets that I leave the Linux Foundation, but if you haven’t figured out already, Sun is a company I’ve always loved, and being a part of it was an opportunity I simply could not pass up. I think the world of the people at the LF, particularly my former FSG colleagues with whom I worked so closely over the past year and a half: Jim Zemlin, Amanda McPherson, Jeff Licquia, and Dan Kohn. And I still very much believe in the core LF mission, to prevent the fragmentation of the Linux platform. Indeed, I’m remaining in my role as chair of the LSB—and Sun, of course, is a member of the Linux Foundation.

Anyway. Watch this space. This is going to be fun!
10 years of
Tuesday, Mar 13, 2007, 6:18 PM

From: (H. Peter Anvin)
Newsgroups: comp.os.linux.announce
Subject: FTP: New Linux FTP site:
Date: 13 Mar 1997 18:18:28 GMT
Organization: Transmeta Corporation, Santa Clara CA
Approved: (Lars Wirzenius)
Message-ID: <>
Reply-To: (H. Peter Anvin)
Lines: 48
Xref: comp.os.linux.announce:6836


Transmeta Corporation is proud to sponsor a new Linux FTP site:

Currently, contains the Linux kernel and a set of
mirror sites; in the future, we hope to make space available for users
to publish packages directly.

We are currently connected via a T1; plans are to upgrade to 10 Mbit/s
in the near future.

This site is accessible via:

SMB/CIFS \\\pub



- --
This space intentionally has nothing but text explaining why this
space has nothing but text explaining that this space would otherwise
have been left blank, and would otherwise have been left blank.

- --
This article has been digitally signed by the moderator, using PGP. has PGP key for validating signature.
Send submissions for comp.os.linux.announce to:
PLEASE remember a short description of the software and the LOCATION.
This group is archived at


Dual Core
Wednesday, Feb 21, 2007, 5:30 PM
I caused a kernel panic while playing around with some ReiserFS parameters on my Fedora Core 6 Athlon64 X2 test box. Instead of freezing as expected, Linux continued to run! I managed to duplicate the error. The box froze this time. I found out that the first kernel panic only took out one of the cores. This was why Linux continued to run. It took a second panic to take out the other core and crash the box. Cool!
Seclists.Org Shut Down By Myspace and GoDaddy
Thursday, Jan 25, 2007, 5:50 PM
From: "Fyodor" <>
Subject: Seclists.Org shut down by Myspace and GoDaddy
Date: Thu, January 25, 2007 5:47 pm

Hi everyone,

Many of you reported that our SecLists.Org security mailing list archive was down most of yesterday (Wed), and all you really need to know is that we're back up and running! But I'm going into rant mode
anyway in case you care for the details.

I woke up yesterday morning to find a voice message from my domain registrar (GoDaddy) saying they were suspending the domain. One minute later I received an email saying that the domain has "been suspended for violation of the Abuse Policy". And also "if the domain name(s) listed above are private, your Domains By Proxy(R) account has also been suspended." WTF??! Neither the email nor voicemail gave a phone number to reach them at, nor did they feel it was worth the effort to explain what the supposed violation was. They changed my domain nameserver to

I called GoDaddy several times, and all three support people I spoke with (Craig, Ricky, then Wael) said that the abuse department doesn't take calls. They said I had to email (which I had already done 3 times) and that I could then expect a response "within 1 or two business days". Given that tens of thousands of people use SecLists.Org every day, I didn't take that well. When they realized I was going to just keep calling until they did something, they finally persuaded the abuse department to explain why they cut me off: Myspace.Com asked them to.

Apparently Myspace is still reeling from all the news reports more than a week ago about a list of 56,000 myspace usernames+passwords making the rounds. It was all over the news, and reminded people of a
completely different list of 34,000 MySpace passwords which was floating around last year. MySpace users fall for a LOT of phishing scams. They are basically the new AOL. Anyway, everyone has this
latest password list now, and it was even posted (several times) to the thousands of members of the fulldisclosure mailing list more than a week ago. So it was archived by all the sites which archive
full-disclosure, including SecLists.Org.

Instead of simply writing me (or asking to have the password list removed), MySpace decided to contact (only) GoDaddy and try to have the whole site of 250,000 pages removed because they don't like one of them. And GoDaddy cowardly and lazily decided to simply shut down the site rather than actually investigating or giving me a chance to contest or comply with the complaint. Needless to say, I'm in the market for a new registrar. One who doesn't immediately bend over for any large corporation who asks. One who considers it their job just to refer people to the SecLists.Org nameserver at, not to police the content of the services hosted at the domains. The GoDaddy ToS forbids hosting what they call "morally objectionable activities".

It is way too late for MySpace to put the cat back in the bag anyway. The bad guys already have the file, and anyone else who wants it need only Google for "myspace1.txt.bz2" or "duckqueen1". Is MySpace going to try and shut down Google next?

For some reason, this is only one of a spate of bogus Seclists removal requests. I do remove material that is clearly illegal or inappropriate (like the bonehead who keeps posting furry porn to full-disclosure). But one company sent a legal threat demanding[1] that I remove a 7-year-old Bugtraq posting which was a complaint about previous bogus legal threats they had sent. Another guy[2] last week sent a complaint to my ISP saying that an image was child porn and declaring that he would notify the FBI. When asked why he thought the picture was of a child, he tried a different tack: sending a DMCA complaint declaring under penalty of perjury that he is the copyright holder of the photo! Michael Crook told me on the phone that he sent the DMCA request, but when I forwarded the info to the EFF (who is already suing this guy for sending other bogus DMCA complaints), he changed his mind and wrote that "after further review, I can find no record" of mailing the complaint.

Most of the censorship attempts are for the full-disclosure list. It would be easiest just to cease archiving that list, but I do think it serves an important purpose in keeping the industry honest. And many good postings do make it through if you can filter out all the junk. So I'm keeping it, no matter how "morally objectionable" GoDaddy and MySpace may think it to be!

In much happier Nmap news, I'm pleased to report that the Nmap project now has a public SVN server so you can always check out the latest version. Due to a bug in SVN, we use a username of "guest" with no password rather than anonymous access. So check it out with the command:

svn co --username guest --password "" svn://

Then do the normal build, and install it or set NMAPDIR to "." to run Nmap in place. Among other goodies, this release includes the Nmap scripting language[3].

If you want to follow Nmap development on a check-in by check-in basis, there is a new nmap-svn mailing list[4] for that. But be prepared for some high traffic as you'll get every patch!

2007 will be a good year for Nmap!



Sent through the nmap-hackers mailing list
Archived at
Linux Foundation
Monday, Jan 22, 2007, 4:08 PM
The Open Source Development Labs and the Free Standards Group merged today to form the Linux Foundation.
SHA-1 Cracked!
Friday, Jan 12, 2007, 11:00 PM
Chinese Professor Cracks Fifth Data Security Algorithm
SHA-1 added to list of "accomplishments"

Central News Agency

Jan 11, 2007

Associate professor Wang Xiaoyun of Beijing's Tsinghua University and Shandong University of Technology has cracked SHA-1, a widely used data security algorithm.

TAIPEI—Within four years, the U.S. government will cease to use SHA-1 (Secure Hash Algorithm) for digital signatures, and convert to a new and more advanced "hash" algorithm, according to the article "Security Cracked!" from New Scientist. The reason for this change is that associate professor Wang Xiaoyun of Beijing's Tsinghua University and Shandong University of Technology, and her associates, have already cracked SHA-1.

Wang also cracked MD5 (Message Digest 5), the hash algorithm most commonly used before SHA-1 became popular. Previous attacks on MD5 required over a million years of supercomputer time, but Wang and her research team obtained results using ordinary personal computers.

In early 2005, Wang and her research team announced that they had succeeded in cracking SHA-1. In addition to the U.S. government, well-known companies like Microsoft, Sun, Atmel, and others have also announced that they will no longer be using SHA-1.

Two years ago, Wang announced at an international data security conference that her team had successfully cracked four well-known hash algorithms: MD5, HAVAL-128, MD4, and RIPEMD.

A few months later, she cracked the even more robust SHA-1.

Focus and Dedication

According to the article, Wang's research focuses on hash algorithms.

A hash algorithm is a mathematical procedure for deriving a 'fingerprint' of a block of data. The hash algorithms used in cryptography are "one-way": it is easy to derive hash values from inputs, but very difficult to work backwards, finding an input message that yields a given hash value. Cryptographic hash algorithms are also resistant to "collisions": that is, it is computationally infeasible to find any two messages that yield the same hash value.
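The "fingerprint" and one-way behavior described above are easy to observe directly. Here is a minimal Python sketch using the standard hashlib module (the messages are made up for illustration):

```python
import hashlib

# Deriving a fingerprint is fast and deterministic: the same input
# always yields the same 160-bit (40 hex digit) SHA-1 digest.
msg = b"transfer $100 to account 12345"
digest = hashlib.sha1(msg).hexdigest()

# Changing a single character produces a completely unrelated digest
# (the "avalanche effect"), which is part of what makes working
# backwards from a digest to a message so difficult.
msg2 = b"transfer $900 to account 12345"
digest2 = hashlib.sha1(msg2).hexdigest()

print(digest)
print(digest2)
```

The two digests share no visible structure even though the inputs differ by one character; collision resistance is the further claim that no one can deliberately craft two different messages with the same digest.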

Hash algorithms' usefulness in data security relies on these properties, and much research focuses on this area.

Recent years have seen a stream of ever-more-refined attacks on MD5 and SHA-1—including, notably, Wang's team's results on SHA-1, which permit finding collisions in SHA-1 about 2,000 times more quickly than brute-force guessing. Wang's technique makes attacking SHA-1 efficient enough to be feasible.
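To put the "2,000 times faster than brute force" figure in context: the generic (birthday-paradox) attack finds a collision in an n-bit hash after roughly 2^(n/2) trials, so for 160-bit SHA-1 the baseline is about 2^80 hashes. The sketch below is not Wang's method (hers relies on differential cryptanalysis); it simply demonstrates the generic birthday attack against a deliberately truncated 24-bit hash, where roughly 2^12 ≈ 4,096 trials suffice on average:

```python
import hashlib

def truncated_sha1(data: bytes, bits: int = 24) -> int:
    """Top `bits` bits of SHA-1 -- a toy, deliberately weak hash."""
    full = int.from_bytes(hashlib.sha1(data).digest(), "big")
    return full >> (160 - bits)

def find_collision(bits: int = 24):
    """Generic birthday attack: hash distinct messages until a digest repeats."""
    seen = {}  # truncated digest -> message that produced it
    i = 0
    while True:
        msg = str(i).encode()
        h = truncated_sha1(msg, bits)
        if h in seen:
            return seen[h], msg  # two distinct messages, same digest
        seen[h] = msg
        i += 1

a, b = find_collision()
print(a, b, hex(truncated_sha1(a)))
```

Against the full 160-bit SHA-1, the same loop would need on the order of 2^80 hashes; Wang's contribution was exploiting structural weaknesses in SHA-1 itself to bring the work far below that birthday bound.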

MD5 and SHA-1 are the two most extensively used hash algorithms in the world. These two algorithms underpin many digital signature and other security schemes in use throughout the international community. They are widely used in banking, securities, and e-commerce. SHA-1 has been recognized as the cornerstone for modern Internet security.

According to the article, in the early stages of Wang's research, other researchers also tried to crack these algorithms, but none succeeded. This is why, after 15 years of failed attempts, hash cryptanalysis had come to be seen as a hopeless field by many scientists.

Wang's method of cracking algorithms differs from others'. Although such analysis usually cannot be done without the use of computers, according to Wang, the computer only assisted in cracking the algorithm. Most of the time she calculated by hand, and designed her methods by hand.

"Hackers crack passwords with bad intentions," Wang said. "I hope efforts to protect against password theft will benefit [from this]. Password analysts work to evaluate the security of data encryption and to search for even more secure … algorithms."

"On the day that I cracked SHA-1," she added, "I went out to eat. I was very excited. I knew I was the only person who knew this world-class secret."

Within ten years, Wang cracked the five biggest names in cryptographic hash algorithms. Many people would think the life of this scientist must be monotonous, but "That ten years was a very relaxed time for me," she says.

During her work, she bore a daughter and cultivated a balcony full of flowers. The only mathematics-related habit in her life is that she remembers the license plates of taxi cabs.

With additional reporting by The Epoch Times.
Microsoft + Novell = ?
Tuesday, Nov 7, 2006, 6:31 PM
Novell and Microsoft Collaborate
Frequently Asked Questions (FAQ)

Q. What are you announcing?
Novell and Microsoft are announcing an historic bridging of the divide between open source and proprietary software. They have signed three related agreements which, taken together, will greatly enhance interoperability between Linux and Windows and give customers greater flexibility in their IT environments. Under a technical cooperation agreement, Novell and Microsoft will work together in three primary areas to deliver new solutions to customers: virtualization, web services management and document format compatibility. Under a patent cooperation agreement, Microsoft and Novell provide patent coverage for each other's customers, giving customers peace of mind regarding patent issues. Finally, under a business cooperation agreement, Novell and Microsoft are committing to dedicate marketing and sales resources to promote joint solutions.

Q. What does this mean for Linux?
Novell and Microsoft recognize that many customers have, and will continue to have, multiple platforms, including Linux and Windows, in their environments. Customers are asking for highly reliable, secure, and interoperable solutions. Enabling easy and powerful virtualization of Linux on Windows and Windows on Linux is a great step forward towards this goal. Novell will continue to promote Linux as the premier platform for core infrastructure and application services. This deal strengthens Novell's commitment to the community through leading-edge development projects as well as the continued promotion of Linux in the marketplace. Novell recognizes the significant contribution open source developers have made to Linux and their reliance on the General Public License. The patent agreement signed by Novell and Microsoft was designed with the principles and obligations of the GPL in mind. Under this agreement, customers of SUSE Linux Enterprise know they have patent protection from Microsoft in connection with their use of SUSE Linux Enterprise, further encouraging the adoption of Linux in the marketplace.

Q. Will Novell and Microsoft stop competing?
This agreement is focused on building a bridge between business and development models, not removing competition in the marketplace. We will continue to compete in a number of arenas, including the desktop, identity and security management, and systems and resource management. At the product level, Windows and SUSE Linux Enterprise will continue to compete; however, the agreement is focused on making it easier for customers who want to run both Windows and Linux to do so. This is a very common relationship for large businesses where we simultaneously partner and compete in different areas.

Q. I am a current Novell customer who subscribes to SUSE Linux Enterprise Server. Does the patent protection offered by Microsoft apply to me?
Yes. The patent protection offered by Microsoft applies to ALL customers who subscribe to a SUSE Linux Enterprise product. It does not matter if you purchased SLES or SLED, if you bought it directly from Novell, from a reseller, from a distributor, or acquired it via a coupon from Microsoft. If you have a current subscription to SUSE Linux Enterprise, then you are covered by the Microsoft patent protection. Microsoft has provided a covenant not to assert its patent portfolio directly to customers who have purchased SUSE Linux Enterprise from Novell.

Q. From the customer's perspective, what is covered in openSUSE?
The patent agreement covers everything that is included in past and current Novell-supported versions of SUSE Linux Enterprise Server and SUSE Linux Enterprise Desktop. It also covers future versions (for 5 years) of SUSE Linux Enterprise Server and SUSE Linux Enterprise Desktop, with recognition of the fact that development changes may occur that fall outside the terms of this agreement. While some future scenarios may not be included, we have established a working relationship and structure to have conversations about those issues as they arise.

Q. Does this covenant apply to original equipment manufacturers (OEMs) that buy SUSE Linux Enterprise and preload or resell it?
The covenant applies to end customers of Novell products.

Q. Is this in response to recent events, such as Oracle's announcement about Red Hat?
No. Negotiations on this agreement have been going on for many months. This agreement reflects a joint assessment by Novell and Microsoft that customers will be best served by ensuring Linux and Windows can interoperate effectively. In terms of a possible Oracle move to offer support for SUSE Linux Enterprise, Novell believes customers with heterogeneous networks are best served by an independent operating systems vendor like Novell with broad hardware and software support.

Q. What are the financial benefits to Novell? To Microsoft?
Novell anticipates the agreement will increase demand for SUSE Linux Enterprise, although they are not putting out any formal estimates. Through the improved interoperability and patent protection offered as part of this agreement, both Novell and Microsoft anticipate increased business opportunity through both best of breed product solutions and market differentiation.

Q. What are the specifics of the agreement?
Like many commercial transactions, the financial terms of the agreement are not being disclosed at this time.

Under the technical collaboration agreement, the companies will create a joint research facility and pursue new software solutions for virtualization, management, and document format compatibility. These are potentially huge markets — IDC projects the overall market for virtual machine software to be $1.8 billion by 2010, and the overall market for distributed system management software to be $10.2 billion by 2010 — and the companies believe their investment in interoperability will make their respective products more attractive to customers.

Under the business collaboration agreement, the companies will pursue a variety of joint marketing activities. In addition, Microsoft will distribute as part of a resale arrangement approximately 70,000 coupons for SUSE Linux Enterprise Server maintenance and support per year so that customers can benefit from the use of the new software solutions developed through the collaborative research effort, as well as a version of Linux that is covered with respect to Microsoft's IP rights.

Under the patent agreement, both companies will make up-front payments in exchange for a release from any potential liability for use of each other's patented intellectual property, with a net balancing payment from Microsoft to Novell reflecting the larger applicable volume of Microsoft's product shipments. Novell will also make running royalty payments based on a percentage of its revenues from open source products.

Q. Does this mean that Microsoft will now sell Linux?
No. However, as part of this agreement, Microsoft and Novell want to ensure our joint customers have the opportunity to take advantage of the improved interoperability and patent protection enabled by this agreement. To help promote these new solutions, Microsoft has purchased a quantity of coupons from Novell that entitle the recipient to a 1-year subscription for maintenance and updates to SUSE Linux Enterprise Server. Microsoft will make these coupons available to joint customers who are interested in deploying virtualized Windows on SUSE Linux Enterprise Server, or virtualized SUSE Linux Enterprise Server on Windows.

For customers who have a significant Windows investment and want to add Linux to their IT infrastructure, Microsoft will recommend SUSE Linux Enterprise for Windows-Linux solutions.

Q. What does this mean for customers?
Customers have repeatedly told both Novell and Microsoft that flexibility is an increasingly important part of their data center. At a time when CIOs are being asked to do more with less, and improve utilization, virtualization is a key to solving that problem. Both Novell and Microsoft realize that the data center of the future will have both Linux and Windows as significant platforms. This agreement is all about making those two platforms work together, and providing the enterprise support for that interoperability that customers demand. By working together, Novell and Microsoft enable customers to choose the operating system that best fits their applications and business needs.

Q. Why is the patent agreement important?
The patent agreement demonstrates that Microsoft is willing to enter into agreements that extend its patent protection to open source customers. This is an important foundation in building the bridge between proprietary and open source software.

One of the biggest perceived differences between open and closed source software revolves around intellectual property. Because open source software is developed in a cooperative environment, some have expressed concerns that intellectual property protections could be compromised more easily in open source. Today's agreement between Novell and Microsoft provides confidence on intellectual property for Novell and Microsoft customers. By mutually agreeing not to assert their patent rights against one another's customers, the two companies give customers greater peace of mind regarding the patents in the solutions they're deploying. Novell and Microsoft believe that this arrangement makes it possible to offer customers the highest level of interoperability with the assurance that both companies stand behind these solutions.

Q. The press release indicates Microsoft is also pledging not to assert its patents against individual, non-commercial open source developers. How is this connected to Novell?
Microsoft and Novell felt it was important to establish a precedent for the individual, non-commercial open source developer community that potential patent litigation need not be a concern. Microsoft is excited to more actively participate in the open source community and Novell is and will continue to be an important enabler for this bridge. For these reasons, both Novell and Microsoft felt it was appropriate to make this pledge for Microsoft not to assert its patents against the non-commercial community.

Q. What are the exact terms of the individual, non-commercial developer patent non-assert? Who is covered and who is not?
The terms of the individual, non-commercial developer patent non-assert are published online. You are covered if you are doing non-commercial open source software development. This includes individual enthusiasts, such as a student or a developer who does work on his own time on a project of personal interest to him. If you are compensated for your development, then your activities are considered "commercial", and you would not be covered.

Q. How will the technical cooperation work?
The two companies will create a joint research facility at which Microsoft and Novell technical experts will architect and test new software solutions and work with customers and the community to build and support these technologies. The agreement between Microsoft and Novell focuses on three technical areas that provide important value and choice to the market:

Virtualization. Virtualization is one of the most important trends in the industry. Customers tell us that virtualization is one way they can consolidate and more easily manage rapidly growing server workloads and their large set of server applications. Microsoft and Novell will jointly develop the most compelling virtualization offering in the market for Linux and Windows.

Web Services for managing physical and virtual servers. Web Services and service oriented architectures continue to be one of the defining ways software companies can deliver greater value to customers. Microsoft and Novell will undertake work to make it easier for customers to manage mixed Windows and SUSE Linux Enterprise environments and to make it easier for customers to federate Microsoft Active Directory with Novell eDirectory.

Document Format Compatibility. Microsoft and Novell have been focusing on ways to improve interoperability between office productivity applications. The two companies will now work together on ways for OpenOffice.org and Microsoft Office users to best share documents, and both will take steps to make translators available to improve interoperability between Open XML and OpenDocument Formats.

Q. What are the main components of the business cooperation agreement?
The business cooperation agreement addresses a series of issues designed to maximize the value of the patent cooperation and technical collaboration agreements, including: marketing, training, support, and sales resources.

Q. By making it easy to run Windows virtualized on Linux, isn't Novell undercutting its own Mono project, which shares a similar goal?
Mono gives developers a way to run applications built using Microsoft .NET technologies on Linux and other platforms. Its main focus is the Linux desktop, where Mono has been leveraged to build a series of new services, including search, music playback, and more. Virtualization focuses on maximizing the value of server hardware by running multiple operating systems. It is used for server consolidation, workload balancing and other corporate needs. So while both approaches are designed to give customers flexibility in their IT systems, their focuses are quite different.

Q. What does the patent agreement cover with regard to Mono and OpenOffice?
Under the patent agreement, customers will receive coverage for Mono, Samba, and OpenOffice, as well as .NET and Windows Server. All of these technologies will be improved upon during the five years of the agreement and there are some limits on the coverage that would be provided for future technologies added to these offerings. The collaboration framework we have put in place allows us to work on complex subjects such as this where intellectual property and innovation are important parts of the conversation.
Unfakeable Linux?
Tuesday, Nov 7, 2006, 6:19 PM
This is going to be a long one.

Oracle recently announced that "it would provide the same enterprise class support for Linux as it provides for its database, middleware and applications products. Oracle starts with Red Hat Linux, removes Red Hat trademarks, and then adds Linux bug fixes."

Oracle Announces The Same Enterprise Class Support For Linux As For Its Database
Dell, Intel, HP, IBM, Accenture, AMD, BP, EMC, BMC, and NetApp Join Unbreakable Linux Program

REDWOOD SHORES, Calif., 25-OCT-2006 01:03 PM Today Oracle announced that it would provide the same enterprise class support for Linux as it provides for its database, middleware and applications products. Oracle starts with Red Hat Linux, removes Red Hat trademarks, and then adds Linux bug fixes.

Currently, Red Hat only provides bug fixes for the latest version of its software. This often requires customers to upgrade to a new version of Linux software to get a bug fixed. Oracle's new Unbreakable Linux program will provide bug fixes to future, current, and back releases of Linux. In other words, Oracle will provide the same level of enterprise support for Linux as is available for other operating systems.

Oracle is offering its Unbreakable Linux program for substantially less than Red Hat currently charges for its best support. "We believe that better support and lower support prices will speed the adoption of Linux, and we are working closely with our partners to make that happen," said Oracle CEO Larry Ellison. "Intel is a development partner. Dell and HP are resellers and support partners. Many others are signed up to help us move Linux up to mission critical status in the data center."

"Oracle's Unbreakable Linux program is available to all Linux users for as low as $99 per system per year," said Oracle President Charles Phillips. "You do not have to be a user of Oracle software to qualify. This is all about broadening the success of Linux. To get Oracle support for Red Hat Linux all you have to do is point your Red Hat server to the Oracle network. The switch takes less than a minute."

"We think it's important not to fragment the market," said Oracle's Chief Corporate Architect Edward Screven. "We will maintain compatibility with Red Hat Linux. Every time Red Hat distributes a new version we will resynchronize with their code. All we add are bug fixes, which are immediately available to Red Hat and the rest of the community. We have years of Linux engineering experience. Several Oracle employees are Linux mainline maintainers."

"As a customer with first hand experience of Oracle's outstanding support organization, Dell will use Oracle to support Linux operating systems internally," said Michael Dell, Chairman of the Board, Dell. "Oracle's new Linux support program will help us drive standards deeper into the enterprise. Today we're announcing that Dell customers can choose Oracle's Unbreakable Linux program to support Linux environments running on Dell PowerEdge servers."

"Having worked with Oracle for many years in the enterprise computing space, we believe that the Oracle Unbreakable Linux program will bring tremendous value to our mutual Linux customers," said Paul Otellini, President and CEO, Intel Corporation. "Our work with Oracle on this program will be an important extension to our longstanding enterprise computing relationship."

"HP and Oracle's collaboration and testing of Linux with integrated stacks of hardware, software, storage, and networking has helped create numerous best practices across the industry. HP welcomes the addition of Oracle's Unbreakable Linux program to the portfolio," said Mark Hurd, Chairman and Chief Executive Officer, HP.

"Oracle's support for Red Hat Linux will encourage broader adoption of Linux in the enterprise," said Bill Zeitler, Senior Vice President & Group Executive, IBM Systems and Technology Group. "IBM shares Oracle's goal of making Linux a reliable, highly standard, cost effective platform for mission critical applications backed by world class support."

"Linux is important to us, and to our customers," said Don Rippert, Chief Technology Officer, Accenture. "We applaud Oracle's efforts to bring enterprise-quality support to Linux with the Oracle Unbreakable Linux program announcement. Together with Oracle, we at Accenture look forward to making the Linux experience even better for our customers."

"Oracle's Unbreakable Linux program will greatly expand the servicing options available to our AMD Linux customers," said Hector Ruiz, Chairman and Chief Executive Officer of Advanced Micro Devices. "We are excited by the program's potential to further enhance the success of AMD Linux servers in the enterprise."

Bearing Point
"It is critical that our customers have true enterprise-quality support for their Linux deployments. Oracle's Unbreakable Linux program support delivers the level of confidence our customers need to run Linux in their data centers," said Harry You, CEO, Bearing Point.

"The combined power of EMC and Oracle solutions bring superior reliability, scalability, high availability, and now, enhanced enterprise supportability to Linux users. We are confident that joint Linux solutions from EMC and Oracle will deliver enterprise scale and quality while lowering the cost of infrastructure for our customers," said Joe Tucci, Chairman, CEO, President, EMC.

"As Oracle's only systems management ISV at the highest level in Oracle's Partner Program, BMC Software is excited to see Oracle's deepening commitment to Linux," said Bob Beauchamp, BMC Software President and CEO. "Business Service Management from BMC Software with the Oracle Unbreakable Linux program meets customer demand for lower cost and higher quality support for their infrastructure."

"The world's largest enterprises must have the flexibility to quickly and continually adapt to today's rapidly changing business requirements, without incurring risk," said Dan Warmenhoven, CEO of Network Appliance. "The Oracle Unbreakable Linux program is designed to drive the key benefits of Linux - including flexibility, reliability, and simplicity - directly into the data center. The longstanding relationship between NetApp and Oracle has enabled us to continuously deliver superior enterprise solutions to enable business agility and improve reliability - all tenets of the NetApp brand."

Oracle Support
Oracle's breadth and depth of technical expertise, advanced support technologies, and global reach includes 7,000 support staff in 17 global support centers, providing help to our customers in 27 languages, in any time zone. Oracle has recently been awarded the J.D. Power and Associates Global Technology Service and Support Certification for "an outstanding customer service experience."

"With the scale of our support organization we can provide much better Linux support at a much lower price," said Executive Vice President of Oracle Customer Services Juergen Rottler. "We have the expertise and infrastructure to improve substantially the quality of support for enterprise Linux customers."

Enterprise Linux binaries will be available for free from Oracle. Enterprise Linux Network Support will be offered for $99.00 per system per year. Enterprise Linux Basic Support, which offers network access plus 24x7 global coverage, will be offered for $399 per year for a 2-CPU system and $999 for a system with unlimited CPUs. Enterprise Linux Premier Support, which offers Basic Support plus backports of fixes to earlier releases as well as Oracle Lifetime Support, will be offered for $1,199 per year for a 2-CPU system and $1,999 for a system with unlimited CPUs.

Oracle and Linux
Oracle has been a long-standing, key contributor to the Linux community. Oracle produced its first commercial Linux database in 1998. Since that time Oracle has worked steadily to improve the experience of all Linux users. Oracle's Linux Engineering team is a trusted part of the Linux community, and has made major code contributions such as Oracle Cluster File System that is now part of Linux kernel 2.6.16. Oracle has been and will continue contributing Linux related innovations, modifications, documentation and fixes directly to the Linux community on a timely basis.

Now here's Red Hat's "interesting" response:

Red Hat Responds

The opportunity for Linux just got bigger. Oracle's support for Linux reaffirms Red Hat's technical industry leadership and the end of proprietary Unix. It's no accident that Red Hat was chosen #1 in value two years running. Want to know what else we think? Read on.

Red Hat & Oracle Partnership

Q: Does Oracle's recent announcement change Red Hat's partnership with Oracle?

A: No. Red Hat has had a productive 7-year relationship with Oracle. Red Hat will continue to work closely with Oracle to optimize Red Hat Enterprise Linux and JBoss middleware subscriptions for Oracle products, and to support joint customers.

Red Hat & JBoss Subscriptions

Q: Does Oracle's announcement include support for the Red Hat Application Stack, JBoss, Hibernate, Red Hat GFS, Red Hat Cluster Suite, and Red Hat Directory Server?

A: No. Oracle does not support any of these leading open source products.

Hardware Compatibility

Q: Oracle says their Linux support includes the same hardware compatibility and certifications as Red Hat Enterprise Linux. Is this true?

A: No. Oracle has stated they will make changes to the code independently of Red Hat. As a result these changes will not be tested during Red Hat's hardware testing and certification process, and may cause unexpected behavior. Hence Red Hat hardware certifications are invalidated.

Software Compatibility

Q: Oracle says their Linux support includes the same software compatibility and ISV certifications of Red Hat Enterprise Linux. Is this true?

A: No. Oracle has stated they will make changes to the code independently of Red Hat. These changes will not be tested during Red Hat's software testing and certification process, and may cause unexpected behavior. Hence Red Hat software certifications are invalidated.

Binary Compatibility

Q: Will Oracle's Linux support be binary compatible with Red Hat Enterprise Linux so that my applications continue to work?

A: There is no way to guarantee that changes made by Oracle will maintain API (Application Programming Interface) or ABI (Application Binary Interface) compatibility; there may be material differences in the code that will result in application failures. Compatibility with Red Hat Enterprise Linux can only be verified by Red Hat's internal test suite.

Source Code Compatibility

Q: Will Oracle's product result in a "fork" of the operating system?

A: Yes. The changes Oracle has stated they will make will result in a different code base than Red Hat Enterprise Linux. Simply put, this derivative will not be Red Hat Enterprise Linux and customers will not have the assurance of compatibility with the Red Hat Enterprise Linux hardware and application ecosystem.


Q: What do Customers need to give in order to get Oracle's indemnification?

A: Customers are required to provide Oracle with IP indemnification without financial limitation for any software or materials provided to Oracle (e.g. patch or enhancement). Unlike Oracle, a Customer's liability is not capped at the value of the software or materials it provides to Oracle.

Q: Are backports covered by Oracle's indemnification?

A: Only if Oracle has not released a later non-infringing version of the code. Red Hat's Open Source Assurance covers all released versions and updates.

Q: What protection does Red Hat provide?

A: Under Red Hat's Open Source Assurance Program, if the Red Hat Software is found to infringe, Red Hat will (a) obtain the rights necessary for the Customer to continue to use the Software; (b) modify the Software so that it is non-infringing; or (c) replace the infringing portion of the Software with non-infringing software. The program also provides for indemnification.

Q: So in the end, is Oracle's indemnification revolutionary? Does it provide greater value?

A: No. With its Open Source Assurance Policy, Red Hat focuses on the Customer's business continuity in the face of an infringement claim. With Oracle's indemnity program, you only get an indemnity so long as you give Oracle an unlimited one in return.


Q: Oracle says they will provide the same updates as Red Hat Enterprise Linux. Can they do this?

A: There are multiple requirements to building binary compatible software. One piece is the source code; another is the build and test environment. While Oracle may be able to take the source code at some point after a Red Hat update release, obviously their build and test environment will be inherently different than that of Red Hat Enterprise Linux. For similar reasons, there is no guarantee that the source code for the Red Hat Enterprise Linux update will work correctly when integrated into Oracle's modified Linux code base.

Support & Maintenance Lifecycle

Q: In order to get support and maintenance for Red Hat Enterprise Linux, do you need to upgrade to the most recent version?

A: No. Red Hat subscribers enjoy support and updates for all versions for up to 7 years. Throughout that time, Red Hat provides regular maintenance releases as part of the Red Hat Enterprise Linux subscription. This is supplemented through our support services by a 'hot-fix' process that provides critical bug fixes on a customer-specific basis. Oracle "reserves the right to desupport certain Enterprise Linux program releases" as part of their Oracle Enterprise Linux support policies.

Support Level Flexibility

Q: Does Red Hat allow you to tailor your support level to your workload?

A: Yes. Many customers match their Red Hat Enterprise Linux subscription level to their application SLA requirements. For example, customers may choose a Basic subscription for non-mission critical file and print servers, while selecting Premium subscriptions for database servers. Oracle does not allow this flexibility - their support policy reads: "If acquiring Enterprise Linux Premier Support, all of your Oracle supported systems must be supported with Enterprise Linux Premier Support."

Q: Can Oracle produce timely security updates to Red Hat Enterprise Linux as they stated?

A: No. There will be a delay between the time a Red Hat Enterprise Linux update is issued and the time the source code makes its way to Oracle. There is no guarantee that the source code for the Red Hat Enterprise Linux update will work correctly when integrated into Oracle's Linux code base; this integration and test will take additional time. In the case where the update corrects critical security flaws, Oracle customers may be exposed to additional risk.

Linux Assurance

Q: Red Hat Enterprise Linux has government security certifications including Common Criteria Evaluated Assurance Level (EAL) 4+/Controlled Access Protection Profile (CAPP). Will Oracle's version of Linux inherit these certifications?

A: No. Common Criteria evaluations are conducted on a specific configuration of software and hardware. Any changes to the software such as those Oracle has announced will invalidate certification.

Customer Collaboration

Q: Will Oracle's Linux customers have the same degree of influence over Oracle's Linux as Red Hat's customers do with Red Hat Enterprise Linux?

A: The support we provide for Red Hat Enterprise Linux starts when Red Hat and its customers collaborate in the design of new versions. This collaboration extends through the development, testing, and production deployment of Red Hat Enterprise Linux. Vendors of a derivative distribution are simply not positioned to provide their customers the same collaboration opportunity.

Support Partners

Q: Hardware vendors such as Dell, HP, and IBM provide support for Red Hat Enterprise Linux. How is Oracle's support offering different?

A: Red Hat's hardware partners provide front line support to customers, backed by Red Hat. Red Hat has a close contractual relationship with these partners, which requires training, well defined escalation paths, Red Hat back-line support, and cooperative customer issue management. Our joint customers enjoy the same degree of collaborative participation as any Red Hat customer.
Saturday, Oct 14, 2006, 12:00 AM
Related news about my favorite filesystem, from the linux-kernel mailing list:

From: Kobajashi Zaghi [email blocked]
To: linux-kernel
Subject: The Future of ReiserFS development
Date: Wed, 11 Oct 2006 10:53:02 +0200


Hans Reiser arrested on suspicion of murder.

What is the plan? Could i
migrate from reiserfs to another journaling filesystem? How will this
trouble affect reiserfs development?

I hope Hans is innocent.



From: Jan Engelhardt [email blocked]
Subject: Re: The Future of ReiserFS development
Date: Wed, 11 Oct 2006 13:20:39 +0200 (MEST)

> What is the plan? Could i
> migrate from reiserfs to another journaling filesystem? How will this
> trouble affect reiserfs development?

Since development has pretty much ceased already, there is nothing to
lose if you continue to use reiserfs.


From: Alan Cox [email blocked]
Subject: Re: The Future of ReiserFS development
Date: Wed, 11 Oct 2006 18:56:44 +0100

On Wed, 2006-10-11 at 10:53 +0200, Kobajashi Zaghi wrote:
> Hi!
> Hans Reiser arrested on suspicion of murder.
> What is the plan? Could i
> migrate from reiserfs to another journaling filesystem? How will this
> trouble affect reiserfs development?

Reiserfs is written by a team of people at Namesys, and particularly
with reiserfs3 people at SuSE and elsewhere as well.


From: Alexander Lyamin [email blocked]
Subject: Re: The Future of ReiserFS development
Date: Wed, 11 Oct 2006 20:41:03 +0400

Well, this is a correct statement if we are talking about 3.6; it has only been bugfixes lately, although the SuSE people used to add some new stuff like ACL support.

As for reiser4, we are still going through review, thanks to AKPM: chunking out patches, fixing issues, and generally cleaning house.

Yes, we are rather shaken and stressed at the moment, although I cannot say we didn't see it coming.
I, personally, really like how the US police acted exactly like their Russian counterparts: i.e. sitting on their hands for a whole month, waiting so they could declare the person officially missing, and then just pressing charges against whoever looks most vulnerable. Well, probably I am wrong. Time will tell.

What WE (i.e. the reiser4 dev people) are planning to do:

Short term (present + 6 months):
We will just buzz along as usual, chunking out patches and going through review, while pursuing existing business opportunities to get some funding.

Long term (6 months from now and beyond):
If it goes the way we hope it will go, well... we will do fine.
If it goes bad, that is where it becomes tricky. We will try to appoint a proxy to run the Namesys business.

That's it for now.

Wed, Oct 11, 2006 at 01:20:39PM +0200, Jan Engelhardt wrote:
> > What is the plan? Could i
> > migrate from reiserfs to another journaling filesystem? How will this
> > trouble affect reiserfs development?
> Since development has pretty much ceased already, there is nothing to
> lose if you continue to use reiserfs.
> -`J'

"the liberation loophole will make it clear.."
lex lyamin
Friday, Aug 25, 2006, 8:57 PM
Happy 15th Birthday Linux!

From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Summary: small poll for my new operating system
Message-ID: <1991Aug25.205708.9541@klaava.Helsinki.FI>
Date: 25 Aug 91 20:57:08 GMT
Organization: University of Helsinki

Hello everybody out there using minix -
I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).
I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-)
Linus (
PS. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT protable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.
SELinux Troubleshooter
Tuesday, Aug 22, 2006, 11:58 PM
Here's a cool tool for FC6 and RHEL5 from Dan Walsh:

One of the great strengths of SELinux and other MAC architectures is that applications do not have to be modified to be protected by SELinux. This allows us to write policy for a great many services without going through the process of modifying code and getting upstream acceptance. It also allows flexibility in that different vendors or different users can have different security profiles for an application without having to modify the application.

While this is a great benefit to developers, it is not necessarily a great benefit to usability. Since applications do not understand what SELinux is doing, they cannot report that SELinux is preventing them from doing something. For example, if you are running an Apache web server and SELinux denies access to a file, the web server reports "permission denied". Users of Unix and other operating systems have learned over the years that "permission denied" means there is a problem with either the file's ownership or its permissions (DAC). But when they go look at the file, they see that apache owns it and can read it. This leaves them scratching their heads. They go back to the log file, and all it says is "permission denied".

Some may suspect that SELinux is the problem, but how do they tell? If they figure that SELinux is causing the denial, how do they fix it? Could this be a security violation attempt? Could this be a configuration problem? Is the file mislabeled?

We have created a new tool in FC6 and RHEL5 called the SELinux Troubleshooter (setroubleshoot). This tool watches the audit log files for AVC messages. When an AVC message arrives, the tool runs through its plugin database looking for a match and then sends the user a message with a description and a suggested fix.
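The watch-and-match cycle described above can be sketched roughly as follows. This is an invented illustration, not setroubleshoot's actual API; the plugin class, its `matches` method, and the report format are all hypothetical stand-ins.

```python
# Hypothetical sketch of the setroubleshoot loop: scan audit lines for
# AVC denials, ask each plugin whether it recognizes the denial, and
# report the first match. All names here are invented for illustration.

def dispatch(audit_lines, plugins):
    """Return (plugin name, summary) for each recognized AVC denial."""
    reports = []
    for line in audit_lines:
        if "avc: denied" not in line:
            continue                      # not an AVC denial record
        for plugin in plugins:
            if plugin.matches(line):
                reports.append((plugin.name, plugin.summary))
                break                     # first matching plugin wins
    return reports

class HomedirPlugin:
    """Toy plugin: httpd touching a home-directory label."""
    name = "httpd_mislabeled_content"
    summary = "httpd was denied access to a file with a home-directory label"

    def matches(self, line):
        return "httpd_t" in line and "user_home_t" in line

log = ['type=AVC msg=audit(1.0:1): avc: denied { getattr } '
       'scontext=user_u:system_r:httpd_t:s0 '
       'tcontext=system_u:object_r:user_home_t:s0 tclass=file',
       'type=SYSCALL msg=audit(1.0:1): arch=c000003e syscall=4']
print(dispatch(log, [HomedirPlugin()]))  # only the AVC line produces a report
```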

As an example, say you create a file index.html in your home directory and mv it to the /var/www/html directory. If you try to access this file via a web browser, you will receive an AVC message that looks like:

type=AVC msg=audit(1155056960.933:208967): avc: denied { getattr } for pid=12321 comm="httpd" name="index.html" dev=dm-0 ino=6260297 scontext=user_u:system_r:httpd_t:s0-s0:c1,c2 tcontext=system_u:object_r:user_home_t:s0 tclass=file

Obviously this tells you that the Apache web server is not allowed to look at files labeled with the user's home directory label. :^)
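The fields of an AVC record like the one above are simple key=value pairs, so a tool (or an admin script) can pull out the source and target types mechanically. Here is a rough sketch of that parsing; it is not setroubleshoot's actual parser, just an illustration of how the relevant pieces can be extracted.

```python
import re

# The AVC denial from the example above.
AVC = ('type=AVC msg=audit(1155056960.933:208967): avc: denied { getattr } '
       'for pid=12321 comm="httpd" name="index.html" dev=dm-0 ino=6260297 '
       'scontext=user_u:system_r:httpd_t:s0-s0:c1,c2 '
       'tcontext=system_u:object_r:user_home_t:s0 tclass=file')

def parse_avc(line):
    """Extract key=value fields plus the denied permission from an AVC line."""
    fields = {k: v.strip('"')
              for k, v in re.findall(r'(\w+)=("[^"]*"|\S+)', line)}
    fields['permission'] = re.search(r'denied\s*\{\s*(\w+)\s*\}', line).group(1)
    # the type is the third colon-separated element of a security context
    fields['source_type'] = fields['scontext'].split(':')[2]
    fields['target_type'] = fields['tcontext'].split(':')[2]
    return fields

info = parse_avc(AVC)
print(info['source_type'], info['target_type'], info['permission'])
# httpd_t user_home_t getattr
```

Those two type fields (here httpd_t acting on user_home_t) are exactly what the plugins pattern-match against.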

With setroubleshoot you receive a message like the following:

You can also configure the setroubleshoot daemon to send mail when it receives an AVC. So you will get them even on servers or when not logged in.

There are currently 56 plugins, which map to all of the booleans along with several known situations that come up. There is also a catchall plugin (disable_trans) which looks for AVCs with no match and suggests either writing a loadable policy module or disabling the domain transition.

You can read more about this tool at

The Plugin code to generate the above message is fairly simple and looks like this:

from setroubleshoot.util import *
from setroubleshoot.Plugin import Plugin
from rhpl.translate import _
import re

class plugin(Plugin):
    summary = _('''
    SELinux is preventing the http daemon from using potentially
    mislabeled files ($TARGET_PATH).
    ''')

    problem_description = _('''
    SELinux has denied the http daemon access to potentially
    mislabeled files ($TARGET_PATH). This means that SELinux will not
    allow http to use these files. It is common for users to edit
    files in their home directory or tmp directories and then move
    (mv) them to the httpd directory tree. The problem is that they
    end up with a file context which http is not allowed to access.
    ''')

    fix_description = _('''
    If you want the http daemon to access these files, you need to
    relabel them using restorecon if they are under the standard
    httpd directory tree, or use chcon -t httpd_sys_content_t. You can
    look at the httpd_selinux man page for additional information.
    ''')

    def __init__(self):
        Plugin.__init__(self)

    def analyze(self):
        if self.avc.sourceTypeMatch("httpd_t httpd_sys_script_t "
                                    "httpd_user_script_t httpd_staff_script_t") and \
           self.avc.targetTypeMatch("user_home_t staff_home_t user_tmp_t "
                                    "staff_tmp_t tmp_t"):
            return True
        return False
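To see what the analyze() check is doing, here is a self-contained sketch using a stand-in AVC object. The FakeAVC class and its two methods are invented to mimic the matching behavior; the real object comes from setroubleshoot.

```python
class FakeAVC:
    """Stand-in for setroubleshoot's AVC object (invented for illustration)."""
    def __init__(self, source_type, target_type):
        self.source_type = source_type
        self.target_type = target_type

    def sourceTypeMatch(self, types):
        # membership test against a space-separated list of domain types
        return self.source_type in types.split()

    def targetTypeMatch(self, types):
        return self.target_type in types.split()

def analyze(avc):
    # The same check the plugin performs: an httpd domain touching
    # home-directory or tmp labels triggers the "mislabeled files" hint.
    return (avc.sourceTypeMatch("httpd_t httpd_sys_script_t "
                                "httpd_user_script_t httpd_staff_script_t")
            and avc.targetTypeMatch("user_home_t staff_home_t user_tmp_t "
                                    "staff_tmp_t tmp_t"))

print(analyze(FakeAVC("httpd_t", "user_home_t")))   # True: the example denial
print(analyze(FakeAVC("sshd_t", "user_home_t")))    # False: not an httpd domain
```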

Now, if you are interested in helping with this effort, we could use help with:
* Proofreading these plugins. They are in the /usr/share/setroubleshoot/plugins directory.
* Ideas for additional plugins. Bring them up on the fedora-selinux list. Patches welcome.
* Testing.

This tool is a work in progress.

There are some gotchas in this tool, and it has been known to go into an infinite loop, usually when it reports bugs about itself.