
Kernel Summit: Development process

For all practical purposes, the final topic of the 2004 Kernel Summit was the traditional discussion of the development process. This space is usually set aside as the time to pressure Linus for a deadline; often inquiring minds want to know when a feature freeze will happen, but the issue of interest this time was: when will the 2.7 series begin? The answer was somewhat difficult to interpret, but certainly surprising.

Linus said that he is ready to open 2.7 at about any time; he had originally thought he would do it in June. There are, however, a few concerns which need to be addressed. One is that he wants better synchronization with the ongoing 2.6 tree, so that fixes can be forward ported and interesting developments backported after they stabilize. Then there is the fact that a number of people have grown used to having a BitKeeper tree around for 2.6, and that will somehow have to continue after Linus moves on. Linus has also been very happy with how the 2.6 development process has gone, with Andrew Morton vetting patches in his -mm tree and passing on the ones that work for the mainline.

The end result, says Linus, is that the kernel development process needs "a BK person for Andrew, and an Andrew for me." He then batted his eyes at Alan Cox and asked: "Will you be my Andrew? Will you be mine?" One might have expected Alan to blast a hole in the wall while effecting a hasty exit, but, instead, he agreed - as long as people are not concerned about his continued employment with Red Hat. Andrew, meanwhile, stated that he has no religious problems with BitKeeper, and was planning on picking it up at some point. Thus, he does not necessarily need a "BK person" to help him out.

At this point, one might naively conclude that everything has been worked out, but things are not destined to be so simple.

Linus talked about how happy just about everybody is with 2.6. It has been almost two years since the alleged 2.5 feature freeze, but there still is no great pressure to start a new development series. Linus asks: could things just go on the way they are for a while yet, until enough pressure forms to force the 2.7 fork?

Bdale Garbee pointed out that, in the absence of a 2.7, many people will conclude that 2.6 has not yet stabilized sufficiently. There may be a need to do the fork just to convince people that 2.6 is ready. Alan Cox had a different idea: given that there is not a great deal of stuff to merge into 2.7, perhaps the developers could actually do a six-month release cycle for a change?

Andrew pointed out that, during the 2.6 process, he and Linus have been merging patches at a rate of about 10MB/month. There is, he says, no reason to believe that things will not continue that way. The traditional stabilization mechanism, where almost no patches are accepted for long periods of time, does not strike him as a good idea. Instead, Andrew would like to see a 2.6 tree which continues to change and evolve, and let the distributors do the final stabilization work. In his vision of the future, the kernel.org kernel will be the most featureful and fastest kernel out there, but it will not necessarily be the most stable.

The idea here is that restricting changes creates an incredible "patch pressure," which eventually leads to massive amounts of changes going into the kernel suddenly. At that point, things really do become unstable. It is better to keep the flow rate on patches higher; that keeps the developers happy and gets new code out to users quicker. Andrew really believes this: there are, seemingly, very few patches that he is not willing to accept into 2.6 - as long as they make sense and survive testing in -mm.

These patches include API changes, incidentally. Stable internal kernel APIs have never been guaranteed, but the developers have usually tried to not make big changes during a stable kernel series. That looks to change now. Among other things, it was said that API changes should be merged before an eventual 2.7 fork, since that would make synchronization between the two trees easier. Your editor, who really would like to see Linux Device Drivers not go obsolete before it hits the shelves, finds this idea somewhat dismaying.

What may happen is that Linus creates a 2.7 tree in the near future, but that tree will be restricted to truly experimental, destabilizing changes. This tree may have no future: if it doesn't work out, or can't be kept in sync with 2.6, it might simply be dropped. Or it could yet develop into 2.8, if that makes sense.

Nothing was truly resolved in this discussion, which will certainly continue in the hallways and bars over the next few days. But the Linux kernel is maturing, and the development process looks like it will be changing in response. This could be cause for concern, except for one important thing: the 2.6 series really has worked very well. The kernel is in good hands, and that is one thing that looks like it will not change.


Dismaying Indeed

Posted Jul 21, 2004 3:37 UTC (Wed) by ken_i_m (guest, #4938) [Link] (7 responses)

Putting the onus of stabilization on the "distributions" spells the eventual end of independent Linux users/developers. I have been hearing rumours that glibc is heading down this same path. Without stable reference milestones, the barrier to moving beyond dependence on a single vendor is artificially raised. A co-dependency mentality sets in: "why should I make any effort in this area when the bar for doing anything different is so high?" Creative energy moves on to other areas of life (or other OSes) where it is more effective. Innovation outside of the corporate-sponsored environment dries up. Linux loses mindshare in the hearts of the people as an underdog. It becomes just another product manufactured by corporations. Goodbye, Linux.

Dismaying Indeed

Posted Jul 21, 2004 4:25 UTC (Wed) by garloff (subscriber, #319) [Link]

Well, the mainline 2.6 kernels have been quite stable; certainly stable enough for most private users. The extra hardening is required for servers that do extreme stuff or have extreme hardware, so the mainline kernel may not always be suitable for all of those. This is not necessarily as bad as it sounds: most people who use such machines want a support contract anyway ...

Dismaying Indeed

Posted Jul 21, 2004 11:09 UTC (Wed) by corbet (editor, #1) [Link] (1 responses)

On the other hand, imagine a world where the distributors actually ship something close to the mainline kernel (because it already contains their patches) and developers can actually hope to see their work distributed in a reasonable period of time. These changes could actually make vendor kernels more alike and life easier for independent developers.

Dismaying Indeed

Posted Jul 22, 2004 17:38 UTC (Thu) by piggy (guest, #18693) [Link]

I for one am very excited. It is difficult to get customers to pay for kernel work in the development branch. Almost everyone wants their work done against the production branch. A longer active development lifetime for 2.6 means more potential for contributions from commercial developers.

Dismaying Indeed

Posted Jul 21, 2004 12:10 UTC (Wed) by gallir (guest, #5735) [Link] (2 responses)

Even if you are right (which I doubt), have you never heard of the Debian or Gentoo patched kernels, for example?

Dismaying Indeed

Posted Jul 21, 2004 17:20 UTC (Wed) by broonie (subscriber, #7078) [Link]

They are patched, though.

Debian kernel patches

Posted Jul 22, 2004 11:49 UTC (Thu) by shapr (subscriber, #9077) [Link]

Debian kernel images are built from the kernel-tree-2.x.x package, which depends on the kernel-source-2.x.x and kernel-patch-debian-2.x.x packages.

I build my own kernels with kernel-patch-debian-2.x.x as well, but often with additional kernel-patch packages and custom settings for my hardware.

Not convinced

Posted Jul 29, 2004 11:27 UTC (Thu) by ringerc (subscriber, #3071) [Link]

I'm not convinced that side of things is a big deal. It's possible that some distribution-neutral "linux-STABLE" will evolve over time, with distributions sharing maintenance; I'd say it's even likely. If not, I don't see that much difference between basing a distro on another distro rather than on mainline (this is, after all, what most people do anyway).

Kernel Summit: Development process

Posted Jul 21, 2004 4:39 UTC (Wed) by hp (guest, #5220) [Link] (2 responses)

The following works very well for GNOME; three releases have come out on time and with better quality than under the previous feature-based model:
http://mail.gnome.org/archives/gnome-hackers/2002-June/msg00041.html

Key points are:
- time based
- concept of "active stream" people are using
- forced feature freeze on stable branch
- always dogfoodable unstable branch
- the timeline and rules are the same every iteration, so there's never confusion or big discussions about the release process

If the unstable branch is reliably going to stabilize and release, people can work on it instead of destabilizing the stable branch while the unstable branch becomes a research project.

Kernel Summit: Development process

Posted Jul 21, 2004 5:32 UTC (Wed) by jamesm (guest, #2273) [Link] (1 responses)

The research branch idea sounds like one of the ideas that Linus mentioned, where 2.7 would be only for really strange/radical things.

Perhaps you could post a summary of the Gnome experience to lkml after the conference.

Kernel Summit: Development process

Posted Jul 21, 2004 18:59 UTC (Wed) by hp (guest, #5220) [Link]

Ah, I meant the "research project" to be a bad thing. ;-)
The unstable branch should be dogfoodable...

Kernel Summit: Development process

Posted Jul 21, 2004 5:27 UTC (Wed) by jamesm (guest, #2273) [Link]

I liked Alan's idea that we could move to six-month release cycles for kernels, now that the overall system has stabilized somewhat. I think we're getting closer to a model of continual refinement instead of radical changes over longer timeframes. It's a sign that the software is more mature.

Kernel Summit: Development process

Posted Jul 23, 2004 12:52 UTC (Fri) by erich (guest, #7127) [Link] (2 responses)

Avoiding Fragmentation
Having read the "let distributors do the stabilization" concept, I'm scared: this will lead to incompatible kernels, I fear. I would really prefer that there be one and only one true Linux kernel, and that it satisfy the needs of distributors without much modification. Otherwise we're going to need a "Linux Kernel Standardization Base" sooner or later... I really do not want to see "SuSE Linux Kernel 9.2", "RHEL Linux Kernel 2.7.1", "Fedora Linux Kernel 2.7.3", "Debian Linux Kernel 2.7.4-foobar.1", "Gentoo Linux Kernel 0.2.7.1" and so on.

Kernel Summit: Development process

Posted Jul 23, 2004 13:16 UTC (Fri) by chloe_zen (subscriber, #8258) [Link] (1 responses)

What, do your eyes not work now? Because if you can't see that, it's because you're not looking. I did kernels for VA Linux, and we had over 250 patches back in the 2.2.18 days. Red Hat had about 200 patches too, IIRC.

Kernel Summit: Development process

Posted Jul 23, 2004 14:38 UTC (Fri) by erich (guest, #7127) [Link]

Yes, I know that we already have a lot of fragmentation, but we should try to reduce it, not increase it.

Kernel Summit: Development process

Posted Jul 24, 2004 0:05 UTC (Sat) by giraffedata (guest, #1954) [Link]

There is some confusion over what "stable" means. Many people seem to think it means "doesn't crash." There are better terms for that (like "doesn't crash"). What "stable" really means is "doesn't change." It describes the branch, not the code at the head of it.

The correlation, of course, is that the way you make code bug free is to stop changing it, and when it's bug free, you don't have to change it.

The 2.6 branch is not stable at all. It's changing like crazy. Until there is at the very least some other branch to draw the attention of the coders, it will continue to change.

The death of the Linux stable/development system is due to the long release cycles. Developers want their stuff to get into users' hands some time before they retire. The users won't touch an unstable branch, so the stable branch is where it goes. Which destabilizes the branch.

Incidentally, the common strategy of putting bug fixes and not features in the stable branch is wrong. The right strategy is to put code changes with a low risk/reward ratio in the stable branch and those with a high risk/reward ratio in the development branch. Both bug fixes and features can fit in either of those categories.

Kernel Summit: Development process

Posted Jul 24, 2004 1:56 UTC (Sat) by set (guest, #4788) [Link] (1 responses)

H. Peter Anvin tried to clarify the situation thusly in the 'A users thoughts on the new dev. model' thread on lk:

I think the discussion we had at the kernel summit has been somewhat misrepresented by LWN et al. What we discussed was really more of a "soft fork", with the -mm tree serving the purpose of 2.7, rather than a hard fork with a separate maintainer and putting ourselves in back/forward-porting hell all over again.

Note that Andrew's -mm tree *specifically* has infrastructure to keep changes apart, so backporting patches which have proven themselves to the 2.6 mainstream becomes trivial.

Thus:

- Andrew will put experimental patches into -mm;
- Andrew will continue to forward-port 2.6 mainstream fixes to -mm;
- Patches which have proven themselves stable and useful get backported to 2.6;
- If the delta between 2.6 and -mm becomes too great we'll consider a hard fork AT THAT TIME, i.e. fork lazily instead of the past model of forking eagerly.

Why the change? Because the model already has proven itself, and shown itself to be more functional than what we've had in the past. 2.6 is probably the most stable mainline tree we've had since 1.2 or so, and yet Linus and Andrew process *lots* of changes. The -mm tree has become a very effective filter for what should go into mainline, whereas the odd-number forks generally *haven't* been, because backporting to mainline has usually been an afterthought.

I for one welcome our new -mm overlords.

-hpa

Kernel Summit: Development process

Posted Jul 24, 2004 21:48 UTC (Sat) by Duncan (guest, #6647) [Link]

When I originally read this LWN article, I had the same thought, that the reason the dynamics had changed was because the mm tree has been serving as the testing ground that the development tree formerly occupied. I don't see that LWN misrepresented that.

My only question at the time was why all the comments seemed so skewed toward the loss of the development tree, and no one was pointing out how mm was now serving that function. However, as I was reading it on a total of about a half hour of sleep the day before, and no one else was picking up on it, I thought I obviously must have missed something somewhere and resolved that I'd get back to the story after I caught up on sleep.

Now that I have, it seems to say the same thing it did before, and at least ONE other poster actually seems to /get/ it (although I was a bit put off by the /. "overlords" reference).

I believe it's worth stating again, here in a slightly different way. Linus seems to have finally found a teammate he can work closely enough with to make it effective. The mm tree DOES seem to be serving the purpose of development/testing, allowing a much smoother and more timely flow of pre-tested patches into the "stable" kernel. Thus, the need for a traditional development kernel is far lessened, and the pressure just isn't there to create it, as it would just be an artificial construct at this point, with what's currently there working as naturally as it is.

It's likely that eventually some huge changes will build up in the wings, too much for even mm to take on without disrupting its current role as the testing ground for stable. The flurry of patches after OLS will likely test just how far off that point will be, as they equally test how well the mm/linus feeder mechanism works with even larger changes thrown at it. If the current system continues to work, it might be some time before 2.7, or indeed the current system may become more or less (most?) permanent, until something comes along to change it, anyway. If, however, it shows signs of stress under the additional flexing necessary for the larger and more intrusive patches, people will be forced to back off, and the pressure for a 2.7 will then begin to build.

In some ways, then, the kernel summit and OLS can be seen to have invited a serious test of the system, either potentially triggering that buildup of pressure that hasn't really happened until now, or stress-testing the current system such that developers can be comfortable that it WILL hold up under such conditions, and thus comfortable in allowing the new system to really settle in.

As for the stability/distributions thing, IMO the new system should be far better in that regard than the old one was. At least, it wouldn't seem to be worse. As another poster already mentioned, 2.6 has already proven quite stable enough for a desktop system, and the folks wanting real stability either tend to roll their own or get a vendor version complete with a support contract anyway. In fact, if anything, the more timely flow of updates to the "stable" kernel will likely mean *LESS* need for patching from individual vendors than was previously the case, given the maze of backports and other-source patches in most vendor kernels, particularly in the couple of releases previous to the switch to the next stable kernel.

I'm definitely looking forward to seeing how this "new kernel paradigm" plays out!

Duncan


Copyright © 2004, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds