LWN: Comments on "LinuxConf.eu: Documentation and user-space API design" https://lwn.net/Articles/247788/ This is a special feed containing comments posted to the individual LWN article titled "LinuxConf.eu: Documentation and user-space API design". en-us Sat, 02 Nov 2024 23:41:17 +0000 Sat, 02 Nov 2024 23:41:17 +0000 https://www.rssboard.org/rss-specification [email protected] Once upon a time (;-)) https://lwn.net/Articles/249735/ https://lwn.net/Articles/249735/ larryr <blockquote><em>how about a requirement that one write a white paper, a man page, **and** a tutorial before you can add a new feature</em></blockquote> <p> Is it ok if it says </p> <pre> xyz is fully documented in the texinfo documentation. To access the help from your command line, type info xyz </pre> <p> Larry </p> Thu, 13 Sep 2007 18:17:49 +0000 Taste https://lwn.net/Articles/249647/ https://lwn.net/Articles/249647/ lysse You're still expecting *decency* from me? Weird.<br> <p> And the funny thing is, for all your assertions that I'm a troll, *you* initially replied to *me*. And then threw a tantrum when I declined to respect your authoritah.<br> <p> And now you won't stop, because you Just Have to have the last word, even though the sane thing to do would have been to stop responding at least two comments ago.<br> <p> In real life, you'd be that kid on the playground who tries to butt into a conversation I'm having with my friends and then starts calling me names and complaining to the teachers because I gave you the brush-off.<br> <p> And I haven't responded to an argument because you haven't MADE an argument. "Real-world GC is slow in my experience" is an assertion, and a subjective one at that, with a huge great amorphous blob of a term in the middle of it. (I've already mentioned hypocrisy, haven't I? 
Just checking.) Either put something objective and quantifiable on the table, or take your invisible ball and fuck off back to the infant playground.<br> Thu, 13 Sep 2007 11:59:11 +0000 Taste https://lwn.net/Articles/249560/ https://lwn.net/Articles/249560/ mikov Another post without information. I did not expect that. You don't even have the decency to admit that:<br> - WebSphere Real Time is not free<br> - in any case that fact is irrelevant for the point I was making<br> <p> At least we will both be confident in the knowledge of that. <br> <p> Go away, troll. <br> Thu, 13 Sep 2007 03:53:57 +0000 Taste https://lwn.net/Articles/249556/ https://lwn.net/Articles/249556/ lysse I didn't get as far as saying that your point wasn't valid. I said that your fact-checking was laughably unthorough, with the implication that your argument was founded primarily on personal prejudice and you weren't about to let a good fact get in the way. Generally, if I find one claim that's directly negated by the evidence presented in support of it, I don't hang around to see how much of the rest of someone's argument turns out to be a big pile of manure.<br> <p> And your response to being caught in a falsehood (whether you like it or not, Monotone is unequivocally not non-free; "a commercial version is now available" != "it is not free") was to defend the falsehood and attack your challenger. The former undermines your credibility even further, and gives me no reason to change my initial assessment. The latter has no place in civilised debate, and you should be ashamed of yourself for doing it.<br> <p> However, the fact that having done so, you then have the hypocrisy and presumption to upbraid me for an "apparent urge to be rude", demonstrates pretty clearly that you *have* no shame. You are not worth communicating with, frankly, let alone debating. 
You want to know why I didn't see any good reason for wasting my time on you? Reread your own posts. You've given me no reason to think you're worth a damn, and plenty of cause to decide you aren't - and that's even *before* we consider the merits of your arguments.<br> <p> I was hoping to avoid telling you exactly what I thought of you, but if you're going to accuse me of rudeness when I am showing restraint, I have nothing to lose. So here it is; I hope it justifies your every prejudice about me, and gives you a few you hadn't thought of. You're not the kind of person I *want* thinking well of me.<br> Thu, 13 Sep 2007 03:26:10 +0000 Taste https://lwn.net/Articles/249550/ https://lwn.net/Articles/249550/ mikov If you aren't saying anything on the subject of garbage collection, why do you bother replying ? I don't know who you are or what your credentials are, so my opinion of you doesn't matter. And vice versa. <br> <p> My point is clear enough - garbage collection, as it is experienced in practice in everyday life - is slow. There exist solutions, but they are proprietary and far from common.<br> <p> Please, try to restrain your apparent urge to be rude and either say why you think my point isn't valid (or what your proposed solution is), or go away.<br> Thu, 13 Sep 2007 03:06:47 +0000 Taste https://lwn.net/Articles/249548/ https://lwn.net/Articles/249548/ lysse You are free to conclude exactly what you please, of course, but consider what conclusions I might have formed that made me decide to be "arrogant and dismissive". 
Ironic, n'est-ce pas?<br> Thu, 13 Sep 2007 02:39:04 +0000 Once upon a time (;-)) https://lwn.net/Articles/249310/ https://lwn.net/Articles/249310/ davecb When one wanted to provide a chunk of functionality <br> to Multics, one wrote a white paper arguing its <br> desirability. This was Good. Then we wrote<br> tutorial and manual pages, because we were<br> writing, anyway.<br> <p> The Unix folks from Bell labs, who worked on<br> Multics, decided that, when developing Unix,<br> writing man pages was Almost As Good. As<br> were tutorials.<br> <p> All joking aside, how about a requirement that<br> one write a white paper, a man page, **and** a<br> tutorial before you can add a new feature<br> to the Linux or BSD kernel?<br> <p> --dave<br> <p> <p> Wed, 12 Sep 2007 00:14:54 +0000 LinuxConf.eu: Documentation and user-space API design https://lwn.net/Articles/248887/ https://lwn.net/Articles/248887/ jzbiciak Two words: Static linking.<br> Sat, 08 Sep 2007 20:55:00 +0000 Taste https://lwn.net/Articles/248849/ https://lwn.net/Articles/248849/ IkeTo <font class="QuotedText">&gt; Fair enough, but his opinion was peer-reviewed. :)</font><br> <p> The peer review process seldom blocks correct but practically irrelevant work. :)<br> Sat, 08 Sep 2007 06:41:41 +0000 Taste https://lwn.net/Articles/248841/ https://lwn.net/Articles/248841/ mikov "Metronome Garbage collection is now commercially available under the name WebSphere Real Time. This product is the result of a collaboration between the Metronome team and IBM Software Group. The first implementation of Metronome was in the open-source Jikes Research Virtual Machine (RVM)."<br> <p> Do I misunderstand the meaning of the words "first implementation was" and "commercially available" ? 
Besides, if you had bothered to read my post, you'd see that the "freeness" of an implementation is a secondary point. You'd also probably know that extracting a GC from a research project and transplanting it into another JVM is not a mere technical detail.<br> <p> Judging by the arrogant and dismissive tone of your post, I can only conclude that you have nothing informative to say on this subject. I also dare say that your attitude negates the value of all of your posts on the subject (which, even though I didn't agree with them completely, I initially found interesting).<br> Sat, 08 Sep 2007 02:03:14 +0000 Taste https://lwn.net/Articles/248833/ https://lwn.net/Articles/248833/ lysse <font class="QuotedText">&gt; You could say that I should use another vendor's implementation (e.g. - <a href="http://domino.research.ibm.com/comm/research_projects.nsf">http://domino.research.ibm.com/comm/research_projects.nsf</a>... - which AFAIK isn't free)</font><br> <p> The last sentence of *that very page* says otherwise. Is the rest of your argument constructed with as much care as this?<br> Sat, 08 Sep 2007 01:48:54 +0000 Taste https://lwn.net/Articles/248828/ https://lwn.net/Articles/248828/ jsnell <p><i>Lest anyone be confused: CMUCL and SBCL's garbage collectors are essentially identical, and CMUCL has had a generational garbage collector since...a very long time ago. </i> <p> While the code in the two GCs might be essentially identical, that doesn't really mean that their performance characteristics are. There are many important performance improvements in the memory management of sbcl which never made it back to cmucl. Some of those improvements were in the GC, others in related areas like the fast path of the memory allocation sequence. 
As a result of those, cmucl can take 2x the time sbcl does to run an allocation-heavy program and spend 5x as long in the GC for it [*]. <p> But ultimately those improvements were just tweaks on a 10-year-old GC that uses only concepts that are 20 years old, and which was bolted on to a compiler that doesn't really provide any support for the GC. It's not hard to imagine that newer GC designs or ones that are properly integrated into the compiler would perform even better. <p> [*] Those results are from the Alioth binary-trees benchmark with n=20, since I don't have any better benchmarks accessible right now. Incidentally, in the shootout results the Lisp version of this program is faster than the C one. Fri, 07 Sep 2007 23:53:15 +0000 Taste https://lwn.net/Articles/248832/ https://lwn.net/Articles/248832/ lysse Fair enough, but his opinion was peer-reviewed. :)<br> Fri, 07 Sep 2007 23:48:52 +0000 Taste https://lwn.net/Articles/248682/ https://lwn.net/Articles/248682/ foom As I said, the mistake was that nobody had properly re-tested the assumption that the GC did not <br> work acceptably for too long. But, attacking straw men is more fun, right? 
I'm sorry you have such <br> an acute dislike for garbage collectors that you need to make things up to prove them terrible.<br> <p> Lest anyone be confused: CMUCL and SBCL's garbage collectors are essentially identical, and CMUCL <br> has had a generational garbage collector since...a very long time ago.<br> <p> While Lisp has most certainly not taken over the world, garbage collectors have.<br> Fri, 07 Sep 2007 02:43:03 +0000 Taste https://lwn.net/Articles/248676/ https://lwn.net/Articles/248676/ ncm <i>"I'm sure QPX ran the GC while you were around" </i> <p>Long experience has taught me to be very distrustful of anything a GC advocate is "sure" of. <p>Evidently ITA's "mistake" in avoiding GC cycles was to insist on running their application for several years before a Lisp runtime with a tolerable GC was available to them. They certainly were not lax in trying to obtain one: they used Allegro CL at the time I started, and dropped it for CMUCL while I was there. SBCL was under active development. They employed one of the primary CMUCL maintainers. (I think he would be surprised to find his competence disparaged here; he was always admirably forthcoming with me in acknowledging CMUCL's then-current and Lisp's inherent limitations.) <p>This exchange illustrates well some of the reasons why Lisp hasn't exactly taken the world by storm. Chief among them must be Lisp advocates <i>still</i> unable to understand why not. <p>But this is all off-topic, and I apologize again to those reading the article to learn about system call interfaces. Fri, 07 Sep 2007 00:40:25 +0000 Taste https://lwn.net/Articles/248665/ https://lwn.net/Articles/248665/ foom <font class="QuotedText">&gt; (Are you saying QPX runs GC cycles now?)</font><br> <p> Yes. With a generational collector, triggering minor GCs is not actually a terrible thing. 
I'm sure <br> QPX ran the GC while you were around as well, although, as you say, some effort was put in to try <br> to avoid it happening very often.<br> <p> But, as it turns out, the strange things QPX did to avoid allocating new memory for objects that <br> need to be on the "heap" was actually *slower* than allocating the objects normally and letting <br> the GC do its thing. <br> <p> Basically, the GC works fine, and it was a mistake to try to avoid using it. (This mistake wasn't <br> made because of stupidity or premature optimization; it was an optimization made for another <br> lisp implementation with a poor GC, and was kept around without its necessity being re-assessed <br> as soon as it perhaps should have been.)<br> <p> Of course, not allocating memory at all is going to be faster than allocating memory, but when <br> you do it, a garbage collector is a fine thing to have. <br> Thu, 06 Sep 2007 23:22:08 +0000 Taste https://lwn.net/Articles/248651/ https://lwn.net/Articles/248651/ mikov In practice, with the current GC implementations and languages, GC is both slow and noticeable. There is no point in arguing this at all, because I experience it every day - in the Java applications I develop, as well as the ones I use.<br> <p> You could say that I should use another vendor's implementation (e.g. 
- <a href="http://domino.research.ibm.com/comm/research_projects.nsf/pages/metronome.metronomegc.html">http://domino.research.ibm.com/comm/research_projects.nsf...</a> - which AFAIK isn't free), or use this or that magical runtime option (naturally after doing heavy profiling), or move to a quad core CPU, but that doesn't change the _default_ situation.<br> <p> Plus, technically speaking, GC is extremely complex and complexity isn't free.<br> <p> You cannot have fast concurrent GC without affecting and significantly complicating the generated code - you need to track asynchronous changes, synchronize threads, etc - all very complicated and error prone operations that have a definite cost. It is no accident that there are no accurate concurrent open source collectors. AFAIK the Mono developers are working on something - <a href="http://www.mono-project.com/Compacting_GC">http://www.mono-project.com/Compacting_GC</a> - but it is not ready yet.<br> <p> Thu, 06 Sep 2007 20:39:55 +0000 Taste https://lwn.net/Articles/248646/ https://lwn.net/Articles/248646/ ncm At the time I was at ITA, they did go to great lengths to avoid ever triggering a GC. (Are you saying QPX runs GC cycles now?) The only integer type that could be used without accumulating garbage was 30 bits. It's one thing to know you're overflowing your ints and quite another to avoid doing it; sometimes you really need more bits. Using floating-point values did accumulate garbage. There was discussion of sponsoring a 64-bit CMUCL port, which would have offered a bigger fixed-size integer type, and enough address space to tolerate more accumulated garbage, and (maybe?) a native floating-point type. I suppose that port, or the SBCL equivalent, is in use now. 
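(The 30-bit limit mentioned above typically comes from fixnum tagging: low bits of each word distinguish immediate integers from heap pointers. A rough sketch of one hypothetical 2-bit scheme follows; real tag layouts vary between Lisp implementations, and this is not CMUCL's actual one.)

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical 2-bit tag scheme: the payload of an immediate integer is
// shifted past the tag bits, so a 32-bit word holds only a 30-bit fixnum.
// Multiplication/division by 4 is used instead of shifts to keep the
// sketch well-defined for negative values.
constexpr int32_t TAG_FACTOR = 4;  // 2 tag bits

int32_t box_fixnum(int32_t n)   { return n * TAG_FACTOR; }
int32_t unbox_fixnum(int32_t v) { return v / TAG_FACTOR; }

int main() {
    assert(unbox_fixnum(box_fixnum(12345)) == 12345);
    assert(unbox_fixnum(box_fixnum(-12345)) == -12345);
    // The largest boxable payload is 2^29 - 1, not 2^31 - 1: this is why
    // the "free" (non-consing) integer type is narrower than a C int.
    assert(box_fixnum((1 << 29) - 1) == INT32_MAX - 3);
    return 0;
}
```

Anything wider than the fixnum range gets promoted to a heap-allocated bignum, which is where the garbage accumulation came from.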
Restarting the program once a day while other servers take up the load is an extremely reliable sort of garbage collection, but you need lots of address space to tolerate much accumulated garbage.<br> Thu, 06 Sep 2007 20:06:47 +0000 Taste https://lwn.net/Articles/248633/ https://lwn.net/Articles/248633/ vmole <p><i>Lisp has many qualities that cause people who learn it early in life to love it forever.</i> <p>Actually, what I think most of the worse-is-better crowd loved was not the "Lisp, the language" (although that is certainly part of it), but the Lisp Machine development environment, which completely blew away the then current C/Unix/Sun3 environment (and still blows away the now current C/Unix/whatever environment.) <p>For a fun read, I recommend the <a href="http://research.microsoft.com/users/dweise/unix-haters.html">Unix Hater's Handbook</a>, available as a PDF, and with the preface online: "...What used to take five seconds now takes a minute or two. (But what's an order of magnitude between friends?) By this time, I really want to see the Sun at its best, so I'm tempted to boot it a couple of times." Classic. Thu, 06 Sep 2007 18:21:11 +0000 Taste https://lwn.net/Articles/248609/ https://lwn.net/Articles/248609/ IkeTo <font class="QuotedText">&gt; It's available online: <a href="http://citeseer.ist.psu.edu/appel87garbage.html">http://citeseer.ist.psu.edu/appel87garbage.html</a></font><br> <p> Thanks. Just read it briefly. I would not agree that GC is faster than stack allocation because of that, though. I echo Stroustrup's joke that if you have that much memory you are supposed to use it to prevent any process from getting into the swap. 
=)<br> <p> Thu, 06 Sep 2007 17:03:20 +0000 Taste https://lwn.net/Articles/248596/ https://lwn.net/Articles/248596/ lysse <font class="QuotedText">&gt; What I mean is that many "short-comings" that others talk about GC are not intrinsic to the availability of GC, but instead they are due to particular languages which have made certain choices, like which of the allocations they choose to tax the GC system. Again, most people should not care at all.</font><br> <p> In that case, then I thoroughly misunderstood you - I thought you were making exactly this mistake yourself. Sorry.<br> <p> <font class="QuotedText">&gt; I'm interested in this work. Is it available on-line, or if not, can you give the name of the journal/conference where it appear in?</font><br> <p> It's available online: <a href="http://citeseer.ist.psu.edu/appel87garbage.html">http://citeseer.ist.psu.edu/appel87garbage.html</a><br> Thu, 06 Sep 2007 16:16:01 +0000 Taste https://lwn.net/Articles/248573/ https://lwn.net/Articles/248573/ IkeTo <font class="QuotedText">&gt; Allocating objects on the stack and passing parameters by reference are,</font><br> <font class="QuotedText">&gt; contrary to your apparent belief, neither innovations in C++, nor rendered</font><br> <font class="QuotedText">&gt; impossible in a garbage collected language; again I cite Oberon, which is</font><br> <font class="QuotedText">&gt; just fine with both and yet fully GC'd.</font><br> <p> I never say they are never "rendered impossible" (Even C++ does that!), and the "apparent belief" seems very speculative (e.g., Even assembly does stack based allocation!). 
Let me repeat the beginning of my original post.<br> <p> <font class="QuotedText">&gt; "I think one problem of *many* GC systems is that..."</font><br> (emphasis added here)<br> <p> What I mean is that many "short-comings" that others attribute to GC are not intrinsic to the availability of GC, but instead they are due to particular languages which have made certain choices, like which of the allocations they choose to tax the GC system. Again, most people should not care at all.<br> <p> <font class="QuotedText">&gt; Appel (1987) shows that garbage collection can still end up faster than</font><br> <font class="QuotedText">&gt; stack-based allocation.</font><br> <p> I'm interested in this work. Is it available on-line, or if not, can you give the name of the journal/conference where it appeared?<br> <p> Thu, 06 Sep 2007 16:00:12 +0000 Taste https://lwn.net/Articles/248571/ https://lwn.net/Articles/248571/ foom Here's some links:<br> <p> <a href="http://www.lisp.org/HyperSpec/Body/dec_dynamic-extent.html">http://www.lisp.org/HyperSpec/Body/dec_dynamic-extent.html</a><br> <p> <a href="http://www.sbcl.org/manual/Dynamic_002dextent-allocation.html">http://www.sbcl.org/manual/Dynamic_002dextent-allocation....</a><br> <p> <p> Thu, 06 Sep 2007 15:51:53 +0000 Taste https://lwn.net/Articles/248565/ https://lwn.net/Articles/248565/ foom <font class="QuotedText">&gt; ITA goes to great lengths never to trigger a GC cycle, because the first one would</font><br> <font class="QuotedText">&gt; take longer than restarting the program. </font><br> Not true. 
CMUCL/SBCL's garbage collector isn't the most advanced in the world, but it's most <br> certainly not *that* bad.<br> <p> <font class="QuotedText">&gt; Therefore, they call out to a huge C++ library to do floating-point calculations, </font><br> Not true.<br> <p> <font class="QuotedText">&gt; XML parsing, database lookups</font><br> Okay, yes. <br> <p> <font class="QuotedText">&gt; or anything that needs dynamic memory allocation. </font><br> Nope.<br> <p> <font class="QuotedText">&gt; They use explicit integer type declarations throughout, and use peculiar idioms known to </font><br> <font class="QuotedText">&gt; compile to optimal machine code for inner loops.</font><br> It's a nice feature of lisp that you can make it very fast when you need to, without changing <br> languages. (+ x y) can compile into a single machine ADD instruction, if you tell the compiler <br> that you expect the arguments and result to fit in a machine word. And if you don't tell it that, <br> your integers can be as big as you like, not limited by hardware.<br> <p> <font class="QuotedText">&gt; Bugs often arise from integer overflows because they don't have a 64-bit integer type.</font><br> The compiler will check that the actual type matches the declared type if you're running in lower <br> optimization modes (which are still fast enough for development and testing), so it will notice <br> that and throw an error. So, you can of course write buggy code, but unlike C, integer overflow is <br> not completely silent. <br> <p> PS: yes, I work at ITA on this product. It's possible all of the things you say may have been true <br> when inferior lisp implementations had been used in the past. 
Maybe one of those was in use <br> when you worked there.<br> Thu, 06 Sep 2007 15:42:57 +0000 Taste https://lwn.net/Articles/248563/ https://lwn.net/Articles/248563/ lysse <font class="QuotedText">&gt; You focus too much on the Java side</font><br> <p> ...which is presumably why I didn't reference two other languages that allow precisely what you're complaining about garbage collected languages not allowing - oh, wait...<br> <p> <font class="QuotedText">&gt; and missed the point intended: on the C++ side, no object "creation" or "destruction" cost is needed for passing arguments by reference.</font><br> <p> *sigh* What I said went completely over your head, didn't it...?<br> <p> Again, that's EXACTLY the point I caught and responded to. Allocating objects on the stack and passing parameters by reference are, contrary to your apparent belief, neither innovations in C++, nor rendered impossible in a garbage collected language; again I cite Oberon, which is just fine with both and yet fully GC'd.<br> <p> And the issue of whether every first-class, dynamically-allocated object a language deals with must be allocated on the heap is a different one again; lots of optimisations, of varying degrees of complexity, are known that reduce the heap burden substantially. (Indeed, the MLkit compiler statically tracks object lifetimes and allocates up to eight different stacks in heapspace, giving compiled ML programs the speed of stack allocation with the correctness of garbage collection.) 
But even when all objects must be heap-allocated, it's not necessarily the end of the world in performance terms; Appel (1987) shows that garbage collection can still end up faster than stack-based allocation.<br> Thu, 06 Sep 2007 15:42:52 +0000 Taste https://lwn.net/Articles/248532/ https://lwn.net/Articles/248532/ IkeTo You focus too much on the Java side, and missed the point intended: on the C++ side, no object "creation" or "destruction" cost is needed for passing arguments by reference. The integer being passed is simply created on the stack, allocation cost shared with other variables (by just subtracting a larger number on entry to the function) or deallocation cost (it simply restores the value of the base pointer from a fixed location on the stack). What I mean is that in a traditional language you can do many actions without allocating/deallocating an object. Yes garbage collection might be "fast", but it cannot beat "no cost". People doing high performance computing should know this, even though most people really should not bother too much with performance.<br> <p> Thu, 06 Sep 2007 12:51:33 +0000 Taste https://lwn.net/Articles/248509/ https://lwn.net/Articles/248509/ lysse As far as I can tell, the subjects of "creating objects for everything" and "passing certain parameters by reference" are completely independent of each other. For instance, Oberon and Modula-3 mandate GC, yet also allow integers to be passed by reference without creating new objects to hold them. 
Java is far from the last word in either GC design or call-by semantics...<br> Thu, 06 Sep 2007 11:20:22 +0000 Taste https://lwn.net/Articles/248497/ https://lwn.net/Articles/248497/ IkeTo <font class="QuotedText">&gt; "it's slow" (which has by now been thoroughly discredited; there are well</font><br> <font class="QuotedText">&gt; known ways to make GC fast, and incremental and concurrent collectors</font><br> <font class="QuotedText">&gt; completely abolish GC pauses, making GC suitable for realtime applications)</font><br> <p> I think one problem of many GC systems is that they make everything an object that requires the GC to handle. GC is perhaps faster than manual object creation and destruction, but it is definitely slower than having no object to create/destruct at all. You can say a piece of Java code that generates an Int and throws it away 2G times is faster than a piece of C++ code that news an int and deletes it 2G times. But that's not the point if the only way to pass a modifiable parameter to a method is to create such an Int (or actually worse, create an object containing a public int field and pass that object around), while a C++ programmer will happily just use an int&amp; as a parameter, making sure that there is no exchange of objects in the trade.<br> <p> Not that I think performance should always be such a big sell. For me, I like the Python system better: it does GC and thus saves the programmers the hassle of dealing with memory manually; it uses reference counting most of the time so the GC cost is mostly spread across all accesses, and it uses full GC in those corner cases involving containers so that a cycle won't eat your memory. 
So it more or less combines the best of both worlds, except for the GC overhead, which I care very little about anyway.<br> <p> Thu, 06 Sep 2007 10:20:39 +0000 Taste https://lwn.net/Articles/248483/ https://lwn.net/Articles/248483/ lysse That oh-so-tiresome allegiance to garbage collection probably has something to do with forty years' worth of studies finding that GC is both faster and safer than manual memory management (indeed, as you back-handedly acknowledge, if your program quits before a GC cycle is needed GC is even cheaper than stack allocation). Moreover, many concurrent algorithms are known which eliminate long GC pauses in exchange for a memory float (and exhibit far better performance than reference counting), so unless ITA's Lisp is using a 1960-vintage mark'n'sweep collector it's not clear why they would have to care about GC cycles; and given Lisp's customisability, nor is it clear why they haven't replaced the collector with something a bit more modern.<br> <p> Most people, when they argue against GC, take two tacks - "it's slow" (which has by now been thoroughly discredited; there are well known ways to make GC fast, and incremental and concurrent collectors completely abolish GC pauses, making GC suitable for realtime applications) and "it wastes memory", which has more truth to it, but only because there's a fairly obvious tradeoff between time spent doing GC and memory overhead (if you have enough memory for everything you'll ever allocate, GC is free; work down from there). 
If your arguments are more substantial, I'd love to hear them; but "if it ain't C it ain't fast enough" is a meme that really needs to be put out of its misery at the earliest opportunity, at least until developer time becomes as cheap as computer time again.<br> <p> Not least because when it really mattered, C wasn't even fast enough.<br> Thu, 06 Sep 2007 09:36:19 +0000 Taste https://lwn.net/Articles/248314/ https://lwn.net/Articles/248314/ ncm Lisp has many qualities that cause people who learn it early in life to love it forever. It also has an extraordinary amount of baggage which, collectively, makes it unsuitable for most serious industrial work. Those who associate Lisp with their own youth, however intelligent and knowledgeable they may be, tend to be unable to distinguish the lovable qualities from the baggage.<br> <p> Garbage collection seems to be among the worst of the baggage, for what must be subtle reasons, because these same knowledgeable and intelligent people seem unable to perceive them. Unfortunately GC sabotages many more modern languages as well, not least Haskell. It's not as clear what other features of Lisp make it unsuitable for industrial use. Gabriel himself says that pathologically slow algorithms, when written in Lisp, are the most natural and esthetically pleasing. Its dynamic type binding, which makes it (like most scripting languages) fun to use for coding small programs, becomes an increasingly debilitating liability for big programs.<br> <p> The poster-boy application of Lisp in industry, ITA Software's QPX airline-fare search engine used by Orbitz.com, makes a good example. ITA goes to great lengths never to trigger a GC cycle, because the first one would take longer than restarting the program. Therefore, they call out to a huge C++ library to do floating-point calculations, XML parsing, database lookups, or anything that needs dynamic memory allocation. 
They use explicit integer type declarations throughout, and use peculiar idioms known to compile to optimal machine code for inner loops. Bugs often arise from integer overflows because they don't have a 64-bit integer type.<br> <p> I've gone on at rather some length, off-topic, because you asked and I don't know how to answer any more concisely.<br> Wed, 05 Sep 2007 18:30:30 +0000 LinuxConf.eu: Documentation and user-space API design https://lwn.net/Articles/248215/ https://lwn.net/Articles/248215/ njs <font class="QuotedText">&gt;One point that Michael made in his talk was that it's useful to have the documentation written by somebody other than the author of the code, in order to increase the chances of finding bugs in the process.</font><br> <p> Sure. But everyone also keeps saying that schemes that involve too much overhead won't fly -- and reasonably so, it's already very hard to find patch reviewers, requiring patch submitters find someone to write up docs for them before their patch can be accepted may just be unworkable. So I was wondering if one could get 80% of the benefit with 5% of the work.<br> <p> (Note that I'm only suggesting the patch author write up example code, documentation proper could well still be done by someone else.)<br> <p> <font class="QuotedText">&gt;Another problem we still need to work on is documentation for all the existing code -- it's hard to make documenting new stuff a requirement when there are so many examples of where we haven't done the documentation in years. This is even more true for kernel internal interfaces than for user APIs.</font><br> <p> Don't know if I believe this... it's totally common for projects to say "hey, we used to do things such-and-such way, we've realized it was a bad idea, from now on we're doing them differently" and to grandfather in the old stuff in the process. 
And internal interfaces are both less stable and more aimed at experts, so the documentation/design problems are far less urgent.<br> Wed, 05 Sep 2007 13:12:50 +0000 Taste https://lwn.net/Articles/248204/ https://lwn.net/Articles/248204/ ruoccolwn <font class="QuotedText">&gt; "Of course some of us find it comical how he and his colleagues torture </font><br> <font class="QuotedText">&gt; themselves over what are really quite easy questions (particularly when</font><br> <font class="QuotedText">&gt; the obvious but intolerable answer is, simply, "not Lisp")."</font><br> <p> Can you elaborate on your point? Gabriel and Lisp-ers in general are quite knowledgeable developers, as far as I can understand.<br> <p> sergio<br> Wed, 05 Sep 2007 11:40:15 +0000 LinuxConf.eu: Documentation and user-space API design https://lwn.net/Articles/248186/ https://lwn.net/Articles/248186/ mkerrisk <blockquote> <em> Could it help to state that a particular system call released in stable release n isn't considered stable until n+2? and that we allow breaking it in that time-frame? That's probably just pushing the problem... </em> </blockquote> This is an idea that has been getting some consideration. It might help matters a little, but it causes other types of pain (e.g., a new interface becomes moderately used by a number of apps, but then a critical design issue causes an interface change at n+2; or, userland developers refrain from using an interface until n+2, because they know it might change). <p> My hypothesis is that we could get a lot of the benefit, and avoid the sorts of pain I just described, if we could improve and rigorously apply a documentation and testing process for new interfaces (i.e., a process that occurs in parallel with the implementation of the interface, and is completed by the time of initial stable release, rather than after the fact).
Wed, 05 Sep 2007 06:29:56 +0000 LinuxConf.eu: Documentation and user-space API design https://lwn.net/Articles/248184/ https://lwn.net/Articles/248184/ lacostej It looks like issues are detected within a time frame of +2 releases. Some of the interfaces reported here also look like no one really used them; otherwise their issues would have been revealed.<br> <p> Could it help to state that a particular system call released in stable release n isn't considered stable until n+2? and that we allow breaking it in that time-frame? That's probably just pushing the problem...<br> <p> Better would be to wait until it has n reported users (n&gt;=3). So what about not declaring new kernel APIs stable until enough people have used them? That should also increase the testing of the non-official stable trees.<br> Wed, 05 Sep 2007 05:55:24 +0000 Requiring a shared library to access the kernel https://lwn.net/Articles/248173/ https://lwn.net/Articles/248173/ bartoldeman 'info libc' for Glibc 2.6.1 tells me:<br> This is Edition 0.10, last updated 2001-07-06, of `The GNU C Library<br> Reference Manual', for Version 2.3.x of the GNU C Library.<br> which does not look very promising.<br> <p> It would be great if someone could fund a technical writer to work on this manual...
the POSIX threads documentation still talks about Linuxthreads instead of NPTL, and many of the new functions mentioned in NEWS are not documented at all, or documented elsewhere.<br> <p> For instance, ppoll(2) is in the man pages but not in the glibc manual, and there are many others.<br> <p> I usually try to check both man pages and info to be sure.<br> <p> Wed, 05 Sep 2007 00:29:40 +0000 LinuxConf.eu: Documentation and user-space API design https://lwn.net/Articles/248166/ https://lwn.net/Articles/248166/ arnd One point that Michael made in his talk was that it's useful to have the <br> documentation written by somebody other than the author of the code, in <br> order to increase the chances of finding bugs in the process.<br> <p> Of course the requirement of having documentation for everything is a <br> very good idea nonetheless, and having both written by the same person can <br> only be better than no documentation at all.<br> <p> Another problem we still need to work on is documentation for all the <br> existing code -- it's hard to make documenting new stuff a requirement <br> when there are so many examples of where we haven't done the documentation <br> in years. This is even more true for kernel internal interfaces than for <br> user APIs.<br> Tue, 04 Sep 2007 23:05:24 +0000 LinuxConf.eu: Documentation and user-space API design https://lwn.net/Articles/248129/ https://lwn.net/Articles/248129/ njs Surely it would not be too much to ask people submitting patches to write, if not documentation in English, at least documentation in code?<br> <p> I mean, they're testing their patch somehow, usually with some ugly hacked-up code. It would not be dramatically more work to just require them to additionally meet basic code cleanliness standards, exercise the full interface, and always send in the code?
Then at least one person has had to bang their head against using the interface, and there is some kind of full specification written down (even if not in the most convenient form)...<br> Tue, 04 Sep 2007 19:37:10 +0000 Libraries https://lwn.net/Articles/248040/ https://lwn.net/Articles/248040/ nix glibc provides an interface between kernel syscalls and userspace, yes, but interfaces don't appear in glibc until the kernel syscall semantics have been set in stone, and glibc then maintains those semantics forevermore, using compatibility code to ensure this if necessary (see e.g. the behaviour of nice()).<br> <p> Tue, 04 Sep 2007 14:56:50 +0000 Requiring a shared library to access the kernel https://lwn.net/Articles/248029/ https://lwn.net/Articles/248029/ jreiser I have written more than a handful of useful software that must access the kernel directly through an absolute system call interface. The interfaces of glibc cannot provide the required services, which include self-virtualization, auto relocation, introspection, control over binding, small size, speed, etc. <p>The history of glibc with regard to interface stability is not pretty, either. For example: @GLIBC_PRIVATE, hp_timing, _ctype_, *stat(), errno. It's important that <i>both</i> the kernel interfaces and the libc interfaces be well designed and well implemented and well documented. Tue, 04 Sep 2007 14:05:47 +0000 Libraries https://lwn.net/Articles/248033/ https://lwn.net/Articles/248033/ mjthayer Anything can be done wrong :) (Note that I have never programmed Alsa, so I can't comment there.) However, glibc essentially does the same, with the difference that at least on Linux the underlying interfaces are guaranteed.
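The nice() behaviour mentioned above is a good concrete case of that wrapping. POSIX nice() must return the new nice value, while the underlying facility only reports success or failure; a libc can bridge the two roughly like this (an illustrative sketch built on setpriority()/getpriority(), not glibc's actual source, and my_nice is an invented name):

```c
/* Sketch: preserve POSIX nice() semantics (return the *new* nice value)
 * on top of primitives that only report success or failure. */
#include <errno.h>
#include <sys/resource.h>

int my_nice(int incr)
{
    errno = 0;
    int old = getpriority(PRIO_PROCESS, 0);
    if (old == -1 && errno != 0)   /* -1 is a valid priority, so check errno */
        return -1;

    if (setpriority(PRIO_PROCESS, 0, old + incr) == -1)
        return -1;

    errno = 0;
    return getpriority(PRIO_PROCESS, 0);  /* the new value, as POSIX requires */
}
```

Note the errno dance: because -1 is a legitimate return value from both getpriority() and nice(), callers must clear errno before the call and test it afterwards; that is exactly the sort of subtlety that only gets written down when somebody documents the interface.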
And unlike Alsa, the interfaces and the library are maintained by different people, which might not be such a bad thing.<br> Tue, 04 Sep 2007 13:49:10 +0000