Bringing the Android kernel back to the mainline
Android kernels, Sandeep Patil said, start their life as a long-term stable (LTS) release from the mainline; those releases are combined with core Android-specific code to make the Android Common Kernel releases. Vendors will pick a common kernel and add a bunch more out-of-tree code to create a kernel specific to a system-on-chip (SoC) and ship that to device manufacturers. Eventually one of those SoC kernels is frozen, perhaps with yet another pile of out-of-tree code tossed in, and used as the kernel for a specific device model. It now only takes a few weeks to merge an LTS release into the Android Common Kernel, but it's still a couple of years before that kernel shows up as a device kernel. That is why Android devices are always running ancient kernels.
There are a lot of problems associated with this process. The Android core has to be prepared to run on a range of older kernels, a constraint that makes it hard to use newer kernel features. Kernel updates are slow or, more often, nonexistent. The use of large amounts of out-of-tree code (as in millions of lines of it) makes it hard to merge in new stable updates, and even when that's possible, shipping the result to users is frightening to vendors and not often done. There is no continuous-integration process for Android kernels, and it's not possible to run Android systems on mainline kernels. All told, the way Android kernels are developed and managed takes away a lot of the advantages of using Linux in the first place, but work is being done to address many of these issues.
With regard to older kernels: the Oreo release required the use of one of the 3.18, 4.4, or 4.9 kernels — an improvement over previous releases, which had no kernel-version requirements at all. The Pie release narrowed the requirements further, saying that devices must ship with 4.4.107, 4.9.84, or 4.14.42 (or a later stable release, in each case). The Android developers are trying to "push things up a notch" by mandating the incorporation of stable updates. This has improved the situation, but the base kernel remains two years old (or more), and the Android core still has to work on kernels back to 3.18.
Patil noted that some people worry about regressions from the stable updates, but in two years of incorporating those stable updates, the Android project has only encountered one regression. In particular, 4.4.108 broke things, which is why nothing later than 4.4.107 is required at the moment. Otherwise, he said, the stable updates have proved to be highly reliable for Android systems.
One reason for that may be that the situation with continuous-integration testing is improving; the Linux Kernel Functional Testing (LKFT) effort is now running functional testing on the LTS, ‑rc, and Android Common kernels, for example. More testing is happening through KernelCI, and Android developers are contributing to the Linux Test Project as well. Kernel patches go through pre-submission testing on an emulated device called Cuttlefish, which can run both Android and mainline kernels. More testing is being done by SoC vendors, none of whom have reported problems from LTS kernel updates so far. They do see merge conflicts with their out-of-tree code, but that is unsurprising.
Even so, kernel upgrades remain a huge issue for Android vendors, who worry about shipping large numbers of changes to deployed devices. So devices generally don't get upgraded kernels after they ship — a bad situation, but it's better than the recent past, when kernels could not be upgraded for a given SoC after its launch, he said. Google plans to continue to push vendors to ship updates, though, eventually mandating updates to newer LTS releases even after a device is launched. At some point, LTS releases will be included in Android security bulletins, because there really is value in getting all of the bug fixes. Patil echoed Greg Kroah-Hartman's statement that there are no "security bugs" as such; "there are just bugs" and they should all be fixed.
The problem of devices being unable to run mainline kernels remains; the cause, of course, is all of that out-of-tree code. The amount of that code in the Android Common Kernel has been reduced considerably, though, with a focused effort at getting the changes upstream. There are now only about 30 patches in the Android Common Kernel, adding about 6,500 lines of code, that are needed to boot Android. The eventual plan is to push that to zero, but there are still a number of issues to deal with, including solving problems with priority inheritance in binder, getting energy-aware scheduling into the mainline, and upstreaming the SDCardFS filesystem bridge.
Project Treble, he said, introduced a new "vendor interface" API that implements a sort of hardware abstraction layer. Along with this interface came the concept of a generic system image (GSI), being a build of the Android Open Source Project that can be booted on any Android device. If the GSI can be booted on a specific device, then the manufacturer has implemented the vendor interface correctly.
For now, the kernel is considered to be part of the vendor interface — the vendor must provide it as part of the low-level implementation. The plan, though, is for Android to provide a generic kernel image based on the mainline. Devices will be expected to run this kernel; to make that happen, vendors will provide a set of kernel modules to add the necessary hardware support. Getting there will require the upstreaming of kernel symbol namespaces among other things.
This design will clearly not eliminate the out-of-tree code problem, since those modules will, in many or most cases, not come from the mainline. But there is still a significant change here: vendor-specific code will be relegated to loadable modules and, thus, be unable to change the core kernel. The days of vendors shipping their own CPU schedulers should come to an end, for example; all out-of-tree code will have to work with the generic kernel image using the normal module interface. That will force that code into a more upstream-ready state, which is a step in the right direction.
In conclusion, Patil said, the Android kernel team is now aggressively trying to upstream code before shipping it. There is a renewed effort to proactively report vulnerabilities and other problems and to work with upstream to resolve them. Beyond the above, the project has a number of goals, including getting the ashmem and ion modules out of the staging tree, improving Android's use of device trees, and more. But things are progressing; someday, the "Android problem" may be far behind us.
[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting my travel to LPC.]
Index entries for this article:
Kernel: Android/Generic kernel image
Conference: Linux Plumbers Conference/2018
Posted Nov 16, 2018 2:31 UTC (Fri)
by gerdesj (subscriber, #5446)
[Link] (1 responses)
few weeks -> LTS -> years -> device kernel
So, what is a device kernel? There is only one instance of that phrase in the article.
Posted Nov 16, 2018 2:44 UTC (Fri)
by neilbrown (subscriber, #359)
[Link]
I assume it is something that is "used as the kernel for a specific device model" (to quote the article).
Posted Nov 16, 2018 7:42 UTC (Fri)
by mjthayer (guest, #39183)
[Link] (23 responses)
Posted Nov 16, 2018 9:29 UTC (Fri)
by Sesse (subscriber, #53779)
[Link]
Posted Nov 16, 2018 14:09 UTC (Fri)
by matthias (subscriber, #94967)
[Link] (21 responses)
I would expect to be offered a git repository which started with a mainline kernel and had one additional commit with all the out-of-tree changes. In the extreme case we would get a git repository with just one commit adding the complete source. I can easily derive both kinds of git repository from a tar ball.
What you are thinking of is a git repository which contains all the history of changes done by the company. But this is certainly much more than what the GPL asks for. There is no mention of the whole history. How could there be? There is no obligation to do the out-of-tree development in git. Even if the companies are using git internally, the repositories might contain information that should not be published, e.g., intermediate versions not meant for publication. A git repository could also contain information that the company is not allowed to publish at all. For example, you can extract the working and holiday times of employees from the commit messages, which is considered personal information under some legislations.
I agree that it would be very nice to have the history and be able to extract just a subset of the commits, but this is not what the license asks for. The preferred form for making modifications applies to the code itself, i.e., we want C-code instead of assembly (assembly could reproduce the same binaries, but is not what we want to modify). And for automatically generated C-code we want to have the source and the program to automatically generate the code, as it is preferred to change the source instead of the automatically generated product.
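As a concrete sketch of the "one additional commit" publication model the comment describes — an internal repository with messy history, and a public repository containing only a single squashed commit of the final tree. All repository, file, and commit names here are hypothetical.

```shell
# Sketch: publish a source tree as a single commit, with the internal
# history stripped (all names here are hypothetical).
set -e
work=$(mktemp -d)

# Internal repo with messy history the vendor does not want to publish.
git init -q "$work/internal"
cd "$work/internal"
git config user.email dev@example.com
git config user.name Dev
echo 'attempt 1' > driver.c
git add driver.c && git commit -qm 'WIP: first try'
echo 'final version' > driver.c
git commit -qam 'fixup: what actually shipped'

# Publication repo: exactly one commit containing only the final tree.
git init -q "$work/public"
cp driver.c "$work/public/"
cd "$work/public"
git config user.email dev@example.com
git config user.name Dev
git add driver.c && git commit -qm 'Vendor changes for SoC xyz (squashed)'
git log --oneline
```

The published repository satisfies a "complete corresponding source" reading of the license while revealing nothing about intermediate versions or commit timestamps.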
Posted Nov 16, 2018 23:40 UTC (Fri)
by JoeBuck (subscriber, #2330)
[Link] (20 responses)
Posted Nov 17, 2018 8:51 UTC (Sat)
by mjthayer (guest, #39183)
[Link] (14 responses)
Posted Nov 17, 2018 11:40 UTC (Sat)
by matthias (subscriber, #94967)
[Link] (13 responses)
The GPL predates most of today's version-control systems. The use case "port 2-3 years of kernel development to an ancient kernel with many out-of-tree changes" was certainly not on the radar back then. The use cases were things like fixing a bug or programming a new feature (not merging a feature from a different branch).
Having the history available would be very nice. But this can only be accomplished by convincing the companies that working together with the community provides some benefits. And without the companies being willing to work together, I think even a git repository with history is not really helpful. How could we integrate code into the mainline that nobody in the community knows and that is not supported? If we just want to backport some security fixes, the tar ball is not much worse than a full repository. If we want more, we need someone who knows the code to help with integration.
Posted Nov 17, 2018 17:15 UTC (Sat)
by pallas (guest, #128204)
[Link] (3 responses)
Posted Nov 17, 2018 23:49 UTC (Sat)
by JoeBuck (subscriber, #2330)
[Link] (2 responses)
Posted Nov 19, 2018 4:29 UTC (Mon)
by jeffm (subscriber, #29341)
[Link]
Posted Nov 19, 2018 20:47 UTC (Mon)
by tao (guest, #17563)
[Link]
"You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change."
The best way to do this *is* typically to provide a changelog entry, but if you just distribute patches that should be enough; after all, patches by their very nature show which files are modified, and assuming that the patch was created when the modification was made, it'll also have the time of change in the patch header.
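To illustrate the point about patch headers: a plain unified diff already records the name of each modified file and a timestamp in its `---`/`+++` header lines. The file names and contents below are, of course, made up.

```shell
# Sketch: a unified diff header records the modified file name and a
# timestamp, which covers much of what GPLv2 section 2a asks for.
set -e
d=$(mktemp -d)
cd "$d"
printf 'int limit = 10;\n' > original.c
printf 'int limit = 50;\n' > modified.c
# diff exits with status 1 when the files differ, hence the `|| true`
diff -u original.c modified.c > change.patch || true
# The first two lines name each file and carry its modification time.
head -n 2 change.patch
```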
Posted Nov 18, 2018 3:13 UTC (Sun)
by marcH (subscriber, #57642)
[Link] (1 responses)
There isn't really one thing such as "the" history; it's a continuum, and any line drawn would be in an arbitrary place. Should it be possible to access every version that was ever tested by some internal Trybot / 0day even before getting approved and merged internally? There's a lot of value in test results after all. Going even further, should it be possible to get every version that is in the developer's reflog? If it was committed at some point then it must have some value. Getting inside the thought process of a good developer is surely a useful learning experience, and observing mistakes made helps in not repeating them.
BTW *open-source* developers rewrite history; that's part of the public review process. Sometimes some of these git histories get lost!
https://public-inbox.org/git/70ccb8fc-30f2-5b23-a832-9e47...
https://github.com/git-series/git-series
So shouldn't these open-source developers be forbidden to "unpublish" and unshare these? It's GPL code after all </devil's advocate>
> But this can only be accomplished by convincing the companies that working together with the community provides some benefits. And without the companies willing to work together, I think, even a git repository with history is not really helpful.
This is the best summary.
Posted Nov 18, 2018 11:15 UTC (Sun)
by emj (guest, #14307)
[Link]
Thanks for the links! At work I've started keeping track of every branch and all the rebases and squashes. It helped me immensely once, and I suspect that if there were tooling for keeping the history for both WIP repos and "official" git-series repos in the same repo I could solve a lot more problems.
Posted Nov 19, 2018 22:16 UTC (Mon)
by cyphar (subscriber, #110703)
[Link] (5 responses)
This is somewhat incorrect (depending on your interpretation of "history" in this context). §2a of GPLv2 states:
> You may modify your copy or copies of the Program [...] provided that you also [...] must cause the modified files to carry prominent notices stating that you changed the files and the date of any change.
Unfortunately it appears most people have forgotten this part of the GPL. There was a big argument several years ago when Red Hat decided to start providing big patch-blobs rather than individual patches, but it seems the community has settled on this being "okay". But just providing a tarball with a modified kernel isn't full compliance with the GPLv2.
Posted Nov 20, 2018 2:16 UTC (Tue)
by pizza (subscriber, #46)
[Link] (4 responses)
That said, a simple tarball of modified sources is arguably another matter -- while perhaps a technical violation, IMO using that alone as the basis for accusing someone of GPL violations is ludicrous -- but there exists a modern [1] tool called 'diff' which makes it a fairly trivial exercise to determine what has changed versus the original, unmodified sources.
[1] First released all the way back in 1974
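As a sketch of the exercise pizza describes — recovering a vendor's changes by diffing their source drop against the pristine release. The directory and file names are hypothetical stand-ins for a real kernel release and a vendor tarball.

```shell
# Sketch: recover a vendor's changes by diffing their source drop
# against the pristine release (directory names are hypothetical).
set -e
d=$(mktemp -d)
cd "$d"
mkdir linux-4.14 vendor-4.14
echo 'mainline scheduler' > linux-4.14/sched.c
echo 'mainline scheduler' > vendor-4.14/sched.c
echo 'vendor tweak'      >> vendor-4.14/sched.c
echo 'new vendor driver'  > vendor-4.14/vendor_driver.c
# -r recurses, -u gives unified output, -N includes newly added files;
# diff exits 1 because the trees differ, hence the `|| true`
diff -ruN linux-4.14 vendor-4.14 > vendor.patch || true
grep '^+++' vendor.patch   # one header per modified or added file
```

The resulting `vendor.patch` shows both the modified file and the entirely new one, which is why a sources-only tarball, while arguably a technical violation, still leaves the changes trivially recoverable.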
Posted Nov 20, 2018 6:55 UTC (Tue)
by mjthayer (guest, #39183)
[Link] (3 responses)
Posted Nov 20, 2018 7:04 UTC (Tue)
by mjthayer (guest, #39183)
[Link] (1 responses)
Posted Nov 20, 2018 11:54 UTC (Tue)
by nix (subscriber, #2304)
[Link]
cd original-kernel-repo; git apply --index blob; git commit -m "Blob patch"; git rebase new-kernel-version
should suffice, more or less -- or, alternatively, instead of the rebase you can git checkout the new kernel version and do a git cherry-pick onto it. (In non-ancient versions of git these end up using exactly the same machinery for application, even if you cherry-pick a range.)
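A runnable miniature of that cherry-pick variant, assuming the vendor changes were committed once on top of the old base. The repository, branch, and file names are all hypothetical.

```shell
# Sketch of the workflow above: keep the vendor changes as one commit
# and cherry-pick it onto a newer base (all names are hypothetical).
set -e
d=$(mktemp -d)
git init -q "$d/kernel"
cd "$d/kernel"
git config user.email dev@example.com
git config user.name Dev

echo 'v4.14 core' > main.c
git add main.c && git commit -qm 'v4.14 release'
git checkout -qb vendor-4.14              # vendor work starts from v4.14
echo 'vendor addition' >> main.c
git commit -qam 'Blob patch'

git checkout -qb linux-4.19 vendor-4.14~1 # mainline moves on from v4.14
echo 'v4.19 feature' > feature.c
git add feature.c && git commit -qm 'v4.19 release'

git checkout -qb vendor-4.19              # port the vendor change forward
git cherry-pick vendor-4.14               # applies cleanly in this toy case
```

In a real forward-port the cherry-pick would of course hit conflicts wherever mainline touched the same code, but the mechanics are the same.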
Posted Nov 21, 2018 17:39 UTC (Wed)
by tao (guest, #17563)
[Link]
Posted Nov 27, 2018 7:54 UTC (Tue)
by nhippi (subscriber, #34640)
[Link]
Chromium OS has excellent git repos, with relatively well-enforced commit messages, a bug tracker, and CI testing results available in the open, and still you will easily be overwhelmed in a forest of trees with numerous branches... Any internal repo would need lots of institutional knowledge to understand what is going on.
Posted Nov 17, 2018 19:37 UTC (Sat)
by Otus (subscriber, #67685)
[Link] (4 responses)
I definitely often rely on git blame and git log to understand what a piece of code is trying to do and why. IMO git history and commit messages are comparable to comments. Though I have no idea whether stripping those before distribution is OK.
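A small demonstration of what that history buys you and what a flat tarball loses: `git log` and `git blame` attach a reason to each line. The file and commit messages below are invented for the example.

```shell
# Sketch: the context git history provides that a stripped tarball
# loses (file and messages are hypothetical).
set -e
d=$(mktemp -d)
git init -q "$d/repo"
cd "$d/repo"
git config user.email dev@example.com
git config user.name Dev
printf 'buffer_size = 10\n' > config.py
git add config.py && git commit -qm 'Initial configuration'
printf 'buffer_size = 50\n' > config.py
git commit -qam 'Raise buffer_size: device X drops frames below 50'
# "Why is this line the way it is?" -- the last commit touching it answers.
git log -1 --format=%s -- config.py
git blame --line-porcelain -L1,1 config.py | grep '^summary'
```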
Posted Nov 17, 2018 21:21 UTC (Sat)
by matthias (subscriber, #94967)
[Link] (3 responses)
Posted Nov 17, 2018 21:39 UTC (Sat)
by Otus (subscriber, #67685)
[Link] (2 responses)
Removing git history is clearly at least tolerated.
Posted Nov 17, 2018 21:56 UTC (Sat)
by matthias (subscriber, #94967)
[Link] (1 responses)
Posted Nov 19, 2018 18:12 UTC (Mon)
by jezuch (subscriber, #52988)
[Link]
Posted Nov 16, 2018 8:57 UTC (Fri)
by matthias (subscriber, #94967)
[Link] (1 responses)
I think live kernel patches are more or less loaded as modules. Furthermore, they can change the core kernel. I would not really be surprised if device manufacturers were to abuse this functionality. Let us just hope that this will not happen.
Posted Nov 19, 2018 14:46 UTC (Mon)
by mjthayer (guest, #39183)
[Link]
It would still be more understandable for someone wanting to forward port them to a new kernel than a single source blob. (Hopefully...)
Posted Nov 16, 2018 22:51 UTC (Fri)
by hailfinger (subscriber, #76962)
[Link] (4 responses)
With the current push of SoC support into mainline (substantial amounts of Qualcomm Snapdragon 845 code landed in Linux 4.19, more patches are being merged right now) combined with generic Android feature upstreaming, I see a future where a significant share of current smartphones have a reasonable chance of running either almost-mainline or even unpatched mainline kernels. (AFAIK Sony and Google have been motivating Qualcomm to take care of the SDM845.)
Looking at the smartphone market, Sony have probably been the most active vendor in trying to get mainline working on their devices. An early success report of running a Sony Xperia Z3 with an almost-mainline kernel in 2015: https://plus.google.com/102276447148493441479/posts/amRvp...
A nice side effect from being able to run mainline kernels on a device is the ability to upgrade the kernel to a new major release without having to port millions of lines of code. Feature upgrades by way of kernel upgrades (especially security features) for old hardware would extend the lifetime of such devices and make them better at the same time.
Posted Nov 17, 2018 2:17 UTC (Sat)
by pabs (subscriber, #43278)
[Link] (3 responses)
Posted Nov 17, 2018 13:34 UTC (Sat)
by hailfinger (subscriber, #76962)
[Link] (2 responses)
1a. Having Linux mainline support for the SoC before the SoC is on the market
1b. Having Linux mainline support for the camera and other peripherals in the phone which need a kernel driver
2. A mainline LTS kernel release containing that SoC support
3. Google releasing an Android Common Kernel based on that mainline LTS kernel to the SoC vendor
4. The SoC vendor creating a Board Support Package (BSP) based on that Android Common Kernel
5. The device vendor using that BSP to create the initial firmware for their device
This means you need SoC mainline support more than a year before the first devices are shipped.
Looking at the example of the Qualcomm SDM845 (released in March 2018), the first LTS kernel with incomplete SDM845 support is 4.19 released a few weeks ago, and the first LTS kernel with full support will probably appear in Q4/2019. If you're lucky, an Android Common Kernel with that LTS kernel will appear a month or two later, and then it depends on whether and how fast Qualcomm moves their BSP to that Android Common Kernel. If you're extremely lucky and there's a vendor willing to create a new device based on a two-year-old SoC, you could get that device with full mainline support sometime in Q1-Q2/2020.
How about upgrading the kernel of existing devices to something newer and closer to mainline? Most vendors shy away from this because it creates unnecessary risk without a matching payoff. What you can hope for is vendors who offer a different "at your own risk" aftermarket firmware which tracks mainline more closely. Sony does that for the devices in their Open Devices Program, and there you can get builds with kernel 4.9 for phones which are almost 3 years old.
Answering the other half of your question about mainlining support for the phones (not just the SoC): I'd say very few of them will do so initially, but you'll see it happen once the market demands longer support periods for phones (instead of just buying a new phone every two years).
Posted Nov 21, 2018 17:53 UTC (Wed)
by marcH (subscriber, #57642)
[Link]
> If you're extremely lucky and there's a vendor willing to create a new device based on a two-year-old SoC,
This timing aspect is not really about "luck" and much more about the latest shiny hardware. Random low-end example: the Nokia 2 was released in November 2017. It features a Snapdragon 212, which was available for sampling in 2015 and is just an evolution of the 210 (HW designers re-use^H copy-paste too).
https://www.gsmarena.com/nokia_2-8513.php
https://www.notebookcheck.net/Qualcomm-Snapdragon-212-APQ...
https://en.wikipedia.org/wiki/List_of_Qualcomm_Snapdragon...(2014-17)
Posted Jul 8, 2019 13:15 UTC (Mon)
by lpoijk5566 (guest, #133013)
[Link]
The android-mainline-tracking branch is a rebasing branch that Google uses to carry its Android-specific patches. But not everything is there: for example, at least sdcardfs hasn't been ported onto it.
Here are my steps to enable my device with a mainline kernel:
1. Use android-mainline-tracking
2. Port sdcardfs onto the mainline kernel
3. Make sure the mainline kernel supports your SoC
4. Mix them up, then enjoy it. You should be able to boot to the GUI.
Posted Nov 17, 2018 11:22 UTC (Sat)
by kugel (subscriber, #70540)
[Link] (1 responses)
Doesn't that mean that it'll become impossible to replace the running kernel, because the binary-only modules are going to depend on it? And it will promote binary-only modules even more.
Not sure this is an improvement from the POV of users who want to run their own kernels/ROMs on Android devices.
Posted Nov 17, 2018 13:35 UTC (Sat)
by excors (subscriber, #95769)
[Link]
I think it's trying to address the problem where the vendor provides a tarball with full kernel source code, but it includes e.g. large invasive changes to the scheduler code to support big.LITTLE or whatever, which makes it nearly impossible to port to a new kernel version. Google is saying that vendors shouldn't make any changes to the core code, their changes should all be split into modules with well-defined interfaces. (Not necessarily stable interfaces - they might only work with a single kernel version - but that still makes it much easier to port to a newer kernel.) (Obviously they still need to release the source for these modules.)
Presumably any invasive changes the SoC vendors want will have to be developed and pushed upstream a couple of years before they want to ship products with it, to give time for multiple rounds of reviews and testing and fixes before it ends up in a stable release that gets picked for Google's next generic system image that is used for the conformance tests on those products before they're released. Then once the products are released, they can finally tell whether the patches they designed a couple of years earlier actually work (in terms of improving the user experience, and in terms of not having one-in-a-million bugs that were impossible for QA to find but happen lots when you've got ten million users).
That sounds rather impractical, unless the hardware stagnates and doesn't need major new features any more, so I assume what would actually happen is that vendors will make their device work just well enough on the generic kernel to pass Google's conformance tests, without caring about performance or power efficiency or any optional features, and then replace it with an invasively hacked kernel for their shipped product. But at least it does push them towards minimising invasive changes when they can.
Posted Nov 18, 2018 11:52 UTC (Sun)
by auerswal (subscriber, #119876)
[Link] (2 responses)
For me, there is a huge difference between "just a bug" and a security relevant bug: in most cases the "just a bug" will not trigger in my use, but someone will attempt to trigger the security relevant bug for me, although my normal use would not trigger it either.
Bug fixes are often accompanied by new bugs. Thus one might prefer not to apply a fix for a bug that cannot be triggered (e.g. the bug is in how specific hardware is controlled, resulting in problems for users of said hardware, but just loading and interfacing with the driver does not trigger any misbehavior without the hardware), but to apply fixes for those bugs that can be triggered willfully and that if triggered actually affect system integrity. Identifying the latter helps in this decision.
Of course, a seemingly non security relevant bug may later prove to be security relevant, and a fix to a security relevant bug may later prove to add an even bigger security relevant bug. But obfuscating what is currently known about a bug, e.g. hiding its security relevance behind "just a bug" rhetoric, does not help at all.
Anyway, all bugs should be fixed. ;-)
Posted Nov 19, 2018 21:48 UTC (Mon)
by roc (subscriber, #30627)
[Link] (1 responses)
Posted Nov 20, 2018 14:17 UTC (Tue)
by hailfinger (subscriber, #76962)
[Link]
The unwillingness to differentiate between regular bugs and security bugs, and the practice of simply fixing all of them, is a result of the above problem.
Posted Nov 19, 2018 9:25 UTC (Mon)
by evgeny (subscriber, #774)
[Link]
Posted Nov 19, 2018 20:13 UTC (Mon)
by marcH (subscriber, #57642)
[Link] (10 responses)
So why do CVEs and embargoes exist? I understand security implications of many bugs are underestimated, but is this statement saying there could/should be a CVE for every bug?
Every time I looked for some elaboration I found nothing but hand-waving.
I also understand the security aspects of some bugs don't matter to *some* people and roles, however that statement often comes without any condition.
Posted Nov 20, 2018 10:09 UTC (Tue)
by dgm (subscriber, #49227)
[Link] (9 responses)
CVEs exist (or should exist) for bugs where the exposure is high. This means exploits exist (or are trivial) and, for servers, the bugs expose the systems to remote attack.
Embargoes exist (or should exist) where the fix is complex to develop or distribute.
> is this statement saying there could/should be a CVE for every bug?
It could, but shouldn't. Really, it is difficult to prove that a bug is never exploitable, though some are easier than others. What would we gain from tracking all bugs?
The problem with all this is that it has been "abused", by individuals and organizations, for fame and financial gain. There's what is called the "security circus" (google it). A "security industry" has been created around security problems. This has several perverse consequences:
1. it incentivizes the *creation* of exploitable bugs.
2. it disincentivizes the fixing of less exploitable bugs.
3. it disincentivizes the fixing of highly exploitable bugs when there's no financial gain.
So, what's the rational thing to do? Linus's solution is to opt out of this vicious circle by removing the "special" status of security bugs. Is this solution ideal? Nope, but it's better than entering the circle. If you have a better idea, you're free to propose it.
Posted Nov 20, 2018 16:52 UTC (Tue)
by marcH (subscriber, #57642)
[Link] (8 responses)
> ... fame and financial gain. There's what is called the "security circus" (google it). A "security industry" has been created around security problems.
The only reason the "security circus" thrives is because of a much older and very well-known circus that predates it, namely the "good enough, ship it!" circus. The latter has made me sicker, and for much longer, and if it takes fire to finally fight fire then I don't mind too much. Cut too many corners to make more money, be punished and pay even more later? Looks Good To Me. The uncertainty about exploitability is especially good because it might scare and pressure companies into fixing even more bugs, even minor functionality bugs. It may also erode and weaken ancient, pre-security-era technologies and processes like C or closed source.
Posted Nov 20, 2018 18:18 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (7 responses)
The problem with the current security circus is that instead of trying to get people to address the underlying reasons certain classes of bug exist at all, it gets people to address individual high-profile bugs.
So, for example, picking on OpenSSL here, there's a 2011 OCSP stapling vulnerability caused by trusting that a packet sent by a remote host is correctly formed, and parsing accordingly. This individual bug gets fixed, but nothing is done to address the more general problem of insecure handling of packets. Then, in 2014, Heartbleed comes along - caused by trusting that a packet sent by a remote host is correctly formed, and parsing accordingly. Then there's CVE-2015-0291, where - stop me if you've heard this before - OpenSSL trusts that a certificate sent by a remote host is correctly formed, and parses it accordingly.
There's a pattern of bugs here (and OpenSSL is not the only code to have this sort of pattern) where the same class of mistake is being made again and again. If the security circus was fixing the errors of "good enough, ship it!", then the circus around Heartbleed would have prevented CVE-2015-0291 from happening - parsing code used by OpenSSL would have been fixed such that you couldn't accidentally assume that remote controlled data was well-formed.
Posted Nov 20, 2018 18:46 UTC (Tue)
by marcH (subscriber, #57642)
[Link] (6 responses)
It does but very slowly.
> it gets people to address individual high-profile bugs.
Short-term fixes are needed too, for two reasons: 1. to quickly put out fires; 2. to very slowly change how people (vendors *and* customers) think.
Posted Nov 20, 2018 19:37 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (5 responses)
When you say "very slowly", exactly how slowly do you mean? I've picked on OpenSSL because the security circus has been shouting about it for over a decade, and yet there's been three bugs with the same underlying engineering errors in that time that are significant enough to be mentioned in Wikipedia.
I don't see the security circus having much impact here - it's still shouting about the same types of bugs as always, and yet we're still seeing core infrastructure being built in a fashion where there's no automatic separation between trusted data where you can assume it'll parse sensibly, and attacker-controlled data. The recent DHCPv6 issue in systemd-networkd is more of the same class of bug as OpenSSL has had on and off since 2011 (if not before), and yet despite being a much newer project, modern development techniques as applied in the field don't avoid the "parsed attacker-controlled data as if the attacker would never send invalid packets" bugs.
Posted Nov 20, 2018 20:29 UTC (Tue)
by marcH (subscriber, #57642)
[Link] (2 responses)
Genuine question; I'm not an expert.
Posted Nov 21, 2018 10:53 UTC (Wed)
by farnz (subscriber, #17727)
[Link] (1 responses)
Compared to what you'd expect in the absence of the "security circus"? The only change that's not predictable from the state we were in back in 1998 (before the security circus got noisy) is the appearance of Let's Encrypt. Everything that's gone into the standard is either a mix of following the state of the art in cryptography (new ciphers, AEAD modes, end of CBC etc), or reactions to attacks that weren't foreseen at the time the previous standard was written (fixing up padding behaviours, SCSVs etc).
Further, there's no evidence that either OpenSSL or NSS (the two big SSL libraries out there) are changing their development practices in any major way to prevent classes of implementation error in future. Neither are we seeing a new library or a fork of one of the existing two that justifies any claim to higher security - the only significant fork is Google's BoringSSL, which mostly just lags behind OpenSSL and waits to see if there's a bug in an OpenSSL implementation of a feature, rather than trying to change things so that certain classes of implementation error cannot exist in BoringSSL.
In as far as I can see the security circus making any difference at all, it's that it enables managers to tell developers to not even try new security ideas, in case there's a bug - better to be one of thousands hit by the same flaw than to be an outlier.
Posted Nov 21, 2018 21:51 UTC (Wed)
by marcH (subscriber, #57642)
[Link]
No, I didn't ask for speculatively rewriting history and pretending it's possible to have a security industry doing useful work without a corresponding circus.
Fortunately you gave some answers to my actual question anyway.
Posted Nov 20, 2018 20:52 UTC (Tue)
by pizza (subscriber, #46)
[Link] (1 responses)
Quality engineering, especially on an ongoing basis, costs money.
How many of those "shouting" have actually contributed anything to OpenSSL other than hot air?
(And feel free to s/OpenSSL/any other critical infrastructure tool/)
(Yes, $big_players finally created/funded the Core Infrastructure Initiative, but it's a long uphill climb..)
Posted Nov 21, 2018 11:04 UTC (Wed)
by farnz (subscriber, #17727)
[Link]
Not many - but the hot air is what the security circus brings, and worse, because the people making decisions often lack technical judgement (not their skillset), it encourages a tendency towards monoculture - better to be one of a million breached sites than to risk the security circus coming down on you for doing something different.
Posted Nov 30, 2018 21:08 UTC (Fri)
by meyert (subscriber, #32097)
[Link] (2 responses)
Posted Dec 1, 2018 1:00 UTC (Sat)
by pabs (subscriber, #43278)
[Link]
https://wiki.postmarketos.org/wiki/Devices
Posted Dec 1, 2018 10:43 UTC (Sat)
by excors (subscriber, #95769)
[Link]