3.12 merge window, part 2
This development cycle continues to feature a large range of internal improvements and relatively few exciting new features. Some of the user-visible changes that have been merged include:
- The direct rendering graphics layer has gained the concept of "render
nodes," which separate the rendering of graphics from modesetting and
other display control; the "big three" graphics drivers all support
this concept. See this
post from David Herrmann for more information on where this work
is going.
- The netfilter subsystem supports a new "SYNPROXY" target that
simulates connection establishment on one side of the firewall before
actually establishing the connection on the other. It can be thought
of as a way of implementing SYN cookies at the perimeter, preventing
spurious connection attempts from traversing the firewall.
- The TSO sizing patches and FQ
scheduler have been merged. TSO sizing helps to eliminate bursty
traffic when TCP segmentation offload is being used, while FQ provides
a simple fair-queuing discipline for traffic transiting through the
system.
- The ext3 filesystem has a new journal_path= mount option that
allows the specification of an external journal's location using a
device path name.
- The Tile architecture has gained support for ftrace, kprobes, and full
kernel preemption. Also, support for the old TILE64 CPU has been
removed.
- The xfs filesystem is finally able to support user namespaces. The
addition of this support should make it easier for distributors to
enable the user namespace feature, should they feel at ease with the
security implications of such a move.
- Mainline support for ARM "big.LITTLE" systems is getting closer; 3.12
will include a new cpuidle driver that builds on the multi-cluster power management patches to
provide CPU idle support on big.LITTLE systems.
- The MD RAID5 implementation is now multithreaded, increasing its
maximum I/O rates when dealing with fast drives.
- The device mapper has a new statistics module that can track I/O
activity over a range of blocks on a DM device. See Documentation/device-mapper/statistics.txt
for details.
- The device tree code now feeds the entire flattened device tree blob
into the random number pool in an attempt to increase the amount of
entropy available at early boot. It is not clear at this point how much
benefit is gained, since device trees are mostly or entirely identical
for a given class of device. A device tree can hold unique data
(network MAC addresses, for example), but that is not guaranteed, and
some developers think it would be better to feed just that unique data
into the pool directly. A minimal sketch of the interface involved
appears after this list.
- New hardware support includes:
- Systems and processors:
Freescale P1023 RDB and C293PCIE boards.
- Graphics:
Qualcomm MSM/Snapdragon GPUs.
The nouveau graphics driver has also gained proper power
management support, and the power management support for Radeon
devices has been improved and extended to a wider range of chips.
- Miscellaneous:
GPIO-controlled backlights,
Sanyo LV5207LP backlight controllers,
Rohm BD6107 backlight controllers,
IdeaPad laptop slidebars,
Toumaz Xenif TZ1090 GPIO controllers,
Kontron ETX/COMexpress GPIO controllers,
Fintek F71882FG and F71889F GPIO controllers,
Dialog Semiconductor DA9063 PMICs,
Samsung S2MPS11 crystal oscillator clocks,
Hisilicon K3 DMA controllers,
Renesas R-Car HPB DMA controllers, and
TI BQ24190 and TWL4030 battery charger controllers.
- Networking:
MOXA ART (RTL8201CP) Ethernet interfaces,
Solarflare SFC9100 interfaces, and
CoreChip-sz SR9700-based Ethernet devices.
- Video4Linux:
Renesas VSP1 video processing engines,
Renesas R-Car video input devices,
Mirics MSi3101 software-defined radio dongles (the first SDR device
supported by the mainline kernel),
Syntek STK1135 USB cameras,
Analog Devices ADV7842 video decoders, and
Analog Devices ADV7511 video encoders.
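As background for the device-tree item above: the kernel's interface for mixing device-specific data into the random pool without crediting any entropy is add_device_randomness(). The fragment below is only a minimal sketch of how such data might be fed in; the function name and arguments are invented for the example, and it is not the actual device-tree code.

```c
#include <linux/types.h>
#include <linux/random.h>

/*
 * Minimal sketch (not the actual device-tree code): stir some
 * device-specific data into the random pool.  add_device_randomness()
 * mixes the bytes in but credits no entropy, since the data may be
 * entirely predictable.
 */
static void example_feed_randomness(const void *fdt_blob, unsigned int size,
				    const u8 *mac_addr)
{
	/* The whole blob: harmless, but often identical across machines. */
	add_device_randomness(fdt_blob, size);

	/* Unique per-device data, such as a MAC address, is more useful. */
	add_device_randomness(mac_addr, 6);
}
```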
Changes visible to kernel developers include:
- The GEM and TTM memory managers within the graphics subsystem are now
using a unified subsystem for the management of virtual memory areas,
eliminating some duplicated functionality.
- The new lockref mechanism can now mark a reference-counted item as being "dead." The separate state is needed because lockrefs can be used in places (like the dentry cache) where an item can have a reference count of zero and still be alive and usable. Once the structure has been marked as dead, though, the reference count cannot be incremented and the structure cannot be used.
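As a quick illustration of the dead state, here is a minimal sketch; the item structure and the item_get()/item_kill() helpers are invented for the example, and real users (such as the dentry cache) are considerably more involved.

```c
#include <linux/spinlock.h>
#include <linux/lockref.h>

/* A hypothetical reference-counted object protected by a lockref. */
struct item {
	struct lockref ref;	/* holds both the spinlock and the count */
	/* ... payload ... */
};

/* Try to take a new reference; fails once the item has been killed. */
static struct item *item_get(struct item *item)
{
	if (lockref_get_not_dead(&item->ref))
		return item;
	return NULL;		/* item is dead; the caller must look elsewhere */
}

/* Mark the item dead so that no new references can be obtained. */
static void item_kill(struct item *item)
{
	spin_lock(&item->ref.lock);
	lockref_mark_dead(&item->ref);	/* count goes to the "dead" state */
	spin_unlock(&item->ref.lock);
}
```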
The closing of the merge window still looks set to happen on September 15,
or perhaps one day later to allow Linus to get back up to speed after his
planned weekend diving experience.
Posted Sep 12, 2013 8:37 UTC (Thu) by jnareb (subscriber, #46500)
The linked post says that render nodes are not bound to a specific card, and that driver-unspecific user space (which can be relatively unprivileged) should not and cannot ask "how do I find the render node for a given card?"
But I wonder: on hardware where graphics cards have different capabilities (e.g. a Tesla for GPGPU and a GeForce for display), would you be able to query those capabilities in order to select a render node?
Posted Sep 12, 2013 14:13 UTC (Thu) by lambda (subscriber, #40735)
I think that this comment from the blog post answers that question:

    With custom heuristics. There is currently no notion of "speed" in the DRM API, but afair Ian was implementing an OpenGL extension to give some useful information to the user. So you could just open all render-nodes, see what they provide and then use them.

Basically, the answer is that you open up each render node, query it for its capabilities, and then use the appropriate one for your task.
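For illustration, here is a minimal user-space sketch of that approach: it walks the conventional /dev/dri/renderD* device names and asks each node which driver sits behind it via the DRM version ioctl. It is only a sketch; real code would go on to query capabilities through driver-specific interfaces, and exactly which core ioctls are permitted on render nodes has varied as the feature has matured.

```c
/*
 * Sketch: probe the available render nodes and print the name of the
 * driver behind each one.  Render nodes are conventionally numbered
 * starting at 128; error handling is abbreviated.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <drm/drm.h>

int main(void)
{
	char path[64], name[64];
	int minor;

	for (minor = 128; minor < 192; minor++) {
		struct drm_version ver;
		size_t len;
		int fd;

		snprintf(path, sizeof(path), "/dev/dri/renderD%d", minor);
		fd = open(path, O_RDWR);
		if (fd < 0)
			continue;	/* no such node */

		memset(&ver, 0, sizeof(ver));
		ver.name = name;
		ver.name_len = sizeof(name) - 1;
		if (ioctl(fd, DRM_IOCTL_VERSION, &ver) == 0) {
			len = ver.name_len < sizeof(name) - 1 ?
				ver.name_len : sizeof(name) - 1;
			name[len] = '\0';
			printf("%s: driver %s\n", path, name);
		}
		close(fd);
	}
	return 0;
}
```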
Posted Sep 12, 2013 14:18 UTC (Thu) by mirabilos (subscriber, #84359)
For that matter, MirBSD feeds the link-layer addresses of network interfaces into the pool in the code that attaches them to the global list of interfaces.
Posted Sep 13, 2013 18:09 UTC (Fri) by BenHutchings (subscriber, #37955)
Linux does that too.
Posted Sep 13, 2013 18:36 UTC (Fri) by mirabilos (subscriber, #84359)
(I was responding to the lladdr being mentioned explicitly.)
Posted Sep 13, 2013 3:09 UTC (Fri) by dlang (guest, #313)
Another place this could potentially be really useful is on a load balancer: if the load balancer can avoid establishing a connection to the real server for a short time, it can potentially gather more information about what is happening before making its decision and connecting to the real server.
For example, if you can hold off until you see the packet containing the URL (or at least the beginning of it), you can direct the traffic to different servers based on what is being requested.
I haven't looked at this feature yet, so I don't know how long the system can hold off on making the connection, but this is an interesting possibility.