switching graphics
There's a new kind of toy that keeps showing up at the office. Laptop vendors are really starting to push multiple-GPU machines. The theory goes that you can use one low-power GPU while you're on battery, and a high-performance GPU when you're plugged into wall power. I've played with a couple of these so far and people are starting to ask about them, so I figured I'd write down what I know and what I think the plans are.

The best case is that you get a laptop like the Lenovo Thinkpad W500, where the BIOS has options controlling the GPU setup. You can pick GPU A, or GPU B, or switchable graphics. If you pick one or the other, that's all that will show up on the PCI bus, and so X will pick it up and run with it. Hooray!

If you pick switchable graphics, we'll see two GPUs on the PCI bus. And now, things get tricky. Which one is active? Well, you could look at the VGA routing bits in the PCI configuration and attempt to figure out which GPU the BIOS enabled. But on the above-mentioned Lenovo, that doesn't work; VGA is just not routed anywhere. Maybe you could look to see whether only one device has memory and I/O decoding enabled, but again, that doesn't seem to be reliable, and what do you do if there's more than one?
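For the curious, here's a minimal sketch of what that second heuristic looks like on Linux, reading each display device's PCI command register out of sysfs. As noted above, this is not a reliable way to find the active GPU; it's just the check being described. The helper names are my own.

```python
# Sketch: guess which GPU the firmware brought up by checking the
# memory/I-O decode bits in each display device's PCI command register.
# Unreliable in practice (see text); sysfs paths assume Linux.
import glob
import os

PCI_COMMAND_IO = 0x1   # bit 0: I/O space decoding enabled
PCI_COMMAND_MEM = 0x2  # bit 1: memory space decoding enabled

def decoding_enabled(command: int) -> bool:
    """True if a device with this command register value is decoding
    memory or I/O cycles."""
    return bool(command & (PCI_COMMAND_IO | PCI_COMMAND_MEM))

def active_vga_devices():
    """List VGA-class PCI devices that look enabled."""
    active = []
    for dev in glob.glob("/sys/bus/pci/devices/*"):
        with open(os.path.join(dev, "class")) as f:
            if not f.read().startswith("0x0300"):  # VGA-compatible class
                continue
        with open(os.path.join(dev, "config"), "rb") as f:
            cfg = f.read(6)
        command = cfg[4] | (cfg[5] << 8)  # command register at offset 0x04
        if decoding_enabled(command):
            active.append(os.path.basename(dev))
    return active
```

If this returns exactly one device you might be in luck; zero or two, and you're back to guessing.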

Ideally this is where ACPI would come to our rescue, and there'd be some platform method to call to tell us which ASIC to talk to. Maybe there is, but we haven't found it yet. Nor do we seem to get any ACPI events when switching from battery power to wall power. The Lenovo has a BIOS option that claims to automatically detect whether the OS supports GPU switching, but it doesn't seem to be reliable: even with detection turned on, I still see both chips on the PCI bus. Nonetheless, this does suggest that there's some platform support in there somewhere and we just need to look harder.

Of course, if you're unlucky, you got a machine like the Sony Vaio Z540, where the BIOS has no GPU options, period. If you end up in a situation where you see two video devices on PCI, just write out a minimal xorg.conf that picks the driver and the PCI slot, and hopefully things will work. If not, you have two pieces, and you can keep them or not.
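A minimal xorg.conf for that case only needs a Device section. The driver name and BusID below are placeholders; take the real slot from lspci output and write it in decimal PCI:bus:device:function form.

```
# Minimal xorg.conf sketch: pin X to one of the two GPUs.
Section "Device"
    Identifier "card0"
    Driver     "intel"        # or "radeon", "nv", ... as appropriate
    BusID      "PCI:0:2:0"    # the slot of the GPU you want X to drive
EndSection
```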

Anyway, that gets you as far as doing the same boring single-GPU stuff you've always done. As far as runtime switching goes, we're still pretty far from making that a reality in the open drivers. We could hack up the relevant drivers so that we initialize both but only feed commands to one or the other, and then write the serialization exercise that moves pixmaps and such from one to the other on switch events. Or we might be able to start one X server on each GPU and then stick an Xdmx in front of them. In neither case will GLX work the way you expect (if at all), and there will be all kinds of fun corner cases in getting the second chip to come up exactly compatibly with the first.
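The Xdmx variant is at least easy to describe. Roughly, and with invented display numbers and config paths (each config pinning its server to one GPU by BusID as above):

```shell
# Hypothetical sketch: one X server per GPU, unified by an Xdmx proxy.
X :1 -config /etc/X11/xorg.conf.gpu-a &    # server on the first GPU
X :2 -config /etc/X11/xorg.conf.gpu-b &    # server on the second GPU
Xdmx :3 +xinerama -display :1 -display :2  # proxy server spanning both
```

Clients would then talk to :3, and Xdmx forwards rendering to whichever back-end server owns the screen; it was built for multi-seat walls, though, not for tearing a back end down and bringing another up on a power event.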

Getting this to work well should actually be a lot of fun, and there's lots of opportunity to sweep away old bad design and come up with something good.

In tangentially related news: my LCA talk was accepted! I'll be talking about shatter, a project to rewrite the X rendering layer to work around various hardware and coordinate limitations. This is not unrelated to the above problem; hopefully we'll get rendering abstracted far enough away from the driver to make it easier to switch among drivers at runtime.
