author | Kevin Tian <kevin.tian@intel.com> | 2010-06-29 08:54:03 +0800 |
---|---|---|
committer | Richard Purdie <rpurdie@linux.intel.com> | 2010-06-29 12:34:38 +0100 |
commit | 9207cd40153148f71788d30697a055fe846e8927 (patch) | |
tree | 08d8c79bdfd8eb6078822df36f60ab79c20b6506 /meta/packages/kf/files | |
parent | bb3e4dda5d85402fadcecc97589ea4e452c36c98 (diff) | |
qemu: fix VMware VGA depth calculation error
VMware SVGA presents to the guest the depth of the host surface it renders
to, and refuses to work if the two sides are mismatched. One problem is that
the current VMware VGA code may calculate a wrong host depth, and the subsequent
memcpy from the virtual framebuffer to the host surface may then trigger a
segmentation fault. For example, when launching QEMU over a VNC connection,
VMware SVGA assumes a depth of '32', whereas the actual depth of the VNC display
is '16'. The fault also happens whenever the host depth is not 32 bit.
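
As a minimal, hypothetical sketch (not the actual QEMU/vmware_vga code, names are illustrative), the copy from the guest framebuffer must be sized from the depth the host surface was really allocated with; sizing it from an assumed 32 bpp while the surface is 16 bpp would write past the end of the destination buffer:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct host_surface {
    int width, height;
    int bits_per_pixel;   /* the depth the surface was really allocated with */
    uint8_t *data;        /* width * height * (bits_per_pixel / 8) bytes */
};

/* Copy one frame from the guest framebuffer into the host surface.
 * Refuses to copy when the depths do not match -- the situation the
 * commit message describes (device believes 32 bpp, VNC surface is 16 bpp). */
static int blit_frame(const uint8_t *guest_fb, int guest_bpp,
                      struct host_surface *s)
{
    if (guest_bpp != s->bits_per_pixel) {
        /* Without this check, a memcpy sized for guest_bpp bytes per pixel
         * would overrun a buffer allocated for s->bits_per_pixel. */
        fprintf(stderr, "depth mismatch: guest %d bpp vs host %d bpp\n",
                guest_bpp, s->bits_per_pixel);
        return -1;
    }

    size_t stride = (size_t)s->width * (s->bits_per_pixel / 8);
    for (int y = 0; y < s->height; y++) {
        memcpy(s->data + (size_t)y * stride,
               guest_fb + (size_t)y * stride,
               stride);
    }
    return 0;
}
```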
QEMU commit <4b5db3749c5fdba93e1ac0e8748c9a9a1064319f> attempts to fix a similar
issue by replacing the hard-coded 24-bit depth with a query to the surface
allocator (e.g. SDL). However, it doesn't really work, because the query is
invoked before SDL is initialized. At query time, QEMU is still using a default
surface allocator which, again, provides another hard-coded depth value of
32 bit. So it happens to make VMware SVGA work on some hosts, but it still
fails on others.
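
The ordering problem can be seen in a small hypothetical sketch (illustrative names, not QEMU's): a built-in default allocator answers depth queries with a hard-coded 32 until the real backend registers itself, so a device that queries during its own init always sees 32:

```c
#include <stdio.h>

struct allocator {
    int (*query_depth)(void);
};

static int default_query_depth(void) { return 32; }  /* hard-coded fallback */
static int sdl_query_depth(void)     { return 16; }  /* e.g. a 16-bit host display */

static struct allocator current = { default_query_depth };

static int device_host_depth;  /* what the emulated VGA believes the host uses */

static void vga_init(void)
{
    /* Runs before the display backend is set up, so it latches the
     * default allocator's hard-coded answer. */
    device_host_depth = current.query_depth();
}

static void display_init(void)
{
    /* The real backend only takes over here -- too late for vga_init(). */
    current.query_depth = sdl_query_depth;
}

int main(void)
{
    vga_init();
    display_init();
    printf("device believes %d bpp, host really is %d bpp\n",
           device_host_depth, current.query_depth());
    return 0;
}
```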
To solve this issue, this commit introduces a postcall interface on the display
surface, which is walked after the surface allocators have actually been
initialized. At that point it is safe to query the host depth and present it
to the guest.
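
A hypothetical sketch of the postcall idea (illustrative names and structure, not the patch itself): the device registers a callback on the display surface at init time, and the callback list is walked once after the real allocator has filled in the surface depth, so the value the device latches is the host's actual one:

```c
#include <stddef.h>

#define MAX_POSTCALLS 8

typedef void (*surface_postcall_fn)(void *opaque, int host_depth);

struct display_surface {
    int depth;                                    /* set by the real allocator */
    surface_postcall_fn postcalls[MAX_POSTCALLS];
    void *opaque[MAX_POSTCALLS];
    int n_postcalls;
};

/* Device side: called during device init, before the allocator exists. */
static void surface_register_postcall(struct display_surface *s,
                                      surface_postcall_fn fn, void *opaque)
{
    if (s->n_postcalls < MAX_POSTCALLS) {
        s->postcalls[s->n_postcalls] = fn;
        s->opaque[s->n_postcalls] = opaque;
        s->n_postcalls++;
    }
}

/* Display side: called once, after SDL/VNC/... has set s->depth. */
static void surface_run_postcalls(struct display_surface *s)
{
    for (int i = 0; i < s->n_postcalls; i++) {
        s->postcalls[i](s->opaque[i], s->depth);
    }
}
```

In this sketch, the emulated VGA's callback would simply store host_depth, so the depth it later presents to the guest matches the surface it actually renders into.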
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Diffstat (limited to 'meta/packages/kf/files')
0 files changed, 0 insertions, 0 deletions