fb_joel wrote:Wow, that's a hell of a post allquixotic! Thanks for posting that.
To reiterate on my earlier post - what I was referring to was not about the drivers/Linux at all. I understand the OpenGL 3.0 requirement mishap, and the general trouble that's been going on. However, all of our games on all platforms require more hardware features than most of the Intel integrated graphics chipsets can provide - at least this is my current understanding.
This depends upon what you define as "most". Of course, certain previous generations of Intel IGPs don't support the required hardware features to run Trine; but this is true of Nvidia and ATI cards as well. The age of said IGPs is about on par with the age of the equivalent ATI/Nvidia generation cards that also lack the same features. This is because Intel, ATI, and Nvidia all try to support the latest OpenGL and DirectX specifications as quickly as possible once they are released. In the distant past, Intel was behind the curve, but they are mostly caught up today.
fb_joel wrote:For example, with Shadowgrounds on the Windows platforms, we have listed these chipsets as 100% positively NOT being able to run the game (mostly based on user complaints but we have tested some ourselves too):
- Intel GMA900
- Intel GMA915
- Intel(R) 82915G/GV/910GL (used in Sony Vaio VGC-FB53 etc)
- Intel(R) 82945
- Intel(R) 82865G
(+ - NVIDIA GeForce 5400Go (laptop GPU) and GeForce 4 MX cards)
From the top of my head I would add GMA950 to the list as well.
I would agree that none of these chipsets can run Shadowgrounds, or Trine for that matter, or any OpenGL 2.1 game. Maybe not even OpenGL 2.0 games. However, I don't think any of these chipsets are "common" anymore, except on rack-mounted, dedicated, headless servers, where the 915 and 845 chipsets are commonly used for the physical console and/or KVM-over-IP (but I don't think anyone would try to play a 3d game on such hardware).
fb_joel wrote:I would say the same about GMA965 but it is a much less frequent chip (sorry if I am using these terms a bit loosely, not sure if they should be called cards or chips or what) and I haven't heard too many complaints - I have not heard of it ever running the game(s) either though. Our official minimums for Shadowgrounds is GeForce 4 Ti 4200 or Radeon replace series, and slightly higher for Shadowgrounds Survivor (technically the same as SG I believe), and for Trine it's Radeon X800 or GeForce 6800 or better (technically GeForce FX 5x series does not have the required hardware but later ones do, and I think Radeon 9x series is the same). I think these are all Shader Model 2.0 cards or thereabout. I know that Trine runs on e.g. X550HM card from ATi, because I have used that in my home computer (not recommended btw, although it was more or less playable with low resolution - I believe I have also tested X300 and it runs but is definitely not playable, and I'm only 99% sure I tested that).
The GM965 (or 965GM) is the mobile version of the "965" generation chipset, which is also branded as "X3100". So actually, with regards to my Lenovo ThinkPad X61T, if you asked me, "Do you have an Intel X3100 chipset?" I would say, "Yes!" -- and if you asked me, "Do you have an Intel 965GM chipset?" I would say, "Yes!" -- but I only have one IGP, one graphics card, in there! Figure that one out.
Actually, I'll let Wikipedia help you figure it out. To see what I mean about the confusion over the Intel chipsets, check out http://en.wikipedia.org/wiki/Intel_GMA#GMA_X3100. Do a Ctrl+F and search for "Crestline" -- this is the chip I have in my ThinkPad. It claims DirectX 10, Shader Model 4.0, and full OpenGL 2.1 support on Windows.
Note that the OpenGL versions listed in the Wikipedia article are based on the Windows drivers. I am not very interested in understanding (or helping others understand) the situation with the Intel Windows graphics drivers, but let me sum up this complicated issue as succinctly as I can:
Depending on whether you're running XP or Vista/7, and depending upon the exact chipset you're using, one or more of the following statements may be true:
- Your GPU supports OpenGL 2.1 (or some higher revision), and your Windows driver properly implements this support in hardware, for fast and efficient 3D. Examples include the Intel 965GM / X3100 chipset, and later chips such as the G43, G45, and of course the latest, Sandy Bridge (included on the CPU die of very recent Core i3/i5/i7 CPUs).
- Your GPU supports OpenGL 2.1 (or some higher revision), and your Windows driver does not properly implement this support at all, leaving you with some lower revision exposed (such as OpenGL 1.5), even if your hardware is capable of higher. This, I believe, is the case for the 945G / 945GM / X3000 series on Vista/7: because this hardware lacks a "hardware scheduler", no WDDM 1.0 driver exists supporting this hardware. Therefore it cannot implement the higher level API hooks that implement OpenGL 2.1 or DirectX 10. But this is only because a hardware scheduler is included in the requirements for WDDM 1.0, a Microsoft standard -- not because the hardware is incapable of supporting, on some level, OpenGL 2.1.
- Your GPU does not support the hardware features required for full OpenGL 2.1, and your Windows driver tries to properly implement this support, using slower software fallbacks in some cases where hardware features are unavailable. Since the Windows drivers are closed-source, we (the general public) are unable to easily determine for which cards this is the case -- but if you run some profiles on various OpenGL calls, you could probably figure it out if a call takes 5 or 10 times longer than it should.
- Your GPU does not support the hardware features required for full OpenGL 2.1, and your Windows driver simply settles for less, providing you OpenGL 1.5 or 1.4. This is the case with all those obsolete cards you mentioned above, the 915, 910, 845, etc.
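The four scenarios above really boil down to two independent questions: does the hardware support full OpenGL 2.1, and does the driver expose it? Here's a little sketch of that 2x2 grid in Python (purely illustrative, not from any real driver API -- the function name and return strings are my own):

```python
# Illustrative sketch: the four driver scenarios above reduce to two
# yes/no questions -- hardware capability and driver exposure.

def classify(hw_supports_gl21: bool, driver_exposes_gl21: bool) -> str:
    """Map the hardware/driver combination to one of the four scenarios."""
    if hw_supports_gl21 and driver_exposes_gl21:
        return "fast hardware GL 2.1 (e.g. 965GM / X3100, G43, G45, Sandy Bridge)"
    if hw_supports_gl21 and not driver_exposes_gl21:
        return "capable hardware, driver stuck at a lower GL (e.g. no WDDM 1.0 driver)"
    if not hw_supports_gl21 and driver_exposes_gl21:
        return "GL 2.1 advertised, but via slow software fallbacks"
    return "driver settles for GL 1.4/1.5 (e.g. 915, 910, 845)"

print(classify(True, True))
# -> fast hardware GL 2.1 (e.g. 965GM / X3100, G43, G45, Sandy Bridge)
```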
Running diagnostic utilities such as GLView can help end-users determine which of these scenarios is the case for their card. You can also refer to the Wikipedia list, which is mostly reliable (although some of the OpenGL API support numbers seem to err on the low side).
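At bottom, tools like GLView just read the GL_VERSION string the driver reports and compare it against what an application needs. Here's a rough sketch of that check (the sample version strings are made up for demonstration; real ones come from glGetString or glxinfo):

```python
# Illustrative sketch: parse a GL_VERSION string and test whether the
# driver claims at least OpenGL 2.1.
import re

def meets_gl(version_string: str, required=(2, 1)) -> bool:
    """Return True if a GL_VERSION string reports at least `required`."""
    m = re.match(r"(\d+)\.(\d+)", version_string)
    if not m:
        raise ValueError("unrecognized GL_VERSION string: %r" % version_string)
    return (int(m.group(1)), int(m.group(2))) >= required

# A 965GM on a recent Mesa might report something like this:
print(meets_gl("2.1 Mesa 7.10.2"))   # -> True
# A 915-class chip is stuck at 1.4:
print(meets_gl("1.4 Mesa 7.10.2"))   # -> False
```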
I'll end this section by saying that discussion of running Trine or Shadowgrounds on Windows is extremely off-topic for this thread, and for those who are reading along trying to run Trine on Linux, I sincerely apologize for this interlude.
fb_joel wrote:I do seem to remember something about some Intel ones, basically the new X3100 or such, being able to run the games to some degree in the hands of very tech-savvy users (one of our programmers said that he thinks he once got the game running on one such computer using some driver/software-for-hardware hacks but he couldn't be 100% sure).
As I said above (and as Wikipedia says in case you don't take my word for it), the X3100 fully supports DirectX and OpenGL versions greater than or equal to what is required for Trine. And I have personally seen it running on said hardware -- on Linux, not on Windows. I am unwilling to test it on Windows because my primary interest (in this context) is gaming on Linux with the open source graphics stack (Mesa, etc.)
In fact, the X3100 is pretty much the oldest (chronologically) Intel IGP that would run Trine with any kind of success on both Windows and Linux. The most recent Vista/7 drivers should fully support the required OpenGL, and I have instructions in this thread, tested on this chipset, that work on Linux.
fb_joel wrote:So when I get a support request from an Intel user not being able to run the games, I have no verified/tested way of helping that user, so we resort to solving the matter in other ways (e.g. refund where possible).
If their platform of choice is Windows, I would agree. Since open source graphics drivers don't exist on Windows, all you can do is tell the user to grab the latest drivers from Intel or their manufacturer. If the latest drivers don't fully support OpenGL 2.1, the user is out of luck -- either Intel chose not to implement the drivers to live up to the hardware's potential, or the hardware is simply incapable of doing the job. Either way, the user is plain out of luck.
If their platform is Linux, and they're using at least an X3100 / 965G / 965GM chipset, they are very likely to be able to garner some success by using the instructions in my previous post. This would also apply to people using ATI/AMD or Nvidia cards, who want or need to use the open source graphics stack rather than the vendor-supplied proprietary drivers.
fb_joel wrote:Do you agree with this assessment, just to confirm? If there's a way to get these chips run the games on Windows I'd love to hear that.
Again, the Windows topic is a bit of a hijack of this thread, and I don't have a good answer. If it doesn't work, and there's no driver update available, you could try replacing your OpenGL DLLs with a fast software implementation -- several exist -- but you're unlikely to get more than 3 or 4 fps, which is even worse than what you can get with Mesa doing hardware acceleration on the 965GM chipset.
fb_joel wrote:As for the Linux support, am I incorrect in assuming that if a graphics card can't run the game(s) on Windows, chances are it would not be able to run the game(s) on Linux either? I've just always assumed that it's the hardware that really dictates things and drivers generally allow the same features on all platforms (give or take, and I guess it takes longer for some platforms to get sufficient drivers etc).
Well, the way that the Windows graphics stack is designed vs. the Linux stack complicates things quite a bit!
First, realize this simple fact: for any given 3D graphics driver, the driver implementation may elect to use some, none, all, or any subset of the hardware features of the graphics card. Nowhere is it written that graphics drivers must utilize all of the hardware's features. Nor is it true that a graphics driver cannot claim support for features or API versions that are not supported in the hardware: if the driver developer thinks they can deliver a "reasonable" level of performance by emulating certain functions in software, then it's their prerogative to do that. You can even go to the extreme and implement the entire driver in software.
My understanding is that the Mesa 3d stack in particular, on Linux, "tries very hard" to mend the deficiencies of inadequate hardware using software fallbacks. Some of these fallbacks perform just fine, and some of them are unacceptably slow. But in general, Mesa supports a very generous level of OpenGL API support, even on hardware that may not fully support all the functions required. (Aside: Conversely, we've seen several examples of Intel drivers on Windows for various chipsets supporting a disappointingly low version of DirectX and OpenGL, even on more capable hardware.)
This is partly because large swaths of Mesa (and especially Gallium3d, a subset of Mesa with a next-generation architecture) are designed to be shared between all graphics cards. So, whereas on Windows you have ATI, Intel and Nvidia "doing their own thing" (more or less), with Mesa you have this common code base, including things such as a GLSL compiler and hardware-independent implementations of many GL extensions. And the code is written in such a way that drivers just "plug in" to the common infrastructure, and bam! -- this whole set of GL functions is supported. But the common code has to be written in a way that, if the hardware doesn't support XYZ feature, a decent software fallback is attempted if it makes sense, rather than saying "we don't support that extension".
You can do hardware fallbacks, too! My favorite example is this: ATI won't publish the specifications to their "UVD2" video decoding chip on their graphics cards. So, if we want hardware video decoding in the open source drivers, we can't use the efficient functions of the UVD2, because we don't understand how it works. But! Since the shader pipeline of modern GPUs is generally programmable, we can write shaders that do the same video decoding work as UVD2! This is essentially a hardware fallback. For best performance, we'd like to use UVD2, but using hardware shaders to do the same work is still faster than doing it on the CPU. Similar "hardware fallbacks" may be feasible in other contexts in OpenGL.
But the fallbacks in Mesa generally don't go too far. For example, OpenGL 2.1 is not supported on Radeon "R200" generation chips, nor is it supported on Intel's 910 or 845 chipsets. It's a delicate balance: if you include too few software fallbacks, then a lot of applications that might otherwise work, will crash. If you ship software fallbacks even if they impose a huge performance hit, then applications will start just fine, but they'll be unusably slow. Users will get a false impression of what their hardware supports, and come to us saying, "Hey! Mesa says my graphics card from 1998 supports OpenGL 2.1. So why is it so dang slow playing Unigine OilRush?" -- this is not a situation we want to get into.
Unfortunately, all this knowledge probably doesn't help you support Trine or Shadowgrounds on Windows or Linux any better. The fact of the matter is this:
- On Windows, we can't do anything about deficient drivers because they are closed source. Regardless of why the support isn't there, we just have to live with the fact that the support isn't there, period. But don't dismiss all Intel IGPs blindly; some of the more modern chips (starting, for the most part, with the X3100 / 965) should run all current Frozenbyte titles just fine, albeit at reduced performance and low detail.
- On Linux, all the big distributions are going to avoid shipping the patented S3TC compression support, because, well, it's patented. So you won't get an "out of the box" experience there, either, and you can't ship S3TC libraries with your games without violating S3 Graphics' patent. So it's up to the user to go find an unsupported way to grab the required library. If I were Frozenbyte, I wouldn't "officially" touch this patent issue with a 10-foot pole. As of this writing you also have to worry about GL_ATI_draw_buffers, but that issue will disappear once Mesa 7.11 is picked up by distros in the latter half of this year.
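Both of those Linux gotchas come down to whether the driver advertises a particular extension string. A quick way to sanity-check a system is to scan the extension list (from glxinfo) for the ones Trine needs -- here's a sketch, with a made-up sample extension list for demonstration:

```python
# Illustrative sketch: check a space-separated GL extension list (as
# printed by glxinfo) for the two extensions discussed above.

REQUIRED = {
    "GL_EXT_texture_compression_s3tc",  # patented; distros ship Mesa without it
    "GL_ATI_draw_buffers",              # needed until Mesa 7.11 lands
}

def missing_extensions(advertised: str) -> set:
    """Return the required extensions absent from the advertised list."""
    return REQUIRED - set(advertised.split())

# Sample output shape only -- a real list has hundreds of entries:
sample = "GL_ARB_multitexture GL_ATI_draw_buffers GL_EXT_framebuffer_object"
print(sorted(missing_extensions(sample)))
# -> ['GL_EXT_texture_compression_s3tc']
```

If the returned set is empty, neither issue applies to that system.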
If I confused you more than helped, I'm sorry. It's quite a complex issue, and a lot of the facts surrounding Intel IGPs are difficult to dig up. I'll end with some imagery: at some point along the way, Intel (and separately, Nvidia and ATI) transitioned from an older graphics hardware architecture to "the current generation": they moved from a fixed-function pipeline to a generally programmable pipeline. When they made that transition, they opened up support for GL 2.1. Trine, at least, is sitting right past the threshold of that generally programmable paradigm, greeting even the eldest members of the "generally programmable club" with open arms. But you've gotta have your GL 2.1 card to get in the door.