UPDATE:
Please read this post before proceeding: http://www.bluh.org/?p=273
——–
UPDATE UPDATE: Fixed some basic math errors. Oopsies.
——–
There’s a lot of anger going around the internet from PC gamers upset at the announced technical limitations of Dark Souls for PC: mainly the locked frame buffer resolution and the capped framerate.
While I understand why gamers are upset at these limitations, I don’t believe they will impact the actual quality of what I feel is one of the best games ever made. Being a developer, I also appreciate why From likely made the decisions they did, and would like to share some of my guesstimations about what may have led to those decisions.
But it makes me very angry when I see gamers write off any game for a superficial reason, and when it comes to the size of your pixels, I can’t really think of a much more superficial one. So the tone of this article is one of anger, and for that I apologize up front. But I want From to make a hojillion dollars off of Dark Souls so they can continue to innovate, take risks, and make these crazy awesome games.
One thing to keep in mind is that Japanese studios develop for consoles in such a focused fashion that their entire pipelines are generally built around them. We’re talking about people who, until this current generation, wrote their own compilers in order to make games. Those compilers came with their own dialects and quirks layered on whatever the starting language was, which is why you’ve seen almost no ports of PS2 games from Japan.
In North America, most development happens entirely on PC. Even console-focused games have maintained PC versions which most developers use, because it’s not worth $3m to give every developer their own dev kit. But in Japan, development often happens entirely on console. Most of those devs never have a version that can run on PC at all.
So when you compare Western PC development to Japanese development, the comparison is immediately unfair, because Western developers are starting from a much stronger base. It’s not a simple matter of From Software being ‘lazy’ or doing a ‘half-assed’ port job. Especially when you consider how easy it is to change framebuffer size and screen resolution. I mean, the game does support changes in screen resolution; it’s just the internal framebuffer that’s different. Why would one assume that testing different framebuffer resolutions wasn’t one of the very first things they tried?
So this article will delve into why it may have been technically difficult or visually undesirable to have higher framebuffer resolutions or framerates. As a bonus, I’ll likely throw in other stuff they dealt with that might have impacted their rendering-related decisions. I will also rate each point on two 1-10 scales: the first representing how much PC gamers would complain about it, with 10 being the Whiny Entitled PC Gamer Who Chooses Not To Buy It Because It Is a Total Deal Breaker, Man; and the second representing development cost, with 10 being “To hell with it, bin the project, it costs too much.”
It’s worth noting, however, that I am not saying that any of these are THE REASON they made their choices. Obviously I cannot know that. Nor am I saying that any of them is a reason I would choose. I am just giving some insight into what can happen in game development that might result in particular decisions being made. Some of these things are more technical, and I’ve seen them come up as issues on the coding side. Some of them are more artistic, and I’ve seen them come up on the art side. But they are all real issues that can and have happened many a time in game development, and all of them could plausibly contribute to the decisions From Software has made.
1. UI.
This one is low-hanging fruit. But if you know exactly what resolution your game will run at, it is often significantly easier to build all of the UI in such a fashion that it just lands on screen where you want it. This means all the game UI could in theory be one large texture that’s just slapped onto the screen, and that’s that. Even if they didn’t go that far, the art is guaranteed to be pixel-perfect, and the positions are all but guaranteed to be hardcoded. Meaning that if they were to up-res the framebuffer, you would have huge, chunky, blocky UI immediately at odds with the rest of the game’s high resolution. Fixing it would require rewriting a large part of the UI system to either properly scale everything or properly position elements relative to the screen edges, and having the artists completely redraw all of the UI so that it looks as good or better at higher resolutions.
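To make that concrete, here’s a trivial sketch of the difference (the function and sprite names are mine, invented for illustration; obviously not From’s actual code):

```cpp
#include <cstdio>

// Hypothetical stand-ins for an engine's sprite API.
struct Sprite { const char* name; int w, h; };

void DrawSprite(const Sprite& s, int x, int y)
{
    std::printf("draw %s at (%d, %d)\n", s.name, x, y);
}

// UI built for a fixed 1024x720 framebuffer: every position is an absolute
// pixel coordinate, so nothing ever needs to ask how big the screen is.
void DrawHudFixed(const Sprite& healthBar, const Sprite& soulCount)
{
    DrawSprite(healthBar, 32, 660);   // hand-placed, bottom-left
    DrawSprite(soulCount, 900, 660);  // hand-placed, bottom-right
}

// The same HUD anchored to the screen edges. Every element now needs an
// anchor and an offset, and the art itself has to be redrawn (or scaled)
// to look right at anything other than the size it was authored at.
void DrawHudAnchored(const Sprite& healthBar, const Sprite& soulCount,
                     int screenW, int screenH)
{
    DrawSprite(healthBar, 32, screenH - healthBar.h - 28);
    DrawSprite(soulCount, screenW - soulCount.w - 32, screenH - soulCount.h - 28);
}
```

The first version is faster to ship and impossible to get wrong at the one resolution it was built for, which is exactly why it happens.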
Complain: 7
Cost: 4
2. Texture mapping (including normal maps).
Given the game’s low internal resolution, the look of the art was probably balanced around that known target. Given the rough size of enemies on screen, and the graphical look of the game, I expect they made heavy use of low-res normal maps to get the level of detail they wanted on characters and enemies. Were you to up-res the framebuffer without creating new normal maps, characters could suddenly look like they’re all wearing outfits made of small colored bathroom tile, as a single texel of a normal map would cover significantly more screen space, in a roughly square patch.
The diffuse textures will also be at nearly the same resolution as the normal maps, because if the two are drastically different they look absolutely terrible together.
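A quick back-of-envelope sketch of why that happens; the map size and screen coverage here are invented, plausible numbers, not anything measured from the game:

```cpp
#include <cstdio>

// Back-of-envelope texel density: a 256-texel-wide normal map stretched
// across a boss that spans half the width of the screen.
int main()
{
    const float mapTexels   = 256.0f;
    const float screenShare = 0.5f;

    // At the shipped 1024-wide framebuffer: 512 screen pixels / 256 texels
    // = 2 pixels per texel. Soft, but it holds together.
    std::printf("1024 buffer: %.2f px per texel\n",
                1024.0f * screenShare / mapTexels);

    // At 1920 wide: 960 / 256 = 3.75 pixels per texel, and each texel of
    // normal-map detail starts to read as a flat square tile of lighting.
    std::printf("1920 buffer: %.2f px per texel\n",
                1920.0f * screenShare / mapTexels);
    return 0;
}
```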
Complain: 5
Cost: 7 (10 if including the game world in these considerations).
3. Low polygon models
The game world is large and open enough that the character and enemy models are likely quite low-poly; you just can’t notice it at the target framebuffer size. Clever use of texture mapping and normal mapping is what generally lets them get away with this. But at a higher resolution the magic disappears, and suddenly you are looking at blocky models, which is especially apparent if they have low-resolution textures.
Complain: 4
Cost: 10
4. Fill rate.
A lot of the really interesting and cool effects they have for a lot of the enemies, bosses especially (Sif immediately comes to mind), use a ton of fill rate by massively layering transparent polygons or particles. The cost of these kinds of effects scales directly with the number of pixels rendered. Fixing it would require remodelling, retexturing, and likely redesigning the problem models so they don’t look completely terrible, and don’t drop the framerate to single digits when they suddenly take up the entire screen.
Some math (assuming Sif has about 8 layers of fur, which seems likely from the screens I’ve examined):
Frame buffer at 1024×720, wolf fills the screen:
This means it has to draw 1024×720×8 pixels in the worst case. That’s 5.9 million pixels. Per frame, of course. So at thirty FPS it’s trying to use about 177 megapixels per second of fill rate.
Frame buffer at 1920×1080 (because if you are a PC gamer, I’m sure you have at least this; otherwise, what are you complaining about?):
1920×1080×8 pixels in the worst case. 16.6 million pixels. Per frame. That’s 498 megapixels per second of fill rate.
Of course, videocard specs don’t quote pixel fill rate, they quote texture fill rate, and when 3D rendering, nearly everything counts as a texture. Lighting? Check. Shadows? Check. Textures, normal maps, spec maps, alpha maps… check, check, check. You get the idea. That 500 megapixels per second very quickly becomes 3-4 gigatexels per second. For a single character.
“But wait!” you say. “Modern video cards are much faster than the consoles!” BZZZZT. They are, but that doesn’t tell the whole story. Console video chips have specific optimizations based on how developers tend to use them. As such, they can do things like transparencies and FSAA for free. Or nearly so.
Oh, you wanted some kind of AA on Sif? Well, on PC that just doubled or quadrupled your effective frame buffer. So now you are using somewhere between 10 and 20 gigatexels per second of fill rate.
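If you want to check my arithmetic, the whole worst-case calculation fits in a few lines (the 8-layer figure is still just my guess from screenshots):

```cpp
#include <cstdio>

// The fill-rate arithmetic above, in one place. Nothing here is a
// measured number from the game; it's all worst-case guesswork.
int main()
{
    const double layers = 8.0;   // guessed fur layers on Sif
    const double fps    = 30.0;

    auto pixelRate = [&](double w, double h, double ssaa) {
        return w * h * ssaa * layers * fps; // worst-case pixels per second
    };

    std::printf("1024x720:          %4.0f Mpix/s\n", pixelRate(1024, 720, 1)  / 1e6);
    std::printf("1920x1080:         %4.0f Mpix/s\n", pixelRate(1920, 1080, 1) / 1e6);
    std::printf("1920x1080 + 4xSS:  %4.0f Mpix/s\n", pixelRate(1920, 1080, 4) / 1e6);
    return 0;
}
// Prints roughly 177, 498, and 1991; and that's before multiplying by
// however many texture reads each pixel actually costs.
```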
Complain: 9 (I can’t fight Sif! The game slows to a crawl!)
Cost: 9
5. Shader Languages.
This is where they take the biggest hit on the port, and where they have likely focused most of their work. Because they have 360 and PS3 versions, they obviously have some kind of shader abstraction going on. But the problem is, when you hit PC, different videocards support different shader features, and using the wrong thing at the wrong time can take a 60fps game down to nothing. On 360 and PS3 this isn’t an issue, but on PC? You bet it is. In fact, it’s something you can’t ignore, despite the cost of testing, debugging, and profiling on a ton of video cards. On a modern engine? This has (mostly) been done for you. But on the one they used? It’s only there as a rough helping hand.
Even when making simple PC games nowadays, you’ll find features you take for granted that just don’t work on common videocards (Love2d’s canvas support comes to mind). Locking the framebuffer resolution may have allowed them to take shortcuts for problem graphics chips.
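I don’t know which API the port actually targets, but assuming something Direct3D-9-era, the capability juggling looks roughly like this sketch; every branch below is another codepath someone has to write, test, and profile:

```cpp
#include <d3d9.h>

// A sketch of the capability probing a Direct3D 9 PC renderer has to do
// before it can even decide which shaders to load. (D3D9 is an assumption
// on my part. On console you compile one shader set for one known GPU
// and you're done.)
bool PickShaderTier(IDirect3D9* d3d)
{
    D3DCAPS9 caps;
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return false;

    if (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0)) {
        // Full-fat path: all the layered fur/particle effects as authored.
    } else if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0)) {
        // Cut-down variants: fewer texture reads, cheaper lighting.
    } else {
        // Bail out, or write a fallback path nobody has the budget for.
        return false;
    }
    return true; // each branch above is another path to test on real hardware
}
```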
Complain: 10
Cost: 7
6. Online Stuff
A lot of noise has been made about Games for Windows Live, but the reality is that building your own online system is a huge amount of work, especially when matched with the infrastructure required to support it. Going with Games for Windows Live meant they could reuse much of the system they already had in place rather than making their own, which freed up actual time to focus on other things.
Complain: 9
Cost: 10 (non-GFWL), 2 (GFWL).
7. Animation Quality.
Animation data can take up a lot of space, especially when you have multiple skeletons (they have unique skeletons for everything in the game, as far as I can tell), and when there are a lot of bones per skeleton (oh, there are). One way people get around this is by using very aggressive animation compression. Well, that’s what you do when you can’t share a single skeleton (which is what the vast majority of games do these days).
Animation compression reduces the size of the animations in memory, but it also introduces a jittery quality to the motion. Ever seen a character’s feet float around on the ground while they were standing still? Animation compression.
Using a lower-resolution framebuffer can hide some of that jittering, which would otherwise look fairly terrible.
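Here’s a minimal sketch of where the jitter comes from, assuming a simple keyframe quantization scheme (From’s actual compressor is unknown to me):

```cpp
#include <cstdint>
#include <cstdio>

// Minimal sketch of lossy keyframe quantization (not From's actual scheme):
// pack a float translation into 16 bits over a known range. The decoded
// value snaps to a grid, and that snapping is the foot-sliding jitter.
uint16_t Quantize(float v, float lo, float hi)
{
    float t = (v - lo) / (hi - lo); // 0..1 across the track's range
    return static_cast<uint16_t>(t * 65535.0f + 0.5f);
}

float Dequantize(uint16_t q, float lo, float hi)
{
    return lo + (q / 65535.0f) * (hi - lo);
}

int main()
{
    // A foot bone that should sit perfectly still at 0.1234 metres,
    // stored in a track whose range is -2..2 metres.
    float original = 0.1234f;
    float decoded  = Dequantize(Quantize(original, -2.0f, 2.0f), -2.0f, 2.0f);
    std::printf("error: %f m\n", decoded - original);
    // Tens of micrometres at 16 bits: invisible. Crank the compression
    // (fewer bits, wider ranges) and the error becomes visible wobble.
    return 0;
}
```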
Complain: 4 (6 if you have crashing due to running out of memory from less compressed animations).
Cost: 2 (reduce animation compression), 7 (change animation compression algorithm), 10 (try to change skeletons/reduce raw animation cost).
8. Timing Calculations
For those of you who don’t know how games are made: every frame, the game takes a rough measure of how much time has passed since the last frame, and calculates a new game state from it. That’s moving things, rendering things, animating things, etc.
The problem with Delta Time, or DT as we call it, is that if you always run with a known or high DT (high DT meaning low framerate), there are a ton of code bugs that will never get seen. From particles that don’t work (It normally looks like fire! But now it looks like a laser beam into the sky!), to physics that freak out (When I kill that enemy he stretches to infinity!), to things that to the layman simply don’t make sense at all (My attacks don’t hit anymore! I fall through the world! The enemy only ever turns left!).
Finding and ironing out all these issues after the fact? It’s close to impossible. Especially when some of those issues may have to do with fundamental architecture assumptions.
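To make the bug class concrete, here are two hypothetical snippets of the kind of code that is “correct” only at the tuned framerate:

```cpp
#include <cmath>

// Hypothetical examples of frame-rate-dependent code: the kind of bug
// that stays invisible as long as DT never changes.

// Tuned at 30fps: applies 10% friction per *frame*. Run at 60fps and it
// fires twice as often, so everything stops twice as fast and feels wrong.
void UpdateFrictionPerFrame(float& velocity)
{
    velocity *= 0.90f;
}

// Frame-rate independent version: express the decay in terms of DT, so
// 60 updates of 1/60s decay exactly as much as 30 updates of 1/30s.
void UpdateFrictionDt(float& velocity, float dt)
{
    velocity *= std::pow(0.90f, dt * 30.0f);
}

// Same disease, different organ: "stagger the enemy for 20 frames" lasts
// 0.66s at 30fps and 0.33s at 60fps. Multiply by every timer, particle
// emitter, and physics step in the game, and you can see why retrofitting
// this after the fact is brutal.
```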
Complain: 8
Cost: 10
9. Single-threaded game update.
Given that the PS3 only has one general-purpose CPU core, it’s not irrational to think they may have a single-threaded game update. Depending on choices they made, that same game update may have to wait for the frame render to complete between updates. If that’s the case, then given that we already know their AI eats up a ton of CPU, it’s likely they have to keep render costs extremely low in order to have a playable framerate at all.
The reason I think this may be the case is that Japanese developers have traditionally worked this way in order to target a locked 60 frames per second. But they are also used to building games with very little update logic (AI and such), so they could traditionally keep CPU costs for things other than rendering low.
But I’ve seen how poorly modern games can perform in these scenarios, so if they did build it this way, they’d have little choice in these matters.
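Here’s the shape I’m describing, as a guess at the architecture rather than a claim about their code:

```cpp
// A strictly serial frame loop; my guess at the shape, not From's code.
// The next update can't start until the previous render has completed.
bool  GameIsRunning();
float MeasureDeltaTime();
void  UpdateWorld(float dt); // AI, physics, animation: reportedly CPU-heavy
void  RenderWorld();         // blocks until the frame has been submitted

void RunGame()
{
    while (GameIsRunning()) {
        const float dt = MeasureDeltaTime();
        UpdateWorld(dt);  // eats a fixed chunk of the frame budget
        RenderWorld();    // bigger framebuffer means a longer render,
                          // which means a bigger dt next frame
        // With update and render sharing one thread, the only lever left
        // for a playable framerate is keeping the render cheap, which a
        // small, fixed framebuffer does directly.
    }
}
```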
Complain: 8 (poor framerates)
Cost: 10
So yeah. In conclusion, I can’t really get worked up that the greatest game of recent history, if not ever, has a locked framerate and a low-res frame buffer. Plus, I already played it on my PC monitor (upscaled from the 360 to 1920×1080) and it still looked pretty damn great.
I think that at the end of the day, if you are willing to write off a game of this sheer quality, then you should be ashamed to call yourself a gamer. If you’ve already played it on console and are going to skip the PC version because of this? I don’t blame you. Certainly there’s no reason to believe the PC version will be any better at this point. But again, remember that it’s a version that wouldn’t have existed at all if From Software didn’t care about gamers.
So please, try not to give them reason to ignore us in the future.