
CHAPTER 4

The Fastest Texture Mapping in Town

id Tech 1

The design of DOOM created level architectures the likes of which no one had ever seen, and these rested on some groundbreaking technological foundations. The DOOM engine, id Tech 1, pushed texture and lighting further than any previous PC engine, and at a speed that was unheard of outside consoles. The latter was due to the introduction of the humble binary space partition (BSP) to gaming. By introducing BSPs, id ensured that the engine itself would go down in gaming history, regardless of the actual game.

The atomic unit of id Tech 1 is a vertex, a position in three-dimensional space. Vertexes join to create lines called linedefs. When you add a height variable to a linedef, the (two-dimensional) vertical surface running along it is called a sidedef. These are what textures are mapped onto. Once a set of vertexes is joined by linedefs into a closed polygon, you have a sector, each with its own floor and ceiling heights and its own lighting. And, just as every sidedef can be texture mapped, so can the “floor” and “ceiling” areas defined by the sector's shape, called visplanes. Finally, you can cut holes in sidedefs to create windows and doors, with the result that light and sound bleed from one sector to another (fig. 7). The engine does not create true three-dimensional maps, in that sectors cannot be placed on top of one another, but it is still remarkably powerful in creating the illusion of vertically stacked environments. DOOM's engine dispensed with the need for absolute right angles and orthogonality between aspects of the environment as found in Wolfenstein 3D, but it was still locked to absolute verticals and horizontals: there were no sloping floors or ceilings. This meant that runs of texture mapping were applied across fixed planes, and each shift in ceiling or floor height (including dynamically moving objects such as lifts) was, in effect, a distinct sector. John Romero explains,

The DOOM engine—the largest break [was that] all the previous games had been tile based in the way the levels were created, and DOOM used a free-form two-dimensional, or it became coined 2.5D, where you could have a 2D layout with different floor and ceiling heights. And it also had textured floors and ceilings, and it had the ability to have different light levels in different areas rather than just a uniform light falloff that would start bright and then go dimmer, so we could have the flashing light areas and we could also dynamically move floors and ceilings without an impact on performance. We were still constrained by not being able to move around the two-dimensional line segments, so you couldn't do a swinging door—all the doors had to be things which would just rise up or down. But without impacting the rendering speed at all, we could have free-form changes in any of the lighting levels and any of the heights. And a lot of the things that became the core gameplay elements of DOOM were based on changing those things dynamically. (JR)
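To make the relationships among these primitives concrete, here is a minimal sketch in C (the language DOOM itself was written in). The field names are illustrative condensations of the roles described above, not a reproduction of id's actual source; note how the sidedef carries the three texture sections discussed next.

```c
/* A minimal, illustrative sketch of id Tech 1's map primitives.
   Names and fields are simplified; this is not id's actual source. */

typedef struct {
    short x, y;                  /* a position on the map plane */
} vertex_t;

typedef struct {
    short floor_height;          /* per-sector floor height */
    short ceiling_height;        /* per-sector ceiling height */
    char  floor_texture[8];      /* mapped across the floor visplane */
    char  ceiling_texture[8];    /* mapped across the ceiling visplane */
    short light_level;           /* independent lighting per sector */
} sector_t;

typedef struct {
    char      top_texture[8];    /* above a cut-through, e.g., a window */
    char      middle_texture[8]; /* the "normal" wall surface */
    char      bottom_texture[8]; /* below a cut-through, e.g., a step */
    sector_t *sector;            /* the sector this side faces into */
} sidedef_t;

typedef struct {
    vertex_t  *v1, *v2;          /* the two vertexes the line joins */
    sidedef_t *front;            /* always present */
    sidedef_t *back;             /* NULL for a solid, one-sided wall */
} linedef_t;
```

A closed loop of linedefs bounds a sector; a hole appears wherever a linedef has both a front and a back sidedef, which is what lets light and sound bleed between sectors.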

The freedom to define things like corner angles and ceiling/floor heights already shifted DOOM dramatically away from Wolfenstein 3D, but two other factors made the environment radically different. Sidedefs were split into three sections: top, middle, and bottom. While a normal wall only needed to utilize the middle section, cut-throughs used all three, creating more opportunity to add detail to the map rather than relying purely on sprites. So it was possible for any given vertical surface to contain three distinct textures. While the more general solution was simply to paint more than one texture onto a single texture sheet rather than add more vertices, the idea that a wall could change texture according to how high up you were was still a conceptual shift. The result is a more broadly detailed environment that made the most of texture mapping. Textures themselves were, of course, something id had been using for a couple of years; John Carmack had wanted to use the idea since 1990. Following Hovertank 3D, which lacked any kind of textures, he was looking to push forward to the next problem. As Romero puts it,


And so that's why Catacomb 3D was texture mapped, because of this “what are you going to do next”; it needed to get a plus-plus. That was EGA texture mapping, which is actually much harder than VGA texture mapping. So the first texture-mapped FPS game out was actually Catacomb 3D, not Ultima. Ultima Underworld came out at the end of April ’92, and one week later we came out with Wolfenstein 3D, with our VGA texture mapping. They did the first VGA texture mapping, but we did the first [EGA] texture mapping with Catacomb 3D in ’91. (JR)

The fact that this kind of competition over graphics standards persists twenty years later gives some indication of the importance of texture mapping to games. More to the point, Romero's comments reinforce the idea that Catacomb 3D was a stepping-stone, a chance to push the technology forward rather than a focused, designed game experience. The real fruition came with Wolfenstein 3D. And, to be fair, we do need to give credit to Blue Sky's game: after all, it did have texture mapping on the floors and ceilings as well as the walls, and it did have angled floors and variable heights; and although it's not the roller coaster that DOOM is, it's not actually that slow. This is an interesting point, however, as the fact that Ultima Underworld isn't necessarily sluggish by contemporary standards may say more about a general drop in speed in FPS games since DOOM, or at least in a significant number of them—something we might attribute to Halo and the rise of the console shooter. For Carmack, the issue is not one of innovation per se but of applied engineering.

No one in gaming invented texture mapping. You can say that earlier games like Wing Commander [Origin 1990] had scaled-sprite graphics, and scaling a bit-map is texture mapping of a sort; it's just limited. The more unique bit in the Wolfenstein 3D and DOOM approaches was that they made the sacrifice of the degrees of freedom to get a higher performance…. The design choice I made with all of our early games up till Quake was that the important ability is moving you around in the world, and you really don't need as much the ability to roll your head or have these sloping geometries, and if you make that restriction, there's very significant improvement you can make. Engineering is about trade-offs, we can sacrifice this to get this, and smart engineering is when you recognize that you get more than you're trading for; you're getting good value. (JC)

Anyway, DOOM's texture mapping was a case of complexity and vision feeding into one another. The color palette for the game is initially very muted—Wolfenstein 3D is positively garish in comparison (fig. 8)—with grays, browns, and metals dominating (fig. 9). This was hardwired into the engine build, a clear-cut case of artistic and technical development going hand in hand. The idea, according to Carmack, was to lock the palette down to a smaller selection, trading range for the ability to retain quality as lighting “would smoothly ramp down through the lighting [falloff].”
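One way to picture that trade-off is a colormap: a table, built once, that maps every palette color at every light level to the nearest darker entry in the same fixed palette. The sketch below is a generic construction under assumed figures of 256 colors and 32 light levels (figures consistent with the released DOOM source, though this is a generic nearest-color quantizer, not id's own tool).

```c
#include <limits.h>

#define NUM_COLORS       256
#define NUM_LIGHT_LEVELS 32

typedef struct { unsigned char r, g, b; } rgb_t;

/* colormap[level][color] gives the palette index to draw instead of
   `color` at a given light level; level 0 is full brightness. */
unsigned char colormap[NUM_LIGHT_LEVELS][NUM_COLORS];

/* Find the palette entry closest to (r,g,b) by squared distance. */
static unsigned char nearest(const rgb_t *pal, int r, int g, int b)
{
    int best = 0;
    long best_d = LONG_MAX;
    for (int i = 0; i < NUM_COLORS; i++) {
        long dr = pal[i].r - r, dg = pal[i].g - g, db = pal[i].b - b;
        long d = dr * dr + dg * dg + db * db;
        if (d < best_d) { best_d = d; best = i; }
    }
    return (unsigned char)best;
}

/* Precompute: each light level scales the palette toward black, then
   snaps the result back to the nearest color in the fixed palette. */
void build_colormaps(const rgb_t *pal)
{
    for (int level = 0; level < NUM_LIGHT_LEVELS; level++) {
        int scale = 256 * (NUM_LIGHT_LEVELS - level) / NUM_LIGHT_LEVELS;
        for (int c = 0; c < NUM_COLORS; c++) {
            colormap[level][c] = nearest(pal,
                pal[c].r * scale / 256,
                pal[c].g * scale / 256,
                pal[c].b * scale / 256);
        }
    }
}
```

Because every darkened color has to snap back into the same small palette, a muted range of grays and browns degrades smoothly as the light falls off, whereas a garish palette would band visibly; hence the palette was locked down at the engine level.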

In terms of the actual textures being mapped into the game, there was a subtle revolution occurring. Adrian Carmack and Kevin Cloud were using the new graphical capabilities available to them to explore new concepts in designing and selecting textures. This exploration centered on the use of photographic and scanned material to form the basis of the textures, an idea that flows naturally from the attempt to model “real-world” spaces that was an early focus of the DOOM Bible. Cloud explains,

A lot of games had very vibrant color palettes, a very cartoon concept. And the thing that was hitting us at the time was all this new scanning, at least from my perspective, which was giving a really gritty, realistic look to the art. We had all these limited colors and all these games that are looking the same, and we wanted to see could we break out of that and really create something a bit more gritty and dirty. So a lot of our things began with a source of scanned material. We'd find lots of pieces of things to scan in even just to set up a color palette or to give us a background texture. That really wasn't happening back then very much—there wasn't a lot of use of photo reference in games—and it really gave us a different style direction that was a little more gritty than what people were used to. (KC)


That's not to say that DOOM wasn't fairly abstract or vivid in places. Sandy Petersen brought a whole new feel to proceedings by adding bright clashes of primary color as he introduced more hellish architecture to the world (in the alphas, Hell's influence was largely represented by sprites—items scattered around the world as well as the monsters invading it). As the episodes progress, colors get more and more intense and primary, and the architecture becomes less science fiction and more gothic. It's also probably the only game out there to feature a close-up photograph of a game developer's elbow as a skin wall (the elbow in question belongs to Cloud, who says, “That's just the way we rolled back then”).

The fact that, unlike Wolfenstein 3D, id Tech 1 was now using visplanes to map textures across the floors and ceilings certainly contributes a great deal to the flavor of the game, lending a sense of cramped claustrophobia to the corridor crawls and then contrasting this with “real-world photo” textures mapped onto giant backdrop sidedefs—the precursor to skyboxes (two-dimensional backdrop images wrapped around the level to create a sense of scale, basically analogous to mattes in cinema).1 The shift between interior and exterior spaces drives the sense of scale in the game, reinforcing the more subtle and constant variations in ceiling and floor heights. Attaching damage to visplanes was another new feature that, coupled with simple animated textures, gave the world nukage and lava alongside crushing ceilings and walls. This may seem simple in retrospect, but it represents a huge leap forward in terms of the spaces that were being represented. Even without agents, the environment could present more of a challenge than just navigating around it. DOOM might have been all about the arcades, but inadvertently or not, it created the basic technological and design tools that would initiate the deviation of FPS games away from this basic form: into first-person survival horror such as Amnesia: The Dark Descent (Frictional Games 2010), physics puzzlers like Portal, and platformers like Mirror's Edge.2
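A minimal sketch of how damage attached to a floor can work, checked once per game tic while the player is standing on it. The special numbers, damage amounts, and pulse interval here are hypothetical stand-ins rather than DOOM's actual values; only the mechanism is the point.

```c
/* Hypothetical sector specials; DOOM's real special numbers and
   damage amounts differ, but the mechanism is the same. */
enum sector_special {
    SPECIAL_NONE   = 0,
    SPECIAL_NUKAGE = 1,   /* mild periodic damage  */
    SPECIAL_LAVA   = 2    /* heavy periodic damage */
};

typedef struct {
    int floor_height;
    enum sector_special special;
} sector_t;

typedef struct {
    int z;                /* player's foot height          */
    int health;
    sector_t *sector;     /* sector the player is standing in */
} player_t;

/* Run once per game tic: hurt the player if they are touching
   a damaging floor (not if they are above it, e.g., on a lift). */
void apply_floor_damage(player_t *p, int tic)
{
    if (p->z > p->sector->floor_height) return;  /* not on the floor */
    if (tic % 32 != 0) return;                   /* damage pulses, not per-frame */
    switch (p->sector->special) {
    case SPECIAL_NUKAGE: p->health -= 5;  break;
    case SPECIAL_LAVA:   p->health -= 20; break;
    default: break;
    }
}
```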

The other major new factor in controlling both gameplay and atmosphere that set the new engine apart was lighting. id Tech 1 allowed for two distinct lighting designs that gave gameplay a unique feeling. The first was variable light levels. Whereas Wolfenstein 3D took place under flat, bright strip lights and Ultima Underworld was a uniform smudge of dim illumination, DOOM's environments contrasted not just in scale but in darkness. An automatic drop-off into darkness helped enhance the illusion of vertical space and horizontal distance. Particularly when used in combination with sound flooding, darkness assisted in the sense that the levels were large, holistic environments (rather than a series of set pieces). As a design tool, it created distinction and enabled foreshadowing to be used to powerfully sculpt the player's experience. There was little more terrifying than the sound of monsters growling somewhere in the dark or the prospect of entering a dark room from a light one. Although DOOM didn't have any kind of stealth system, the contrast between dark and well-lit areas and the co-opting of these for tactical advantage in combat were to become staples of the genre in years to come, culminating in Looking Glass Studios' Thief games (1998, 2000), where the “s” of FPS stands for “stealth,” not “shooter.” Attaching light levels independently to each sector allowed for far greater control over how similarly textured areas might be represented. Coupled with the lighting drop-off, this created a sense of pervasive gloom throughout the environments, which, for Romero, was fundamental to the feel of the levels.

Out of all the engines I've seen, it's almost the perfect engine for creating levels that are almost forced into being in-theme with the game. Because the engine itself has diminished lighting along the distance, so you can set the lighting for different sectors but the game is going to darken them as the distance goes out, on purpose … the game is going to render them scary whether you like it or not. So that was a very different thing that most people had never seen: a diminished lighting engine model. (JR)
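In outline, such a “diminished lighting engine model” can be reduced to a few lines: the designer's per-sector light level only picks a starting colormap, and the renderer pushes the choice toward darker colormaps as distance grows, clamping at full darkness. The constants below are illustrative, not id's actual scaling.

```c
#define NUM_LIGHT_LEVELS 32     /* 0 = brightest colormap, 31 = darkest */

/* Choose a colormap row for a surface: start from the sector's light
   level (0..255, set by the designer) and darken with distance. The
   divisor is an illustrative constant, not id's actual scaling. */
int diminished_light(int sector_light, int distance)
{
    int index = (255 - sector_light) * NUM_LIGHT_LEVELS / 256;
    index += distance / 128;             /* farther away => darker */
    if (index > NUM_LIGHT_LEVELS - 1)
        index = NUM_LIGHT_LEVELS - 1;    /* clamp at full darkness */
    return index;                        /* row in the colormap table */
}
```

The clamp is where Romero's point lives: however bright a designer sets a sector, surfaces far enough away are always rendered dark.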

If diminishing lighting forced a scary edge onto the levels, then strobing and flickering lights, not to mention the combination of abrupt lighting shifts with environmental triggers, allowed the level design to tip over into the panic-inducing. As a second feature of the enhanced lighting in the engine, being able to turn lights on and off in real time once again added considerable scope to the designer's toolkit. The challenge was balancing these new features, all of which had the potential to slow down play. Each added new things for the player to consider, not least whether it was such a good idea to enter an area that was periodically plunged into epilepsy-inducing bursts of strobing darkness; and the results would inevitably change, to some extent, Wolfenstein 3D's run-and-gun flavor. What is important about both the new ways of creating variable environments (including moving sectors to create lifts, raising floors, and crushing ceilings, as well as steps and drops) and the new ways of manipulating light is that the technology pushed forward the opportunity for design. A new type of game environment was made possible, and in many ways, Romero and Petersen's exploration of these new potentials drove the design of DOOM. It's a classic example of the interplay between technological advance and creative practice that defines the games industry perhaps more than any other medium.
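Effects like strobing can be expressed as a tiny per-sector routine run once per game tic, flipping between two light levels on a countdown. This sketch follows that general shape; the struct and names are illustrative, not id's.

```c
typedef struct {
    int light_level;
} sector_t;

/* A per-sector strobe effect, advanced once per game tic. */
typedef struct {
    sector_t *sector;
    int bright_level;   /* light when "on"          */
    int dark_level;     /* light when "off"         */
    int bright_time;    /* tics to stay bright      */
    int dark_time;      /* tics to stay dark        */
    int count;          /* tics until the next flip */
} strobe_t;

void strobe_think(strobe_t *s)
{
    if (--s->count > 0)
        return;
    if (s->sector->light_level == s->bright_level) {
        s->sector->light_level = s->dark_level;
        s->count = s->dark_time;
    } else {
        s->sector->light_level = s->bright_level;
        s->count = s->bright_time;
    }
}
```

Because the effect rewrites only one integer on the sector, it adds nothing to the rendering cost, which is exactly the “free-form changes in any of the lighting levels” described earlier.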

Together, vertexes, linedefs, sidedefs, sectors, and their associated variables broadly define the level, but the situation becomes more complicated when it comes to the actual business of representing everything in real time, and this is where the real genius of John Carmack's engine comes through. A game's fps (frames per second) rate refers to the number of times per second that everything visible on the screen is redrawn. A higher frame rate basically means faster, more responsive action, less jerky animation, and a smoother, quicker experience. This was the unholy grail for DOOM. Any game's engine has to work out what is being presented to the player x number of times per second, factoring in any changes in the location or representation of its elements. That's on top of all the background, unrepresented data, integers, and algorithms being processed—we're just talking graphics. A WAD file (a level file for DOOM—the acronym stands for “Where's All the Data”) is basically a set of instructions that tell the game how to display the level—literally, “draw a point here, and here; join them together and attach texture a to a height of b pixels; then join all of these together and cover the floor in this sector with texture c, the ceiling with texture d, and apply lighting e,” and so on. This is redrawn x number of times per second. Obviously, this is a fairly complex process even before designers add in a player moving around, agents moving around, fireballs flying about, barrels exploding, lifts rising and falling, and so on. One of the methods of cutting down on the computation per second is to render only what is visible to the player, which simply reduces the quantity of data that requires processing. But this still leaves a substantial amount of work, and this is where frame rates can drop, creating a visual lag that basically demolishes the experience of play. According to Romero, it was this that “revealed a limitation in the way Carmack was rendering the scene, because he was using lists of sectors for drawing, and I drew something that had recursive sectors and that made the game go really slow” (JR).
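Schematically, that per-frame work is just a loop; every name below is a placeholder for illustration, not a function from id's source.

```c
/* A deliberately schematic main loop; all names are placeholders. */
static void read_input(void)           { /* player movement, fire, use  */ }
static void update_world(void)         { /* monsters, fireballs, lifts  */ }
static void render_visible_scene(void) { /* redraw what the player sees */ }

void game_loop(void)
{
    for (;;) {
        read_input();
        update_world();
        render_visible_scene();
        /* The frame rate is how many times this loop completes each
           second; every extra millisecond spent in the render step
           pushes that number down. */
    }
}
```

It was inside the render step that Romero's recursive sectors blew the budget.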

The list Romero refers to is one where every sector in the WAD is given a unique identifier so that the rendering engine can look up exactly what it is supposed to render and how. The problem is that an engine rendering every sector is doing unnecessary work. Equally, if it renders every object within a field of view, the normal way of doing this (called the painter's algorithm) is to start by drawing the background, then draw the next-farthest objects from the player, and so on, right up to the foreground. Another way of thinking about this is to imagine a set of Photoshop layers or sheets of transparent plastic laid in a stack. The secondary issue, then, is that some objects rendered by this process may not actually be visible at all, because they are behind other objects. In real terms, if a low wall obscures a lava pit but the animated lava texture is being re-rendered forty times a second, that is a lot of completely wasted processing, which slows the rendering of the whole scene and compromises the frame rate. Looking for a solution to this during his work on the Super Nintendo port of Wolfenstein 3D, Carmack decided to try implementing binary space partitions.
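The painter's algorithm is easy to sketch: sort everything by distance from the viewer and draw from farthest to nearest, so near things overwrite far things. The cost described above is visible in the code: every object is drawn, even those that end up entirely painted over. (A hypothetical drawable_t stands in for walls, floors, and sprites.)

```c
#include <stdlib.h>

/* A hypothetical drawable: anything with a depth and a draw routine. */
typedef struct {
    int depth;             /* distance from the player        */
    void (*draw)(void);    /* paints the object on the screen */
} drawable_t;

/* Sort far-to-near so nearer objects are painted last, on top. */
static int farther_first(const void *a, const void *b)
{
    return ((const drawable_t *)b)->depth - ((const drawable_t *)a)->depth;
}

void painters_algorithm(drawable_t *objs, size_t n)
{
    qsort(objs, n, sizeof *objs, farther_first);
    for (size_t i = 0; i < n; i++)
        objs[i].draw();   /* wasted work whenever objs[i] ends up
                             completely hidden behind a nearer object */
}
```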

A BSP is basically a way of identifying sectors' relationships to one another so as to avoid redundant rendering. It shifts the workload of calculating which objects are visible from given perspectives in the level to the end of the editing process, which culminates in the creation of a BSP table containing the data the rendering engine requires to establish the correct sequence of, and need for, object rendering, avoiding the need to carry this out in real time as the game runs. The BSP breaks the level right down into units called subsectors, polygons contained within each sector. Each subsector has a list of sectors associated with it. The rendering engine moves down the tree until it finds a subsector, then checks against the associated sectors, rather than having to work out the associations in “longhand,” so to speak, each time it redraws the screen. In essence, a BSP functions a little like an index in a book, allowing associations, dependencies, and hierarchies to be found quickly and easily. As such, like any kind of indexing algorithm, BSPs are phenomenally powerful in reducing rendering time and, therefore, raising the frame rate (a sketch of such a walk follows Carmack's recollection below). Traditionally, Carmack's “discovery” of BSPs is attributed to the work of Bruce Naylor, who had published a number of papers on BSPs in the early 1980s, although Naylor himself references a manual, conceptual model for BSPs, an “incipient version” (Naylor 1981), suggested by Schumacker over a decade earlier (Schumacker 1969, 142). Carmack, however, says he first came across the idea in Computer Graphics: Principles and Practice (Foley et al. 1990) and had been puzzling over it for some time. His experience of implementing BSPs is worth recounting, as it paints a vivid picture of the pre–World Wide Web computer science scene of the time.

It's not a really supercritical aspect of it, but it is interesting that when I did the early work on BSPs, Bruce Naylor came down and visited here and gave me copies of a bunch of his papers. It's interesting to talk to people about the old days. Of course, you've got the Internet now. You can find anything nowadays. But back then, it was really something to get reprints of old academic papers. There were some clearinghouses I used to use: you'd pay twenty-five dollars or whatever, and they'd mail you xeroxes of old research papers. It was just a very, very different world. I learned most of my programming when I had a grand total of like three reference books. You had to figure everything else yourself. So I was finding I was reinventing a lot of classic things, like Huffman encoding or LZW encoding. So I'd be all proud of myself for having figured something out, and then I'd find it was just classic method and they did it better than I did. (JC)
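In broad strokes, the run-time walk that a precomputed BSP enables looks like the recursion below: each internal node stores the partition line chosen at edit time; the renderer asks which side of it the player is on and descends into the nearer half first, so subsectors come out in near-to-far order. This is a generic sketch of the technique, with stubbed-out drawing, not a transcription of id's renderer.

```c
#include <stdio.h>

/* A generic BSP walk; names and layout are illustrative, not id's. */

typedef struct bsp_node {
    int x, y, dx, dy;            /* partition line: a point + direction */
    int is_leaf;                 /* nonzero: this "node" is a subsector */
    int subsector_id;            /* valid only when is_leaf is set */
    struct bsp_node *front, *back;
} bsp_node_t;

/* Stubs standing in for the real renderer. */
static void render_subsector(int id) { printf("draw subsector %d\n", id); }
static int  scene_fully_drawn(void)  { return 0; }

/* Which side of the node's partition line is the viewer on?
   (A sign convention; the cross product does the work.) */
static int on_front_side(const bsp_node_t *n, int vx, int vy)
{
    long cross = (long)n->dx * (vy - n->y) - (long)n->dy * (vx - n->x);
    return cross <= 0;
}

/* Visit the viewer's half of space first, so subsectors come out in
   near-to-far order; once the screen is covered, stop descending. */
void render_bsp(const bsp_node_t *n, int vx, int vy)
{
    if (n->is_leaf) {
        render_subsector(n->subsector_id);
        return;
    }
    if (scene_fully_drawn())
        return;
    int front = on_front_side(n, vx, vy);
    render_bsp(front ? n->front : n->back, vx, vy);  /* near half first */
    render_bsp(front ? n->back : n->front, vx, vy);  /* then the far half */
}
```

All the expensive geometry (choosing partition lines, splitting the level into subsectors, building the tree) happens once, when the level is saved; the per-frame cost collapses to this cheap descent.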

Integrating BSPs meant Carmack had co-opted the technology for gaming for the very first time, and the impact of this solution has been hardwired into the core of a huge number of game engines since then. Along with DOOM's many other contributions, Carmack's adoption of the binary space partition to games is an extraordinary legacy for the medium. What were the results? As Romero puts it simply, “That's when everything went superfast and the BSP was born for computer games.” In his typical fashion, Carmack is far less willing than others to see his contribution in quite such a groundbreaking light.

People like to look for the magic special sauce. They like to look for that narrative. But for almost anything, there are multiple valid ways to get to the same end result. And DOOM started off with a different approach that wasn't getting the speed I wanted. I first used BSPs on the Super Nintendo Wolfenstein 3D port, where I had to come up with more speed than the raycasting approach. And then when I came back to working on DOOM, I wound up working in that way because it seemed like a good approach. Conversely though, the Build engine (Duke Nukem 3D) didn't use BSPs, and it was every bit as effective as the way DOOM was implemented. But certainly because of DOOM's success, thousands of people learned what BSP trees were and followed up on some of those academic threads. (JC)

As a final point, it's worth bearing in mind how long many of the techniques that came to fruition in DOOM had been gathering pace in the background. Carmack himself sees a relatively smooth curve from the earliest work on Hovertank 3D in terms of achieving a vision of where 3D gaming could go. Cloud reaffirms this:

John, I think, knew. John has an uncanny ability to be able to see things that are happening in computers and games and be able to predict ahead. I think you have to have that talent to be a successful engineer, ’cause you are working on things that may not see the light of day for four, five, six years. And some of the things he's doing today he talked about ten years ago. (KC)

Whether you accept Carmack's line that someone else would have done it if he hadn't or the more traditional “special sauce” narrative, it is clear that game technology and game design fused in a particularly magical and effective way in id Tech 1. The result is history.
