You might remember a couple of months ago I posted a bunch of charts of video game data. The obvious question that went unanswered in those posts (to the genuine annoyance of some) was, “Where did this data come from?” So let’s talk about that.
Actually, before we talk about that I should make it clear that this is a programming project. I should also note that this project pre-dates that crazy stuff I was doing with BSP loading a couple of weeks ago, but I’m posting them in the opposite order. For some reason.
Maybe reading yet another programming project sounds fun, but this isn’t a game-focused project with cool screenshots to show off. This is pretty dry, and you’ve already seen the end result. I’d talk you out of reading more, but we both know you’re going to read this stupid thing no matter what I say. So let’s just get this over with.
For years, I’ve been wondering about the stuff we’re always discussing / arguing about in gaming culture. The division between fans and critics. The difference between platforms. The changes to the industry over time.
The problem is that we never have any numbers to work with. We just sloppily take our anecdata (anecdotes extrapolated into “data”) and project it onto the industry as a whole. Just about everyone realizes this isn’t a scientific way of going about things, but we don’t really have any alternatives. It’s either guessing based on personal experience, or we chow down on the PR slop the various publishers feed us. (Or should we read quarterly reports aimed at shareholders, and swallow THEIR slop?)
Do particular DRM schemes impact audience reaction or sales? Do console generations impact PC sales? Do single-player games with tacked-on multiplayer actually sell / score higher than games without those features? Does review-bombing impact sales, or is the practice just a harmless but cathartic way of expressing outrage? It feels like critics and consumers have been drifting apart in terms of what they say about games, but is that perceived gap reflected in the review scores?
I suppose at the root of it was a general curiosity about the decision-making happening at the big publishers. We can’t see what game budgets are, we don’t have access to reliable sales figures, and without those numbers we have no way of even guessing at how much particular games are making or losing. Sites like VGChartz and SteamSpy give us some estimates to play around with, but for the most part we’re stuck in the dark.
However, it seemed like there was some data out there. We can’t answer all our questions, but maybe we can fill in a few more blanks. Wikipedia has a lot of information on game features and developers. Steam has information on DRM and system requirements. And of course Metacritic has the key information regarding critical reception.
So the obvious question is: If there’s a bunch of data available to the public, then why don’t we just round it up? (Preferably without having to do it by hand.)
How Do You Do That?
The process of having a program load web pages and pull out desired information is called Web Scraping. I’ve never written a web scraper before, but I’d always wanted to try it out. It just seems like a fun idea to have a program surf the web for you and bring back a great big haul of information. Maybe, deep down, this project was more about my desire to write a web scraper than to study the resulting data. Either way, it seemed like a fun way to satisfy both curiosities.
As I discovered, the process of building a web scraper is pretty easy. For a project at this small scale, I’d even say it goes from “easy” to “trivial”. All told, this whole project was much less than a week of work. If you handed this project off to someone who knew what they were doing, they could probably finish in a couple of days.
In the old days, I would have done this with C++. But now I’ve spent time with Unity and learned just enough C# to be dangerous. Since that project I’ve wanted to play around with C# apart from Unity so I could get a feel for what C# is “really” like. The environment that comes with Unity has a ton of game-specific features, and it’s not always clear to a newbie which things you’re using are “standard C#” and which bits come with Unity. (I’d sort of assumed that Unity-specific stuff would have Unity-specific includes, but it’s also possible Unity comes bundled with some third-party things and conventions.) In Unity projects, the engine controls the loop. Tens of thousands of lines of invisible code (invisible to the game developer, anyway; I’m going to assume people working on the engine can see their own code) might be run before Unity gets around to reaching the bits of the program you’ve written. In vanilla C#, program execution begins and ends with your code, and I wanted to get a feel for how that worked. (Okay, there’s probably a little bit of stuff the program does that’s invisible to a regular C# programmer, but that’s NOTHING compared to the gargantuan task Unity performs when it creates a window, launches a rendering pipeline, initializes the sound system, loads assets, and does a thousand other things.)
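To illustrate the difference, here’s a trivial sketch of a bare console program. Nothing here is from the actual scraper; it just shows the point about control flow: the runtime calls Main, any loop is one you wrote yourself, and when Main returns the program is over.

```csharp
using System;

class Program
{
    // In a plain C# console app, this is the whole contract: execution
    // starts at Main and ends when it returns. No engine calling into
    // your code from some invisible outer loop.
    static void Main()
    {
        for (int i = 0; i < 3; i++)
            Console.WriteLine($"Iteration {i}: this loop is mine, not the engine's.");

        Console.WriteLine("Main returned. Program over.");
    }
}
```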
The Hardest Thing Is Realizing How Easy It Is.
The biggest thing that held me back was my learned habits. I’m used to the C++ world where you need to do everything by hand or spend time trying to figure out how to make alien code work with your program. Want to parse some text? Write a text parser. Want to read web pages? You’d better know how to implement your own HTTP stack, including networking, DNS lookups, HTTP requests, and a dozen other things I also don’t know how to do. (Or you could import a library that might not do what you want, or might not have documentation, and might not even compile.)
I kept assuming tasks were going to be hard. I’d get half an hour into writing something from scratch, and then I’d realize there was already a tool for it that was effortless to import and completely intuitive to use. A lot of this project was less about programming and more about learning how to find out what (if any) programming needs to be done.
The best example of this is when I tried to write code to parse web pages. At first I did the naive thing:
- If you’re a new programmer who learned to code on a very high-level language with lots of convenience features, then the naive assumption is that there’s a library out there that will do all the work for you, and all you need is to copy a couple of lines of code from StackOverflow.
- If you’re a dusty old greybeard with knowledge of the Old Ways and ANSI C, then the naive thing is to assume you’ll need to do everything by hand, painstakingly juggling small blocks of memory and writing dozens of lines of code to accomplish simple things.
I was the second kind of naive. I wrote a text parser that would take the contents of an entire webpage as one big string and look for fragments I was interested in. For example, maybe I’m scraping data from Metacritic and I want to get the title of the game from the webpage. By inspecting the raw Metacritic HTML manually, I’ve discovered that the title of the game is contained in a <div> tag with a class of “gametitle”. (It’s more complex than this in practice, but this works as an example.) So the HTML code might look like:
<div class="gametitle">Shoot Guy IV: Shoot Harder</div>
So my program downloads the page, loads it into memory, and I have it search the HTML for “gametitle”. Then I’d look forward for the nearby closing bracket “>”. Then I’d search for the next opening bracket “<”. In theory, the title of the game should be between those two points.
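Sketched in C#, the naive approach looks something like this. The HTML is inlined here so the sketch is self-contained; in the real program it would be a downloaded page, and the class name is the simplified stand-in from the example above.

```csharp
using System;

class NaiveParser
{
    static void Main()
    {
        // In the real program this would be a downloaded page.
        string html = "<html><body><div class=\"gametitle\">" +
                      "Shoot Guy IV: Shoot Harder</div></body></html>";

        // Find the marker, then the '>' that closes its tag,
        // then the next '<'. The title should sit between them.
        int marker = html.IndexOf("gametitle");
        int start  = html.IndexOf('>', marker) + 1;
        int end    = html.IndexOf('<', start);

        Console.WriteLine(html.Substring(start, end - start));
        // prints: Shoot Guy IV: Shoot Harder
    }
}
```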
The problem with this sort of approach is that it’s incredibly fragile. If the website suffers a redesign, then it could lead to chaos in my code. Maybe in the new design, the “gametitle” div is a container for the title of the game, plus the cover image, some publisher info, and some random branding logos. There’s no telling how my parser would handle that, and the odds are extremely high that it would extract a random block of HTML markup / CSS as the title of the game.
I knew this wasn’t the “Right” way to do it, but I was anxious to get the thing up and running before I began learning the “right” way to do things, which I assumed would take a long time.
The next day I came back to the project (and perhaps to my senses) and started looking for something to help me parse these web pages. I realized I was going to have to make different parsers for all the different websites I might need to deal with, and rather than making three or four parsers, it would probably be smarter to just bite the bullet and use someone else’s library.
The Lazy Way is Also the Right Way?
As an old-school C / C++ programmer, my expectation is:
- Spend ages going through a half dozen similar libraries. Some are in production but incomplete. Some are more complete but were abandoned a decade ago. Some seem more or less complete but have very little documentation in English. Spend a couple hours trying to figure out which of these seems like the least bad, and then download it.
- Spend ages trying to figure out how to get this to compile, because there are a dozen ways to do this and everyone thinks their method is obvious / optimal.
- Read the docs and figure out how to use the damn thing. Spend hours incorporating it into my code.
- Discover that this library lacks some obvious, fundamental feature and I’m going to need to do some ugly workaround to fix it.
- Get frustrated and disillusioned. Tell myself I’ll try one of the other libraries tomorrow.
- Shelve the project and never come back to it.
That’s the workflow I’m used to for hobby projects. Here is what I actually experienced while working on this project:
- I spend two minutes searching and discover that just about everyone uses Html Agility Pack. It promises to do everything I need and it doesn’t appear to be abandonware.
- I’ve never used an external library in C# so I have to endure a 5-minute learning curve to figure out where you go to do this. It turns out there’s a handy package manager, like they have in Linux-land. Once I know how to find it and talk to it, the process is completely seamless. It downloads the code and I can start using it right away.
- I read the docs and realize I barely need them. Everything is pretty straightforward.
- I discover that Html Agility Pack contains far more features than I realized. Not only can it parse HTML for me, but it can fully understand the HTML and do complex searches for me. With one line of code I can do a complex query like, “Find the first element with the class of ‘gamelist’, then find the first <OL> element inside of THAT, and then return an array of all of the <LI> items inside of it.”
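For the record, here’s roughly what that kind of query looks like with Html Agility Pack (installed via the NuGet package manager, e.g. with `dotnet add package HtmlAgilityPack`). The page and class names here are made-up stand-ins for this sketch, not the real Metacritic markup:

```csharp
using System;
using HtmlAgilityPack;

class Scraper
{
    static void Main()
    {
        // A stand-in page; the real one would be downloaded first.
        string html = "<div class=\"gamelist\">" +
                      "<ol><li>Game A</li><li>Game B</li></ol></div>";

        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        // The query from the text: first element with class "gamelist",
        // first <ol> inside THAT, then all the <li> items inside it.
        var items = doc.DocumentNode
            .SelectSingleNode("//*[@class='gamelist']")
            .SelectSingleNode(".//ol")
            .SelectNodes(".//li");

        foreach (var li in items)
            Console.WriteLine(li.InnerText);
    }
}
```

Compare that to the hand-rolled string search: the library actually understands the document’s structure, so a redesign that rearranges the markup degrades gracefully instead of handing you a random chunk of CSS as a game title.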
Even though I didn’t know anything about the library, I didn’t know how to obtain and use libraries, and wasn’t sure what I was doing, this way was faster and easier than what I did yesterday. As a bonus, it’s way less code. Yesterday’s parser code was about a page long. This one is less than a dozen lines of code.
I feel vaguely guilty. I feel like a gardener who’s been shoving around a manual push reel mower for his entire career, only to discover that someone has been giving away free riding mowers for the last 20 years. I don’t know if I feel guilty for using this decadently easy system, or if I feel guilty that I spent two decades breaking my back with an ancient hunk of metal when easier alternatives were free for the taking. Maybe somehow I feel both kinds of guilt at the same time.
The other thing that made this trivial is that my performance requirements were incredibly lax. If this program was going to be running at scale on a dedicated server, then I might need to worry about efficiency. Maybe I’d need to watch the memory footprint, or do something with multiple threads, or whatever. But this program was going to use my mid-tier residential internet connection with a single IP address. Network throughput will always be the bottleneck in that setup, so any other optimizations exist only as amusements to gratify the programmer’s particular obsessions or passions. You can optimize that text parser until it runs like Carmack-level assembly code, but it’ll never make the program faster in a way that will be detectable to the user.
Next time I’ll talk about what the scraper is actually doing. If you thought this one was boring, just wait until I start talking about databases.