So now I’m done messing around and being silly. It’s time to actually scrape the web for stuff. There are three different sites I’m interested in:
- Metacritic, for critic scores.
- Wikipedia, for credits regarding director, writer, producer, composer, etc. This information is spotty and I can’t think of how it might be useful right now, but I’m going to include it as part of the exercise. Also, Wikipedia often notes what franchise a game is from, which might be handy if I want to do a search that includes “all Resident Evil games” or somesuch.
- Steam, for PC-specific info like DRM, controller support, multiplayer, etc.
There’s also a bit of information that can come from any of these sources: the URL for the game’s official website might be handy, and we also need to get the publisher, developer, and release date from one of these places.
Of the three sites, it seems like Metacritic is the best one to start with. It has games listed by platform, which is necessary in a structural sense. For the purposes of our database, it’s possible for the same game to have vastly different information depending on platform. For example, maybe a game is released on the PlayStation 3 in 2010 by Beloved Developer, but then a year later it gets ported to the PC by Shovelware Games. Metacritic is the only place where we can get this information reliably. Steam obviously isn’t going to have non-PC data, and Wikipedia entries aren’t guaranteed to have all the per-platform data in an easy-to-capture location. It might be in the info box on the right, it might be buried in the article text (good luck capturing THAT), or it might not be listed at all.
Metacritic even has a handy index page that you can go through. In the URL, I can choose a platform by changing the bit where it says /pc/, and I can choose a page by changing the number at the very end. This particular index will only list games with a rating of 30 or above, which will filter out a bunch of dross that we’re not interested in right now. (There’s another index that lists all games alphabetically with no filter. I might switch to that someday if I’m curious about all of the bottom-of-the-barrel stuff.)
So the plan is:
Start at page 0. Each index page returns a list of 50 games, and all we get for each one is the title, release date, and a link to the full Metacritic page for the game. We grab the names and the URLs, then move on to the next page. If we find a page with no games, then we’ve run past the end of the list and it’s time to stop.
Once we’re done, we have the base info for these games: title, platform (just PC for this first run), and release date. The latter is important to avoid confusion over same-name sequels like Tomb Raider, Doom, Sim City, etc. So to uniquely identify a game in our database we’ll need all three pieces of info.
Once we’ve read the whole index, we can go back and load the Metacritic page for each game and get the more detailed info: developer, publisher, critic score, and user score.
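Here’s a rough sketch of that loop in Python, just to pin the plan down. To be clear, the URL shape, the User-Agent string, and the parse_index() helper are all placeholders I’m making up for illustration; the real page markup will dictate how the parsing actually works.

```python
import time
from collections import namedtuple

import requests  # third-party HTTP library: pip install requests

# Title + platform + release date together form the unique key.
Game = namedtuple("Game", ["title", "platform", "release_date", "url"])

def parse_index(html, platform):
    """Placeholder parser: the real version would pull each game's title,
    release date, and detail-page link out of the index HTML and return
    them as Game tuples."""
    return []  # stub

def scrape_index(platform="pc"):
    games, page = [], 0
    while True:
        # Hypothetical URL shape: platform in the path, page number at the end.
        url = f"https://www.metacritic.com/browse/{platform}/{page}"
        html = requests.get(url, headers={"User-Agent": "polite-bot"}).text
        batch = parse_index(html, platform)
        if not batch:        # a page with no games means we ran past the end
            break
        games.extend(batch)  # each index page lists 50 games
        page += 1
        time.sleep(1)        # one-second cooldown; more on this below
    return games
```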
That’s the plan, anyway. Now all we need to do is start scraping.
Ask for Permission, or Forgiveness?
The thing is, it’s very important to make sure your bot is well-behaved. If you’re careless, it’s possible to – completely by accident – launch a denial-of-service attack on a website by simply building an ill-behaved bot.
It’s not that my humble residential connection is any threat to the mighty Metacritic, but that’s no reason to be careless.
Now, technically you’re supposed to have your bot read the robots.txt file. That’s a plain text file that tells robots how to behave. It tells the bot how often it’s allowed to make requests, and it tells the bot where it is and isn’t allowed to go.
Normally, I’d be a Good Citizen and follow the rules. My problem is that a full implementation of robots.txt compliance can be fairly complicated:
1) Read and store all of the directories and files you’re not allowed to scrape.
2) Every time you need something from the site, compare the prospective URL to the deny list to make sure you’re not inside of any of the forbidden zones or grabbing a forbidden type of file.
3) Create fallback behavior for when you can’t get to something you need.
4) Set up a test on your own webserver to prove that the system works as intended, or else all of the above was a waste of time.
It’s not hard, but it would be time-consuming and it would end up making the project a lot bigger. In a practical sense, it would be better to MANUALLY view the robots.txt file, see what it says, and then scrap the entire project if it’s too restrictive.
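For what it’s worth, steps 1 and 2 are the cheap part in some languages; Python, for instance, ships a robots.txt parser in its standard library. It’s the fallback behavior and the testing that make the project balloon. A minimal sketch of just the check (the page URL and user-agent name here are hypothetical):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.metacritic.com/robots.txt")
rp.read()  # fetch and parse the deny list

# Compare each prospective URL against the rules before scraping it.
page = "https://www.metacritic.com/game/some-game"  # hypothetical URL
if rp.can_fetch("my-bot", page):
    print("allowed")    # go ahead and download it
else:
    print("forbidden")  # step 3 -- fallback behavior -- is where the work starts
```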
Making a well-behaved bot in this case would mean writing tons of code that, if it ever got used, would mean the entire project was pointless and I shouldn’t have bothered with any of it.
But most importantly, if Metacritic says bots aren’t allowed to crawl for publicly-available information, I’m going to do it anyway because this sort of prohibition is stupid.
I realize this comes off as uncharacteristically Renegade of me. I’m usually very Lawful Good about this sort of thing, but there comes a point where I think it’s important to draw the line.
It’s Just Me, and My Robot Friend
Let’s say I’m walking down the street and I see a sign on a building that says “KEEP OUT!”
Okay. Cool. I’m going to stay out. This is someone else’s property, and they haven’t given me permission to enter. That’s fine. I’ll piss off.
Now let’s say I’m walking down the street and I see a great big sign, which is brightly lit and easily visible to anyone walking by. I read the sign, and then at the bottom it says that I’m not allowed to tell other people about the sign. Or maybe I can tell them about it, but not take a picture of it. Or perhaps they don’t want me reading it aloud or writing it down.
Some people see this as a perfectly reasonable request, but I can’t help but see it as an encroachment on my freedom. You can’t put something in my head and then insist I’m not allowed to tell other people about that thing. Your public-facing, reachable-by-Google page is a giant lit-up sign facing the street; you have no right to tell me I can’t use my camera in public, and you certainly don’t have any right to inhibit my speech by demanding I keep your sign a secret.
If the sign is facing the public street, then presumably I’m allowed to read it. (And if I’m not, then it’s your fault for putting your sign there.) But see, I’m a slow reader. So I’m going to have my friend here read the sign for me. The fact that my friend is a robot is beside the point. He’s a friend. The point is that he’s helping me to read and remember what the sign says, and if I have permission then he does too.
The information on Metacritic is visible to all. I’m just building a robot to look at it for me. It’s true that I can’t encroach on your property, but you can’t tell me what I can and can’t do with my robot.
On the OTHER Hand…
Some people have a totally different mental model of all of this. To them, going to a website is like going INSIDE someone’s building. Certainly you have the right to ban photography within your own building. You should be able to prohibit drones. Demanding people not tell others about what they see inside is a bit iffy without an NDA, but I think most people would agree you have rights over me while I’m in your home that you don’t have if we’re standing on the street.
Using this mental model, the “no bots allowed” demand makes a lot more sense.
The problem is that neither of these mental models is correct, because the internet is very much its own thing. We try to map it to familiar ideas so we can import our existing collection of moral assumptions, norms, and etiquette. That works most of the time, but sometimes the novelty of this system is inescapable.
Some people take this even further. Remember the whole controversy over deep linking? To one person, linking to an article is like telling someone else where you found the article. To another person, deep linking is somehow plagiarism / copyright infringement. I can’t really understand how anyone can come to the second conclusion, but I’m willing to bet the analogy / mental model they use to understand the internet is very different from mine.
Anyway. Maybe you agree and you see a web scraper as just an automated tool for browsing the internet. (I could, after all, visit all these hundreds of pages myself and manually enter their contents into a database.) Maybe you think I’m a scoundrel and I should keep my robot out if I see a “No robots allowed” sign. That’s fine.
In either case, I would agree that none of this is an excuse for making a poorly-behaved bot. If nothing else, I’m going to make sure my bot is very quiet and doesn’t make too many demands on the webserver.
Bots are Not Created Equal
When your browser loads a modern webpage, it doesn’t just fetch a single HTML file; it also pulls down stylesheets, scripts, images, fonts, and so on. All told, we’ve got dozens of things to download before we have the full contents of the page. Rather than waiting for things to trickle in one at a time like in the old dial-up days, my browser will start downloading several of these things at once. I’m not sure how many simultaneous downloads are normal. The last time I paid attention to this stuff was in the mid-90s, when the typical number of simultaneous downloads was, like, five or something. I’m not sure how the technology works today, but I’d be surprised if that number hadn’t gone up.
The average size of a webpage has gone up quickly over the years. Different sites give different numbers, but everyone seems to agree that it’s at least 2 megabytes to download the average webpage here in 2020. That is, the size of the download to check the front page of your average news site will be larger than the entire install of DOOM in 1993. That’s your average page, and when you’re talking about a corporate front page with an image slideshow, I’d be very surprised if it came in under 10MB.
This is obviously MASSIVELY bloated, considering you’re just here to read some text. But keeping things small and efficient is expensive and time consuming and users seem to have grown accustomed to waiting a few seconds for a site to load on their phone. The public doesn’t care, so nobody’s willing to spend the money to make this stuff smaller. This also means that every time mobile networks get faster, pages will grow to consume more bandwidth until you’re back up to 5 seconds of loading time again. This reminds me of the 90s, when the latest version of Windows was guaranteed to eat up all the new RAM you just bought.
My point is that a typical scraper bot doesn’t give a damn about any of the extra content. It downloads the raw HTML, and it doesn’t download any of the required CSS, images, scripts, or other nonsense. To the bot, the site is only a hundred or so kilobytes – almost nothing. (I just looked, and my site seems to have about 50KB of overhead. That is, a post with no content and no comments would still be a 50KB HTML file. That sounds big, but like the big corporations, I’m too busy / lazy to investigate further.)
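You can see this for yourself with a single request. This sketch (the URL is a placeholder) grabs just the HTML document and reports its size:

```python
import requests  # pip install requests

# A bot's-eye view of a page: one GET for the raw HTML, and none of
# the CSS, images, scripts, or fonts a browser would fetch afterward.
html = requests.get("https://example.com/some-article").text  # placeholder URL
print(f"{len(html) / 1024:.0f} KB of HTML")  # tens of KB, vs. megabytes in a browser
```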
So bots are harmless, right?
The problem is that while bots don’t typically download all the bloat, they can read thousands of times faster than a human. You might click a link every minute or so, but the bot will happily devour a hundred pages a second if you allow it. Like, a hundred 100KB files? That’s not even a big deal. Your bot could do that on one core while you’re playing Doom Eternal on the rest.
So the first step to making a well-behaved bot is making a bot that doesn’t get too greedy. Amazingly, this is one of those rare instances where the lazy thing is the optimal thing.
What you’re supposed to do is request a webpage and have it download in a background thread. Then your main program keeps running. Maybe it even kicks off more threads. Then your program comes back around and checks to see if the downloads are complete. If done properly, your bot can keep many plates spinning at the same time, because multitasking is easy for computers.
That’s a lot of work. It’s also possible to NOT put the download in a background thread. You can, if you want, start the download in your main thread. If you do that, then your program will sit there effectively locked up until the download completes or fails. If you do it this way, then you’ll never have more than one active download going at a time. If you put a cooldown timer on it, then you can make sure your bot will never hit the server hard enough for anyone to care.
I put a one-second cooldown on the bot, meaning my bot will never load more than one page a second. In practice, it’s more like a page every other second because there’s a little overhead to starting each download.
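In code, the lazy approach really is just a blocking fetch followed by a nap. A minimal Python sketch of the idea (not the bot’s actual code):

```python
import time
import requests  # pip install requests

def polite_get(url, cooldown=1.0):
    """Blocking fetch: the program sits here until the download completes
    (or fails), then waits out the cooldown before returning. One request
    at a time, never more than one page per second."""
    response = requests.get(url, timeout=30)
    time.sleep(cooldown)
    return response.text
```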
I aim the bot at my own site just to make sure I didn’t do something really stupid. Once it successfully downloads a few things and I confirm it’s working properly, I aim it at Metacritic and begin harvesting data.
I haven’t read the robots.txt file from Metacritic, so I don’t know if my bot is welcome here, but the bot is using a ridiculously small amount of resources.
Next time I’ll talk about parsing these pages and combining their information with stuff from other sites.