What The Heck is a Fractal and How Does It Apply to Games?

By Shamus
on Apr 21, 2015
Filed under:
Column

This week I’m changing things up. Instead of pontificating on the state of AAA games, I’m doing a faux-educational bit on fractals and programming. This will be pretty remedial if you already know about fractals, but some people don’t and I’m hoping they’ll find it interesting.

Let’s talk about the Mandelbrot set:


Link (YouTube)

The Mandelbrot set actually kind of freaks me out. I find it kind of scary and hypnotic. As the image zooms in, I keep waiting for it to reach the “end”. I know there isn’t one and there can’t be one, but I keep waiting for it anyway. I think this video gives a much better sense of “infinity” than (say) looking up into the night sky and talking about the white dots in terms of millions of light-years.

In the column I said the math for the Mandelbrot set can fit on an index card. It’s a little larger (but not much) if you express it in computer code instead:

For each pixel (Px, Py) on the screen, do:
{
  x0 = scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.5, 1))
  y0 = scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1, 1))
  x = 0.0
  y = 0.0
  iteration = 0
  max_iteration = 1000
  while ( x*x + y*y < 2*2  AND  iteration < max_iteration )
  {
    xtemp = x*x - y*y + x0
    y = 2*x*y + y0
    x = xtemp
    iteration = iteration + 1
  }
  color = palette[iteration]
  plot(Px, Py, color)
}

So you have this grid of x, y values that correspond to the pixels in your final image. You stick them through this function, and then take the answer and feed it back into the function. Repeat over and over.

You’ll get one of two outcomes:

1) The values will “run away”, getting larger and larger as you go. Like multiplying 1.1 by itself over and over, you end up with a value that climbs.
2) The values grind along, or trend towards zero. Like multiplying 1 by itself.
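To make the two outcomes concrete, here’s a quick Python sketch (my own illustration, not from the column) that iterates z → z² + c for a point that runs away and one that grinds along:

```python
def iterate(c, steps=20):
    """Iterate z -> z*z + c starting from z = 0 and record |z| at each step."""
    z = 0
    path = []
    for _ in range(steps):
        z = z * z + c
        path.append(abs(z))
        if abs(z) > 2:  # beyond this radius the value is guaranteed to run away
            break
    return path

print(iterate(1))   # runs away: [1, 2, 5] -- stops once |z| passes 2
print(iterate(-1))  # grinds along forever: [1, 0, 1, 0, ...]
```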

So you watch for your number to go above some value. Then you look at how many times you cranked through the function to get there. You use that count to determine the pixel color. Great. Now you’re done with one pixel of the final image.
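Here’s that whole process as runnable Python — a sketch of the pseudocode above, with ASCII characters standing in for the color palette and a small iteration cap so it finishes quickly:

```python
def mandelbrot_count(x0, y0, max_iteration=1000):
    """How many times the loop cranks before the point runs away."""
    x = y = 0.0
    iteration = 0
    while x * x + y * y < 2 * 2 and iteration < max_iteration:
        x, y = x * x - y * y + x0, 2 * x * y + y0
        iteration += 1
    return iteration

WIDTH, HEIGHT, MAX_ITER = 60, 24, 50
PALETTE = " .:-=+*#%@"  # ten characters standing in for colors

for py in range(HEIGHT):
    row = ""
    for px in range(WIDTH):
        # Scale pixel coordinates into the Mandelbrot ranges (-2.5..1, -1..1).
        x0 = -2.5 + px / WIDTH * 3.5
        y0 = -1.0 + py / HEIGHT * 2.0
        n = mandelbrot_count(x0, y0, MAX_ITER)
        row += PALETTE[min(n * len(PALETTE) // (MAX_ITER + 1), len(PALETTE) - 1)]
    print(row)
```

Points that never escape (inside the set) hit the iteration cap; everything else gets colored by how fast it ran away.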

When you “zoom in”, you’re looking at smaller and smaller ranges of numbers. At the start of the video above, you’re looking at numbers between -2.5 and +1. But at some point you’re looking at numbers between (say) 0.1 and 0.2. Eventually you’ll be dealing with stuff like 0.10000000001 through 0.10000000009. At some point you’ll have values so precise that they won’t fit inside typical variables offered by most computer languages. And then you have to write your own low-level system for doing math on numbers of arbitrary precision. I’ve never done any work like that, but I imagine you’ll quickly run into some serious speed concerns. The video above apparently took 4 weeks to produce. You can find other videos on YouTube[1] that go super deep, so that by the end the values they’re tracking are a thousand digits long.
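You don’t have to build that low-level math system from scratch just to experiment, though. As an illustration (my own example, not from the column), Python’s standard decimal module can carry far more digits than a float. This sketch shows two nearby points that an ordinary 64-bit float can’t even tell apart:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # keep 50 significant digits (a float manages ~16)

def escape_count(x0, y0, max_iteration=1000):
    """The same escape-time loop, but with 50-digit decimals instead of floats."""
    x = y = Decimal(0)
    four = Decimal(4)
    iteration = 0
    while x * x + y * y < four and iteration < max_iteration:
        x, y = x * x - y * y + x0, 2 * x * y + y0
        iteration += 1
    return iteration

# Two nearby points that a 64-bit float collapses into the same value:
a = Decimal("0.250000000000000000000000001")
b = Decimal("0.250000000000000000000000002")
print(float(a) == float(b))  # True  -- as floats they are identical
print(a == b)                # False -- as decimals they stay distinct
print(escape_count(Decimal(-1), Decimal(0)))  # still works, just slower than floats
```

The speed cost is exactly the concern mentioned above: every multiply is now a multi-digit software operation instead of one hardware instruction.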


Footnotes:

[1] Like this one.




  1. krellen says:

    Does the presence of this post mean you were successful in reinstalling your SSD, or are cat pics required?

    • Shamus says:

      Good news: SSD isn’t dying.
      Bad news: My main HD (the largest, and the one with the boot record on it) is.
      Good news: Was able to get a replacement. (Yes! Thanks to you-know-who.)
      Bad news: Replacement was a dud. (A bad Western Digital drive! Unheard of in my experience.)
      Good news: I got Windows re-installed, and this install isn’t tied to a stupid annoying useless frustrating MS account, so no password, no unwanted cloud settings, and lots of other annoyances are gone or smoothed out. Also, the boot record is now on the SSD, so booting is WAY faster.
      Bad news: Sort of out of HD space, no room for games.
      Good news: Shipped the dud back today. Ordering new HD this evening.
      Bad news: Still have a good day or so of downloads, updates, and re-installs to look forward to.
      Good news: I’m in a creative downturn right now, so this lets me blame my lack of productivity on my computer instead of taking responsibility for it myself.

      • Shamus says:

        Addendum: Anyone have any suggestions/warnings for a good HD? I’m shopping in the 4TB range. For my purposes, reliability trumps speed.

          • Florian the Mediocre says:

            Damn, you beat me to it.

            A bit older and with a smaller sample size, but you might want to look at this, too:

            http://www.tomshardware.com/reviews/hdd-reliability-storelab,2681.html

            • krellen says:

              Those two articles make a pretty compelling case that Hitachi/HGST is the way to go if you’re looking for reliability.

              • Rich says:

                Hitachi has always been my HD goto. But I have no prob with recommending WD. Never had an issue with a WD drive in my own or client systems. Like Shamus said, a dodgy WD out of the box is surprising.

                • guy says:

                  Electronics manufacturing is never entirely reliable, so all the companies occasionally have faulty components get past quality control. In general, if you send a defective component back and get a new one, it is highly likely the replacement will work.

                  • Rich says:

                    Yep. That’s what I said below. :-)

                  • krellen says:

                    I do hardware warranty repairs for a living. I have occasionally had two parts in a row be bad – but I don’t think I have ever had two parts in a row be bad in the exact same way.

                  • Humanoid says:

                    Systematic design flaws aside, sure. But before Hitachi bought IBM’s hard drive business, there was a time where you’d be more or less correct in assuming a 100% failure rate with their glass platter drives. They weren’t nicknamed DeathStars for no reason. Hitachi have done a good job in rehabilitating the reputation of the DeskStar, such that it’s now a byword for reliability. Seagate also had some systematic bricking just a few years ago, and that reputation still sticks a bit today.

                    That said, it’s been a long time since my last hard drive failure. I think the last outright failure was a 2GB Fujitsu back in the 90s, maybe replaced a couple after that when they started to sound unhealthy, so they may or may not have failed within their useful life.

                    __________

                    More on topic, I’m a proponent of going for a ‘slower’ 5400rpm drive for desktops, primarily for noise reasons. Not only is the practical speed difference mostly negligible, the presence of an SSD means that difference is rarely even noticed. For people needing 2TB capacity or less I’d recommend buying a 2.5″ “laptop” hard drive instead because they’re even quieter.

                    I’ve since transitioned out of traditional spindle drives for my desktop altogether, which I can get away with because I put all my media type content on a NAS, and 1TB worth of SSD is plenty for what’s left over, i.e. games.

                    Um, so digression aside, I’d go with a Hitachi CoolSpin (their only non-7200rpm line) or WD Red 3.5″ for your purposes. WD Green is cheaper and supposedly is mechanically the same as the Red, with a different firmware and shorter warranty, but I haven’t owned one for some years so I’m not exactly sure where it stands now.

                    • Abnaxis says:

                      I find it odd going from “I’m a proponent of 5400 rpm drives for desktops” to “I have 1 TB of SSD in my machine”

                      I mean, sure, if you can blow $400 on an SSD that can hold any software you want to load fast, you’re golden. The rest of us could use that faster platter for the stuff that’s not cached on the SSD.

                    • Humanoid says:

                      Yeah, probably should have been clearer. I started out with a fairly standard setup of a 250GB SSD and a large spindle drive. But a combination of moving the bulk of my media to a NAS and buying extra SSDs meant the spindle drive was seeing increasingly little use, so I ended up removing it and putting it in the NAS. For a single desktop household that would be less justifiable, and I maintain the SSD + 5400rpm HDD would be the preferred setup.

                    • Wide And Nerdy says:

                      I was seriously thinking of getting a 1TB SSD. At first I thought “Well surely 250 is enough” but no, I keep swapping out games and not wanting to use my other drives for games. I’ve gotten too used to the glories of fast loading. Takes me back to the Nintendo days. In my case, drive failure wouldn’t be devastating as I can reload everything.

                    • The benefit to games from an SSD vs an HDD should not be major.
                      If a game thrashes the disk or does that many random reads, that is an issue with the game and not due to a slow disk.

                      For performance reasons the game data is usually stored in large files, usually in a proprietary archive format or database (or a hybrid), and the data for a level or area is usually clustered together or even sequential.

                      I’d rather add more RAM to my system than an SSD to speed up gaming.
                      Windows is great at using spare RAM (RAM that is not otherwise in use) to cache files.

                      This means that if a game reads from the same 2GB sized file a lot and you have a lot of unused memory then Windows may just decide to cache the entire 2GB file in ram.

                      I noticed this myself some time ago when reading 2GB of various sized files from a USB stick took the expected time the first time, but the second time it was almost instant. Turned out Windows had cached all the files on the USB stick in memory.

                      When a program opens a file for reading, it’s possible to tell Windows whether you intend to do sequential or random reads (there is a flag the programmer can set) and Windows will cache the file differently (it may automatically read ahead using idle CPU/disk time, for example).

                      An SSD is great, however, where a lot of short random reads and writes occur.
                      The Windows pagefile is one such file that benefits from this. Other candidates are the browser cache, the system temp folder, maybe the user profile settings folder, the registry, etc.
                      This will speed up not just Windows but any software running on the system.

                      However, when SSD prices drop enough that an SSD costs the same as an HDD of the same size, there will be little incentive to keep using HDDs.

                      But my guess is that HDDs will remain cheaper for a long time, and for storing large amounts of data they will still be the best alternative.

                      There is a new generation of SSDs on the horizon that may cause a jump in storage sizes; we’ll see what that does to price vs storage size.

                      http://www.hardocp.com/article/2013/12/10/hdd_vs_ssd_real_world_gaming_performance/
                      In terms of raw video game performance our conclusion is that upgrading to an SSD made absolutely no difference in gameplay performance.

                      https://www.youtube.com/watch?v=IuSrdwdMud8
                      HDD vs SSD load time comparison, this is much more interesting and shows what to expect.

                      The SSD looks snappier on loading, but interestingly, if there is a lot of loading (look at the ANNO game loading) there is not that much difference. That could be sloppy coding in the game’s loading routine, though, or maybe a giant file was read into memory directly.

                    • Abnaxis says:

                      This is completely unscientific, but I have a half TB worth of HDD across 2 drives in a RAID 0 array (had to buy a system board with a special chip set to handle the throughput at the time), and I am still always the first person to load in multiplayer environments on a three-going-on-four-year-old tower, with a middle of the road CPU and 16GB 1600 MHz RAM*. Lately it’s been slowing down a bit relative to my peers, but that’s because games are getting ludicrously huge lately and SSD performance wanes as the drives get full.

                      I know for a fact a few of my regular compatriots have SSD cache, and I’d guess a lot of them have SSD for primary storage. Nevertheless, I am that guy who grabs the katana in L4D2 while everyone else is still controlled by bots.

                      *Ironically, back when I built the computer, I had $600 in the HDD, but was only using integrated video on the CPU. On a gaming computer =p. I hate waiting for loading.

                    • guy says:

                      Being stored as a large file with related information stored close together doesn’t necessarily mean it actually ends up contiguous on the drive itself. The file system can arbitrarily distribute it, so you can save a 10gb file without having a contiguous block of 10gb available. Also so you can make a file with no adjacent free space larger without relocating the entire thing. Obviously, the best performance happens when it is contiguous, so the file system will usually try to do that. However, it is not universally successful.

                      The effect you’re actually seeing is that disk access speed has no impact on the speed of things that do not require disk accesses, because obviously. The entire point of loading is moving things from the disk into RAM because RAM is faster. You’ll see framerate improvements from an SSD only if the game needs to access the paging file because there isn’t enough space in RAM. As the system used in the comparison you linked has 16GB of RAM, I highly doubt it needed to.

                      I guess you might also see it in a game that does dynamic loading rather than having loading screens, but usually they’d set it up so the running game generally doesn’t have to wait for the stuff being loaded to actually arrive and it would only cause a framerate hit if you moved faster than expected and got to an area it hadn’t finished loading yet. Any overhead from sending requests to the disk would be identical on both.

              • Florian the Mediocre says:

                At least that’s what I went with the last time I needed a hard drive. (HGST, that is. My reply came a little late.)

                Afaik, there is no real reason not to put a NAS drive in a desktop pc, so if you see one that looks right, you could use that. (I had an easier time getting my hands on a HGST Deskstar NAS than a regular Deskstar.)
                (Main difference is that NAS drives have firmware that gives them somewhat less time to recover from read errors so that they don’t cause trouble in Raid, plus they usually have some feature that’s supposed to help with vibration and a stamp that says they can be on 24/7. They usually have lower power consumption, but the Deskstar one is more of a performance drive. (In the same league as the WD Black according to one review.))

                This is the 4TB version of the drive I have.

                • A NAS drive is designed to fail quicker: http://arstechnica.com/civis/viewtopic.php?p=26655341#p26655341
                  Quote:
                  Is there any specific disadvantage or danger to using NAS drives for non-NAS uses?
                  Drives optimized for RAID use typically have shorter error correction cycles (aka TLER, time-limited-error-recovery) so that they don’t time out to RAID controllers and get dropped from arrays.

                  For desktop use you actually want to allow a longer error correction cycle to give better odds of recovering your failed data from a weak/failing block.

                  So if you have a choice between the two I’d go for a non-NAS drive myself.

          • Zak McKracken says:

            Love those statistics, though from those numbers, WD would still be my favourite (but then a business has a different perspective on this than me)

            I’d either go for the Green series (slow but quiet and low-power) or the Black one (“regular”). Not sure if the Red ones would be more reliable in a PC (are made for NAS applications, whatever that implies). I usually take whatever gets more warranty.

            • Humanoid says:

              The Black series is bloody loud to the point I’d rather have the budget model Blue drives, but then the latter only comes in 500-1000GB capacities. Both are 7200rpm drives though.

              The main difference between the Green and the Red as far as I can tell is that the Green has the controversial head-parking feature (which some people think harms its reliability, though I’ve seen no actual evidence of such) and the Red has three years warranty as opposed to two. Both are 5400rpm drives. An end-user can ‘fix’ the Green firmware without too much hassle anyway to disable head parking, which then makes them for all intents and purposes a cheaper Red drive with a shorter warranty.

              Anyway, these days I have a “No PC left behind” policy, which is to say, no PC should be without an SSD. In my eyes, this then essentially obsoletes 7200rpm drives, because the meagre speed advantage they offer is now of more marginal benefit than ever.

          • Caffiene says:

            While the Backblaze tests are interesting, they’re not really that relevant to consumer purchasing decisions.

            They’re using 45-drive bays with quite different heat and vibration characteristics to a home PC, and running the drives 24/7/365.

            It tells you something about the drives in that usage, but doesn’t necessarily tell you about them in a consumer use situation.

            • Rich says:

              Well they were doing time to failure. And so it fits Shamus’ criteria of ‘reliability trumps speed’.

            • Zak McKracken says:

              The main difference there is that in a PC a hard drive will be turned off and on more often. I’ve no idea whether that has an impact on the end results.

              Of course it would be cool to have the same statistics for a large number of private consumers but I guess only the manufacturers or large stores that sell these things would have access to that data, and even then it would not be complete because not every customer returns their faulty drives (especially after warranty runs out) via the same channel.

            • Yeah. While their environment may be consistent (in temperature and use), the failure rate for the same drives at home is probably lower.

              I have to agree (so far) with those stats though. I don’t think I’ve seen a WD fail in like years and years. I did have a Seagate crap out some years ago though.

              Sadly HGST aren’t sold by any of the usual consumer-drive stores here, so availability and cost are an issue with those.

          • FYI! BackBlaze pulled (are pulling) all the 3TB Seagates out of their systems due to excessive failure rates.

            A HD will either fail in the first few weeks (under heavy use) or they will run for years and then fail. (It’s called the bathtub curve IIRC)

            Drives will all fail at one point or another. So, backup backup backup backup.

            • Alexander The 1st says:

              “Drives will all fail at one point or another. So, backup backup backup backup.”

              That’s not a great solution when your backup fails before the stuff it was backing up does.

              • So? Replace the backup drive and back up to the new one; the work drive and the archive drive should make that easy enough to do.

                Also RAID has one major issue (read one of the links in my other comments): when a RAID drive fails and is replaced, the probability of a 2nd drive failing in that array is very high.
                And you do not want that 2nd drive to fail before the new drive has been rebuilt.
                If that 2nd drive fails during rebuilding then the whole array is useless and needs to be reformatted, as you can’t restore anything.

                So the best solution is:
                Work drive (this is where you do all your work).
                Backup drive (this is where you daily backup the important stuff and weekly backup other stuff)
                Archive drive (this is where you backup stuff from the work and backup drive that you do not wish to lose)

                Myself, I have that setup more or less now. Only I’ve got a USB drive in addition, and I use a USB stick to hold a 5th copy of some small but super critical data.
                Ideally my work drive would be just that, but it’s partitioned in two and used for system and work. If I could afford it I’d maybe use a smaller/cheap SSD as a system drive.

        • Rich says:

          Ultimately I’d say go with another WD drive. What are the odds of two WD drives being duds out of the box? OTOH if you don’t want to go that route and have the cash, go Hitachi.

        • Shamus says:

          I see some drives have the NAS descriptor. As far as I can tell, this stands for “Network Attached Storage”. Specifically, I’m looking at this one:

          http://www.amazon.com/HGST-Deskstar-3-5-Inch-Internal-0S03664/dp/B00HHAJRU0/ref=lh_ni_t?ie=UTF8&psc=1&smid=ATVPDKIKX0DER

          I’m a little worried about getting something that might not work, so I just want to make sure NAS isn’t a problem. (I can’t even make sense of what it means in this context, and I don’t want to be hurt by what I don’t know.)

          • Rich says:

            A NAS drive is essentially set up for a RAID array. Unless you are doing that you don’t need it. It’s not a bad drive for a stand alone system, it just has firmware that you don’t need. Essentially, you could save a few bucks buying the same or similar drive that isn’t NAS compatible.

            • Bropocalypse says:

              I learned about RAID arrays after building my last computer. If I had the cash to throw around I’d probably set up a RAID 5 on a few small SSDs for my boot disk.

              • Humanoid says:

                The nature of SSDs is that larger SSDs are inherently faster than smaller ones. This is because unlike traditional hard drives, there is a large degree of parallelism in how an SSD operates. If you imagine a storage drive as a beer keg, doubling the size of a traditional hard drive just means doubling the size of the keg. In comparison, doubling the size of an SSD also doubles the number of taps on the keg. It’s not quite that simple of course, but it’s more or less how they work.

                The upshot is that it’s almost always best to buy just one SSD, and make it the largest one you can afford.
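The keg analogy can be put in toy-model form. The 50 MB/s per-channel figure below is invented purely for illustration — the point is only that doubling the flash also doubles the number of parallel channels:

```python
# Toy model: an SSD's total bandwidth scales with the number of parallel
# flash channels ("taps on the keg").
def read_time_seconds(gigabytes, channels, mb_per_channel=50):
    return gigabytes * 1024 / (channels * mb_per_channel)

print(read_time_seconds(100, 4))  # smaller drive, 4 channels -> 512.0 s
print(read_time_seconds(100, 8))  # double the flash, double the taps -> 256.0 s
```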

                • Zak McKracken says:

                  …except if you want to do something like actual RAID (R being “Redundant”), where at least one drive stores parity information … in that case you’d want to have separate drives.

                  I do have a RAID 5 of a bunch of smaller hard drives which were obsolete in their original place, and it does allow me to sleep a bit better, and makes at least reading a bit faster.

                  With SSDs, I think it’d make sense to have a regular backup of your SSD to a hard drive, no RAID required. There’s free software which does that for you (regular incremental backups of the entire system partition), and that should do it. No need to spend money on a second SSD, and the slower speed of the hard drive does not really come into play this way because backups run in the background.

                  • Humanoid says:

                    Yeah, RAID or no RAID, the sensible approach is to back up properly to an external drive either way. Redundancy is better than nothing, but it’d probably take a similar time to restore a full backup compared to rebuilding the array in case of failure anyway.

                  • MrGuy says:

                    It’s been awhile since I played with RAID, but just a few things:
                    * While it’s conceptually appealing to build a RAID array of 10 otherwise-obsolete small drives (say, taking 10 80Gb drives) and getting something with the storage of a modern drive (a ~720Gb array), with some additional reliability to boot, the energy efficiency is tremendously poor – that’s a LOT of spindles to keep running. I found it was literally cheaper to buy the drive I needed with a nightly backup drive than to keep RAID running.
                    * RAID is protection from ONE risk to your data – losing it all in the event of a single disk failure. Having a daily backup using reasonable software protects from TWO major risks to your data – drive failure, and accidental deletion of the data by the user. For most people, the second risk is the more common one. RAID behaves like a single disk – delete it and it’s gone.
                    * As I understand it, the reason RAID approaches (that involve striping) are faster from an I/O perspective has to do with the way spinning drives work – more write heads on more spinning platters can write the same amount of data in less time. SSDs aren’t subject to the same physical limitations – will striping actually help SSDs?
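On the striping question: the mechanics of striping itself are just dealing fixed-size chunks round-robin across the drives, so reads and writes can hit every drive at once. A toy sketch (not real RAID code — chunk sizes here are tiny for readability):

```python
def stripe(data: bytes, drives: int, chunk: int = 4):
    """Deal fixed-size chunks of data round-robin across the drives."""
    layout = [bytearray() for _ in range(drives)]
    for i in range(0, len(data), chunk):
        layout[(i // chunk) % drives] += data[i:i + chunk]
    return layout

d0, d1 = stripe(b"ABCDEFGHIJKLMNOP", drives=2)
print(bytes(d0))  # b'ABCDIJKL' -- every other chunk
print(bytes(d1))  # b'EFGHMNOP'
```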

              • RAID is not a magic bullet. You still need to make backups.

                A guy I know had the raid card fail and he had a helluva time to find a compatible RAID card.

                Another issue with RAID is that you need 4 mirrored drives for proper redundancy. (and you still need to make backups)

                For regular use you are better off with a system drive, a primary/work drive, a backup drive and an archive drive, and scheduled backups for your data.

                Also read this on RAID 5 http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/
                And this on RAID 6 http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/

                • Wide And Nerdy says:

                  Or if your files are small enough and security isn’t an overly huge concern, use an online drive for backup.

                  • Wide And Nerdy says:

                      Put it this way, if data redundancy is more important than data security, use an online backup.

                    • Online Backup is not a magic bullet either. What happens if the cloud goes down?
                      #1 You can’t access the backup.
                      #2 If they have a data loss then they may be unable to fully restore it.
                      #3 You might want to encrypt your data before you upload it to a cloud.

                      If you are going to use online backup then you can’t just use a normal cloud;
                      you need to use an online backup service (some clouds may charge extra for backup services).

          • Florian the Mediocre says:

            I have the 3TB version of this drive as the only drive in my pc right now.

            If you buy it, make sure you have or order a sata cable for it – the package contains only the drive and screws to secure it in the case.

            • Humanoid says:

              I bought the 6TB version (and a 6TB WD Red at the same time), though I actually have both in a NAS now so no real comparison I can make between them for desktop use. I’ve had Red drives in my desktop before, and have been happy with them. In comparison, the Hitachi should be a little faster but also louder, being a 7200rpm drive compared to a 5400rpm drive.

              Can’t draw any meaningful performance comparisons in my NAS either because I have them both set to “minimum performance” profiles in Nas4Free.

        • evileeyore says:

          Seriously a Western Digital crapped out? That sentence doesn’t even make sense.

          Well, aside from WestDig… I’ve never had a bad PNY, they’ve always been rock solid for me.

          • Zak McKracken says:

            Every hard drive has a failure rate, only some are higher than others.

            If you read the backblaze post (linked somewhere above), you’ll see that WD actually fails more often than some others in the short term, but if you order a new one and it doesn’t break immediately, it has the best reliability of them all, by far.

            Combining my private computers and what I have at work, I’m operating 15 WD drives (mostly Red), and one of them did not work at all from the start and was replaced by the store I bought it from. All others have never had a problem. Although I’m certain that one will fail eventually. Nothing lasts forever.

        • If the PC has room for it I would suggest two 2TB or two 3TB drives.
          That way if one fails you’ve still got backups of stuff on the other.

        • Tom says:

          I find the Aubuchon hypothesis remains a pretty reliable test when it comes to buying hard drives.

      • krellen says:

        Did you at least get pizza?

        • Zak McKracken says:

          I’d say that all very much depends on the scenario you’re working in, the type and amount of data you’re storing, whether you need a versioned backup anyways, how much money you’re willing and able to spend extra, how important (or expensive to reproduce) the data is to you and a number of real-world concerns.

          …and if you use Windows exclusively. I bet those storage spaces won’t work too well with other operating systems.

        • tmtvl says:

          Pfft, guy doesn’t even know the difference between BSD and Linux.

          • For the layman there is no difference, as BSD and Linux are both Unix derivatives.
            Also FreeNAS is based on FreeBSD and OpenZFS.

            He also nowhere says that FreeNAS is Linux (I just re-read the article). He just says that it’s popular in Linux circles.
            And it’s more likely that a Linux PC user would use a FreeNAS box than a Windows PC user would.

            I’m looking forward for ReFS to mature though, NTFS is getting long in the tooth.

            I’m also surprised by the comments on that page; people are accusing him of being paid off by Microsoft, despite the fact that he is criticizing previous Microsoft efforts (Software RAID/Storage Spaces etc.) and saying that ReFS is not ready yet to replace NTFS.

            Do Linux and BSD users get their emotions hurt so easily by just a few words on a web page?

            • Moridin says:

              Linux, the kernel, has nothing to do with Unix. And GNU, which is a large part of what people generally call Linux operating system, literally stands for GNU is not Unix. So technically speaking, calling Linux a UNIX derivative is inaccurate.

              • MrGuy says:

                This has always seemed like an overzealous claim by Linux advocates to me.

                As I understand it, the open POSIX standard was developed to describe the basic common standards for core OS functionality for the various flavors of UNIX to ensure cross-compatibility.

                Linux and BSD are both fully independent, ground-up implementations of the POSIX standard.

                You can claim they’re independent, so they’re “not Unix,” but by being POSIX compliant, they’re designed as Unix-like. You can object to the word “derivative” if you like, but the POSIX standard wasn’t written in a vacuum.

                From an outside perspective, BSD, Linux, AIX, Solaris, etc., are all very, very similar to each other, and by design will largely run each others’ software.

                • One could get pedantic. What I meant was “unix like”.

                  Amusingly, many call Mac OS X a Linux-based OS, but Mac OS X is actually descended from BSD. http://en.wikipedia.org/wiki/Unix

                  But to clarify: by “derivative” I meant the same CLI commands, the same file structure (the bin, usr, var, etc. folders), the same file permissions/flags, and, as MrGuy said, all the POSIX stuff.

                  Some might say the GUI defines an OS (as with Windows vs. the others), but in the case of Unix or Linux it does not: you can usually choose between two or three GUIs, so clearly there is no unifying GUI, which means what defines the OS must lie lower (CLI, POSIX, etc.).

  2. Dev Null says:

    I discovered Mandelbrots back in college, with my roommate Eric. We made his PC draw the set, and pre-calculate the values so that we could pick any of the pixels on the screen and blow it up – once – to a full screen. This took a little while to run.

    So we took our shiny new accounts on the university’s mainframe, dragged our code over there, and told it to pre-calculate values so we could do the same zoom a few more times. I don’t remember exactly how many, but I don’t think it was ludicrously large…

    The next morning we received a polite email from the admin asking us to please never do that again.

  3. Charnel Mouse says:

    A load of schools and university maths departments were busy making Menger sponges out of business cards last year. Some got to a pretty decent size.

  4. Rich says:

    If you’re interested there is a cool iOS app called Frax. I am not affiliated with them. In fact I saw a recommendation for it from Stephen Fry on his Twitter stream.

  5. mhoff12358 says:

    I got my first start in computer graphics making a 3D Sierpinski gasket in VR. Being able to walk around and stick your head in that sort of thing is amazing, and gives you a nice sense of the whole infinity thing.

    These sorts of straightforward but visually interesting problems are a great way to get into graphics, as they’re a much quicker route to feeling like you’ve made something interesting.

  6. Tizzy says:

    From the mathematical point of view, “fractal” is a slightly slippery notion that does not have a proper mathematical definition. But on the other hand, you know it when you see it.

    This can hardly be stressed enough. There are very few mathematical concepts that don’t come with a very clear definition. This is not really the kind of thing that mathematicians like to have lying around. Yet, we have to put up with it. (The only other concept I can think of that’s been hanging around in that shameful state would be chaos, which is closely related to fractals. No matter what Jurassic Park may say, it is NOT the mathematician’s way of saying that Shit Happens.)

    • Charnel Mouse says:

      My chaos theory is pretty rusty, but I’m sure there was a strict definition for something to be chaotic.

      • Tizzy says:

        Well, chaos means sensitivity to initial conditions in a deterministic system. But that’s hardly a mathematical definition; it’s more of a “I know it when I see it”. If you want to put math into it, there is more than one way to formalize this. As far as I know, there is no definitive formulation that encompasses all the definitions that have been given. Of course, it doesn’t help that you have to consider both discrete and continuous time.
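
        [Editor’s note: a standard toy illustration of “sensitivity to initial conditions”, not anything from Tizzy’s comment, is the logistic map x → r·x·(1−x) at r = 4. Two starting points one part in a billion apart track each other briefly, then end up bearing no resemblance at all:]

        ```python
        def logistic_orbit(x, r=4.0, steps=70):
            # Iterate the logistic map x -> r*x*(1-x), recording each value.
            orbit = []
            for _ in range(steps):
                x = r * x * (1.0 - x)
                orbit.append(x)
            return orbit

        a = logistic_orbit(0.2)
        b = logistic_orbit(0.2 + 1e-9)
        # early on the two orbits agree to many digits;
        # by the last few steps they have fully diverged
        ```

        The system is completely deterministic, yet any measurement error at all eventually swamps the prediction, which is the “I know it when I see it” part.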

    • droid says:

      A fractal is a thing that has
      [non-integer dimension]
      or
      [a mismatch between two possible definitions of dimension (such as the Hausdorff dimension and the topological dimension)].

      So the Mandelbrot set (being the set of black points in the above illustration) is not a fractal: it has a finite, non-zero (but hard to calculate) area, so it is a 2-D shape. But the border of the Mandelbrot set is a fractal: it has topological dimension 1 but infinite edge length.
      This would make the Mandelbrot set a pseudofractal, but that’s getting a bit pedantic; most of the time you can call a pseudofractal a fractal and everyone is fine.
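
      [Editor’s note: for exactly self-similar sets, the dimension droid mentions has a simple closed form: a set made of N copies of itself, each scaled down by a factor s, has dimension log N / log s. These are standard textbook values, sketched here for illustration:]

      ```python
      import math

      def similarity_dimension(copies, scale):
          # A set built from `copies` pieces, each a 1/scale-size copy of
          # the whole, has similarity dimension log(copies) / log(scale)
          # (which equals the Hausdorff dimension for such sets).
          return math.log(copies) / math.log(scale)

      sierpinski = similarity_dimension(3, 2)  # 3 half-size copies: ~1.585
      koch_curve = similarity_dimension(4, 3)  # 4 third-size copies: ~1.262
      ```

      Both values are non-integer, while the topological dimension of each curve is 1: exactly the mismatch droid describes.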

      I would recommend The Fractal Geometry of Nature by Mandelbrot himself, it is relatively accessible and has plenty of pretty pictures.

      • Zak McKracken says:

        I think a reasonable layman’s definition of a fractal would be “something that is made up of tiny versions of itself, which are made up of tiny versions of themselves, and so on”.

        I.e. something that looks similar, no matter the magnification.

        I bet they do use something fractal-y for the landscapes in No Man’s Sky: the land mass of a continent, a mountain, a bump in the surface…
        Coastlines are said to be fractal in nature too. If you look at a picture of Earth, then zoom in on a bit of coast and keep zooming, you’ll find more roughly similar detail at every level, down to individual small rocks. That’s not infinite at all, but it seems like something that could be modeled using a fractal with 5-10 iterations of recursive depth.
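
        [Editor’s note: the usual way to generate this kind of finite-depth fractal terrain is midpoint displacement: halve each segment, nudge the midpoint by a random amount, and nudge by less at each level. A minimal 1-D sketch; the parameter names are illustrative, not from any particular engine:]

        ```python
        import random

        def midpoint_displacement(left, right, depth, roughness=0.5, rng=None):
            # Recursively perturb the midpoint of each segment. Each level
            # uses `roughness` times the displacement of the level above,
            # giving self-similar "coastline" detail down to `depth` levels.
            rng = rng or random.Random(0)

            def recurse(a, b, amp, d):
                if d == 0:
                    return [a]
                mid = (a + b) / 2 + rng.uniform(-amp, amp)
                return (recurse(a, mid, amp * roughness, d - 1)
                        + recurse(mid, b, amp * roughness, d - 1))

            return recurse(left, right, 1.0, depth) + [right]

        profile = midpoint_displacement(0.0, 0.0, depth=8)  # 257 heights
        ```

        Eight levels of recursion already gives a convincingly jagged profile, which matches Zak’s guess that 5-10 iterations is plenty.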

    • Nevermind says:

      “A fractal is a set whose Hausdorff dimension strictly exceeds its topological dimension”. That’s an exact mathematical definition, but I no longer remember what it actually MEANS. So don’t ask (-8

  7. Wide and Nerdy says:

    I watched the entire 16 minutes of that video. That was amazing.

    • Kacky Snorgle says:

      Indeed.

      Hmmm…for very long stretches of that video, it looked like we were always zooming toward the “center” of the pattern. But every once in a while, we’d veer off and head toward a point that wasn’t the current center of symmetry (e.g., just after 5:00). Was that a deliberate choice by the makers of the video, or is that just what a steady zoom into the Mandelbrot set looks like?

      Likewise, during those symmetrical stretches, there was a trend toward increasing the order of symmetry: we’d see a pattern with twofold rotational symmetry, then fourfold, then eight, sixteen, and so forth. In general the doublings seemed to happen faster and faster, so that by the time we got to sixty-four-fold symmetry or so, the pattern was out toward the edges of the screen and getting hard to see in the video–and then some new pattern would take over, beginning at two and gradually doubling again.

      For example, one long sequence of doublings culminates in the two extremely bright blobs at 11:40. More doublings, and we get four extremely bright blobs at 14:00. More, and eight at 15:15. Sixteen at 15:55. Thirty-two at 16:20. Sixty-four at 16:33. But then the zoom slows at the end of the video, so we don’t get 128 until about 16:49, and many of them are still on screen at the end…. Until the slowdown, each period was just over half as long as the previous, so if the pace had kept up, we would have hit the limit (apparently that mini-Mandelbrot on which the video ended) around 16:50 and some new pattern would have taken over.

      I wonder whether it has to go by powers of two. If we looked elsewhere in the set, could it (say) start with threefold symmetry and double from there, or even triple from there? Just what can this set do?

      • Zak McKracken says:

        There were some three-fold spirals.

        If you kept zooming exactly into the center of one of those spirals (or any other symmetrical pattern), you’d see nothing but those three spirals until the end of the video. That’s why they put the center of the zoom just a tiny bit off: eventually you lose sight of the pattern’s center and head for a much smaller pattern’s center, and so on. I bet the video was created by zooming in in large steps, picking a slightly different location every few orders of magnitude, then producing the final footage by zooming all the way back out in a straight line. Those small changes at the zoomed-in level all fall within much less than a pixel in the wider view, so you wouldn’t see the shift anyway: you think you’re zooming straight into the center of a pattern, but really it’s a 100th of a pixel to the left.

    • Volfram says:

      All the spirals and size and detail complexity reminded me of the universe, and of Tengen Toppa Gurren Lagann.

      • Wide And Nerdy says:

        It reminded me of Children of Men. Near the end of that movie, they had a long take that was 6 minutes 18 seconds and was just breathtaking (a product of the length of the take itself and what was happening during the take which I will not spoil).

        In this case, it’s clearly something too precise to be hand-animated, and yet it also feels too detailed and varied to be purely computer-generated (though of course it is; the trick is that it took about 4 weeks for the creator’s computer to crunch out that sequence).

  8. John says:

    Pedantry, ahoy!

    Shamus, I think you meant “arbitrary precision” rather than “arbitrary complexity”. In mathematics, a “complex” number is a number that can be expressed as a + b*i, where a and b are real numbers (like 2, -1.53, or even pi) and i is the square root of -1. We sometimes call a the real component and b (or b*i) the imaginary component. Computers can handle complex numbers quite easily. Fortran actually has a default data type for complex numbers. In an object-oriented language or, really, in any language that allows programmer-defined data types, it’s nearly trivial to write your own. All you really need to do is store two real numbers. (And define some operations. And possibly overload some operators. And that’s why I said nearly trivial.)
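
    [Editor’s note: to illustrate John’s point, here is the escape-time loop from the post condensed with Python’s built-in complex type. This is a sketch of the same pseudocode, not Shamus’s actual code:]

    ```python
    def mandelbrot_iterations(c, max_iteration=1000):
        # The loop from the post: z -> z*z + c, starting from zero,
        # until |z| escapes past 2 or we hit the iteration cap.
        z = 0 + 0j
        for iteration in range(max_iteration):
            if abs(z) > 2:
                return iteration
            z = z * z + c
        return max_iteration

    mandelbrot_iterations(0 + 0j)  # never escapes: hits the cap of 1000
    mandelbrot_iterations(1 + 1j)  # escapes after just a couple of steps
    ```

    The x*x - y*y and 2*x*y terms in the pseudocode are exactly what z*z expands to when z = x + y*i, so the complex type really does all the bookkeeping for free.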

    Arbitrary precision–as in “numbers so close to 0 that the computer can’t tell the difference between them”–is indeed as you suggest: painful, awkward, and difficult. For my sins, I once encountered that problem when trying to compute certain weighted averages. My problem looked a little bit like:

    (a1*b1 + a2*b2)/(a1 + a2)

    (In practice, there were something like 10,000 elements in the sum.) My algorithm was mathematically correct, but the values for a1 and a2 turned out to be so small that the computer treated them as 0. So of course it treated a1*b1 and a2*b2 as 0 as well. The result was 0/0, a cascade of errors, and misery and despair for all concerned. I solved the problem by scaling a1 and a2 up. That is, instead of computing a1 and a2 directly, I picked a really big number s and computed s*a1 and s*a2 instead. If you pick s correctly, neither s*a1 nor s*a2 will be too small, and the scaling cancels out of the weighted average.
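
    [Editor’s note: John’s actual program isn’t shown, but the failure and the fix can be sketched with hypothetical exponential weights; the names and numbers here are illustrative, not his:]

    ```python
    import math

    def scaled_weighted_average(exponents, values):
        # Weights a_i = exp(-e_i). Computed directly they underflow to
        # exactly 0.0 for large e_i, producing the 0/0 John describes.
        # His fix: multiply every weight by one big constant s -- here
        # s = exp(min(e_i)) -- which keeps the weights in range and
        # cancels out of the final ratio.
        m = min(exponents)
        weights = [math.exp(m - e) for e in exponents]
        return sum(w * v for w, v in zip(weights, values)) / sum(weights)

    # The naive weight underflows to exactly zero...
    assert math.exp(-800.0) == 0.0
    # ...but the rescaled average still lands between the values.
    avg = scaled_weighted_average([800.0, 801.0], [1.0, 3.0])
    ```

    This is the same idea as the log-sum-exp trick used all over statistics: the absolute scale of the weights is meaningless in a ratio, so pick the scale the hardware is happiest with.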

    Now if I had only ever gotten the rest of that program to work . . .

    • Atle says:

      The problem with floats in general is that the precision changes depending on where on the scale the operation is performed. An operation done around zero will have a different precision than one done around one million.

      Depending on needs, possible solutions are converting algorithms to use integers (100 doesn’t need to represent 100; it could represent 0.0000001 or whatever maximum precision is needed), or using a mathematical library supporting arbitrary precision.
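
      [Editor’s note: Atle’s integer trick in miniature, using the standard textbook example rather than anything project-specific. Binary floats can’t represent 0.1 exactly, but if the integer 1 represents one tenth, the arithmetic is exact:]

      ```python
      # The classic float surprise: 0.1 has no exact binary representation,
      # so small errors creep in and accumulate.
      assert 0.1 + 0.2 != 0.3
      assert sum([0.1] * 10) != 1.0

      # The same sums with integers counting tenths (1 represents 0.1):
      # every value on the chosen precision grid is exact.
      assert 1 + 2 == 3              # i.e. exactly 0.3
      assert sum([1] * 10) == 10     # i.e. exactly 1.0
      ```

      The cost is that you commit to a fixed precision up front, which is exactly the trade Atle describes against a full arbitrary-precision library.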

    • Scott Schulz says:

      You’re right in this case that “precision” was the more apt word, but complex numbers are not the only use of the word “complex” in mathematics: there’s an entire field of computational complexity theory which looks at how the amount of necessary computation increases with the size of the problem being solved. One of the remaining great open questions in math is whether P = NP (whether a large class of problems (NP), whose solutions can be checked in a polynomial number of steps as a function of the size of the problem, can in fact all be solved by a polynomial-time algorithm (P)). (Shout-out to anyone else who’s done an NP-completeness proof: they’re a really cool and different kind of proof to do.)

      As far as arbitrary precision computation goes, IIRC, Mathematica (which I think served as the basis for Wolfram Alpha) is built from the ground up to perform arbitrarily precise arithmetic. It gets slow, of course, but it’s cool that it implements the idea in a general way so that you don’t even need to consider how it’s coded.

      • John says:

        Fair enough. I’ve always been math-oriented rather than computing-oriented, so I wasn’t thinking in those terms.

        I’ve used Mathematica a little, and from what I can tell it’s an amazing program. Unfortunately, as far as I know, it is not well suited to heavy-duty statistical computation. Alas.

        • Scott Schulz says:

          It would indeed be difficult to do any heavy lifting on large datasets, but, on the other hand, SAS can’t even begin to produce an indefinite integral, and you’d really have to work hard to make SAS compute an arbitrary definite integral. I love Mathematica, but I’ve had no real call to use it in the past twenty years, and I’m a professional statistician.

          • John says:

            I only had to use SAS professionally for a year or so. My background is in econometrics, and I was used to programs like Gauss and Matlab or structured programming languages like Fortran. SAS sort of blew my mind because it required me to start thinking in terms of tables rather than matrices or arrays. Once I got over that, I liked it a lot. It really is very powerful, with surprisingly good support for time-series econometrics.

  9. MichaelG says:

    Oh you youngsters are so spoiled with your fast processors. I heard about the Mandelbrot set sometime in the early 80s, and we had to use a NETWORK of a dozen PCs, with scanlines farmed out to each machine and the results returned to the viewer machine — just to get the top levels of the set!

    I also think it’s amusing that you could have explained the set to Newton, but then you’d have had to tell him “and it only requires billions of 20 digit multiplications to see it!”

    • Bryan says:

      To be fair to this hypothetical Newton, the actual set (of complex numbers) itself isn’t really the interesting part. Or, at least not to me.

      The interesting part is the edge between the numbers in the set and the numbers outside the set. In these visualizations, the numbers inside the set are always black. Meh. The colors are driven by how long it took each point to break the |z| > 2 barrier when the iteration is run on it. :-)

      The point is only in the set if the repeated iteration never breaks that barrier.

      • MichaelG says:

        I think Newton (and a lot of later mathematicians) would have thought of the real numbers as a smooth continuum. And they probably would have said that an iterative function couldn’t change that: the results would also be smoothly varying. The idea that two neighboring points separated by epsilon distance could have arbitrarily different results in the function would have seemed very odd. The idea that the function could result in pretty pictures of infinite depth would have been even more bizarre.

        Newton could have done all the arithmetic, but would never have expected the result.

    • parkenf says:

      I wrote a Mandelbrot display tool on a Commodore 64 and printed the results to the MPS-801 dot matrix printer. I managed the top level and at least one level of zoom (maybe 2). I can’t recall if I did it in assembly or BASIC, but if it had been assembly it would have been with my own custom float multiplication routine, as I never worked out how to do that properly. I recall it being pretty cool.

  10. evileeyore says:

    “We’re here to talk about game development, not mathematics or linguistics.”

    Well, poo. I really like when you talk linguistics. By which I mean coding, which is just really semi-logical linguistics.

  11. General Karthos says:

    Can’t believe this hasn’t been mentioned, especially as you have a retweet of a Jonathan Coulton tweet in your twitter feed there.

    https://www.youtube.com/watch?v=ZDU40eUcTj0

    Jonathan Coulton’s “Mandelbrot Set”.

    Incidentally, Mandelbrot is in heaven now.

  12. Grudgeal says:

    That video really feels like you should have some BGM while looking at it. I’m thinking some Yellow Magic Orchestra or Kraftwerk. For some reason it reminds me of a techno music video I saw on MTV 15 years ago about zooming in and out of holiday snaps, but I can’t for the life of me remember what the song was called.

  13. Jakob says:

    Fractals like the Mandelbrot set and the Julia sets have always freaked me out, especially once you start zooming in.

    I do however want to point out that what Shamus is doing in his code is not exactly the correct way to determine whether a point is in the Mandelbrot set. In short: he is not looping forever. I encourage those interested to read Wikipedia’s article on the Mandelbrot set. Shamus won’t catch the points that only escape after 1,001 or more passes through the function. It is of course necessary to stop the loop at some point, since otherwise, for any point actually in the set, we’d be stuck in an endless loop.

    As the image shows, and most likely the video too, points that are close together have widely different behavior. One can escape early, the other later or never, and no matter how far we zoom or how close we pick the points, they never seem to group up nicely with regards to when they escape. This seems to suggest that there isn’t really any nice way to make a program that absolutely will be able to tell whether a point is in the Mandelbrot set or not.
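
    [Editor’s note: Jakob’s point can be checked numerically. The sample points near c = -0.75 below are an editorial choice, not his: approaching the set along the pinch at -0.75, the escape count keeps growing without any bound you could read off from the distance:]

    ```python
    def escape_count(c, limit=100000):
        # Count iterations of z -> z*z + c before |z| > 2, capped at `limit`.
        z = 0j
        for n in range(limit):
            if abs(z) > 2:
                return n
            z = z * z + c
        return limit

    # Points a hair apart near the boundary escape at wildly different
    # times; the closer to the set, the longer it takes.
    fast = escape_count(complex(-0.75, 0.1))    # a few dozen iterations
    slow = escape_count(complex(-0.75, 0.001))  # thousands of iterations
    ```

    Famously, for c = -0.75 + εi the escape count approaches π/ε as ε shrinks, which gives a concrete sense of “no upper limit” as the two points get closer.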

    So in short, the Mandelbrot set is an insanely complex beast, despite how benign it may seem first time you encounter the definition.

    Oh, and another fractal I personally find really cool is the Koch snowflake: a figure with a finite area but an infinite perimeter. A similar object is Gabriel’s Horn, which has infinite surface area but finite volume. Or perhaps better described: “it can be filled by a bucket of paint, but we can never coat the surface”.

    I better stop, I am getting way off topic here.

    • Atle says:

      It’s closely related to chaotic systems, where an infinitesimally small difference in initial conditions (here, x and y) can lead to an arbitrarily large difference in outcome.

      That’s where the infinite detail comes from.

      • Jakob says:

        I can see that to some extent, though I feel that chaotic systems still behave, for lack of a better term, nicely. Meaning they are continuous in time, but small differences in the initial conditions lead to wide fluctuations in the end result. They are chaotic more because of our inability to measure precisely enough than as an inherent part of their behavior.

        I think of it as something like this: If you pick two points within a certain distance, you are still able to give an area where they approximately both will end up (even if the area is very large).

        But with the Mandelbrot set it seems that you can’t even get that much. No matter how small a distance you pick, you will not be able to get an upper limit on how much faster one point escapes compared to the other.

        To put my point another way: even in a chaotic system, as you zoom in, things behave more and more similarly (though you need greater zoom than for most things). In a fractal, as you zoom in, you never get that; the “chaos” keeps repeating.

        To end this overly long post, I will say that I studied algebra at university, not any form of analysis, meaning that it is quite possible that I have misunderstood things, so I encourage people not to take this post as fact, but merely as some musing from a guy that has had a very short introduction on the subjects.

  14. Nick Powell says:

    The Escapist seems to be down right now.

  15. asterismW says:

    What’s even cooler about the Chaos game (the procedure you described to create a Sierpinski triangle) is that you don’t have to start at one of the 3 points on your paper. You can pick any point as your starting point, inside or outside the triangle, and the dots will still converge to create a Sierpinski triangle. There may be a few outliers (depending on your starting point), but that’s it. I find that fascinating.

    Another interesting fact is that this doesn’t work with squares. Playing the game with four points, you just fill up the square with dots.
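
    [Editor’s note: a sketch of the chaos game asterismW describes; the vertex coordinates, seed, and step count here are editorial choices. Start anywhere, even far outside the triangle, and after a short burn-in every point lands on the Sierpinski triangle:]

    ```python
    import random

    def chaos_game(steps=5000, seed=1, start=(10.0, 10.0)):
        # Repeatedly jump halfway toward a randomly chosen vertex of the
        # triangle. Each jump halves the distance to the attractor, so
        # even a faraway starting point is pulled in within a few dozen
        # steps; the outliers asterismW mentions are the burn-in points.
        vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
        rng = random.Random(seed)
        x, y = start
        points = []
        for _ in range(steps):
            vx, vy = rng.choice(vertices)
            x, y = (x + vx) / 2, (y + vy) / 2
            points.append((x, y))
        return points

    pts = chaos_game()
    # After ~100 steps, every point lies inside the triangle's bounding
    # box, no matter how far away the starting point was.
    ```

    Run the same loop with the four corners of a square and, as asterismW says, the points just fill the square instead of leaving holes.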

  16. HiEv says:

    If you want a good bit of hysterical/historical software reference to tie this into games, look up “fractal compression” some time.

    It was a rather awful compression method from back in the 90s that tried to compete with JPEG and failed miserably, but a number of people were swindled out of their money in the process by Iterated Systems, the company promoting this go-nowhere technology. Most infamously it’s known for the “graduate student algorithm”, which has been described as:

    1.) Acquire a graduate student.
    2.) Give the student a picture and a room with a graphics workstation.
    3.) Lock the door.
    4.) Wait until the student has reverse engineered the picture.
    5.) Open the door.

    The software/games tie-in comes from the fact that Microsoft’s “Encarta” used to use fractal image compression, and “Falcon Gold” and “Star Trek: The Next Generation: A Final Unity” used fractal video compression. This compression method would take many hours just to compress a single minute of video back then. Two different firms claim to own the exclusive license to Iterated Systems’ technology now, but neither has produced a working product from it.

    Amusingly, or possibly as a tip of the hat to all of that nonsense, the open source library for fractal image compression was called “Fiasco”.

    My old boss once tried to warn a bunch of investors, who thought that fractal image compression was going to be the next big thing, that it was actually all a big scam. …Wow. You should have seen it. Almost all of them went into conspiracy mode, claiming he was afraid of it, or trying to damage their reputation, or promoting his own stuff, etc. It was pretty amusing in a sad sort of way. In any case, my old boss still runs his own company and Iterated Systems is just an amusing footnote in history now.

    Oh, and there was also a hoax “fractal compression” program called OWS. If you used it, it would look like it could compress any file down to a few kilobytes. However, it didn’t actually compress the file; it just kept a link to the original. So it would appear to work until you deleted the original file. I’m not sure how many people fell for that one and deleted their files, thinking them safely stored in a compressed archive.
