Unity Week #7: Why Would You Want to do That?

By Shamus Posted Tuesday May 22, 2018

Filed under: Programming 143 comments

“Huh. I’m keeping an awful lot of these widget objects in memory. I need them while generating the scene, and I occasionally need them later, but once the game is running they’re mostly just taking up memory. I wonder if it would be better to keep them around all the time, or throw them away once I’m done making the scene and re-create them if they’re needed later?”

Let’s assume, for the sake of argument, that these objects have a non-trivial size and also require a non-trivial bit of processing power to create. We create lots of them, we use most of them at startup, and then as the program runs we occasionally need a few of them. (But we can’t predict which ones ahead of time.)

This is a classic memory vs. CPU performance problem. If we had infinite memory, then there would never be a reason to get rid of these temporary objects. If we had infinite processing power and could re-create the objects for free, then there would be no reason to keep them around. But in this universe both of these resources are finite, so we need to study the problem to know what the right thing to do is.

So I’m writing a program in C# and I need to know how big something is in memory. In C++ I would just call sizeof(thing) and it would tell me how many bytes of memory thing is using[1]. This is a trivial operation, which means in C# it’s probably going to be a monumental pain in the ass. I do the usual Google search and as I feared I’m dropped directly into forum hell:

“You don’t need to know that.”

“Why would you want to do that?”

“That’s not possible in C#, and if you’re asking this question it means you’re doing something wrong.”

Here’s my favorite:

Short answer:
You dont.
Long answer:
You can only do that if you type has a fixed layout and has no managed members. Structs are fixed by default. Classes can attributed to have a fixed layout.
(I am not showing how, as you really do not need it. It is only important when doing interop.)

Sure, *I* know how to do it, but I’m not going to show you because you’re a lame scrub and you don’t deserve my wisdom.

Here’s another:

You may have a conceptual problem here – the size of that class is not really supposed to be exposed to you, the developer in this case.

The underlying mechanisms used to store it are compiler details – also the compiler is optimising – which means that it is perfectly at liberty to change the sizes of certain objects if it provides a performance upgrade.
[…]
Sizes of classes is a C++ concept, and even then is flaky – sizeof() in C++ can also measure the size of the virtual function table pointer IF the compiler in question implements one in a specific way.

If you really need to know more about the size of this object internally, can you let us know why? This may make things clearer :)

Absurd dogma. “You can’t have perfect knowledge of memory usage so therefore any attempt to know about memory usage is a waste of time.”

My contempt for these sorts of people is boundless. I’m sorry, forum idiots, but we live in a universe of finite resources and so sometimes we need to ask important questions like, “How many resources do we have?” and “Can we afford to use more?” If you’re going to reply to questions, then at the bare minimum you should answer the question. If you don’t know the answer, then don’t post. Please stop polluting the search results with your arrogance and stupidity.

It turns out it is supposedly possible to get the size of something in memory. It’s just needlessly convoluted. I find those pages in the docs and try it out. When I run my program it throws an exception:

ArgumentException: Type Widget cannot be marshaled as an unmanaged structure.

Searching for an explanation of what I’m doing wrong just sends me back into the nest of morons asking “Why would you do that?”, and I’ve read enough of their nonsense for one day.
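
For reference, the “needlessly convoluted” route in the docs is presumably the marshaling one, and the exception above is what you get when you aim it at an ordinary class with managed members. Here’s a minimal sketch that reproduces the failure; the Widget below is a made-up stand-in, not my actual class:

using System;
using System.Runtime.InteropServices;

class Widget              // stand-in: any class holding a managed member such as a string
{
    public string name;
    public int cost;
}

class MarshalSizeDemo
{
    static void Main()
    {
        // Throws ArgumentException: "Type Widget cannot be marshaled as an
        // unmanaged structure" because the class has auto layout and a managed field.
        int bytes = Marshal.SizeOf(typeof(Widget));
        Console.WriteLine(bytes);
    }
}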

So I give up and just do it manually. I look at each variable in my class:

public class SpaceMarine : Meathead
{
  string name;
  string profane_nickname;
  const bool is_dude = true;
  float beard_stubble;
  double burlyness;
  int bullets;
  int grenades;
  int hitpoints;
  int armor;
  long aliens_killed_with_firearms;
  long aliens_killed_with_melee_weapons;
  long aliens_killed_with_bare_hands;
  long aliens_killed_with_shouting;
  int dead_wives;
  bool cigar;
};

Then I look up the type of each variable to get its size, just in case it’s different from what I’m used to in C++. Then I use my primate hands to type these numeric values into a calculator so I can know what the total is. And the whole time I’m gritting my teeth at this completely stupid task because this is exactly the sort of dumb busywork we invented computers for. And the only reason I can’t use a computer to solve this problem is that the language I’m using is trying to help me to death.

Okay, so I add up all the values. The above example[2] comes to 61 bytes, plus whatever space the name / nickname strings take up. That’s trivial, and I could safely keep a few thousand of these around without worrying about memory overhead.

Yes, this SpaceMarine example is small and simple enough that your average coder could probably glance at the definition and eyeball it. They can take a guess at the memory usage and get within a factor of 2 without needing to whip out the calculator. But in practice your data structures will be more complex than this, with classes containing other classes. The SpaceMarine might contain a WeaponClass which contains a WeaponModel which contains a WeaponMod which contains a Damage type, and it’s pretty reasonable to not want to have to scan through thirty variables across five different source files before you know how big something is. In a less obstructionist language I could get this information in a line or two of code, and here it turns into a little bit of middle school math homework.
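
For what it’s worth, reflection can at least be made to do the adding-up for you. Here’s a rough sketch of that sort of shallow, field-by-field tally — a ballpark, not a measurement, since it ignores padding, object headers, and whatever the referenced objects (like those strings) are holding:

using System;
using System.Reflection;

static class SizeGuesser
{
    // Shallow estimate: sum the declared sizes of value-type fields and count
    // each reference field as a single pointer. Padding, object headers, and
    // the contents of referenced objects are deliberately ignored.
    public static int RoughSize(Type t)
    {
        int total = 0;
        var flags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;
        foreach (FieldInfo f in t.GetFields(flags))
        {
            Type ft = f.FieldType;
            if (ft == typeof(bool) || ft == typeof(byte) || ft == typeof(sbyte)) total += 1;
            else if (ft == typeof(char) || ft == typeof(short) || ft == typeof(ushort)) total += 2;
            else if (ft == typeof(int) || ft == typeof(uint) || ft == typeof(float)) total += 4;
            else if (ft == typeof(long) || ft == typeof(ulong) || ft == typeof(double)) total += 8;
            else if (!ft.IsValueType) total += IntPtr.Size;   // reference: just count the pointer
            else total += RoughSize(ft);                      // nested struct: recurse
        }
        return total;
    }
}

// Usage: Console.WriteLine(SizeGuesser.RoughSize(typeof(SpaceMarine)));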

Note that I’m not even trying to do anything with this information in my program. It’s not like I’m asking C# for direct memory access or anything “crazy” like that. For the most part I applaud C# and its attempts to save the programmer from having to do things, but the moment the language attempts to save you from knowing things then something has gone horribly wrong.

On The Other Hand…

Despite my griping, I’m really warming up to Unity / C#. For every annoyance it puts on me, it relieves a couple of longstanding C++ headaches. It’s great being able to write code without needing to juggle stupid header files. It’s a massive relief to get a project rolling without messing around with libraries and include paths. And for the 90% of the project where I don’t care about memory usage, it’s really nice to not have to worry about memory usage.

It’s great having a platform with more stuff built-in, because it makes everything else more modular. I mean, this is what it’s like in C++:

“I’d like to add a console window to this project. Just something simple to draw scrolling text on screen. No problem, I wrote one of those last year. Oh, but that code uses my 3D vectors to draw the window. Ok, I’ll include those vectors. But that code is tied to the 4×4 matrix code because those two types interact. That 4×4 Matrix code references my camera framework, the camera framework depends on my RGB color code, and the RGB color code uses my string parser for turning HTML-style #RRGGBB strings into color values.”

All I wanted was to draw some rectangles on screen and now I’m importing 5,000 lines of code.

Little knot here.

You can try to make your stuff as self-contained as possible, but that’s pretty hard to do when you’re writing code for basic building blocks that get used everywhere.

In Unity, all of that crap comes built-in and you don’t have to go on some crazy misadventure through dependency hell. Everything “just works”. It’s amazing and it makes programming fun again. Or at least, it’s fun when I don’t have to go dumpster diving in internet forums for answers.

Next week I’ll talk about a couple of hard jobs that Unity made easy.

 

Footnotes:

[1] Yes, you have to make sure you’re getting the size of things and not pointers, which means you might need to step down through the object hierarchy. The point is, this is easy to do.

[2] Which I made up for the purposes of demonstration.




143 thoughts on “Unity Week #7: Why Would You Want to do That?”

  1. Zak McKracken says:

    I wonder sometimes if it’s just the C#/Unity community (plus maybe one or two others) where the “you don’t want that” answer is so frequent, because I haven’t really seen that with Python.
    In Python, it’s pretty hard to know the memory footprint of some object, not just because of overhead and things that might be linked in, but also because a bunch of objects might be referring to the same area in memory while also being referred to by other objects themselves, so … it gets complicated, and adding up the memory used by each object may give a total way larger than the memory actually used by Python. By orders of magnitude sometimes.

    Yet, a very quick search immediately finds this:
    The first answer explains why it’s hard to do and what the current best method is, the second explains the “simple” way which needs to be used with caution.
    Bam.

    Come to Python, is what I’m saying. There’s nice people here. And silly jokes.

      1. Paul Spooner says:

        Yes! Give Python a try! You can program in the Blender Game Environment!

        1. OldOak says:

          You can program in the Blender Game Environment!

          That might become a problem.
          The Blender game engine has already been removed from what’s going to be the next iteration of Blender (version 2.8).
          How about trying a C#/GDscript (python-like language) approach in GoDot engine instead? Best of both worlds!

          1. Paul Spooner says:

            Right, but 2.8 is moving toward real-time rendering, so the BGE won’t be necessary any more. The idea is to do app dev (game, tool, etc) directly in the Blender environment.

          2. Paul Spooner says:

            Right, but 2.8 is moving toward real-time rendering, so the BGE won’t be necessary any more. The idea is to do app dev (game, tool, etc) directly in the Blender environment.
            But it’s kinda a joke suggestion anyway. Shamus had a traumatic experience with Blender.

            1. OldOak says:

              Shamus had a traumatic experience with Blender.

              Wow, new “aged news” :-D
              Ok, if his trauma didn’t heal, will let it be.

          3. I second godot engine, it took me weeks before I even had to consider asking a question in their chat channels. (and then I figured it out before I finished formulating the question)

            This truly is the engine we’ve been waiting for…

            1. Also, open source means you can just look it up in the source code for the engine.. like ..

              working with GridMap tiles you have to set the ‘orientation’

              which is just an int… no docs

              so like, wtf do I set it to?!?!

              🎝 search search search the godot source code 🎜

              grid_map_editor_plugin.cpp
              Basis orm;
              orm.set_orthogonal_index(orientation);
              orm = rot * orm;

              item.pos = selection.begin + rel;
              item.item = itm;
              item.rot = orm.get_orthogonal_index();

              http://docs.godotengine.org/en/3.0/classes/class_basis.html#class-basis-get-orthogonal-index

    1. Daemian Lucifer says:

      I dunno…Dynamic typing?Whitespace?

      1. Paul Spooner says:

        assert(‘you already use both whitespace and braces. If you really want the braces, Python supports paren, curly, and square brace block control. It just doesn’t force you to use it.’)

        1. Echo Tango says:

          The one nice thing about mandatory squiggle-brackets, is that they can be used to do auto-formatting (including whitespace!) on your code. Golang is the first language I know of to have this built into the standard toolchain, and it’s great. It removes nearly all arguments over formatting[1], and you don’t need to mess with configuring tools like pylint or non-standard formatting tools like Javascript has. :)

          [1] The only ones that remain are arguments over things the formatter can’t support / doesn’t support yet.

      2. Mephane says:

        I wouldn’t propose Whitespace as an alternative to Python. ;)

    2. Ander says:

      Can’t speak for Unity, but the C# community is *big*, and even at my C#-centered work there are people in the “Answer the stinkin’ question” camp and the “Don’t answer ‘stupid’ questions” camp. I showed one of the latter Shamus’s post about not being arrogant in forum space (surprised he didn’t link it in this post), and he concluded that Shamus probably isn’t a programmer because programmers would rather be told that they are doing it wrong than be told how to do the “wrong” thing right. That response was oh so fitting to the “I know what you need better than you do” camp.

      1. Daemian Lucifer says:

        That guy is most likely not a real programmer,because real programmers know that there is not VUN UND ONLY VUN way to do any of the things and so would never assume that only the way they are doing things is the correct one.

          1. Echo Tango says:

            The really funny part is, that there’s lots of people who wouldn’t consider things like HTML or CSS “programming”. Those two in combination are Turing-complete, but I don’t even need that for my definition: “Are you making a computer do something in a systematic way, and fighting with bugs? Then it counts as programming.” Sure, some things are more complicated forms of programming, but I like to have an inclusive definition. :)

          2. TheJungerLudendorff says:

            He’s linking TVTropes!
            Quick, throw fish at him before he starts linking to XKCD and DM of the Rings and consumes what little free time we have left!

            1. Ander says:

              DL already did. That’s what his python comment is. It’s too late…Farewell, workday

            2. Zak McKracken says:

              Oh, I didn’t even look at the link. Thought it must have been the mandatory XKCD reference about “real”, not “true” programmers.

    3. Tizzy says:

      Oh, I don’t know that Shamus would appreciate the jokes. He might not really approve of cheap Monty Python references. Not at all.

      On a more serious note, I feel like stackoverflow is pretty much designed for questions like this, over the old-style static forums. Especially if you have an XY problem in your question, in my experience, site participants will help you reformulate the question, and usually answer both the X and the Y part as best they can. So, like here, “maybe you don’t really want to do X because Z, do Y instead. But if you’re curious, this is how you would do X, but W might happen.”

      Most importantly, the stackexchange setup seems to encourage by design the participants to craft answers that have a lasting value and as wide an appeal as possible. And discourage pat answers.

      Or is it just Python? Is the C# stackexchange / overflow not that useful?

      1. Cybron says:

        I can verify that Stack Overflow is perfectly useful for C#. Dunno about unity specifically, though.

        1. Richard says:

          Stack Overflow is starting to suffer from a surfeit of arrogance. A lot of questions get shut down as “stupid question”, and a lot of answers are being downvoted as wrong as well – even when they’re actually a far better answer than the top voted one.

          However, the older questions and answers tend to be good.

          So I guess it’s dying. It’ll take a long time to die, and might still be saved, but it’s starting to follow expert sexchange…

    4. Shamus says:

      Totally unrelated: This week I watched this retrospective on Monkey Island and the old Lucasarts games:

      https://www.youtube.com/watch?v=9F9ahZQ7oP0

      And realized your name is a reference to one of their more obscure adventure games. I was like, “Hey! I know that guy. Sort of.”

      1. Zak McKracken says:

        Haha!
        (My name actually came up a while ago in another comment section on this site… sorry, no time to dig for it now)

        To me, Zak McKracken was always my favourite Lucas Arts adventure (though I loved Maniac Mansion and Day of the Tentacle) and I very much like the character, what with trying to save humanity from stupidity :)

        Just realized that it’s actually available on GOG…
        I’m not sure if I’d still have the patience to play through it these days, though.

        1. Daemian Lucifer says:

          It was my first adventure,back in the days of commodore 64.And I had to play it without a guide.I think I can still finish it,though those damn mazes might cause me some problems.

          1. Zak McKracken says:

            played it on the C64, too. My last two playthroughs were attempts to find the least amount of money you have to spend to beat the game.
            …and I think that’s probably how I’d fail now if I tried again, by flying to the different places in the wrong order and running out of money.
            That or running out of patience with the user interface. Was alright at the time but not soon after, I started expecting that the “what is” function should just happen automatically on mouse-over (that implies having a mouse, too)

    5. default_ex says:

      It’s the C# community. The standard response to almost anything is more along the lines of, “your doing it wrong, do this instead”, sometimes with an explanation of why that may or may not make sense. The irony is that most of the time, like the question Shamus began with, there is a way to do that.

      Take sizeof for example. C# has generics and type extension functionality that together allow you to create an extension method attached to every single class you can access. “public static int GetSizeOf(this T obj) where T : class { .. measure here … }” (might have to bind against ‘Object’ instead of ‘class’). From there you can use typeof and Reflection to calculate the size. It’s stupid that I have had a function in my personal library that does that since C# 3.5 but have yet to see anything added to the core to address such a crucial function for anyone that cares about their program’s footprint.

  2. StuHacking says:

    61 bytes if you’re lucky. Even in C++ some of those members will either be aligned to word boundaries or to the largest struct member, and some that you expect to be 1-byte might be expanded to 4 bytes. In a managed language, there will be additional overhead for the class metadata itself. And if you’re inheriting another class and overriding superclass methods then you may have an additional 8 bytes for a lookup table pointer.

    This might explain why the responses you got from the forum were so vague/useless: The actual amount of memory a managed class takes up is affected by things the runtime may do that aren’t exposed to the programmer. That said, it’s no reason not to try and estimate the amount of memory used by one or hundreds of class instances. An estimate is better than just assuming the value is totally incalculable. There’s certainly no excuse for that sort of forum jackassery and gatekeeping.

    Boring C++ struct stats follow:
    ——————————-

    In your example class, the layout of members suggests that some padding is going to occur to improve performance of memory access. On clang, I see this is taking up 88 bytes because the natural alignment is 8 bytes and we have about 15 items to squeeze in, which we can do in eleven 8-byte slots. Reordered and grouped:

    *** Dumping AST Record Layout
    0 | struct SpaceMarine {

    // Two bools padded to 8 bytes
    0 | _Bool isDude
    1 | _Bool cigar

    // 6 four byte members ~ 3 * 8 bytes
    4 | int bullets
    8 | int grenades
    12 | int armor
    16 | int hitpoints
    20 | int deadWives
    24 | float beardStubble

    // 7 members, actually 8 bytes
    32 | long aliensKilledWithFirearms
    40 | long aliensKilledWithMelee
    48 | long aliensKilledWithUnarmed
    56 | long aliensKilledWithShout
    64 | double burlyness
    72 | char * name
    80 | char * profane_nickname
    }
    | [sizeof=88, dsize=88, align=8,
    | nvsize=88, nvalign=8]

    Rules for packing bools in particular get interesting. If you space them out across a class (e.g. your example) you can drastically inflate the layout:

    *** Dumping AST Record Layout
    0 | struct SpaceMarine
    0 | long aliensKilledWithFirearms
    8 | _Bool isDude
    16 | long aliensKilledWithMelee
    24 | _Bool cigar
    32 | long aliensKilledWithUnarmed
    40 | _Bool hat
    48 | long aliensKilledWithShout
    56 | _Bool monocle
    60 | int bullets
    64 | int grenades
    68 | int armor
    72 | int hitpoints
    76 | int deadWives
    | [sizeof=80, dsize=80, align=8,
    | nvsize=80, nvalign=8]

    Whereas, ordering components by size will help the compiler shrink the layout, maintaining alignment and not packing the structure.

    *** Dumping AST Record Layout
    0 | struct SpaceMarine
    0 | _Bool isDude
    1 | _Bool cigar
    2 | _Bool hat
    3 | _Bool monocle
    8 | long aliensKilledWithFirearms
    16 | long aliensKilledWithMelee
    24 | long aliensKilledWithUnarmed
    32 | long aliensKilledWithShout
    40 | int bullets
    44 | int grenades
    48 | int armor
    52 | int hitpoints
    56 | int deadWives
    | [sizeof=64, dsize=64, align=8,
    | nvsize=64, nvalign=8]

    But I would expect any managed language worth its salt to optimize the layout of class members in memory.
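
    For the record, that is roughly what C# does: classes (and anything marked LayoutKind.Auto) let the runtime reorder fields, and you only get a fixed, C-style layout if you ask for one. A small sketch of the two knobs — the type names are invented for illustration:

    using System;
    using System.Runtime.InteropServices;

    // Sequential layout: fields keep their declared order, so the padding games
    // described above apply and the marshaled size is well defined.
    [StructLayout(LayoutKind.Sequential)]
    struct OrderedMarine
    {
        public bool is_dude;
        public long aliens_killed;
        public bool cigar;
    }

    // Auto layout (the default for classes): the runtime may reorder fields to
    // shrink the footprint, and Marshal.SizeOf refuses to answer for it.
    [StructLayout(LayoutKind.Auto)]
    struct ReorderedMarine
    {
        public bool is_dude;
        public long aliens_killed;
        public bool cigar;
    }

    class LayoutDemo
    {
        static void Main()
        {
            Console.WriteLine(Marshal.SizeOf(typeof(OrderedMarine)));  // padded, order preserved
            // Marshal.SizeOf(typeof(ReorderedMarine)) throws the same ArgumentException
            // that the Widget class in the post does.
        }
    }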

    1. Ander says:

      “An estimate is better than just assuming the value is totally incalculable”

      I think some people would disagree. The argument would be that an inaccurate estimate is dangerous, so best not make it possible. (What if someone doesn’t read the doc and doesn’t know it’s an inaccurate estimate?) C# has a “don’t hurt yourself” philosophy. This is the language that makes you type the “unsafe” keyword or something before using pointers (which I find insulting, but to the language’s credit I don’t actually need to use pointers much/ever)
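
      For what it’s worth, the unsafe door does lead somewhere relevant here: inside an unsafe context C# has a real sizeof operator for unmanaged types. A tiny sketch, assuming the project has “allow unsafe code” switched on (Ammo is just an example type):

      struct Ammo
      {
          public int bullets;
          public int grenades;
      }

      class UnsafeSizeDemo
      {
          static unsafe void Main()
          {
              // sizeof works on unmanaged types in an unsafe context: 8 bytes here.
              System.Console.WriteLine(sizeof(Ammo));
          }
      }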

      1. StuHacking says:

        Yes, I agree that an inaccurate estimate is potentially dangerous. However, if you know it’s only an estimate, and you know the reason for the estimate, then you can make a risk/reward decision on whether the estimate is worth it. In this case, estimating the rough memory usage of a set of classes is not being used as a basis for any mission critical calculation.

        And it’s always better to overestimate than underestimate in any case.

    2. Zak McKracken says:

      Wait, did you mean to say that booleans are padded to 8 Bytes, or bits?
      Even padding them to 8 bits seems like a terrible waste of resources to me.

      … I guess kids these days don’t learn boolean logic operations on single memory addresses anymore, like we used to…

      POKE 53280, PEEK (53280) OR 1

      This sets the lowest bit at address 53280 to 1. No idea why you would want to do that, given that it’s the frame colour on your C64, but I couldn’t remember any more useful address.

      I guess this way of thinking is still somewhat alive in IPv4 network address masks and in POSIX file permissions, but to be honest I’m fine with not having to perform that sort of dance.

      I do hope that C++ and C# will still store a matrix of booleans using one bit per element, though.

      1. StuHacking says:

        They are padded to byte boundaries to improve access time. In the case of a struct with an 8-byte alignment, bools will be padded to the nearest 8 byte boundary. Indeed, not 8 bits.

        Memory access patterns are different on modern processors. It’s faster to access an 8 byte region than peek and poke at a single bit, and memory is so cheap it’s usually not worth packing down to the bit level. However, ordering your struct members can still yield some memory layout improvements.

        I’m no expert, by any means. In C++, a std::bitset will take up 1 byte on my machine (i.e. 8 bits), which is good- You get the benefit of a high level interface for bit manipulation, but you still get to pack it into the space of an unsigned int. (for example.) C++ still gives you the control over memory layout. I can’t really comment on C#.

        1. StuHacking says:

          (Previous comment should have said std::bitset[8] but with angular brackets. i.e. A templated 8 entry bitset.)

        2. Zak McKracken says:

          …and then I fell into the trap of still thinking in 8-bit bytes although we’re on to 64-bit architectures. I suppose that’s where the 8 “bytes” come from? I.e.: data structures are padded to 64 bits or multiples thereof.
          That would actually make sense to me.

          1. StuHacking says:

            Alignment is to the largest struct member – at least this seems to be the case in clang and gcc. All this stuff is implementation dependent; as far as I know the C++ specification doesn’t require any of this alignment behaviour so it can be affected by platform and compiler.

            But what I see is that bools will be padded until they meet the next alignment boundary, or consecutive smaller members can fill the gap. i.e:

            // A struct with byte members will be aligned to 1-byte boundaries
            *** Dumping AST Record Layout
            0 | struct Vec3b
            0 | char x
            1 | _Bool a
            2 | char y
            3 | char z
            | [sizeof=4, dsize=4, align=1,
            | nvsize=4, nvalign=1]

            // A struct with 4-byte members will be aligned to 4-byte boundaries
            *** Dumping AST Record Layout
            0 | struct Vec3f
            0 | float x
            4 | _Bool a
            8 | float y
            12 | float z
            | [sizeof=16, dsize=16, align=4,
            | nvsize=16, nvalign=4]

            // And so on…
            *** Dumping AST Record Layout
            0 | struct Vec3d
            0 | double x
            8 | _Bool a
            16 | double y
            24 | double z
            | [sizeof=32, dsize=32, align=8,
            | nvsize=32, nvalign=8]

            And if I order the members suboptimally, it can affect the alignment of members. i.e. Even if the align size is 8, an int following a byte can fit into the 4 byte boundary. e.g:

            *** Dumping AST Record Layout
            0 | struct A
            0 | _Bool a
            4 | int b
            8 | long c
            | [sizeof=16, dsize=16, align=8,
            | nvsize=16, nvalign=8]

            *** Dumping AST Record Layout
            0 | struct B
            0 | int b
            8 | long c
            16 | _Bool a
            | [sizeof=24, dsize=24, align=8,
            | nvsize=24, nvalign=8]

      2. Olivier FAURE says:

        Booleans are padded to 4 bytes (32 bits) or 8 bytes (64 bits) depending on whether the system is, well, 32bits or 64bits.

        The idea is that your CPU has access to some amount of memory, and this memory is divided into small nuggets of 8 bytes / 64 bits. The CPU can access any of these nuggets in one instruction. So it can access byte 0 through byte 7, or byte 88 through byte 95, etc. But if you want to access an integer that’s stored from byte 90 to 97, then you need two instructions, which is wasteful: one to load from 88 through 95, and one to load from 96 through 103.

        So if your struct has an integer, then two booleans in a row, they’ll take one byte each. However, if it has a boolean, an integer, then a boolean, the boolean will be padded to 8 bytes, because the integer needs to be 8-aligned.

        It’s a trade-off, but in most cases, if you’re letting your language decide your memory layout for you, it will (rightfully) assume that you’ll get better performances by padding your structure than by having to make multiple loads to access one variable.

        And of course, a language like C# can go further by breaking assumptions like “cigar must be stored after aliensKilledWithMelee in memory”, and just rearrange fields to have both memory-aligned variables and low amounts of padding.

        I do hope that C++ and C# will still store a matrix of booleans using one bit per element, though.

        C++ stores vector<bool> that way, which is kind of a bad design decision, and creates problems if you want to get a “bool*” pointer to an element of the array. Making a separate bitvector class would have been better.

        1. default_ex says:

          Booleans are only padded to 1 byte in C#. Structure alignment padding is wasted space that is unaffected by any of the values contained inside the structure. You can test this easily with explicit layouts by embedding a struct that isn’t explicitly laid out or by taking a pointer against the struct and casting it to a byte. Explicit struct layout to do weird cross-binding of value type structures is often overlooked in C# and incredibly powerful. The layout rules are not as complicated as everyone keeps trying to say they are. They are far simpler.
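
          A minimal sketch of the explicit-layout trick described above — you pin every field to a byte offset yourself, which is also how you get C-style unions. The ColorUnion name is just for illustration:

          using System;
          using System.Runtime.InteropServices;

          [StructLayout(LayoutKind.Explicit)]
          struct ColorUnion
          {
              [FieldOffset(0)] public uint rgba;  // the whole 32-bit value...
              [FieldOffset(0)] public byte r;     // ...overlapped with its individual bytes
              [FieldOffset(1)] public byte g;
              [FieldOffset(2)] public byte b;
              [FieldOffset(3)] public byte a;
          }

          class ExplicitLayoutDemo
          {
              static void Main()
              {
                  var c = new ColorUnion { rgba = 0x11223344 };
                  Console.WriteLine(c.r);                                 // low byte on a little-endian machine
                  Console.WriteLine(Marshal.SizeOf(typeof(ColorUnion)));  // 4
              }
          }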

        2. Richard says:

          The C++ std::vector<bool> specialisation stores the bools as bits.

          It’s also incredibly slow, so should never* be used, especially in multiprocessor systems because you end up invalidating the cache all the time.

          * Unless you really want to
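
      On the C# side of Zak’s one-bit-per-boolean hope: a plain bool[] spends a byte per flag, but System.Collections.BitArray packs them one bit each, at the cost of a little speed. A quick sketch:

      using System;
      using System.Collections;

      class BitPackingDemo
      {
          static void Main()
          {
              bool[] flags = new bool[1024];          // roughly one byte per entry
              BitArray packed = new BitArray(1024);   // packed into ints, one bit per entry

              packed.Set(3, true);                    // the modern, boring POKE ... OR 1
              Console.WriteLine(packed.Get(3));       // True
              Console.WriteLine(packed.Count);        // 1024 bits
          }
      }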

  3. Daemian Lucifer says:

    Heres the thing,you dont ask forums about a problem you have.

    Long answer:Programming forums are full of smug assholes who dont have better things than to jerk you around and make your blood boil.There is a far better way to find what you are looking for,but I wont tell you which one because you dont need to know.

    1. Zak McKracken says:

      If you don’t know that you’re probably not qualified to use it anyway

    2. Dreadjaws says:

      Programming forums are full of smug assholes who dont have better things than to jerk you around and make your blood boil

      This is most certainly not exclusive to programming forums.

    3. Wolf says:

      I was sure you were going to offer the “Propose a hacky and wrong approach yourself and let the arrogant forumites stumble over each other’s feet in a rush to correct you.”-approach.

      Don’t know if the efficacy of that one is just an urban myth though.
      The idea still feels too wrong for me to ever try it in real life.

  4. Zak McKracken says:

    Uninformed outsider question:
    Your operating system should be able to tell you how much memory is allocated to your program.
    Then, if you want to know how much memory your bunch of widgets requires, wouldn’t the most robust way of doing that be to make one version of the program which keeps them around, and another which deletes them, each then entering some sort of stable loop where they do nothing while you check the memory footprint via the OS? The difference should be a decent estimate of the number you were after, accounting for all real effects.

    Not saying that this is a very elegant or quick method, but robust it should be, as long as the size of the thing you’re measuring is significant compared to the rest of the program. If it isn’t, then it’s probably not worth worrying about anyway, or you just need to make a thousand of these widgets to get a better reading.

    This may also actually be the only sane way to answer the question of “how much memory would be freed if I deleted this thing”, as opposed to “how much memory is assigned to this thing (but may also be assigned to other things)”

    1. Paul Spooner says:

      That’s a good idea, and it should work. Just run the program and create a few thousand of these objects, and then run the program again without creating them, and compare the memory footprint. If you instrument the clock, you can use the process to figure out how long it takes to create them as well.
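
      A sketch of that before/after measurement in C#, using a stand-in class where the real SpaceMarine (and its real construction cost) would go. GC.GetTotalMemory(true) forces a collection first so the baseline isn’t polluted by garbage:

      using System;
      using System.Diagnostics;

      class SpaceMarine { }   // stand-in; substitute the real class and its real setup work

      class MemoryDiffDemo
      {
          static void Main()
          {
              long before = GC.GetTotalMemory(true);
              var clock = Stopwatch.StartNew();

              var marines = new SpaceMarine[5000];
              for (int i = 0; i < marines.Length; i++)
                  marines[i] = new SpaceMarine();

              clock.Stop();
              long after = GC.GetTotalMemory(true);

              Console.WriteLine((after - before) / marines.Length + " bytes each, roughly");
              Console.WriteLine(clock.ElapsedMilliseconds + " ms to build " + marines.Length);
              GC.KeepAlive(marines);   // keep the objects alive past the second measurement
          }
      }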

    2. Matt Downie says:

      I just googled ‘unity get free memory’:
      First result:

      “How to check free / used memory?”
      “Honostly i can’t imagine you needing to see free / used memory unless one of these two things are occuring:
      A.) Your doing mobile
      B.) Your doing something really wrong.”

      1. Daemian Lucifer says:

        A.) Your doing mobile

        And this is what is really infuriating.You get an answer like this,that admits there ARE situations where it should be used,and then proceed to ignore them completely and not answer the question.

        1. Matt Downie says:

          (Actually the reply was a bit more helpful than that – I edited it down for fun. He went on:
          “In Pro, you can see a profiler which shows memory usage on some things, but i find its inaccurate a lot. On iOS you can build and see how memory is allocated with XCode’s tools.”)

    3. Richard says:

      It really boils down to one of the fundamental limitations of “managed” code:

      The runtime is opaque and can do anything at any time. It offers you no guarantees of performance or layout, only of safety.

      It could decide to compress some objects to save space, or even to destroy some (and recreate them later if they get accessed).

      You could of course also call this an advantage.

  5. Mousazz says:

    At the risk of sounding like one of those forumites you’re pissed off at:

    Huh. I’m keeping an awful lot of these widget objects in memory. I need them while generating the scene, and I occasionally need them later, but once the game is running they’re mostly just taking up memory.

    Let’s assume, for the sake of argument, that these objects have a non-trivial size and also require a non-trivial bit of processing power to create

    So… Do you actually have a real problem on your hands? Or is it just a case of premature optimization?

    1. Daemian Lucifer says:

      Seeing how the point of this whole project is to get the hang of unity,all of the things in question are not “real”,but rather a learning tool for the future.

      1. Tizzy says:

        Also, I feel like “premature optimization” only applies if you’re actually writing code to optimize stuff. I find nothing to criticize about a programmer simply wanting to have a better picture of how their code is doing.

        1. Steve C says:

          Especially when the true goal is to learn the language. This entire project is a more useful Hello World.

    2. GargamelLeNoir says:

      Keeping basic optimization in mind while building the foundation of a project is perfectly sensible and can save a lot of time later.

    3. Abnaxis says:

      Also importantly, the check he is trying to do SHOULD be trivial. It’s 5 minutes and a line of code in C++.

    4. Blakeyrat says:

      The general philosophy of C# is to ensure each object is in as small a scope as possible. If the object is used during X, but not during Y, you need to refactor your code so it’s out-of-scope when Y happens. Then it’s just a matter of letting C#’s memory manager figure out when and how to free up the memory those objects used. (Any optimization beyond that is major voodoo you almost certainly will never need.)

      I think that’s what the forum responders were trying to get across in a terrible way.

      Basically, they’re saying “if you have to ask, your code is poorly structured”. Which may or may not be true, I haven’t seen it. But it’s a *reasonable* thing to reply, as long as you do so politely.

      1. Matt Downie says:

        What if the question was phrased like this:
        “I have an idea for a way to structure my game world. It should run fast, but will require me to allocate anything up to forty thousand objects of a particular class. I want to know if this will use up so much memory that my game won’t run on low-spec computers any more. How do I check this before I’ve committed to this approach?”

        1. Pylo says:

          The answer is: “you use a memory profiler and Unity just so happens to have one integrated into the editor”

          I hate unhelpful smug non-answers on internet forums as much as Shamus does, but frankly I do think he really was on the wrong track with this one. Even in C++, going through a chain of pointers and adding up the sizes is a weird way to reason about memory usage. In fact, even in his simplified example, adding the sizes of the fields in SpaceMarine doesn’t really produce useful info, because those strings (name, nickname) could theoretically be a lot bigger than everything else put together.

          1. Steve C says:

            You’re saying that first paragraph is an example of an unhelpful smug non-answer, right?
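
            For anyone wanting the profiler numbers at runtime rather than in the editor window, Unity also exposes rough counters through a scripting API. A sketch — the method names here are from memory, so treat them as an assumption and check the UnityEngine.Profiling.Profiler docs for your Unity version:

            using UnityEngine;
            using UnityEngine.Profiling;

            public class MemoryReadout : MonoBehaviour
            {
                void Update()
                {
                    long managed = Profiler.GetMonoUsedSizeLong();        // managed (Mono) heap in use
                    long total = Profiler.GetTotalAllocatedMemoryLong();  // everything Unity has allocated
                    Debug.Log("managed: " + (managed / 1024) + " KB, total: " + (total / 1024) + " KB");
                }
            }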

    5. Clareo Nex says:

      That’s what he needs the size of these things for: to find out if there’s a problem. Maybe they’re 1% of the program’s footprint. Maybe they’re 50%.

      1. Kyte says:

        That’s when you use a profiler.

        1. Richard says:

          Not really a sane answer, because you can’t profile until you’ve built it.

          When you’re trying to decide between different approaches, you usually can’t afford to build both before determining which is better.

          In the case of deciding between “Build them all once and keep in memory” vs “Rebuild and destroy individually as needed”, you want to know how long it takes to create an individual one, how long it takes to destroy them, and how much memory they each use.

          And you need to make the decision early on, because changing your mind is likely to be expensive.

  6. Olivier FAURE says:

    I haven’t worked on any big C# projects yet, but a general tip: most object-oriented languages based on C (Java, C#, D) have a bunch of really annoying / hard-to-use features for their classes which go away if you declare every class member as public, or better, declare all your data types as struct. (so SpaceMarine stays a class, but SpaceMarineSaveData becomes a struct)

    I’m guessing the problem when trying to fetch memory size is that C# is saying, “Well, I’m going to rearrange your class’s fields to take up less memory / be more efficient to cache / whatever, but this is way harder to do if I have to provide the class’s size at compile time”. Store your data type as a struct instead, and C# doesn’t bother rearranging it, which means you’re free to measure it.
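
    A quick sketch of that: the same Marshal.SizeOf call that throws on a class works fine on a plain struct full of unmanaged fields, because a struct defaults to a fixed, sequential layout. The SpaceMarineData name is just for illustration:

    using System;
    using System.Runtime.InteropServices;

    struct SpaceMarineData            // plain data: no strings, no references
    {
        public float beard_stubble;
        public double burlyness;
        public int bullets, grenades, hitpoints, armor;
        public long aliens_killed;
        public bool cigar;
    }

    class StructSizeDemo
    {
        static void Main()
        {
            // Works: a sequential-layout struct has a well-defined marshaled size.
            Console.WriteLine(Marshal.SizeOf(typeof(SpaceMarineData)));
        }
    }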

  7. Misamoto says:

    I believe there are profiling tools in VS. Are they any help?

    1. 4th Dimension says:

      They will tell you from moment to moment how much memory (among other things) your program uses and even show when GC is performed, but won’t tell you how much of that memory is taken up by arbitrary bits of data.

      1. 4th Dimension says:

        My bad. For the first time I played with the profiler, and yeah, it can take snapshots of the memory of the application and allows you to drill down into memory. It’s still a rather opaque read, but the information is presumably in there somewhere.

  8. Draklaw says:

    I hate to be on the moron side, but I agree that trying to get the memory usage of some objects using language constructs does not make much sense. (Note: I’m a C++ dev, not a unity dev.)

    sizeof() in C/C++ doesn’t really return the amount of memory an object takes, but the amount of memory a single struct/class takes. This is different because your object sure has a struct/class at its root, but can also be composed of more data referenced by pointers. sizeof() won’t tell you the size of these, because it simply cannot know which pointers are owned by your data structure and which ones are merely references to objects owned by something else.

    For example, if you have a string `std::string myString(“Hello world ! This is a somewhat long string that will take some space in memory.”);` and you call `sizeof(myString)`, you will get the same thing as `sizeof(std::string)`. It won’t include the size of the buffer used to store your string.

    If you know the class of the object you want to query well, then you can work around this. With the above example, `sizeof(std::string) + myString.size()` is probably a good approximation (although some implementations store a small buffer in the string object so they don’t have to allocate memory for small strings, in which case this solution overestimates the size of the string). Even that is not perfect: when you write `new SomeObject[n]` in C++, the operator new will allocate a bit more memory to remember the number `n` so it knows how many times to call the destructor of `SomeObject` when you call `delete[]` (that’s why `delete` and `delete[]` are two different things).

    If you try to know the size of an object coming from some library and you have no idea how this object is structured, then you are out of luck. For instance, if you want to know the size of `QWidget` in the library Qt, `sizeof(QWidget)` is of no help. That’s because Qt usually only stores an opaque pointer in its public facing classes and allocates the real data in a hidden datastructure (see PIMPL idiom). (The reason for this is to make it easy to preserve binary compatibility.) So `sizeof(QWidget)` is likely very small, even if widgets are really complex objects with a lot of attributes.

    Basically, unless Unity developers provide a `memorySize()` method for each of their objects that computes the (approximate) real size of the object, your language (C#, C, C++ or anything) can’t really give you an accurate answer because it would have to guess the developer’s intent.

    This is even worse in the case of Unity. Your widgets could actually reference objects living in video memory (typically, textures). There is no way for C# to know that. The good news is that Unity is likely already managing video memory, and releases useless textures to make room for more useful ones. Most modern game engines do texture streaming and don’t even try to store all the textures at once in video memory because they are too big. So you don’t have to care about that, unless you see that Unity struggles with texture management.

    To conclude, your widgets are probably both small enough in memory and fast enough to create so that you don’t need to care about this. The good approach would be to build your software, and once it is done you do some profiling and see what causes performance issues. If building widgets takes more time than you want, or your application eats too much memory, you can experiment and measure the impact of your changes. While this requires some amount of guesswork, it is also the only solution that returns reliable data.

    1. Olivier FAURE says:

      For a while I was very confused as to why accents were popping up everywhere in your post, until I realized you were using them to denote code bits.

      This blog’s formatting doesn’t support using accents that way though, that’s a markdown formatting tag.

      Otherwise, I get what you’re trying to say, but this isn’t really a constructive way to go about problem solving. This is what Shamus was saying earlier “You can’t have perfect information on memory, therefore it’s pointless to provide tools that help you get any information on memory”.

      1. Steve C says:

        IKR? I’m shocked that the spam filter didn’t eat that comment. It hates normal comments. I would have guessed accents on “n” and “d” would be Strongbad level of DELETED!!

    2. Retsam says:

      For future reference, you can put code in tags format it as code.

      1. Retsam says:

        Messed that message up, and didn’t get an edit window for some reason… trying that again.

        You can put code snippets inside ‘code’ XML tags and it’ll monospace it.

        1. Olivier FAURE says:

          Awesome, thanks!

          (this should probably get featured in one of Shamus’ code articles at some point, because there’s definitely a need for it)

          Trying it now:

          This is code

          This is code

          EDIT: Aw, it’s kinda crappy. The code tag eats nested formatting instead of displaying it as-is.

          <code>This is code</code>

          EDIT: But using & lt; and & gt; works.

    3. Blake says:

      “sizeof() in C/C++ doesn’t really return the amount of memory an object takes, but the amount of memory a single struct/class takes. This is different because your object sure has a struct/class at its root, but can also be composed of more data referenced by pointers. sizeof() won’t tell you the size of these, because it simply cannot know which pointers are owned by your data structure and which ones are merely references to objects owned by something else.”

      And yet I use sizeof() for things all the time, because I know which structures have pointers and which don’t.
      If I know I’m going to be creating lots of something, and need to know I’m staying under my memory budget, being able to type sizeof(thing)*5000 straight in the watch window is a trivially easy way of knowing how big thing[5000] will be.
      It even means I can rearrange the structure and verify I’ve made an improvement almost immediately.

      If you’re the sort of person who cares about how much memory you’re using (most experienced game devs), then you’re likely not the sort of person using a whole lot of pointers in your structures anyway given the memory access times.

      sizeof(thing) gives you a factual, objective answer to a question instantly, which you can static_assert() on, or use to create storage precisely big enough to contain all your objects, and doesn’t require any special magic.

      1. Draklaw says:

        Well, I was assuming that the “Widget” objects Shamus tries to introspect are Unity objects, not objects he created himself (now I’m not so sure about it). In this case using sizeof is not reliable. If they are objects he created himself, I would assume he has a good idea of how big and how expensive to create they are. So sizeof might be useful in this case to get a precise figure, but I don’t think it is really useful before you try to optimize your code. And in a managed language, I’m not sure this kind of optimization really matters.

      2. Richard says:

        Indeed. And when you’re using such things for (de)serialising data to interoperable binary formats, you can sprinkle those Asserts to immediately discover if you have made a mistake.

        The standard document for the binary format specifies the size of various elements, so you can put those numbers directly into the asserts to make sure it won’t even compile if it’s not right.

        So if it’s POD, sometimes I really, desperately want to know the size.

  9. Joshua says:

    Couple grammar typos:

    “and here it turns into to a little bit”
    “crap comes built-in and you don’t have to going”

  10. GargamelLeNoir says:

    I strongly agree with your sentiment, screw those kinds of annoying people, their comments are literally worse than nothing because they waste your time and might dissuade others from helping you.
    That said, maybe you could use class serializing tools to get a ballpark of what your object weighs?

  11. StuHacking says:

    I found some sort of low level memory profiler for Unity on Bitbucket, which claims it can also give you a snapshot of the C# Heap with type descriptions. Perhaps this is an option that would let you see the actual memory usage of a bunch of widgets without having to write a load of memory usage code? It seems to come from the Unity team…

    Link: https://bitbucket.org/Unity-Technologies/memoryprofiler

  12. I think you may be looking for Unity’s Profiler. It lets you drill down into the CPU usage and memory consumption of specific objects (and functions, if you want to get really hardcore).

    1. Echo Tango says:

      Ooh. Actual debugging / programmer tools. :)

  13. Onodera says:

    I agree with Draklaw, the size of an object you have created is a non-trivial value that you cannot even count naively due to cycles in the object graph.

    I think your question was an example of the XY problem. Your real question was “what is the rule of thumb to determine if I should cache objects or create them on the fly?”, the answer to which is, of course, “hide the difference behind a factory method, start with direct creation and introduce the cache if your profiler shows a lot of GC problems”.

    1. Blakeyrat says:

      Now that I’ve re-read the post’s first paragraph, I agree with you.

      The *real* question is “is it better to cache or regenerate this data for the rare occasions I need it?” The size of the data is kind of a secondary concern, it contributes to the answer but is not the answer.

      Since caching data in C# is so trivial (you can use the MemoryCache class’s key/value store) I’d suggest just doing it both ways and benchmarking. It’s not like it’s going to be 50,000 lines of code to set up a cache so you have to figure it out on paper first.
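
      A sketch of the “hide the decision behind one method” shape Onodera describes, with a plain Dictionary standing in for MemoryCache or whatever store you prefer. Widget and BuildWidget are made-up names:

      using System.Collections.Generic;

      class Widget { }

      class WidgetFactory
      {
          private readonly Dictionary<int, Widget> cache = new Dictionary<int, Widget>();

          // Flip this once the profiler says rebuilding (or the resulting GC churn)
          // is actually a problem.
          public bool UseCache = false;

          public Widget Get(int id)
          {
              Widget cached;
              if (UseCache && cache.TryGetValue(id, out cached))
                  return cached;

              Widget w = BuildWidget(id);   // the expensive part
              if (UseCache)
                  cache[id] = w;
              return w;
          }

          private Widget BuildWidget(int id)
          {
              return new Widget();          // stand-in for the real, costly construction
          }
      }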

    2. Retsam says:

      Yeah, I agree, this felt like classic X-Y problem to me. To save people a google:

      The XY problem is asking about your attempted solution rather than your actual problem.

      That is, you are trying to solve problem X, and you think solution Y would work, but instead of asking about X when you run into trouble, you ask about Y.

      In this case Y was “How do I see the memory size of an object in C#”, and I believe X was “How do I measure (and make decisions about) the memory usage of a C# program”.

      Shamus was asking (or at least googling for) Y, and got the answer of “You can’t and/or shouldn’t”, which AFAIK, is a pretty accurate answer in terms of Y, but is an entirely unhelpful answer in terms of the real problem X. But to Shamus X and Y are synonymous, so it looks to him like C# people (or at least these “forum idiots”) just don’t care about memory optimization (i.e. X). But I think it’s more the case that there’s just different techniques for doing it in C# (e.g. profilers).

      Though ideally, people answering questions should be aware of the XY problem and try to dig for the “real question”. “You can’t,” is an unhelpful answer, even when accurate – it’s much more helpful to ask what problem they’re trying to solve, and suggest alternative solutions. If nobody in those threads saw questions about memory size and suggested using a profiler (or some alternative approach) then “forum idiots”, indeed.

      It’s a classic issue with switching languages or paradigms: you see a problem and you naturally reach for the tool to solve the problem that you’ve always used to solve it, and sometimes it doesn’t work and that leads to frustration. But the tools that make sense in one language may not make sense in another. It’s honestly the hardest part of learning another programming language: syntax and concepts are almost always pretty easy, but changing how you think about code is hard.

      1. Xeorm says:

        This is what it sounds like the most to me as well. To me it looks like Shamus is looking for a ballpark answer of “how much memory is this thing using?”, which is a completely valid question to ask. Unfortunately, C# is intentionally not designed to answer such a question. The key thing to remember with memory like this is that how much room it takes can vary. If you run the code on a 32 bit machine it’ll take less memory than on a 64 bit machine. Usually. Then there’s different ways the compiler might compress the data, and you do end up with cases where the exact memory usage can change.

        Or in short, getting a ballpark answer is non-trivial to do for the language. Which is why the correct answer at the end of the day is to grab a profiler of some sort and use that. It much better solves the actual question being asked.

  14. Mark Magagna says:

    The problem of figuring out usage and whether to free them gets even worse when you consider that freeing the objects in C# just sends them to the garbage collector – who knows if you’re going to be interrupted by a GC cycle?

  15. Bloodsquirrel says:

    I’ve got to agree with the forumites, unfortunately, about trying to get the size of objects in C#. For the vast majority of reasons you would be wanting to know the size of an object, there’s no such thing as “close enough”. If you’re manually stepping through a block of memory, you need to be exact. Meanwhile, for the purposes of estimating memory usage, when you’re dealing with object-oriented programming you’re not likely to even get within a mile of close anyway.

    This is one of those things where there’s a fundamental break between C-style coding and the object oriented world. The entire concept of object oriented programming revolves around encapsulating all of the information about the object’s internals so that the only thing the rest of the code needs to know about it is its interface. Even in C++ using sizeof() isn’t going to accomplish what you want if you’ve got an object that contains a pointer to a much bigger object.

    Oh, yeah, and your SpaceMarine might also actually be a ChaosMarine, which is a subclass of SpaceMarine, so even the exact size of the SpaceMarine class might not be valid because ChaosMarine has extra members.

    but the moment the language attempts to save you from knowing things then something has gone horribly wrong.

    The problem is that being able to know some things breaks the fundamental rules of abstraction that the language is built on top of. The entire point of an interface, for example, is that you’re supposed to be able swap out the implementation behind it at will without affecting any of the classes using it. If something in your code needs to know more than the public interface, then something has broken the object-oriented model.

    So, really, the only options for answering this question are “You can’t” or “There might be a hacky way to get what you want, but it’s only going to be valid under certain assumptions, so I need to know what those assumptions are in order to figure out which way is valid”.

    This really is a case where knowing what you were actually trying to accomplish is necessary, since the answer is going to involve using the profiler.

    1. Shamus says:

      “For the vast majority of reasons you would be wanting to know the size of an object, there’s no such thing as “close enough”. ”

      Even if you’re right and the “vast majority” (of use cases you’re familiar with in your domain) don’t need this, that’s still not ALL use cases. Is the compiler’s guess about my memory usage going to be LESS accurate than me scanning through the source and adding up all the sizes manually? I insist that in many cases, “Close enough” is preferable to “I have no idea.”

      You know the stuff on your hard drive often takes up a different amount of space than what’s reported, right? Like, a 1 byte file still takes up a block of HD space, which is probably something like 4 kilobytes. But you still sometimes want to know how big files are. Imagine if your operating system removed the ability for you to see the size of files. Your HD fills up and you have no idea what you should delete because you can’t tell if A is larger than B.

      Then people would come up with these crazy hacks:

      “Just copy the file until you run out of HD space. Then divide the free space you started with by the number of files you created.” And yet those are the sorts of hacks people are resorting to here in C#. This would not be an improvement. People still need to know how much resources they’re using, but now it’s more trouble for them to find out and their estimates will be even worse than before. It’s more hassle and it keeps the user ignorant.

      1. Bloodsquirrel says:

        I insist that in many cases, “Close enough” is preferable to “I have no idea.”

        The problem is that, in many cases, “close enough” is a fatal flaw, and if the person answering the question doesn’t know which case they’re looking at they have no way of knowing that in this one case it’s okay. And a lot of times it’s much, much better to know that you don’t know something than to think that you know it. A method that works in 99% of cases but fails in the other 1% is a great way to wind up with a subtle, hard-to-replicate, hard-to-find bug in your program.

        You know the stuff on your hard drive often takes up a different amount of space than what’s reported, right? Like, a 1 byte file still takes up a block of HD space, which is probably something like 4 kilobytes.

        This is a case of “which piece of information do you want”, though. 1 byte or 4 kilobytes are both correct, given a different assumption about what you want the information for.

        But given modern hard drive sizes, the OS can afford to report one or the other. If someone is looking through Windows Explorer it’s extraordinarily unlikely that they’re manually adding up the sizes of small files to figure out how much hard drive space they’re taking up. The user just needs to know whether a file is taking up a few bytes or twenty gigabytes.

        But in the case of people answering questions on forums (or, even worse, writing the API), making those assumptions is far more difficult and dangerous. When you’re immersed in your own problem it can be easy to forget what the answerer’s perspective looks like: would it even occur to them that a use case exists where someone would want “close enough” if you don’t specify it as part of the question?

        1. Paul Spooner says:

          Close enough can’t possibly be a fatal flaw. If it’s fatal, it’s not close enough.
          Also, as your response to the example amply indicates, there is such a thing as close enough, and it’s not fatal at all, unless you’re writing OS code or hard-drive firmware.
          Furthermore, your approach boils down to “it’s better to give no answer than an imprecise one”. But this is a self-defeating viewpoint, since it is itself insufficiently precise.
          I prefer the What if approach. Give a ballpark answer and admit that it’s a ballpark answer.

          1. Rymdsmurfen says:

            Close enough can’t possibly be a fatal flaw. If it’s fatal, it’s not close enough.

            This sounds like a semantic word game on your part. Something that is “close enough” to estimate application memory usage will not be “close enough” to avoid fatal application errors when marshalling data structures, for instance.

            “Give a ballpark answer”? That is not the expected behavior of any code. And even if we made this perfectly clear to the user we still have the problem of implementing it, which would be very hard for any non-trivial type. It would require run-time (not compile-time) analysis. Maybe the garbage collector in .Net could manage this by analyzing reference dependencies (assuming that it only contains managed objects). Or you could get the size of all private fields, which is what C++’s sizeof would do (I think); but this wouldn’t “ballpark” it, it would be the correct answer to a different question.

            1. Decius says:

              The flaw isn’t in the measurement of something close to what you actually want.

              The fatal flaw is in not realizing that the thing you measured isn’t close enough to what you wanted the measurement of.

              When talking about average class size in the context of how much 1 on 1 attention a student gets, often it’s close enough to divide school enrollment by the number of classrooms.

              But that doesn’t work, at all, to determine how many desks each classroom needs. Even if you take the next highest whole number after the division, the thing you actually need is “what is the largest number of students in any class that occurs in this room?”.

              And so “How many students per classroom” has at least three different meanings: One if you are marketing how good your student/classroom ratio is, another if you are provisioning desks for the classrooms, and a third meaning if you are trying to count how many gym lockers you need, and some students have more than one gym period.

        2. Lanthanide says:

          I think this is actually a little bit of hubris on Shamus’ part, to be honest.

          If you keep seeing the same “assholeish” answer to this question over and over again by many different people on many different forums, then it’s likely because there is something fundamental/important that you’re missing that truly makes your question not sensible in this context (C#), whereas it was sensible in your old context (C/C++). The alternative is to suggest that literally everyone is an asshole, and it’s not you who is missing something crucial.

          In other words, this is an example of Dunning-Kruger – I’m not meaning this in a pejorative or judgemental way, it just encapsulates this experience very well, IMO.

          I’ve only got experience in C, but others in the comments have suggested that this is likely a case of premature optimization, and the correct approach in the C# context is to do profiling to see how your program is performing, rather than manual calculations on paper, as you might do in C.

          1. The Rocketeer says:

            “This seems to be Dunning-Kruger.”

            “I don’t have experience in the relevant field, but here’s why you’re wrong…”

            1. Lanthanide says:

              You don’t yourself need to be an expert rocket surgeon to recognise that if a self-taught rocket surgeon moving into a new area of rocket surgery keeps getting told the same answer to the same question by many different experienced rocket surgeons, that perhaps the self-taught rocket surgeon lacks some basic understanding of the new area they’re working in such that they keep asking the wrong question.

              Really the Dunning-Kruger effect has two parts to it:
              1. The ignorant don’t realise they’re ignorant because they fail to know their own limits
              2. The informed don’t realise what they’re doing is difficult because they fail to know the limits of others

              (It seems that many of the experienced rocket surgeons that Shamus is complaining about are suffering from definition #2)

              Almost by definition, a 2nd or 3rd party can look at the individual and diagnose it as an instance of the Dunning-Kruger effect, where the individual themselves is oblivious to it.

          2. Daemian Lucifer says:

            The alternative is to suggest that literally everyone is an asshole, and it’s not you who is missing something crucial.

            Or, seeing how numerous people here have answered the question in non-assholish ways, only those who offer the same unhelpful assholish answers are the assholes.

          3. Dreadjaws says:

            I think this is actually a little bit of hubris on Shamus’ part, to be honest.

            If you keep seeing the same “assholeish” answer to this question over and over again by many different people on many different forums, then it’s likely because there is something fundamental/important that you’re missing that truly makes your question not sensible in this context (C#), whereas it was sensible in your old context (C/C++). The alternative is to suggest that literally everyone is an asshole, and it’s not you who is missing something crucial.

            Preposterous. Ignorance on Shamus’ part doesn’t excuse assholish answers. They can perfectly well answer nicely. And, if the answer exists and they know it, they can give it to him, even if they don’t think it’ll help.

            1. Lanthanide says:

              The second one that Shamus has quoted, which he calls “absurd dogma,” does not appear “assholeish” to me at all.

              Here’s the actual thread: https://www.codeproject.com/Answers/177612/Size-of-a-class-in-c#answer4

              Go read the whole thing and decide if the author, Dave Kerr, is an asshole, or if he’s just patiently explaining some fundamental internals of how C# works. Doesn’t come across as “absurd dogma” at all, but rather a level-headed and polite explanation about C#.

              1. Ander says:

                Thanks for the link. I think Shamus explains what he means by “absurd dogma,” but according to that post, the “dogma” of “you can’t know that, so we won’t bother estimating” seems to be built into C#’s language philosophy itself.

                “the compiler is at liberty to choose how to store the class as long as it adheres to the standard,” is probably the key here. Many of the people who set out to answer these kinds of questions don’t realize that their assumptions about the language might not be shared by the people asking the question.

                That said, “can you let us know why?” makes me suspicious. It suggests that the poster does have an answer but doesn’t want to parse it out now for some reason. Maybe the potential answers are too numerous and broad, but since this is a niche request that, according to some here, isn’t part of the C# way of doing things anyway, I assume there can’t be that many ways to do it.

                1. Syal says:

                  It suggests that the poster does have an answer but doesn’t want to parse it out now for some reason.

                  I mean, as a non-programmer I’ve had to answer vague questions at speed before, and having to give four possible answers when you know for a fact only one applies is really annoying. Plus there’s the possibility the person didn’t actually mean what they said at all, and is thinking of some fifth problem they don’t know the terminology for. Usually it’s faster and less of a headache to ask them to re-word the question.

                2. Steve C says:

                  The “can you let us know why?” part is what really bothers me. Enlightening someone’s idle curiosity is an infinite time sink with zero guarantee that they’ll answer. More often than not it’s the exact opposite: they’re only going to use your explanation as a justification for why they shouldn’t answer you.

                  I wrote about this the last time Shamus wrote about this topic, though Felblood explains it better than I do in that thread:
                  https://shamusyoung.com/twentysidedtale/?p=21365#comment-353630

                  When you’re asking for help, a random person on the internet asking “why?” is a trap, even when they don’t intend it to be one.

                3. Decius says:

                  “The compiler is at liberty to…”

                  No.

                  The compiler runs on a finite state machine; its behavior is deterministic.

                  The algorithm that the compiler uses to determine how the data is stored might be really complex, but it exists, and it will compile the same code the same way every time. You CAN know what the compiler will do, and you should be able to TELL a good compiler exactly how to do it (although you normally shouldn’t, unless you understand why the compiler does it the way it does and can explain why a specific case should be an exception).

                  1. Ander says:

                    Yeah, it’s a finite state machine. So’s the RNG. That doesn’t mean the language designers intend you to manipulate or reliably predict that level of the finite state machine. Should you be able to? I do not have the domain experience to argue that point one way or the other. But it does seem that the C# spec makers think the answer is, “No.”

                    1. Richard says:

                      A C++ compiler is also at liberty to rearrange the members of a struct or class* in a variety of ways that significantly change the size-in-memory.

                      Most of the memory model is “implementation-defined”.
                      For example, alignment requirements vary hugely between CPUs – and even within a specific CPU!
                      – SSE2 and AVX FPU instructions have very different alignment requirements to x87 FPU instructions, yet both are on your desktop CPU.

                      The compiler can still tell you how big it’s going to be.

                      * C++ compilers have intrinsics that allow you to override this, to make it easier to exchange data with systems that have different layout requirements. These usually make your code slower of course.

              2. Decius says:

                Buried deep within the nested comments:
                “The task you are looking at is known as ‘profiling’, which involves running the application whilst visual studio (or another profiling tool) gathers low level data about how the CPU is being hit”
                and then the answer provides a reference link, the quality of which I cannot evaluate.

                Clearly, the only way to tell when it is better to recreate things and when it is better to keep them in memory is to create a virtual machine with the performance of your target machine, run the program on the VM, and have your profiler run, outside the VM (because if the profiler competes for memory and CPU you aren’t going to get an accurate profile), on the program. Then change one thing, recompile, and repeat.

                C# memory and CPU usage must be inherently unpredictable, making it a good source of cryptographically secure randomness.

                Or maybe people who give programming advice don’t think close to the metal anymore.

                1. Lanthanide says:

                  “Or maybe people who give programming advice don’t think close to the metal anymore.”

                  Or maybe C# has a garbage collector that runs at random intervals, thus changing the precise behaviour of the program when it comes to (re-)allocating memory from run to run.

          4. Decius says:

            “Premature optimizing” is what people who solve problems by throwing more compute at them call “optimizing”.

            Or, “Premature optimizing” is the failure to correctly balance the competing demands of plentiful memory, processor, and disk versus scarce programmer time.

      2. EmmEnnEff says:

        Given that the compiler has no idea what size any of the objects your object references will be (they may change at run-time, contain cycles of references, contain weak references, contain objects created via reflection (which I’d imagine would pollute the C# equivalent of the Java PermGen), contain unmanaged objects, etc.), I’m afraid that scanning through your code and manually adding up what you think your object sizes are is going to be more accurate.

        I’m assuming you meant the run-time, in which case the answer to the memory footprint of objects with cycles/reflection would be opinionated, while the footprint of objects containing references to unmanaged code would be intractable. As such, any built-in API for getting the size of your objects would produce either misleading or outright wrong results.

    2. Mephane says:

      This is one of those things where there’s a fundamental break between C-style coding and the object oriented world. The entire concept of object oriented programming revolves around encapsulating all of the information about the object’s internals so that the only thing the rest of the code needs to know about it is its interface. Even in C++ using sizeof() isn’t going to accomplish what you want if you’ve got an object that contains a pointer to a much bigger object.

      Oh, yeah, and your SpaceMarine might also actually be a ChaosMarine, which is a subclass of SpaceMarine, so even the exact size of the SpaceMarine class might not be valid because ChaosMarine has extra members.

      The proper OO solution here is of course to give your classes a virtual member function that returns the size of the instance, which goes through all members and asks them for their size. You’d need to do that in C++, too, because as soon as a class contains anything but primitives, sizeof() won’t return the actual amount of total memory required to hold all of its data.
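
      A minimal sketch of that idea in C# (Widget, FancyWidget and their fields are invented for illustration; it deliberately ignores CLR object headers and padding, so it’s a ballpark rather than an exact figure):

      class Widget
      {
          public int Id;
          public float[] Vertices = new float[256];

          // Rough per-instance footprint; sizeof() on primitive types is legal in safe C# code.
          public virtual long EstimateSize()
          {
              return sizeof(int) + sizeof(float) * (long)Vertices.Length;
          }
      }

      class FancyWidget : Widget
      {
          public double[] Extra = new double[64];

          // Subclasses add their own members on top of the base estimate.
          public override long EstimateSize()
          {
              return base.EstimateSize() + sizeof(double) * (long)Extra.Length;
          }
      }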

      1. Blake says:

        “You’d need to do that in C++, too, because as soon as a class contains anything but primitives, sizeof() won’t return the actual amount of total memory required to hold all of its data.”

        Well, that’s not true; you can have structs in structs in structs and have sizeof() return the size of your outermost struct just fine, even though it contains structs which are not primitives.
        If they contain objects with outside allocations it obviously won’t add those up, but that wouldn’t necessarily be what you want anyway.

        If i have some class that looks a bit like:
        struct Foo { ObjectManager* objectManager; char bigData[10232]; };
        I would want my sizeof(Foo) to return 10240 not that plus however big the objectManager is.

        Bottom line: even if it’s not useful for every chunk of code, there are plenty of times it does work. And like everything else you write code with, you need to know what you’re doing.
        If you restricted people from doing anything that could possibly be ambiguous (or even dangerous) under any specific circumstance, there would be very little you could let people do.

        1. Decius says:

          Anything powerful enough to be useful is powerful enough to be dangerous.

  16. Shamus, a question/experiment: If you created, say, 100,000 instances of that class, then wrote down the memory increase in Task Manager (assuming the number of instances you created increases used RAM by a relatively significant amount), then divided that number by the number of instances, wouldn’t that give a rough estimate of how much memory a class (and sub-classes etc) consumes?

    1. StuHacking says:

      Not necessarily in a managed language, as the runtime will likely be allocating a block of memory up front and then using that to store instances of objects, instead of allocating memory when requested. The likelihood is that the memory used by the managed process (as seen by the OS) will grow in large increments, and the runtime will try and reclaim memory from its existing pool before requesting more from the OS.

      For a system level language (like C/C++) where you are doing the low level allocations manually, this process would work better. (Arguably though, you should use the virtual page allocation provided by the operating system instead of immediate allocation). For a managed runtime like Java or .Net/Unity you should use a memory profiler to get a finer grained analysis of individual object allocation and lifetime.

  17. Bodyless says:

    There is a method in C#/.NET to get the total amount of memory used by your program: GC.GetTotalMemory()
    Passing it true lets the Garbage Collector run first (the exact behavior might depend on the .NET Framework version used), which cleans up your memory beforehand and gives a more stable number.

    If you call that before and after disposing your objects, you can estimate the memory you saved by throwing them away.
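
    Something along these lines, as a rough sketch (BuildWidgets() is a stand-in for however the objects actually get created; the numbers are estimates, not exact accounting):

    using System;

    class Program
    {
        static void Main()
        {
            object widgets = BuildWidgets();          // stand-in for the real scene-generation step

            long before = GC.GetTotalMemory(true);    // true = let the GC settle first
            widgets = null;                           // drop the only reference to the widgets
            long after = GC.GetTotalMemory(true);

            Console.WriteLine("Roughly " + (before - after) + " bytes reclaimed");
        }

        // Hypothetical allocation so the example is self-contained.
        static object BuildWidgets() { return new byte[10000000]; }
    }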

    You could also get some fancy memory profiling software, though I don’t know if these work with Unity.
    They might try to give you numbers on the memory used by your objects. But I have seen them fail at it.

  18. Nick Powell says:

    For every annoyance it puts on me, it relieves a couple of longstanding C++ headaches

    See, this is why I like C#. The generic collections/LINQ library in particular completely trivialises so many things that are really awkward in C++. You want to sort a list of soldiers alphabetically by their names, then filter out the ones who are further than 100m from the player, then get the intersection of that set with another set from elsewhere in the program? That can be one line of LINQ and lambdas.

    In C++ you’d end up needing 30 lines of looping and potentially custom comparison objects and iterator wizardry. More importantly though, you’d have to stop what you’re doing and look up the exact syntax necessary to do each of those things with the specific container class(es) you’re using.
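
    For instance, something along these lines (Soldier, soldiers, player and otherSet are made-up names; Intersect uses the default equality comparer, so the two sequences need a shared notion of equality):

    using System.Collections.Generic;
    using System.Linq;
    using UnityEngine; // for Vector3.Distance

    // ...
    List<Soldier> shortlist = soldiers
        .OrderBy(s => s.Name)                                               // sort alphabetically
        .Where(s => Vector3.Distance(s.Position, player.Position) <= 100f)  // keep soldiers within 100m
        .Intersect(otherSet)                                                // overlap with the other set
        .ToList();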

    1. Rymdsmurfen says:

      And on top of all those benefits the code also becomes very readable, almost to the point that a non-programmer could figure out what it is doing.

    2. The Snide Sniper says:

      Lambdas do fix most of the usual C++ headaches, yes. However, you should be aware that lambdas are in C++11!

      Combining that with STL makes your example rather trivial to implement:

      #include <algorithm>
      #include <cstring>
      #include <list>
      // Assuming "list" is a std::list<Item>, which provides member sort() and remove_if().
      // Sort alphabetically by name (compare strcmp against 0; "< 1" would not be a strict weak ordering).
      list.sort([] (const Item & i1, const Item & i2) -> bool {
        return std::strcmp(i1.name, i2.name) < 0;
      });
      // Remove if too far away...
      list.remove_if([player] (const Item & i) -> bool {
        return magnitude(i.position - player.position) > 100;
      });
      // Intersection: drop anything NOT also present in "otherlist" (inefficient, due to lack of assumptions about "otherlist")
      list.remove_if([&otherlist] (const Item & i) -> bool {
        return std::find(otherlist.begin(), otherlist.end(), i) == otherlist.end();
      });

      You do have a point about needing to know the syntax, but there is an argument to be made that one should look up functions with which one is unfamiliar, rather than making assumptions. Which side of the argument I’d support depends on what the programmer is trying to do, however. A programmer who just wants to estimate something (eg. Shamus) probably wants the quick-and-dirty (possibly non-functional) solution. A programmer who wants to write code for future use should be careful to use functions exactly as documented.

      1. Blake says:

        C++20 will likely include Ranges, https://ericniebler.github.io/range-v3/ which will simplify this further bringing it closer to the C# style syntax.

        Would end up something along the lines of

        using namespace std;

        vector<Soldier> soldiers = SomethingReturningTheList();
        soldiers |= action::sort([](auto& lhs, auto& rhs) { return lhs.name < rhs.name; });
        auto filteredList = soldiers | view::remove_if([&](auto& s) { return Distance(s, player) > 100 || !OtherList.Contains(s); });

        If Abbreviated Lambdas: http://open-std.org/JTC1/SC22/WG21/docs/papers/2017/p0573r2.html made it too, it’d be more like:

        soldiers |= action::sort((lhs, rhs) => lhs.name < rhs.name);
        auto filteredList = soldiers | view::remove_if((s) => Distance(s, player) > 100 || !OtherList.Contains(s));

  19. turcurudin (Dave B.) says:

    Q: I’m looking for some place where people actually give helpful answers to technical questions. Is there anywhere that isn’t full of unhelpful jerks?

    EDIT: Nevermind, I figured it out.

    ;)

  20. Dev Null says:

    My contempt for these sorts of people is boundless. I’m sorry, forum idiots, but we live in a universe of finite resources…

    Except for contempt. Contempt, clearly and demonstrably, is and will always be infinite.

  21. Darker says:

    Shamus, you were actually on the right track with Marshal.SizeOf. The only thing you needed to do was to add the [StructLayout(LayoutKind.Sequential)] attribute to Widget and all of its base classes. Technically adding this attribute could affect the size of the object due to the compiler no longer being able to freely reorder the fields, but it seems it would be good enough for your purposes.
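
    Something like this minimal sketch (the Widget fields are invented; Marshal.SizeOf reports the unmanaged, marshaled size, which is only an approximation of the managed heap footprint since the CLR adds object headers and may pad differently):

    using System;
    using System.Runtime.InteropServices;

    // Fixed layout so the marshaler can compute a size.
    [StructLayout(LayoutKind.Sequential)]
    class Widget
    {
        public int Id;
        public float X, Y, Z;
        public double Weight;
    }

    class Program
    {
        static void Main()
        {
            Console.WriteLine(Marshal.SizeOf(typeof(Widget)));
        }
    }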

  22. marty says:

    Grilling According to StackExchange, a short play:

    Any recommendations for a meat-thermometer?
    What are you cooking?
    I picked up some 12 oz rib eye steaks that I plan on grilling to about 130 internal temp.
    130 Celsius? You shouldn’t be cooking steaks to that temperature.
    12 oz rib eyes are wrong, you shouldn’t grill any steak that’s over 8 oz.
    A real chef knows how long to grill a steak and doesn’t need a thermometer.
    Why are you grilling a steak? You should use the sous vide method. It’ll be the right temperature.
    Use lump charcoal for grilling, you don’t need lighter-fluid, just a smoke-chimney.
    You’ve forgotten about carry-over, those steaks are going to be well done by the time you eat them if you pull them off the grill at 130.
    Cook steaks for 4 minutes a side in a skillet over a high heat.
    You don’t need a thermometer, just prod the steak with tongs, you’ll be able to feel when it’s done.
    Grilling meat is inefficient, look into curing it in a brine and spice mixture.
    Your problem is you are concerned with the temperature of the meat and not the sear on it.
    It’s not a good idea to eat meat not cooked to an internal temperature of 165 degrees, bacteria etc.
    Are the steaks dry-aged? Because you might be up a creek if you’re even thinking about cooking steaks that aren’t dry-aged.
    What’s wrong with your current meat-thermometer?

    1. Shamus says:

      Perfect. This is so real I got mad at your fictional chefs.

      Well done.

      1. Douglas Sundseth says:

        Nobody with any taste eats steak “well done”.

        8-)

        1. Droid says:

          Online advice is like a medium steak, neither rare nor well done.

          1. marty says:

            That’s definitely sensiblechuckle.gif worthy.

            1. saj14saj says:

              I was the number one user on the Stack Exchange cooking site for a few years, and am still number two despite not having posted in several years. Long live the long tail.

              I just want to say….

              A rib eye is a great steak.

              SA itself doesn’t do specific equipment recommendations, so I would have put this in a comment rather than an answer: I recommend the Thermapen by ThermoWorks, as it is fast, very accurate, and easy to read. But it can be expensive, and cannot be ordered from Amazon last I checked.

              For a more budget-minded and inexpensive thermometer, Polder makes a good model, both the probe kind and the instant-read kind.

              130 C will absolutely make your steak inedible, but from context it would be clear that the intent was 130 F, which is entirely reasonable… On SA, I would have edited the question to reflect that.

              As mentioned, an experienced cook who does steak frequently can tell the doneness by touch, but a good thermometer is reliable, fast, and not expensive.

              . . . . .

              Truthfully, I don’t understand all the vitriol on measuring the size of .NET objects. Both sides have elements of truth to their viewpoint, and the right approach only becomes clear in the full context of a specific problem or issue. In my world, where I have 100k identity objects in memory at once, size does matter, but I can estimate it and see the overall program usage.

  23. @Shamus I don’t use Visual Studio, but can’t you examine an object via that and see the “size” of an object?

    BTW! Are you going to add a glass effect to the windows of the buildings? This would make them glitter in the dark; a single global light source could probably be used to save processing.
    Example: https://www.youtube.com/watch?v=b9RhwlgWf4s
    (and why do these videos always have obnoxious music trying to seem edgy?)

  24. Kyte says:

    The people above already made a pretty good case for why measuring is of limited use, but I’ll add another reason:

    The size of your object often doesn’t matter at all, because the memory manager may try to recycle memory blocks.

    If you request a 1MB memory block, free it, then request a 1KB memory block, then, depending on the specific allocator implementation, the memory manager may give you a brand new 1KB memory block, the previous 1MB memory block or possibly some random 6KB memory block it had floating around in its “small block” pool.

    This means the actual size of your things will depend on how and when and in which order you make and delete things.

    1. Blake says:

      To me that doesn’t sound like the size is changing, merely how much memory it’s using, and making things smaller would still increase the chances of it ending up in a small block. Size is still relevant.

  25. Dreadjaws says:

    If we had infinite processing power and could re-create the objects for free, then there would be no reason to keep them around. But in this universe both of these resources are finite, so we need to study the problem to know what the right thing to do is.

    Obviously, you need to gather the infinity gems and use them to erase half the objects so they stop using resources.

  26. Dreadjaws says:

    My contempt for these sorts of people is boundless.

    I can relate. I absolutely hate unhelpful answers, but I hate them even more when they treat you like a jerk for daring to ask a genuine question.

    Question: “Hey, guys. I have a problem running [game] in my rig, even though I have all the requirements. I have an i3 3300 CPU, 6 GB DDR4 RAM, a GeForce GT 690 and a 3TB HDD running Windows 10.”
    Answer 1: “Windows 10 is your problem. Get Windows 8.”
    Answer 2: “Who cares? [Game] sucks anyway. Play [other game] instead.”
    Answer 3: “What kind of an idiot only has 6 GB RAM? I run 24 GB RAM in my rig, like any real gamer would.”
    Answer 4: “Get an Xbox One.”
    Answer 5: ” My PC is completely different, but it works perfectly fine for me. You must be doing something wrong.”
    Answer 6: “Get a real rig. What you have is basically a typewriter. If you can’t afford it, then you shouldn’t be playing anyway.”
    Answer 7: “LOL. Doesn’t know how to run [game], even though it’s the easiest thing in the world. Go play gameboy instead, lol”
    Answer 8: “Pirate Bay.”
    Answer 9: “[Game] has compatibility issues with the 3300 series, you need to download a patch from the website.”
    Answer 10: “You suck. Try sucking less.”

    Even if there is a decent answer you might miss it in a sea of hate, or give up before finding it. Why can’t they just be nice about it? Even if they know it won’t be of much use. “You know, this isn’t really going to be of much help, but here you go, this is how you do it. I’ll just warn you, don’t expect perfect results”. Is that so hard? I guess being an asshole is much easier.

    1. Daemian Lucifer says:

      Answer 8: “Pirate Bay.”

      Sad thing is, that often helps. There are too many games where the pirated version works better than the official one.

  27. Nick Johnson says:

    Did you get any answers which basically amounted to “Use the profiler”? Because that probably is the right answer here. sizeof probably isn’t going to help you.

  28. Kylroy says:

    Ah, Shamus circles back to this old problem:

    https://shamusyoung.com/twentysidedtale/?p=21365

    If your proposed solution to a simple problem involves more work than rebuilding from scratch, it’s not a viable solution.

  29. Geebs says:

    I can’t help but feel that the C# approach to memory management (very ably and clearly described by a lot of the posters above) contributes to why game programmers stuck with C++ for so long.

    Realtime 3D programming outside of a game engine involves a lot of writing directly to memory and hard limits on RAM which are imposed by whatever video hardware the user happens to have. Couple that with 60 fps and you’re talking about millions if not billions of operations per second. If you’re also using the GPU for other computing functions at the same time, you’re trying to shuttle resources back and forth across a tiny bottleneck. What that adds up to is stuff like having your performance completely tank over a few megabytes of data, or having your procedural algorithm eat your computer’s entire memory in a few seconds and render the entire machine unusable. Also lots and lots and lots of “undefined behaviour” if you mess up your memory mapping.

    I guess I’m trying to say that, especially in the context of realtime 3D, I simply can’t understand people who don’t want to know about memory allocations; and I suspect a lot of the more dismissive forum posts are from people who are making some totally other class of app where memory is not their problem.

  30. Decius says:

    If you know what you want to look up (iteratively and/or recursively), and how to get the values that you want to end up adding, and you’re a programmer, is it hard?

  31. kdansky says:

    When I want to know about memory consumption of a desktop app, the easiest way is the following:

    1. Open task manager, look at your memory footprint.
    2. Create ten thousand of your objects in question.
    3. Look at the memory footprint again.
    4. The difference divided by 10k is roughly how much memory one object costs.

    This is an estimation, and that’s good enough for the use case of estimating memory footprint. If it’s not sufficiently informative, then the correct way to go about it is to get out the profiling tools instead of doing arithmetic: You will always make assumptions about how the compiler works, and unless you implemented the compiler you will be wrong a lot.
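
    A programmatic version of that recipe, as a rough sketch (Widget is a made-up stand-in; the working set includes whatever the runtime has already reserved, so treat the result as a ballpark at best):

    using System;

    class Widget { public int Id; public double X, Y, Z; }    // hypothetical object under test

    class Program
    {
        static void Main()
        {
            const int Count = 10000;
            long before = Environment.WorkingSet;             // process memory, roughly what Task Manager shows

            var widgets = new Widget[Count];
            for (int i = 0; i < Count; i++)
                widgets[i] = new Widget();

            long after = Environment.WorkingSet;
            Console.WriteLine("~" + (after - before) / Count + " bytes per Widget (very rough)");

            GC.KeepAlive(widgets);                            // keep the array alive until after the measurement
        }
    }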

    I am a bit annoyed at you, Shamus. You keep writing about optimising and memory usage, and I do not remember you ever using the sentence “I used the profiler” – which is step ZERO when doing optimising work. ETWTrace and the VS Profiler are your primary tools for this job.

    Modern libraries are not honest about memory usage because even the libraries and the compiler do not know before run-time. This is a deliberate trade-off taken by most systems because it results in better performance and small footprints, and profilers beat programmers hands down every time.

    1. Shamus says:

      ” Which is step ZERO when doing optimising work.”

      Except, I wasn’t DOING optimizing work. I was curious about the memory cost of objects. In C++ this is trivial, and in C# it’s possible but poorly documented and discouraged through reflexive orthodoxy.

      Yes, I could have used task manager. But like adding up the fields by hand, this is me doing work to find out something that ought to be fairly trivial to find out with the tools I was already using.

      1. Shamus says:

        Thinking on this more: It seems like this “optimization” thing is what is pissing people off. They see me worrying about memory usage and think, “Oh, you must be OPTIMIZING!” But to me, knowing the size of a struct is just basic situational awareness. To me, asking about the size of an array (or the size of a struct of basic types) is just a simple question, like asking “Is this variable an unsigned short or a long int?” Imagine if you wanted to know the type of an object and everyone told you to use specialized compiler markup, a profiler, or task manager.

        1. Lanthanide says:

          To me, asking about the size of an array (or the size of a struct of basic types) is just a simple question, like asking “Is this variable an unsigned short or a long int?”

          And that is completely reasonable.

          However, what you said is this:

          Let’s assume, for the sake of argument, that these objects have a non-trivial size and also require a non-trivial bit of processing power to create. We create lots of them, we use most of them at startup, and then as the program runs we occasionally need a few of them. (But we can’t predict which ones ahead of time.)

          So it comes down to a difference of degree. People, such as myself, think your first question is completely reasonable to be able to ‘know’. But your second example is much more complex, and in the realm of C#, there is no good answer that will be correct 100% of the time, which is why the C# experts are hesitant to provide the general-case answer that you’re looking for – because there is no general-case answer.

          The answer that you referred to as being “pure dogma” ends with the statement “If you really need to know more about the size of this object internally, can you let us know why? This may make things clearer :)”. So if it were YOU asking the question on that forum, I think you’d reply and explain your situation above, and Dave Kerr, who seems like a really nice and helpful guy, probably would have helped you in the direction that you wanted.

          But if you read the rest of the thread, the questioner seems to be interested in how long it takes the system to actually allocate the memory itself, rather than being concerned with the size it takes up. So his question was not YOUR question, and so the forum thread was not helpful for you. And someone does reply in that same thread about marshalling etc, which was the thread you needed to follow to reach your eventual answer anyway.

          So to me, this really honestly looks like “newbie to a technology doesn’t fully understand the new context they’re working in, and assumes everyone else is being deliberately unhelpful, when actually it’s their own limitations that prevent them from fully understanding the answers they’re being told”. This is compounded because it was not you asking YOUR question, so you couldn’t clarify and justify your specific use-case, which IMO is a completely reasonable thing to want to know and do (to a very rough approximation).

          I also think this is another version of the problem you documented last week with the Voronoi diagram. You were asking questions using the language of an old domain (C/C++ memory management), in a new domain (C#), and being frustrated by the answers appearing to be really unhelpful, when actually the way memory management works in C# is very different – by design. If instead you had googled “marshalling in C#”, you would have gotten much more relevant answers to what you wanted to achieve – only you didn’t know that to start with.
