Lumberyard memory management

0

Hi, I'm trying to evaluate whether Lumberyard fits my needs, and I'm very interested in the C++ side, especially the engine's memory management. This is the only page I've found in the documentation and, to be honest, it is quite shallow and lacks any technical details.

I would like to know how this engine handles memory allocations in C++. Does it have any allocators/memory pools ready to use? Is every allocation/deallocation in the hands of the programmer, or does it mimic Unreal with its awful, slow garbage collector attached to C++ like a parasite?

Thanks for any help or direction you can provide :)

asked 6 years ago · 179 views
6 Answers
0
Accepted Answer

I'm not a Lumberyard dev, and I'm not too familiar with how Unreal handles memory allocations, but I believe the answer is that Lumberyard does offer a lot of control over C++ memory allocations. I agree there's not a lot of information on how the different allocator types work, though, even in the actual code. I haven't benchmarked the Lumberyard memory allocation system either, but in theory it should be fast and work well with multiple threads. Lumberyard ships with a few default allocators you can use as-is, but in a multi-threaded environment you may prefer creating additional allocators to reduce contention. The pool allocators in Lumberyard are probably much faster than a generic garbage collector because they avoid most synchronization, though I'd guess memory usage is higher as a trade-off. There is also some garbage-collection functionality in the Lumberyard allocators, based on what I've seen in the code.

If you look in /dev/Code/Framework/AzCore/AzCore/Memory/PoolAllocator.h you can get a general idea of how to create additional allocators; the example below defines pool allocators for small objects. Nearly every class has an AZ_CLASS_ALLOCATOR macro, which overloads all of that class's new/delete calls to use a particular allocator. You may also want to use "aznew" instead of "new" when allocating, as recommended somewhere in the code; I believe it adds some extra debugging information depending on compile settings, though I haven't looked into the specifics.

I usually use my own pool allocator base class just to cut out some needless redundancy, such as providing a description for every unique allocator, and to shorten the name of the class being inherited. The InitPool function at the end is a custom helper I use to configure an allocator at creation time; in my case the allocator only ever served a single class type, so the min and max allocation sizes were always the same. I believe 4096 is the default/maximum page size for pool allocators, and you can pre-allocate a few static pages in the Create call if needed. Create usually needs to be called during System Component initialization, or at least before the allocator is ever used, for obvious reasons.

    namespace YourNamespace
    {
        // Shared base class so each pool allocator doesn't have to repeat
        // the description boilerplate.
        class PoolAllocatorBase
            : public AZ::ThreadPoolBase<AZ::ThreadPoolAllocator>
        {
        public:
            const char* GetDescription() const override
            {
                return "Generic thread safe pool allocator for small objects";
            }
        };

        class YourPoolAllocator
            : public PoolAllocatorBase
        {
        public:
            AZ_TYPE_INFO(YourPoolAllocator, "{Insert Unique UUID Here}")
            AZ_CLASS_ALLOCATOR(YourPoolAllocator, AZ::SystemAllocator, 0)

            const char* GetName() const override
            {
                return "YourNamespace::YourPoolAllocator";
            }
        };

        // AZ_CLASS_ALLOCATOR routes this class's new/delete calls
        // through YourPoolAllocator.
        class YourClass
        {
        public:
            AZ_TYPE_INFO(YourClass, "{Insert Unique UUID Here}")
            AZ_CLASS_ALLOCATOR(YourClass, YourPoolAllocator, 0)
            // ...
        };

        // Helper to create and configure a pool allocator that only ever
        // serves allocations of a single type.
        template<typename PoolName, typename PoolData>
        void InitPool(AZ::u32 numPages = 0)
        {
            typename PoolName::Descriptor cDesc;
            cDesc.m_pageSize = 4096;
            cDesc.m_numStaticPages = numPages;
            cDesc.m_minAllocationSize = sizeof(PoolData);
            cDesc.m_maxAllocationSize = sizeof(PoolData);
            AZ::AllocatorInstance<PoolName>::Create(cDesc);
        }
    } // namespace YourNamespace
answered 6 years ago
0

Thanks for the feedback, I'm looking forward to hearing more :D

answered 6 years ago
0

Lumberyard's memory management does employ a garbage collector that the allocators make use of. Generally, each class that uses an allocator can customize how it interacts with memory through schemas such as the ones I detailed, and potentially custom schemas if you wanted to go that route. That would be a way to handle more specific use cases if what's available isn't quite right for you. In that sense, there is a real degree of control and customization while staying within the bounds of the provided memory management. Personally, I recommend using the built-in allocators, since they were designed with optimizing memory access in mind, but you are certainly free to use other approaches.

answered 6 years ago
0

Hey @REDACTEDUSER

I have also passed along your feedback on the documentation -- thank you for this note! It is greatly helpful for the teams working on improving those aspects :)

answered 6 years ago
0

This is a great and thorough analysis, but I see there's a common question about the types of allocators.

You can find the definitions for most of the allocator types in dev\Code\Framework\AzCore\AzCore\Memory. Here's a quick breakdown of the schemas.

  • Hpha is the default and is capable of handling small and large allocations alike.
  • Heap is designed for multithreaded use and is consequently nedmalloc based.
  • BestFit is used for heavy resource management and consequently bookkeeps outside of the managed memory. GPU resources are a good example of this.
  • Child, as the name implies, lets you child an allocator to another. An example would be for tracking purposes on the parent.
  • Pool uses a small block allocator expressly for small allocations. There's also ThreadPool, which uses thread-local storage.

I'd also recommend using aznew over new, as it goes through the allocators. It's not required, but certainly recommended.
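To illustrate the Child schema's tracking idea in isolation, here is a minimal self-contained sketch: a child allocator that forwards every request to its parent while keeping its own bookkeeping on the side (here, a live-byte count). The names ParentAllocator and TrackingChildAllocator are invented; this does not reflect the real AzCore child-allocator API.

```cpp
#include <cstddef>
#include <cstdlib>

// Stand-in for a parent allocator; the real one would be any other
// allocator in the hierarchy.
struct ParentAllocator
{
    void* Allocate(std::size_t size) { return std::malloc(size); }
    void  Deallocate(void* ptr)      { std::free(ptr); }
};

// The child adds bookkeeping without owning any memory itself: it
// prepends a small size header so it can account for frees, then
// delegates the actual allocation to the parent.
class TrackingChildAllocator
{
public:
    explicit TrackingChildAllocator(ParentAllocator& parent)
        : m_parent(parent)
    {
    }

    void* Allocate(std::size_t size)
    {
        // Header keeps 8-byte alignment for the user region; a real
        // allocator would honor arbitrary alignment requests.
        void* raw = m_parent.Allocate(size + sizeof(std::size_t));
        if (!raw)
        {
            return nullptr;
        }
        *static_cast<std::size_t*>(raw) = size; // remember the size
        m_liveBytes += size;
        return static_cast<std::size_t*>(raw) + 1;
    }

    void Deallocate(void* ptr)
    {
        std::size_t* header = static_cast<std::size_t*>(ptr) - 1;
        m_liveBytes -= *header;
        m_parent.Deallocate(header);
    }

    std::size_t LiveBytes() const { return m_liveBytes; }

private:
    ParentAllocator& m_parent;
    std::size_t m_liveBytes = 0;
};
```

The appeal of this pattern is that the parent needs no changes at all: any subsystem that wants its own memory statistics just allocates through its own child.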
answered 6 years ago
0

Epic decided that, in order to make C++ more accessible to a larger audience, they needed to add a garbage collector to it. Long story short, every class that inherits from UObject is tracked by Unreal's garbage collector, which runs every once in a while. Unfortunately, their C++ is tied to it, it cannot be turned off, and it can be a strong hit on performance.

That's why I'm interested in whether Lumberyard uses a similar strategy. From your reply it seems to leave control of allocations, deallocations, and memory management in the hands of the programmer; am I right? If so, I really like it.

answered 6 years ago

This post is closed: Adding new answers, comments, and votes is disabled.