It's the price of eliminating memory fragmentation, I guess.
I think this warrants a thorough analysis. This is the first time the new memory management system has been used, and we barely know enough about its real-life performance, let alone the constraints under which it operates.
I do not know what causes this slowdown, and I did not expect it either. Allocating and releasing memory should take (roughly) the same amount of time regardless of the size of the allocation, as long as it fits within the allocator's page size. There must be more to it.
I believe that's as fast as it can be; any optimisations will be ones in the browser.
The new memory management (it's called a "slab allocator") is good at keeping fragmentation at bay. But we have barely begun testing it; it's hardly even a week old to begin with.
There is plenty of room for "tuning" the new memory management. Tuning requires collecting data on how the memory management is used, for later analysis. To this end there is an interface in the clib2 runtime library which NetSurf uses. It would be nice if memory usage information could be collected through that interface and stored in log files.