Timestamp: Jun 12, 2012, 7:06:50 PM
Author: [email protected]
Message:

GC should be 1.7X faster
https://bugs.webkit.org/show_bug.cgi?id=88840

Reviewed by Oliver Hunt.

I profiled, and removed anything that showed up as a concurrency
bottleneck. Then, I added 3 threads to our max thread count, since we
can scale up to more threads now.

  • heap/BlockAllocator.cpp:

(JSC::BlockAllocator::BlockAllocator):
(JSC::BlockAllocator::~BlockAllocator):
(JSC::BlockAllocator::releaseFreeBlocks):
(JSC::BlockAllocator::waitForRelativeTimeWhileHoldingLock):
(JSC::BlockAllocator::waitForRelativeTime):
(JSC::BlockAllocator::blockFreeingThreadMain):

  • heap/BlockAllocator.h:

(BlockAllocator):
(JSC::BlockAllocator::allocate):
(JSC::BlockAllocator::deallocate): Use a spin lock for the common case
where we're just popping a linked list. (A pthread mutex would sleep our
thread even if the lock were only contended for a microsecond.)

Scope the lock to avoid holding it while allocating VM, since that's a
slow activity and it doesn't modify any of our data structures.

We still use a pthread mutex to handle our condition variable since we
have to, and it's not a hot path.
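
As a rough illustration of the locking split described above (not the
actual BlockAllocator code, which uses WTF's TCSpinLock and manages
DeadBlock lists), a simplified sketch might look like this; the block
size, names, and the one-second wait are assumptions of the sketch:

    #include <atomic>
    #include <chrono>
    #include <condition_variable>
    #include <mutex>

    struct FreeNode { FreeNode* next; };

    class ToyBlockAllocator {
    public:
        static const size_t blockSize = 64 * 1024; // assumed block size

        void* allocate()
        {
            {
                SpinLocker locker(m_spinLock); // fast path: a few instructions
                if (FreeNode* node = m_freeList) {
                    m_freeList = node->next;
                    return node;
                }
            }
            // Slow path: get fresh memory *outside* the lock, since that's
            // slow and touches none of our shared data structures.
            return ::operator new(blockSize);
        }

        void deallocate(void* block)
        {
            SpinLocker locker(m_spinLock); // just pushing onto a linked list
            FreeNode* node = static_cast<FreeNode*>(block);
            node->next = m_freeList;
            m_freeList = node;
        }

        // Background freeing thread: the only place that needs a real mutex,
        // and only because the condition variable requires one. Not hot.
        void blockFreeingThreadMain()
        {
            while (!m_done) {
                std::unique_lock<std::mutex> lock(m_freeingMutex);
                m_freeingCondition.wait_for(lock, std::chrono::seconds(1));
                releaseFreeBlocks();
            }
        }

    private:
        struct SpinLocker {
            explicit SpinLocker(std::atomic_flag& flag) : m_flag(flag)
            {
                while (m_flag.test_and_set(std::memory_order_acquire)) { }
            }
            ~SpinLocker() { m_flag.clear(std::memory_order_release); }
            std::atomic_flag& m_flag;
        };

        void releaseFreeBlocks()
        {
            FreeNode* blocks;
            {
                SpinLocker locker(m_spinLock);
                blocks = m_freeList;
                m_freeList = nullptr;
            }
            while (blocks) { // release memory outside the lock
                FreeNode* next = blocks->next;
                ::operator delete(blocks);
                blocks = next;
            }
        }

        std::atomic_flag m_spinLock = ATOMIC_FLAG_INIT;
        FreeNode* m_freeList { nullptr };
        std::mutex m_freeingMutex;
        std::condition_variable m_freeingCondition;
        std::atomic<bool> m_done { false };
    };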

  • heap/CopiedSpace.cpp:

(JSC::CopiedSpace::CopiedSpace):
(JSC::CopiedSpace::doneFillingBlock):

  • heap/CopiedSpace.h:

(JSC::CopiedSpace::CopiedSpace): Use a spin lock for the to-space lock,
since it just guards linked list and hash table manipulation.

  • heap/MarkStack.cpp:

(JSC::MarkStackSegmentAllocator::MarkStackSegmentAllocator):
(JSC::MarkStackSegmentAllocator::allocate):
(JSC::MarkStackSegmentAllocator::release):
(JSC::MarkStackSegmentAllocator::shrinkReserve): Use a spin lock, since
we're just managing a linked list.

(JSC::MarkStackArray::donateSomeCellsTo): Changed donation to be proportional
to our current stack size. This fixes cases where we used to donate too
much. Interestingly, donating too much was starving the donor (when it
ran out of work later) *and* the recipient (since it had to wait on a
long donation operation to complete before it could acquire the lock).

In the worst case, we're still guaranteed to donate N cells in roughly log N time.

This change also fixes cases where we used to donate too little, since
we would always keep a fixed minimum number of cells. In the worst case,
with N marking threads, we could have N large object graph roots in
our stack for the duration of GC, and scale to only 1 thread.

It's an interesting observation that a single object in the mark stack
might represent an arbitrarily large object graph -- and only the act
of marking can find out.
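
As a rough sketch of this proportional policy (simplified to a flat
vector, whereas the real MarkStackArray is segmented; the names and the
small floor constant here are assumptions):

    #include <cstddef>
    #include <vector>

    typedef void* Cell; // stand-in for a pointer to a heap cell

    // Donate a share proportional to our current size. A large stack gives
    // away half of its surplus, so repeated donation drains N cells in
    // roughly log N rounds, while a nearly empty stack donates nothing.
    void donateSomeCellsTo(std::vector<Cell>& mine, std::vector<Cell>& other)
    {
        const size_t cellsToKeep = 16; // assumed small floor for the donor
        if (mine.size() <= cellsToKeep)
            return;
        size_t cellsToDonate = (mine.size() - cellsToKeep) / 2;
        other.insert(other.end(), mine.end() - cellsToDonate, mine.end());
        mine.resize(mine.size() - cellsToDonate);
    }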

(JSC::MarkStackArray::stealSomeCellsFrom): Steal in proportion to idle
threads. Once again, this fixes cases where constants could cause us
to steal too much or too little.
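
A matching sketch of proportional stealing (same simplified vector
stand-in; the real code works on the segmented MarkStackArray):

    #include <cstddef>
    #include <vector>

    typedef void* Cell;

    // Steal a share sized by how many threads currently have no work, so a
    // busy thread's surplus is split fairly instead of by a fixed constant.
    void stealSomeCellsFrom(std::vector<Cell>& mine, std::vector<Cell>& victim,
                            size_t idleThreadCount)
    {
        if (victim.empty())
            return;
        size_t cellsToSteal = victim.size() / (idleThreadCount + 1);
        if (!cellsToSteal)
            cellsToSteal = 1; // always make progress if the victim has work
        mine.insert(mine.end(), victim.end() - cellsToSteal, victim.end());
        victim.resize(victim.size() - cellsToSteal);
    }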

(JSC::SlotVisitor::donateKnownParallel): Always wake up other threads
if they're idle. We can afford to do this because we're conservative
about when we donate.

(JSC::SlotVisitor::drainFromShared):

  • heap/MarkStack.h:

(MarkStackSegmentAllocator):
(MarkStackArray):
(JSC):

  • heap/SlotVisitor.h: Merged the "should I donate?" decision into a single function, for simplicity.

  • runtime/Options.cpp:

(minimumNumberOfScansBetweenRebalance): Reduced the delay before donation
a lot. We can afford to do this because, in the common case, donation is
a single branch that decides not to donate.
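
Sketched concretely (hypothetical names, not the actual SlotVisitor code),
the common-case cost of the merged check is one decrement and one untaken
branch per visited cell:

    #include <cstddef>

    class ToyVisitor {
    public:
        void didVisitACell()
        {
            if (--m_scansUntilRebalance) // common case: a single branch
                return;
            m_scansUntilRebalance = s_scansBetweenRebalance;
            donateKnownParallel(); // rare: consider waking threads and donating
        }

    private:
        void donateKnownParallel() { /* wake idle threads, donate a share */ }

        static const size_t s_scansBetweenRebalance = 100; // assumed; tunable via Options
        size_t m_scansUntilRebalance { s_scansBetweenRebalance };
    };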

(cpusToUse): Use more CPUs, since we scale better now.

  • runtime/Options.h:

(Options): Removed now-unused variables.

File: 1 edited

  • trunk/Source/JavaScriptCore/heap/MarkStack.h

--- trunk/Source/JavaScriptCore/heap/MarkStack.h (r119633)
+++ trunk/Source/JavaScriptCore/heap/MarkStack.h (r120149)
@@ -29,6 +29,6 @@
 #include "CopiedSpace.h"
 #include "HandleTypes.h"
+#include "JSValue.h"
 #include "Options.h"
-#include "JSValue.h"
 #include "Register.h"
 #include "UnconditionalFinalizer.h"
@@ -39,9 +39,10 @@
 #include <wtf/HashMap.h>
 #include <wtf/HashSet.h>
-#include <wtf/Vector.h>
 #include <wtf/Noncopyable.h>
 #include <wtf/OSAllocator.h>
 #include <wtf/PageBlock.h>
+#include <wtf/TCSpinLock.h>
 #include <wtf/text/StringHash.h>
+#include <wtf/Vector.h>
 
 #if ENABLE(OBJECT_MARK_LOGGING)
@@ -113,5 +114,5 @@
 
     private:
-        Mutex m_lock;
+        SpinLock m_lock;
         MarkStackSegment* m_nextFreeSegment;
     };
@@ -130,8 +131,7 @@
         bool isEmpty();
 
-        bool canDonateSomeCells(); // Returns false if you should definitely not call doanteSomeCellsTo().
-        bool donateSomeCellsTo(MarkStackArray& other); // Returns true if some cells were donated.
-
-        void stealSomeCellsFrom(MarkStackArray& other);
+        void donateSomeCellsTo(MarkStackArray& other);
+
+        void stealSomeCellsFrom(MarkStackArray& other, size_t idleThreadCount);
 
         size_t size();
@@ -415,20 +415,4 @@
     }
 
-    inline bool MarkStackArray::canDonateSomeCells()
-    {
-        size_t numberOfCellsToKeep = Options::minimumNumberOfCellsToKeep;
-        // Another check: see if we have enough cells to warrant donation.
-        if (m_top <= numberOfCellsToKeep) {
-            // This indicates that we might not want to donate anything; check if we have
-            // another full segment. If not, then don't donate.
-            if (!m_topSegment->m_previous)
-                return false;
-
-            ASSERT(m_topSegment->m_previous->m_top == m_segmentCapacity);
-        }
-
-        return true;
-    }
-
     inline size_t MarkStackArray::size()
     {