Timestamp: Aug 25, 2014, 3:35:40 PM (11 years ago)
Author: [email protected]
Message:

FTL should be able to do polymorphic call inlining
https://bugs.webkit.org/show_bug.cgi?id=135145

Reviewed by Geoffrey Garen.
Source/JavaScriptCore:


Added a log-based high-fidelity call edge profiler that runs in DFG JIT (and optionally
baseline JIT) code. Used it to do precise polymorphic inlining in the FTL. Potential
inlining sites use the call edge profile if it is available, but they will still fall back
on the call inline cache and rare case counts if it's not. Polymorphic inlining means that
multiple possible callees can be inlined with a switch to guard them. The slow path may
either be an OSR exit or a virtual call.
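
To make this concrete, here is a hypothetical JavaScript reduction (not one of this patch's tests) of the kind of call site this helps. The call to f() has two hot callees; with a call edge profile the FTL can inline both behind a switch on the callee, with an OSR exit or a virtual call as the slow path:

    // Hypothetical example: bar()'s call site sees both add1 and add2.
    function add1(x) { return x + 1; }
    function add2(x) { return x + 2; }

    function bar(f, x) { return f(x); } // polymorphic call site

    for (var i = 0; i < 1000000; ++i)
        bar(i & 1 ? add1 : add2, i);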

The call edge profiling added in this patch is very precise: it records every
call that has ever happened. It took some effort to reduce the overhead of this profiling.
This mostly involved ensuring that we don't do it unnecessarily. For example, we avoid it
in the baseline JIT (you can conditionally enable it but it's off by default) and we only do
it in the DFG JIT if we know that the regular inline cache profiling wasn't precise enough.
I also experimented with reducing the precision of the profiling. This led to a significant
reduction in the speed-up, so I avoided that approach. I also explored making log processing
concurrent, but that didn't help. Finally, I measured the log processing itself and found that
most of the overhead of this profiling is in appending to the log rather than in processing
it; the processing turns out to be surprisingly cheap.
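
A rough model of that split, in JavaScript for illustration only (the real machinery is the C++ CallEdgeLog/CallEdgeProfile code added below; every name in this sketch is invented): the JIT fast path merely appends (call site, callee) pairs to a bounded log, and aggregation into per-call-site edge counts happens only when the log fills:

    // Illustrative sketch, not the real data structures.
    var LOG_CAPACITY = 10000;
    var log = [];                 // fast path: just append
    var profiles = new Map();     // call site -> Map(callee -> count)

    function recordCallEdge(callSite, callee) {
        log.push([callSite, callee]);   // this append dominates the overhead
        if (log.length >= LOG_CAPACITY)
            processLog();               // aggregation; surprisingly cheap
    }

    function processLog() {
        for (var i = 0; i < log.length; ++i) {
            var site = log[i][0], callee = log[i][1];
            var edges = profiles.get(site) || new Map();
            edges.set(callee, (edges.get(callee) || 0) + 1);
            profiles.set(site, edges);
        }
        log.length = 0;
    }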

Polymorphic inlining could be enabled in the DFG if we enabled baseline call edge profiling,
and if we guarded such inlining sites with some profiling mechanism to detect
polyvariant monomorphisation opportunities (where the callsite being inlined reveals that
it's actually monomorphic).
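
A hypothetical example of such a polyvariant opportunity: a call site that is polymorphic in aggregate becomes monomorphic once its enclosing function is inlined into a particular caller, which is what that guard would detect:

    // Hypothetical example. foo()'s call to f() is polymorphic in aggregate,
    // but each inlined copy of foo() sees exactly one callee.
    function a() { return 1; }
    function b() { return 2; }
    function foo(f) { return f(); }     // polymorphic across all callers
    function barA() { return foo(a); }  // monomorphic once foo is inlined here
    function barB() { return foo(b); }  // and monomorphic here too

    for (var i = 0; i < 1000000; ++i) {
        barA();
        barB();
    }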

This is a ~28% speed-up on deltablue and a ~7% speed-up on richards, with small speed-ups on
other programs as well. It's about a 2% speed-up on Octane version 2, and never a regression
on anything we care about. Some aggregates, like V8Spider, see a regression; this highlights
the increase in profiling overhead. But since it doesn't show up on any major score
(code-load or SunSpider), it's probably not relevant.

  • bytecode/CallEdge.cpp: Added.

(JSC::CallEdge::dump):

  • bytecode/CallEdge.h: Added.

(JSC::CallEdge::operator!):
(JSC::CallEdge::callee):
(JSC::CallEdge::count):
(JSC::CallEdge::despecifiedClosure):
(JSC::CallEdge::CallEdge):

  • bytecode/CallEdgeProfile.cpp: Added.

(JSC::CallEdgeProfile::callEdges):
(JSC::CallEdgeProfile::numCallsToKnownCells):
(JSC::worthDespecifying):
(JSC::CallEdgeProfile::worthDespecifying):
(JSC::CallEdgeProfile::visitWeak):
(JSC::CallEdgeProfile::addSlow):
(JSC::CallEdgeProfile::mergeBack):
(JSC::CallEdgeProfile::fadeByHalf):
(JSC::CallEdgeLog::CallEdgeLog):
(JSC::CallEdgeLog::~CallEdgeLog):
(JSC::CallEdgeLog::isEnabled):
(JSC::operationProcessCallEdgeLog):
(JSC::CallEdgeLog::emitLogCode):
(JSC::CallEdgeLog::processLog):

  • bytecode/CallEdgeProfile.h: Added.

(JSC::CallEdgeProfile::numCallsToNotCell):
(JSC::CallEdgeProfile::numCallsToUnknownCell):
(JSC::CallEdgeProfile::totalCalls):

  • bytecode/CallEdgeProfileInlines.h: Added.

(JSC::CallEdgeProfile::CallEdgeProfile):
(JSC::CallEdgeProfile::add):

  • bytecode/CallLinkInfo.cpp:

(JSC::CallLinkInfo::visitWeak):

  • bytecode/CallLinkInfo.h:
  • bytecode/CallLinkStatus.cpp:

(JSC::CallLinkStatus::CallLinkStatus):
(JSC::CallLinkStatus::computeFromLLInt):
(JSC::CallLinkStatus::computeFor):
(JSC::CallLinkStatus::computeExitSiteData):
(JSC::CallLinkStatus::computeFromCallLinkInfo):
(JSC::CallLinkStatus::computeFromCallEdgeProfile):
(JSC::CallLinkStatus::computeDFGStatuses):
(JSC::CallLinkStatus::isClosureCall):
(JSC::CallLinkStatus::makeClosureCall):
(JSC::CallLinkStatus::dump):
(JSC::CallLinkStatus::function): Deleted.
(JSC::CallLinkStatus::internalFunction): Deleted.
(JSC::CallLinkStatus::intrinsicFor): Deleted.

  • bytecode/CallLinkStatus.h:

(JSC::CallLinkStatus::CallLinkStatus):
(JSC::CallLinkStatus::isSet):
(JSC::CallLinkStatus::couldTakeSlowPath):
(JSC::CallLinkStatus::edges):
(JSC::CallLinkStatus::size):
(JSC::CallLinkStatus::at):
(JSC::CallLinkStatus::operator[]):
(JSC::CallLinkStatus::canOptimize):
(JSC::CallLinkStatus::canTrustCounts):
(JSC::CallLinkStatus::isClosureCall): Deleted.
(JSC::CallLinkStatus::callTarget): Deleted.
(JSC::CallLinkStatus::executable): Deleted.
(JSC::CallLinkStatus::makeClosureCall): Deleted.

  • bytecode/CallVariant.cpp: Added.

(JSC::CallVariant::dump):

  • bytecode/CallVariant.h: Added.

(JSC::CallVariant::CallVariant):
(JSC::CallVariant::operator!):
(JSC::CallVariant::despecifiedClosure):
(JSC::CallVariant::rawCalleeCell):
(JSC::CallVariant::internalFunction):
(JSC::CallVariant::function):
(JSC::CallVariant::isClosureCall):
(JSC::CallVariant::executable):
(JSC::CallVariant::nonExecutableCallee):
(JSC::CallVariant::intrinsicFor):
(JSC::CallVariant::functionExecutable):
(JSC::CallVariant::isHashTableDeletedValue):
(JSC::CallVariant::operator==):
(JSC::CallVariant::operator!=):
(JSC::CallVariant::operator<):
(JSC::CallVariant::operator>):
(JSC::CallVariant::operator<=):
(JSC::CallVariant::operator>=):
(JSC::CallVariant::hash):
(JSC::CallVariant::deletedToken):
(JSC::CallVariantHash::hash):
(JSC::CallVariantHash::equal):

  • bytecode/CodeOrigin.h:

(JSC::InlineCallFrame::isNormalCall):

  • bytecode/ExitKind.cpp:

(JSC::exitKindToString):

  • bytecode/ExitKind.h:
  • bytecode/GetByIdStatus.cpp:

(JSC::GetByIdStatus::computeForStubInfo):

  • bytecode/PutByIdStatus.cpp:

(JSC::PutByIdStatus::computeForStubInfo):

  • dfg/DFGAbstractInterpreterInlines.h:

(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):

  • dfg/DFGBackwardsPropagationPhase.cpp:

(JSC::DFG::BackwardsPropagationPhase::propagate):

  • dfg/DFGBasicBlock.cpp:

(JSC::DFG::BasicBlock::~BasicBlock):

  • dfg/DFGBasicBlock.h:

(JSC::DFG::BasicBlock::takeLast):
(JSC::DFG::BasicBlock::didLink):

  • dfg/DFGByteCodeParser.cpp:

(JSC::DFG::ByteCodeParser::processSetLocalQueue):
(JSC::DFG::ByteCodeParser::removeLastNodeFromGraph):
(JSC::DFG::ByteCodeParser::addCallWithoutSettingResult):
(JSC::DFG::ByteCodeParser::addCall):
(JSC::DFG::ByteCodeParser::handleCall):
(JSC::DFG::ByteCodeParser::emitFunctionChecks):
(JSC::DFG::ByteCodeParser::undoFunctionChecks):
(JSC::DFG::ByteCodeParser::inliningCost):
(JSC::DFG::ByteCodeParser::inlineCall):
(JSC::DFG::ByteCodeParser::cancelLinkingForBlock):
(JSC::DFG::ByteCodeParser::attemptToInlineCall):
(JSC::DFG::ByteCodeParser::handleInlining):
(JSC::DFG::ByteCodeParser::handleConstantInternalFunction):
(JSC::DFG::ByteCodeParser::prepareToParseBlock):
(JSC::DFG::ByteCodeParser::clearCaches):
(JSC::DFG::ByteCodeParser::parseBlock):
(JSC::DFG::ByteCodeParser::linkBlock):
(JSC::DFG::ByteCodeParser::linkBlocks):
(JSC::DFG::ByteCodeParser::parseCodeBlock):

  • dfg/DFGCPSRethreadingPhase.cpp:

(JSC::DFG::CPSRethreadingPhase::freeUnnecessaryNodes):

  • dfg/DFGClobberize.h:

(JSC::DFG::clobberize):

  • dfg/DFGCommon.h:
  • dfg/DFGConstantFoldingPhase.cpp:

(JSC::DFG::ConstantFoldingPhase::foldConstants):

  • dfg/DFGDoesGC.cpp:

(JSC::DFG::doesGC):

  • dfg/DFGDriver.cpp:

(JSC::DFG::compileImpl):

  • dfg/DFGFixupPhase.cpp:

(JSC::DFG::FixupPhase::fixupNode):

  • dfg/DFGGraph.cpp:

(JSC::DFG::Graph::dump):
(JSC::DFG::Graph::visitChildren):

  • dfg/DFGJITCompiler.cpp:

(JSC::DFG::JITCompiler::link):

  • dfg/DFGLazyJSValue.cpp:

(JSC::DFG::LazyJSValue::switchLookupValue):

  • dfg/DFGLazyJSValue.h:

(JSC::DFG::LazyJSValue::switchLookupValue): Deleted.

  • dfg/DFGNode.cpp:

(WTF::printInternal):

  • dfg/DFGNode.h:

(JSC::DFG::OpInfo::OpInfo):
(JSC::DFG::Node::hasHeapPrediction):
(JSC::DFG::Node::hasCellOperand):
(JSC::DFG::Node::cellOperand):
(JSC::DFG::Node::setCellOperand):
(JSC::DFG::Node::canBeKnownFunction): Deleted.
(JSC::DFG::Node::hasKnownFunction): Deleted.
(JSC::DFG::Node::knownFunction): Deleted.
(JSC::DFG::Node::giveKnownFunction): Deleted.
(JSC::DFG::Node::hasFunction): Deleted.
(JSC::DFG::Node::function): Deleted.
(JSC::DFG::Node::hasExecutable): Deleted.
(JSC::DFG::Node::executable): Deleted.

  • dfg/DFGNodeType.h:
  • dfg/DFGPhantomCanonicalizationPhase.cpp:

(JSC::DFG::PhantomCanonicalizationPhase::run):

  • dfg/DFGPhantomRemovalPhase.cpp:

(JSC::DFG::PhantomRemovalPhase::run):

  • dfg/DFGPredictionPropagationPhase.cpp:

(JSC::DFG::PredictionPropagationPhase::propagate):

  • dfg/DFGSafeToExecute.h:

(JSC::DFG::safeToExecute):

  • dfg/DFGSpeculativeJIT.cpp:

(JSC::DFG::SpeculativeJIT::emitSwitch):

  • dfg/DFGSpeculativeJIT32_64.cpp:

(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):

  • dfg/DFGSpeculativeJIT64.cpp:

(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):

  • dfg/DFGStructureRegistrationPhase.cpp:

(JSC::DFG::StructureRegistrationPhase::run):

  • dfg/DFGTierUpCheckInjectionPhase.cpp:

(JSC::DFG::TierUpCheckInjectionPhase::run):
(JSC::DFG::TierUpCheckInjectionPhase::removeFTLProfiling):

  • dfg/DFGValidate.cpp:

(JSC::DFG::Validate::validate):

  • dfg/DFGWatchpointCollectionPhase.cpp:

(JSC::DFG::WatchpointCollectionPhase::handle):

  • ftl/FTLCapabilities.cpp:

(JSC::FTL::canCompile):

  • ftl/FTLLowerDFGToLLVM.cpp:

(JSC::FTL::ftlUnreachable):
(JSC::FTL::LowerDFGToLLVM::lower):
(JSC::FTL::LowerDFGToLLVM::compileNode):
(JSC::FTL::LowerDFGToLLVM::compileCheckCell):
(JSC::FTL::LowerDFGToLLVM::compileCheckBadCell):
(JSC::FTL::LowerDFGToLLVM::compileGetExecutable):
(JSC::FTL::LowerDFGToLLVM::compileNativeCallOrConstruct):
(JSC::FTL::LowerDFGToLLVM::compileSwitch):
(JSC::FTL::LowerDFGToLLVM::buildSwitch):
(JSC::FTL::LowerDFGToLLVM::compileCheckFunction): Deleted.
(JSC::FTL::LowerDFGToLLVM::compileCheckExecutable): Deleted.

  • heap/Heap.cpp:

(JSC::Heap::collect):

  • jit/AssemblyHelpers.h:

(JSC::AssemblyHelpers::storeValue):
(JSC::AssemblyHelpers::loadValue):

  • jit/CCallHelpers.h:

(JSC::CCallHelpers::setupArguments):

  • jit/GPRInfo.h:

(JSC::JSValueRegs::uses):

  • jit/JITCall.cpp:

(JSC::JIT::compileOpCall):

  • jit/JITCall32_64.cpp:

(JSC::JIT::compileOpCall):

  • runtime/Options.h:
  • runtime/VM.cpp:

(JSC::VM::ensureCallEdgeLog):

  • runtime/VM.h:
  • tests/stress/new-array-then-exit.js: Added.

(foo):

  • tests/stress/poly-call-exit-this.js: Added.
  • tests/stress/poly-call-exit.js: Added.

Source/WTF:


Add some power that I need for call edge profiling.

  • wtf/OwnPtr.h:

(WTF::OwnPtr<T>::createTransactionally):

  • wtf/Spectrum.h:

(WTF::Spectrum::add):
(WTF::Spectrum::addAll):
(WTF::Spectrum::get):
(WTF::Spectrum::size):
(WTF::Spectrum::KeyAndCount::KeyAndCount):
(WTF::Spectrum::clear):
(WTF::Spectrum::removeIf):

LayoutTests:

  • js/regress/script-tests/simple-poly-call-nested.js: Added.
  • js/regress/script-tests/simple-poly-call.js: Added.
  • js/regress/simple-poly-call-expected.txt: Added.
  • js/regress/simple-poly-call-nested-expected.txt: Added.
  • js/regress/simple-poly-call-nested.html: Added.
  • js/regress/simple-poly-call.html: Added.
File: 1 edited

  • trunk/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp

--- trunk/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp (r172853)
+++ trunk/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp (r172940)

@@ -51 +51 @@
 namespace JSC { namespace DFG {

+static const bool verbose = false;
+
 class ConstantBufferKey {
 public:

@@ -179 +181 @@
     void handleCall(int result, NodeType op, CodeSpecializationKind, unsigned instructionSize, int callee, int argCount, int registerOffset);
     void handleCall(Instruction* pc, NodeType op, CodeSpecializationKind);
-    void emitFunctionChecks(const CallLinkStatus&, Node* callTarget, int registerOffset, CodeSpecializationKind);
+    void emitFunctionChecks(CallVariant, Node* callTarget, int registerOffset, CodeSpecializationKind);
+    void undoFunctionChecks(CallVariant);
     void emitArgumentPhantoms(int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind);
+    unsigned inliningCost(CallVariant, int argumentCountIncludingThis, CodeSpecializationKind); // Return UINT_MAX if it's not an inlining candidate. By convention, intrinsics have a cost of 1.
     // Handle inlining. Return true if it succeeded, false if we need to plant a call.
-    bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind);
+    bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind, SpeculatedType prediction);
+    enum CallerLinkability { CallerDoesNormalLinking, CallerLinksManually };
+    bool attemptToInlineCall(Node* callTargetNode, int resultOperand, CallVariant, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind, CallerLinkability, SpeculatedType prediction, unsigned& inliningBalance);
+    void inlineCall(Node* callTargetNode, int resultOperand, CallVariant, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind, CallerLinkability);
+    void cancelLinkingForBlock(InlineStackEntry*, BasicBlock*); // Only works when the given block is the last one to have been added for that inline stack entry.
     // Handle intrinsic functions. Return true if it succeeded, false if we need to plant a call.
     bool handleIntrinsic(int resultOperand, Intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction);
     bool handleTypedArrayConstructor(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, TypedArrayType);
-    bool handleConstantInternalFunction(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, CodeSpecializationKind);
+    bool handleConstantInternalFunction(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind);
     Node* handlePutByOffset(Node* base, unsigned identifier, PropertyOffset, Node* value);
     Node* handleGetByOffset(SpeculatedType, Node* base, const StructureSet&, unsigned identifierNumber, PropertyOffset, NodeType op = GetByOffset);
     
@@ -201 +209 @@
     Node* getScope(unsigned skipCount);

-    // Prepare to parse a block.
     void prepareToParseBlock();
+    void clearCaches();
+
     // Parse a single basic block of bytecode instructions.
     bool parseBlock(unsigned limit);

@@ -297 +306 @@
         return delayed.execute(this, setMode);
     }
+
+    void processSetLocalQueue()
+    {
+        for (unsigned i = 0; i < m_setLocalQueue.size(); ++i)
+            m_setLocalQueue[i].execute(this);
+        m_setLocalQueue.resize(0);
+    }

     Node* set(VirtualRegister operand, Node* value, SetMode setMode = NormalSet)
     
@@ -638 +654 @@
         return result;
     }
+
+    void removeLastNodeFromGraph(NodeType expectedNodeType)
+    {
+        Node* node = m_currentBlock->takeLast();
+        RELEASE_ASSERT(node->op() == expectedNodeType);
+        m_graph.m_allocator.free(node);
+    }

     void addVarArgChild(Node* child)

@@ -646 +669 @@

     Node* addCallWithoutSettingResult(
-        NodeType op, Node* callee, int argCount, int registerOffset,
+        NodeType op, OpInfo opInfo, Node* callee, int argCount, int registerOffset,
         SpeculatedType prediction)
     {
     
@@ -654 +677 @@
             m_parameterSlots = parameterSlots;

-        int dummyThisArgument = op == Call || op == NativeCall ? 0 : 1;
+        int dummyThisArgument = op == Call || op == NativeCall || op == ProfiledCall ? 0 : 1;
         for (int i = 0 + dummyThisArgument; i < argCount; ++i)
             addVarArgChild(get(virtualRegisterForArgument(i, registerOffset)));

-        return addToGraph(Node::VarArg, op, OpInfo(0), OpInfo(prediction));
+        return addToGraph(Node::VarArg, op, opInfo, OpInfo(prediction));
     }

     Node* addCall(
-        int result, NodeType op, Node* callee, int argCount, int registerOffset,
+        int result, NodeType op, OpInfo opInfo, Node* callee, int argCount, int registerOffset,
         SpeculatedType prediction)
     {
         Node* call = addCallWithoutSettingResult(
-            op, callee, argCount, registerOffset, prediction);
+            op, opInfo, callee, argCount, registerOffset, prediction);
         VirtualRegister resultReg(result);
         if (resultReg.isValid())

@@ -872 +895 @@

         // Potential block linking targets. Must be sorted by bytecodeBegin, and
-        // cannot have two blocks that have the same bytecodeBegin. For this very
-        // reason, this is not equivalent to
+        // cannot have two blocks that have the same bytecodeBegin.
         Vector<BasicBlock*> m_blockLinkingTargets;

     
@@ -1020 +1042 @@
 {
     ASSERT(registerOffset <= 0);
-    CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);

     if (callTarget->hasConstant())
         callLinkStatus = CallLinkStatus(callTarget->asJSValue()).setIsProved(true);
+
+    if ((!callLinkStatus.canOptimize() || callLinkStatus.size() != 1)
+        && !isFTL(m_graph.m_plan.mode) && Options::useFTLJIT()
+        && InlineCallFrame::isNormalCall(kind)
+        && CallEdgeLog::isEnabled()
+        && Options::dfgDoesCallEdgeProfiling()) {
+        ASSERT(op == Call || op == Construct);
+        if (op == Call)
+            op = ProfiledCall;
+        else
+            op = ProfiledConstruct;
+    }

     if (!callLinkStatus.canOptimize()) {

@@ -1029 +1062 @@
         // that we cannot optimize them.

-        addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset, prediction);
+        addCall(result, op, OpInfo(), callTarget, argumentCountIncludingThis, registerOffset, prediction);
         return;
     }

     unsigned nextOffset = m_currentIndex + instructionSize;
-
-    if (InternalFunction* function = callLinkStatus.internalFunction()) {
-        if (handleConstantInternalFunction(result, function, registerOffset, argumentCountIncludingThis, prediction, specializationKind)) {
-            // This phantoming has to be *after* the code for the intrinsic, to signify that
-            // the inputs must be kept alive whatever exits the intrinsic may do.
-            addToGraph(Phantom, callTarget);
-            emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
-            return;
-        }
-
-        // Can only handle this using the generic call handler.
-        addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset, prediction);
-        return;
-    }
-
-    Intrinsic intrinsic = callLinkStatus.intrinsicFor(specializationKind);
-
-    JSFunction* knownFunction = nullptr;
-    if (intrinsic != NoIntrinsic) {
-        emitFunctionChecks(callLinkStatus, callTarget, registerOffset, specializationKind);
-
-        if (handleIntrinsic(result, intrinsic, registerOffset, argumentCountIncludingThis, prediction)) {
-            // This phantoming has to be *after* the code for the intrinsic, to signify that
-            // the inputs must be kept alive whatever exits the intrinsic may do.
-            addToGraph(Phantom, callTarget);
-            emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
-            if (m_graph.compilation())
-                m_graph.compilation()->noticeInlinedCall();
-            return;
-        }
-    } else if (handleInlining(callTarget, result, callLinkStatus, registerOffset, argumentCountIncludingThis, nextOffset, kind)) {
+
+    OpInfo callOpInfo;
+
+    if (handleInlining(callTarget, result, callLinkStatus, registerOffset, argumentCountIncludingThis, nextOffset, op, kind, prediction)) {
         if (m_graph.compilation())
             m_graph.compilation()->noticeInlinedCall();
         return;
+    }
+
 #if ENABLE(FTL_NATIVE_CALL_INLINING)
-    } else if (isFTL(m_graph.m_plan.mode) && Options::optimizeNativeCalls()) {
-        JSFunction* function = callLinkStatus.function();
+    if (isFTL(m_graph.m_plan.mode) && Options::optimizeNativeCalls() && callLinkStatus.size() == 1 && !callLinkStatus.couldTakeSlowPath()) {
+        CallVariant callee = callLinkStatus[0].callee();
+        JSFunction* function = callee.function();
+        CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
         if (function && function->isHostFunction()) {
-            emitFunctionChecks(callLinkStatus, callTarget, registerOffset, specializationKind);
-            knownFunction = function;
-
-            if (op == Call)
+            emitFunctionChecks(callee, callTarget, registerOffset, specializationKind);
+            callOpInfo = OpInfo(m_graph.freeze(function));
+
+            if (op == Call || op == ProfiledCall)
                 op = NativeCall;
             else {
-                ASSERT(op == Construct);
+                ASSERT(op == Construct || op == ProfiledConstruct);
                 op = NativeConstruct;
             }
         }
+    }
 #endif
-    }
-    Node* call = addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset, prediction);
-
-    if (knownFunction)
-        call->giveKnownFunction(knownFunction);
+
+    addCall(result, op, callOpInfo, callTarget, argumentCountIncludingThis, registerOffset, prediction);
 }

-void ByteCodeParser::emitFunctionChecks(const CallLinkStatus& callLinkStatus, Node* callTarget, int registerOffset, CodeSpecializationKind kind)
+void ByteCodeParser::emitFunctionChecks(CallVariant callee, Node* callTarget, int registerOffset, CodeSpecializationKind kind)
 {
     Node* thisArgument;
     
@@ -1098 +1106 @@
         thisArgument = 0;

-    if (callLinkStatus.isProved()) {
-        addToGraph(Phantom, callTarget, thisArgument);
-        return;
-    }
-
-    ASSERT(callLinkStatus.canOptimize());
-
-    if (JSFunction* function = callLinkStatus.function())
-        addToGraph(CheckFunction, OpInfo(m_graph.freeze(function)), callTarget, thisArgument);
-    else {
-        ASSERT(callLinkStatus.executable());
-
-        addToGraph(CheckExecutable, OpInfo(callLinkStatus.executable()), callTarget, thisArgument);
-    }
+    JSCell* calleeCell;
+    Node* callTargetForCheck;
+    if (callee.isClosureCall()) {
+        calleeCell = callee.executable();
+        callTargetForCheck = addToGraph(GetExecutable, callTarget);
+    } else {
+        calleeCell = callee.nonExecutableCallee();
+        callTargetForCheck = callTarget;
+    }
+
+    ASSERT(calleeCell);
+    addToGraph(CheckCell, OpInfo(m_graph.freeze(calleeCell)), callTargetForCheck, thisArgument);
+}
+
+void ByteCodeParser::undoFunctionChecks(CallVariant callee)
+{
+    removeLastNodeFromGraph(CheckCell);
+    if (callee.isClosureCall())
+        removeLastNodeFromGraph(GetExecutable);
 }

     
@@ -1120 +1133 @@
 }

-bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind)
+unsigned ByteCodeParser::inliningCost(CallVariant callee, int argumentCountIncludingThis, CodeSpecializationKind kind)
 {
-    static const bool verbose = false;
-
-    CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
-
     if (verbose)
-        dataLog("Considering inlining ", callLinkStatus, " into ", currentCodeOrigin(), "\n");
-
-    // First, the really simple checks: do we have an actual JS function?
-    if (!callLinkStatus.executable()) {
+        dataLog("Considering inlining ", callee, " into ", currentCodeOrigin(), "\n");
+
+    FunctionExecutable* executable = callee.functionExecutable();
+    if (!executable) {
         if (verbose)
-            dataLog("    Failing because there is no executable.\n");
-        return false;
-    }
-    if (callLinkStatus.executable()->isHostFunction()) {
-        if (verbose)
-            dataLog("    Failing because it's a host function.\n");
-        return false;
-    }
-
-    FunctionExecutable* executable = jsCast<FunctionExecutable*>(callLinkStatus.executable());
+            dataLog("    Failing because there is no function executable.");
+        return UINT_MAX;
+    }

     // Does the number of arguments we're passing match the arity of the target? We currently

@@ -1149 +1151 @@
         if (verbose)
             dataLog("    Failing because of arity mismatch.\n");
-        return false;
+        return UINT_MAX;
     }

     
@@ -1158 +1160 @@
     // global function, where watchpointing gives us static information. Overall, it's a rare case
     // because we expect that any hot callees would have already been compiled.
-    CodeBlock* codeBlock = executable->baselineCodeBlockFor(specializationKind);
+    CodeBlock* codeBlock = executable->baselineCodeBlockFor(kind);
     if (!codeBlock) {
         if (verbose)
             dataLog("    Failing because no code block available.\n");
-        return false;
+        return UINT_MAX;
     }
     CapabilityLevel capabilityLevel = inlineFunctionForCapabilityLevel(
-        codeBlock, specializationKind, callLinkStatus.isClosureCall());
+        codeBlock, kind, callee.isClosureCall());
     if (!canInline(capabilityLevel)) {
         if (verbose)
             dataLog("    Failing because the function is not inlineable.\n");
-        return false;
+        return UINT_MAX;
     }

@@ -1179 +1181 @@
         if (verbose)
             dataLog("    Failing because the caller is too large.\n");
-        return false;
+        return UINT_MAX;
     }

@@ -1198 +1200 @@
             if (verbose)
                 dataLog("    Failing because depth exceeded.\n");
-            return false;
+            return UINT_MAX;
         }

@@ -1206 +1208 @@
                 if (verbose)
                     dataLog("    Failing because recursion detected.\n");
-                return false;
+                return UINT_MAX;
             }
         }
     
@@ -1212 +1214 @@

     if (verbose)
-        dataLog("    Committing to inlining.\n");
-
-    // Now we know without a doubt that we are committed to inlining. So begin the process
-    // by checking the callee (if necessary) and making sure that arguments and the callee
-    // are flushed.
-    emitFunctionChecks(callLinkStatus, callTargetNode, registerOffset, specializationKind);
-
+        dataLog("    Inlining should be possible.\n");
+
+    // It might be possible to inline.
+    return codeBlock->instructionCount();
+}
+
+void ByteCodeParser::inlineCall(Node* callTargetNode, int resultOperand, CallVariant callee, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind, CallerLinkability callerLinkability)
+{
+    CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
+
+    ASSERT(inliningCost(callee, argumentCountIncludingThis, specializationKind) != UINT_MAX);
+
+    CodeBlock* codeBlock = callee.functionExecutable()->baselineCodeBlockFor(specializationKind);
+
     // FIXME: Don't flush constants!

     
@@ -1234 +1243 @@

     InlineStackEntry inlineStackEntry(
-        this, codeBlock, codeBlock, m_graph.lastBlock(), callLinkStatus.function(), resultReg,
+        this, codeBlock, codeBlock, m_graph.lastBlock(), callee.function(), resultReg,
         (VirtualRegister)inlineCallFrameStart, argumentCountIncludingThis, kind);

@@ -1248 +1257 @@
     RELEASE_ASSERT(
         m_inlineStackTop->m_inlineCallFrame->isClosureCall
-        == callLinkStatus.isClosureCall());
-    if (callLinkStatus.isClosureCall()) {
+        == callee.isClosureCall());
+    if (callee.isClosureCall()) {
         VariableAccessData* calleeVariable =
             set(VirtualRegister(JSStack::Callee), callTargetNode, ImmediateNakedSet)->variableAccessData();

@@ -1264 +1273 @@

     parseCodeBlock();
-    prepareToParseBlock(); // Reset our state now that we're back to the outer code.
+    clearCaches(); // Reset our state now that we're back to the outer code.

     m_currentIndex = oldIndex;

@@ -1277 +1286 @@
             ASSERT(inlineStackEntry.m_callsiteBlockHead->isLinked);

-        // It's possible that the callsite block head is not owned by the caller.
-        if (!inlineStackEntry.m_caller->m_unlinkedBlocks.isEmpty()) {
-            // It's definitely owned by the caller, because the caller created new blocks.
-            // Assert that this all adds up.
-            ASSERT(inlineStackEntry.m_caller->m_unlinkedBlocks.last().m_block == inlineStackEntry.m_callsiteBlockHead);
-            ASSERT(inlineStackEntry.m_caller->m_unlinkedBlocks.last().m_needsNormalLinking);
-            inlineStackEntry.m_caller->m_unlinkedBlocks.last().m_needsNormalLinking = false;
-        } else {
-            // It's definitely not owned by the caller. Tell the caller that he does not
-            // need to link his callsite block head, because we did it for him.
-            ASSERT(inlineStackEntry.m_caller->m_callsiteBlockHeadNeedsLinking);
-            ASSERT(inlineStackEntry.m_caller->m_callsiteBlockHead == inlineStackEntry.m_callsiteBlockHead);
-            inlineStackEntry.m_caller->m_callsiteBlockHeadNeedsLinking = false;
-        }
+        if (callerLinkability == CallerDoesNormalLinking)
+            cancelLinkingForBlock(inlineStackEntry.m_caller, inlineStackEntry.m_callsiteBlockHead);

         linkBlocks(inlineStackEntry.m_unlinkedBlocks, inlineStackEntry.m_blockLinkingTargets);
     
@@ -1309 +1306 @@
             // in the linker's binary search.
             lastBlock->bytecodeBegin = m_currentIndex;
-            m_inlineStackTop->m_caller->m_unlinkedBlocks.append(UnlinkedBlock(m_graph.lastBlock()));
+            if (callerLinkability == CallerDoesNormalLinking) {
+                if (verbose)
+                    dataLog("Adding unlinked block ", RawPointer(m_graph.lastBlock()), " (one return)\n");
+                m_inlineStackTop->m_caller->m_unlinkedBlocks.append(UnlinkedBlock(m_graph.lastBlock()));
+            }
         }

         m_currentBlock = m_graph.lastBlock();
-        return true;
+        return;
     }

     // If we get to this point then all blocks must end in some sort of terminals.
     ASSERT(lastBlock->last()->isTerminal());
-

     // Need to create a new basic block for the continuation at the caller.
     
@@ -1334 +1334 @@
         node->targetBlock() = block.get();
         inlineStackEntry.m_unlinkedBlocks[i].m_needsEarlyReturnLinking = false;
-#if !ASSERT_DISABLED
-        blockToLink->isLinked = true;
-#endif
+        if (verbose)
+            dataLog("Marking ", RawPointer(blockToLink), " as linked (jumps to return)\n");
+        blockToLink->didLink();
     }

     m_currentBlock = block.get();
     ASSERT(m_inlineStackTop->m_caller->m_blockLinkingTargets.isEmpty() || m_inlineStackTop->m_caller->m_blockLinkingTargets.last()->bytecodeBegin < nextOffset);
-    m_inlineStackTop->m_caller->m_unlinkedBlocks.append(UnlinkedBlock(block.get()));
-    m_inlineStackTop->m_caller->m_blockLinkingTargets.append(block.get());
+    if (verbose)
+        dataLog("Adding unlinked block ", RawPointer(block.get()), " (many returns)\n");
+    if (callerLinkability == CallerDoesNormalLinking) {
+        m_inlineStackTop->m_caller->m_unlinkedBlocks.append(UnlinkedBlock(block.get()));
+        m_inlineStackTop->m_caller->m_blockLinkingTargets.append(block.get());
+    }
     m_graph.appendBlock(block);
     prepareToParseBlock();
-
-    // At this point we return and continue to generate code for the caller, but
-    // in the new basic block.
+}
+
+void ByteCodeParser::cancelLinkingForBlock(InlineStackEntry* inlineStackEntry, BasicBlock* block)
+{
+    // It's possible that the callsite block head is not owned by the caller.
+    if (!inlineStackEntry->m_unlinkedBlocks.isEmpty()) {
+        // It's definitely owned by the caller, because the caller created new blocks.
+        // Assert that this all adds up.
+        ASSERT_UNUSED(block, inlineStackEntry->m_unlinkedBlocks.last().m_block == block);
+        ASSERT(inlineStackEntry->m_unlinkedBlocks.last().m_needsNormalLinking);
+        inlineStackEntry->m_unlinkedBlocks.last().m_needsNormalLinking = false;
+    } else {
+        // It's definitely not owned by the caller. Tell the caller that he does not
+        // need to link his callsite block head, because we did it for him.
+        ASSERT(inlineStackEntry->m_callsiteBlockHeadNeedsLinking);
+        ASSERT_UNUSED(block, inlineStackEntry->m_callsiteBlockHead == block);
+        inlineStackEntry->m_callsiteBlockHeadNeedsLinking = false;
+    }
+}
+
+bool ByteCodeParser::attemptToInlineCall(Node* callTargetNode, int resultOperand, CallVariant callee, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind, CallerLinkability callerLinkability, SpeculatedType prediction, unsigned& inliningBalance)
+{
+    CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
+
+    if (!inliningBalance)
+        return false;
+
+    if (InternalFunction* function = callee.internalFunction()) {
+        if (handleConstantInternalFunction(resultOperand, function, registerOffset, argumentCountIncludingThis, specializationKind)) {
+            addToGraph(Phantom, callTargetNode);
+            emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
+            inliningBalance--;
+            return true;
+        }
+        return false;
+    }
+
+    Intrinsic intrinsic = callee.intrinsicFor(specializationKind);
+    if (intrinsic != NoIntrinsic) {
+        if (handleIntrinsic(resultOperand, intrinsic, registerOffset, argumentCountIncludingThis, prediction)) {
+            addToGraph(Phantom, callTargetNode);
+            emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
+            inliningBalance--;
+            return true;
+        }
+        return false;
+    }
+
+    unsigned myInliningCost = inliningCost(callee, argumentCountIncludingThis, specializationKind);
+    if (myInliningCost > inliningBalance)
+        return false;
+
+    inlineCall(callTargetNode, resultOperand, callee, registerOffset, argumentCountIncludingThis, nextOffset, kind, callerLinkability);
+    inliningBalance -= myInliningCost;
+    return true;
+}
+
+bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind kind, SpeculatedType prediction)
+{
+    if (verbose) {
+        dataLog("Handling inlining...\n");
+        dataLog("Stack: ", currentCodeOrigin(), "\n");
+    }
+    CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
+
+    if (!callLinkStatus.size()) {
+        if (verbose)
+            dataLog("Bailing inlining.\n");
+        return false;
+    }
+
+    unsigned inliningBalance = Options::maximumFunctionForCallInlineCandidateInstructionCount();
+    if (specializationKind == CodeForConstruct)
+        inliningBalance = std::min(inliningBalance, Options::maximumFunctionForConstructInlineCandidateInstructionCount());
+    if (callLinkStatus.isClosureCall())
+        inliningBalance = std::min(inliningBalance, Options::maximumFunctionForClosureCallInlineCandidateInstructionCount());
+
+    // First check if we can avoid creating control flow. Our inliner does some CFG
+    // simplification on the fly and this helps reduce compile times, but we can only leverage
+    // this in cases where we don't need control flow diamonds to check the callee.
+    if (!callLinkStatus.couldTakeSlowPath() && callLinkStatus.size() == 1) {
+        emitFunctionChecks(
+            callLinkStatus[0].callee(), callTargetNode, registerOffset, specializationKind);
+        bool result = attemptToInlineCall(
+            callTargetNode, resultOperand, callLinkStatus[0].callee(), registerOffset,
+            argumentCountIncludingThis, nextOffset, kind, CallerDoesNormalLinking, prediction,
+            inliningBalance);
+        if (!result && !callLinkStatus.isProved())
+            undoFunctionChecks(callLinkStatus[0].callee());
+        if (verbose) {
+            dataLog("Done inlining (simple).\n");
+            dataLog("Stack: ", currentCodeOrigin(), "\n");
+        }
+        return result;
+    }
+
+    // We need to create some kind of switch over callee. For now we only do this if we believe that
+    // we're in the top tier. We have two reasons for this: first, it provides us an opportunity to
+    // do more detailed polyvariant/polymorphic profiling; and second, it reduces compile times in
+    // the DFG. And by polyvariant profiling we mean polyvariant profiling of *this* call. Note that
+    // we could improve that aspect of this by doing polymorphic inlining but having the profiling
+    // also. Currently we opt against this, but it could be interesting. That would require having a
+    // separate node for call edge profiling.
+    // FIXME: Introduce the notion of a separate call edge profiling node.
+    // https://bugs.webkit.org/show_bug.cgi?id=136033
+    if (!isFTL(m_graph.m_plan.mode) || !Options::enablePolymorphicCallInlining()) {
+        if (verbose) {
+            dataLog("Bailing inlining (hard).\n");
+            dataLog("Stack: ", currentCodeOrigin(), "\n");
+        }
+        return false;
+    }
+
+    unsigned oldOffset = m_currentIndex;
+
+    bool allAreClosureCalls = true;
+    bool allAreDirectCalls = true;
+    for (unsigned i = callLinkStatus.size(); i--;) {
+        if (callLinkStatus[i].callee().isClosureCall())
+            allAreDirectCalls = false;
+        else
+            allAreClosureCalls = false;
+    }
+
+    Node* thingToSwitchOn;
+    if (allAreDirectCalls)
+        thingToSwitchOn = callTargetNode;
+    else if (allAreClosureCalls)
+        thingToSwitchOn = addToGraph(GetExecutable, callTargetNode);
+    else {
+        // FIXME: We should be able to handle this case, but it's tricky and we don't know of cases
+        // where it would be beneficial. Also, CallLinkStatus would make all callees appear like
+        // closure calls if any calls were closure calls - except for calls to internal functions.
+        // So this will only arise if some callees are internal functions and others are closures.
+        // https://bugs.webkit.org/show_bug.cgi?id=136020
+        if (verbose) {
+            dataLog("Bailing inlining (mix).\n");
+            dataLog("Stack: ", currentCodeOrigin(), "\n");
+        }
+        return false;
+    }
+
+    if (verbose) {
+        dataLog("Doing hard inlining...\n");
+        dataLog("Stack: ", currentCodeOrigin(), "\n");
+    }
+
+    // This makes me wish that we were in SSA all the time. We need to pick a variable into which to
+    // store the callee so that it will be accessible to all of the blocks we're about to create. We
+    // get away with doing an immediate-set here because we wouldn't have performed any side effects
+    // yet.
+    if (verbose)
+        dataLog("Register offset: ", registerOffset);
+    VirtualRegister calleeReg(registerOffset + JSStack::Callee);
+    calleeReg = m_inlineStackTop->remapOperand(calleeReg);
+    if (verbose)
+        dataLog("Callee is going to be ", calleeReg, "\n");
+    setDirect(calleeReg, callTargetNode, ImmediateSetWithFlush);
+
+    SwitchData& data = *m_graph.m_switchData.add();
+    data.kind = SwitchCell;
+    addToGraph(Switch, OpInfo(&data), thingToSwitchOn);
+
+    BasicBlock* originBlock = m_currentBlock;
+    if (verbose)
+        dataLog("Marking ", RawPointer(originBlock), " as linked (origin of poly inline)\n");
+    originBlock->didLink();
+    cancelLinkingForBlock(m_inlineStackTop, originBlock);
+
+    // Each inlined callee will have a landing block that it returns at. They should all have jumps
+    // to the continuation block, which we create last.
+    Vector<BasicBlock*> landingBlocks;
+
+    // We make force this true if we give up on inlining any of the edges.
+    bool couldTakeSlowPath = callLinkStatus.couldTakeSlowPath();
+
+    if (verbose)
+        dataLog("About to loop over functions at ", currentCodeOrigin(), ".\n");
+
+    for (unsigned i = 0; i < callLinkStatus.size(); ++i) {
+        m_currentIndex = oldOffset;
+        RefPtr<BasicBlock> block = adoptRef(new BasicBlock(UINT_MAX, m_numArguments, m_numLocals, PNaN));
+        m_currentBlock = block.get();
+        m_graph.appendBlock(block);
+        prepareToParseBlock();
+
+        Node* myCallTargetNode = getDirect(calleeReg);
+
+        bool inliningResult = attemptToInlineCall(
+            myCallTargetNode, resultOperand, callLinkStatus[i].callee(), registerOffset,
+            argumentCountIncludingThis, nextOffset, kind, CallerLinksManually, prediction,
+            inliningBalance);
+
+        if (!inliningResult) {
+            // That failed so we let the block die. Nothing interesting should have been added to
+            // the block. We also give up on inlining any of the (less frequent) callees.
+            ASSERT(m_currentBlock == block.get());
+            ASSERT(m_graph.m_blocks.last() == block);
+            m_graph.killBlockAndItsContents(block.get());
+            m_graph.m_blocks.removeLast();
+
+            // The fact that inlining failed means we need a slow path.
+            couldTakeSlowPath = true;
+            break;
+        }
+
+        JSCell* thingToCaseOn;
+        if (allAreDirectCalls)
+            thingToCaseOn = callLinkStatus[i].callee().nonExecutableCallee();
+        else {
+            ASSERT(allAreClosureCalls);
+            thingToCaseOn = callLinkStatus[i].callee().executable();
+        }
+        data.cases.append(SwitchCase(m_graph.freeze(thingToCaseOn), block.get()));
+        m_currentIndex = nextOffset;
+        processSetLocalQueue(); // This only comes into play for intrinsics, since normal inlined code will leave an empty queue.
+        addToGraph(Jump);
+        if (verbose)
+            dataLog("Marking ", RawPointer(m_currentBlock), " as linked (tail of poly inlinee)\n");
+        m_currentBlock->didLink();
+        landingBlocks.append(m_currentBlock);
+
+        if (verbose)
+            dataLog("Finished inlining ", callLinkStatus[i].callee(), " at ", currentCodeOrigin(), ".\n");
+    }
+
+    RefPtr<BasicBlock> slowPathBlock = adoptRef(
+        new BasicBlock(UINT_MAX, m_numArguments, m_numLocals, PNaN));
+    m_currentIndex = oldOffset;
+    data.fallThrough = BranchTarget(slowPathBlock.get());
+    m_graph.appendBlock(slowPathBlock);
+    if (verbose)
+        dataLog("Marking ", RawPointer(slowPathBlock.get()), " as linked (slow path block)\n");
+    slowPathBlock->didLink();
+    prepareToParseBlock();
+    m_currentBlock = slowPathBlock.get();
+    Node* myCallTargetNode = getDirect(calleeReg);
+    if (couldTakeSlowPath) {
+        addCall(
+            resultOperand, callOp, OpInfo(), myCallTargetNode, argumentCountIncludingThis,
+            registerOffset, prediction);
+    } else {
+        addToGraph(CheckBadCell);
+        addToGraph(Phantom, myCallTargetNode);
+        emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
+
+        set(VirtualRegister(resultOperand), addToGraph(BottomValue));
+    }
+
+    m_currentIndex = nextOffset;
+    processSetLocalQueue();
+    addToGraph(Jump);
+    landingBlocks.append(m_currentBlock);
+
+    RefPtr<BasicBlock> continuationBlock = adoptRef(
+        new BasicBlock(UINT_MAX, m_numArguments, m_numLocals, PNaN));
+    m_graph.appendBlock(continuationBlock);
+    if (verbose)
+        dataLog("Adding unlinked block ", RawPointer(continuationBlock.get()), " (continuation)\n");
+    m_inlineStackTop->m_unlinkedBlocks.append(UnlinkedBlock(continuationBlock.get()));
+    prepareToParseBlock();
+    m_currentBlock = continuationBlock.get();
+
+    for (unsigned i = landingBlocks.size(); i--;)
+        landingBlocks[i]->last()->targetBlock() = continuationBlock.get();
+
+    m_currentIndex = oldOffset;
+
+    if (verbose) {
+        dataLog("Done inlining (hard).\n");
+        dataLog("Stack: ", currentCodeOrigin(), "\n");
+    }
     return true;
 }
     
@@ -1646 +1919 @@
 bool ByteCodeParser::handleConstantInternalFunction(
     int resultOperand, InternalFunction* function, int registerOffset,
-    int argumentCountIncludingThis, SpeculatedType prediction, CodeSpecializationKind kind)
+    int argumentCountIncludingThis, CodeSpecializationKind kind)
 {
     // If we ever find that we have a lot of internal functions that we specialize for,

@@ -1654 +1927 @@
     // we know about is small enough, that having just a linear cascade of if statements
     // is good enough.
-
-    UNUSED_PARAM(prediction); // Remove this once we do more things.

     if (function->classInfo() == ArrayConstructor::info()) {
     
@@ -2021 +2292 @@
 void ByteCodeParser::prepareToParseBlock()
 {
+    clearCaches();
+    ASSERT(m_setLocalQueue.isEmpty());
+}
+
+void ByteCodeParser::clearCaches()
+{
     m_constants.resize(0);
 }

@@ -2060 +2337 @@

     while (true) {
-        for (unsigned i = 0; i < m_setLocalQueue.size(); ++i)
-            m_setLocalQueue[i].execute(this);
-        m_setLocalQueue.resize(0);
+        processSetLocalQueue();

         // Don't extend over jump destinations.
     
@@ -2206 +2481 @@
             if (!cachedFunction
                 || m_inlineStackTop->m_profiledBlock->couldTakeSlowCase(m_currentIndex)
-                || m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadFunction)) {
+                || m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadCell)) {
                 set(VirtualRegister(currentInstruction[1].u.operand), get(VirtualRegister(JSStack::Callee)));
             } else {

@@ -2212 +2487 @@
                 ASSERT(cachedFunction->inherits(JSFunction::info()));
                 Node* actualCallee = get(VirtualRegister(JSStack::Callee));
-                addToGraph(CheckFunction, OpInfo(frozen), actualCallee);
+                addToGraph(CheckCell, OpInfo(frozen), actualCallee);
                 set(VirtualRegister(currentInstruction[1].u.operand), addToGraph(JSConstant, OpInfo(frozen)));
             }

@@ -2894 +3169 @@
             ASSERT(pointerIsFunction(currentInstruction[2].u.specialPointer));
             addToGraph(
-                CheckFunction,
+                CheckCell,
                 OpInfo(m_graph.freeze(static_cast<JSCell*>(actualPointerFor(
                     m_inlineStackTop->m_codeBlock, currentInstruction[2].u.specialPointer)))),
     
@@ -3318 +3593 @@
     }

-#if !ASSERT_DISABLED
-    block->isLinked = true;
-#endif
+    if (verbose)
+        dataLog("Marking ", RawPointer(block), " as linked (actually did linking)\n");
+    block->didLink();
 }

@@ -3326 +3601 @@
 {
     for (size_t i = 0; i < unlinkedBlocks.size(); ++i) {
+        if (verbose)
+            dataLog("Attempting to link ", RawPointer(unlinkedBlocks[i].m_block), "\n");
         if (unlinkedBlocks[i].m_needsNormalLinking) {
+            if (verbose)
+                dataLog("    Does need normal linking.\n");
             linkBlock(unlinkedBlocks[i].m_block, possibleTargets);
             unlinkedBlocks[i].m_needsNormalLinking = false;

@@ -3493 +3772 @@
 void ByteCodeParser::parseCodeBlock()
 {
-    prepareToParseBlock();
+    clearCaches();

     CodeBlock* codeBlock = m_inlineStackTop->m_codeBlock;
     
@@ -3559 +3838 @@
                     //    a peephole coalescing of this block in the if statement above. So, we're
                     //    generating suboptimal code and leaving more work for the CFG simplifier.
-                    ASSERT(m_inlineStackTop->m_unlinkedBlocks.isEmpty() || m_inlineStackTop->m_unlinkedBlocks.last().m_block->bytecodeBegin < m_currentIndex);
+                    if (!m_inlineStackTop->m_unlinkedBlocks.isEmpty()) {
+                        unsigned lastBegin =
+                            m_inlineStackTop->m_unlinkedBlocks.last().m_block->bytecodeBegin;
+                        ASSERT_UNUSED(
+                            lastBegin, lastBegin == UINT_MAX || lastBegin < m_currentIndex);
+                    }
                     m_inlineStackTop->m_unlinkedBlocks.append(UnlinkedBlock(block.get()));
                     m_inlineStackTop->m_blockLinkingTargets.append(block.get());