Changeset 209653 in webkit


Timestamp:
Dec 9, 2016, 11:32:38 PM
Author:
[email protected]
Message:

JSVALUE64: Pass arguments in platform argument registers when making JavaScript calls
https://bugs.webkit.org/show_bug.cgi?id=160355

Reviewed by Filip Pizlo.

JSTests:

New microbenchmarks to measure call type performance.

  • microbenchmarks/calling-computed-args.js: Added.
  • microbenchmarks/calling-many-callees.js: Added.
  • microbenchmarks/calling-one-callee-fixed.js: Added.
  • microbenchmarks/calling-one-callee.js: Added.
  • microbenchmarks/calling-poly-callees.js: Added.
  • microbenchmarks/calling-poly-extra-arity-callees.js: Added.
  • microbenchmarks/calling-tailcall.js: Added.
  • microbenchmarks/calling-virtual-arity-fixup-callees.js: Added.
  • microbenchmarks/calling-virtual-arity-fixup-stackargs.js: Added.
  • microbenchmarks/calling-virtual-callees.js: Added.
  • microbenchmarks/calling-virtual-extra-arity-callees.js: Added.

Source/JavaScriptCore:

This patch implements passing JavaScript function arguments in registers for 64 bit platforms.

The implemented convention follows the ABI conventions for the associated platform.
The first two arguments are the callee and argument count, while the rest of the argument registers
contain "this" and the following arguments until all platform argument registers are exhausted.
Arguments beyond what fits in registers are placed on the stack in the same locations as
before this patch.
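
As a concrete illustration of the convention (the specific register choices below are an assumption
based on the System V x86-64 and AAPCS64 ABIs together with the description above; the authoritative
mapping lives in jit/GPRInfo.h), a call would distribute its values roughly as this small standalone
program prints:

    #include <cstddef>
    #include <cstdio>

    // Hypothetical illustration only; the real assignments are defined in jit/GPRInfo.h.
    // System V x86-64 passes integer arguments in rdi, rsi, rdx, rcx, r8, r9;
    // AAPCS64 (ARM64) passes them in x0-x7.
    static const char* const x86_64ArgumentGPRs[] = { "rdi", "rsi", "rdx", "rcx", "r8", "r9" };
    static const char* const arm64ArgumentGPRs[] = { "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7" };

    static const char* registerForSlot(const char* const* registers, size_t registerCount, size_t slot)
    {
        // Slots beyond the platform's argument registers stay on the stack, as before this patch.
        return slot < registerCount ? registers[slot] : "(stack)";
    }

    int main()
    {
        // Slot 0 is the callee, slot 1 the argument count, slot 2 "this", slots 3+ the arguments.
        static const char* const slots[] = { "callee", "argument count", "this", "arg1", "arg2", "arg3", "arg4", "arg5", "arg6" };
        for (size_t slot = 0; slot < sizeof(slots) / sizeof(slots[0]); ++slot) {
            std::printf("%-14s  x86-64: %-7s  ARM64: %s\n", slots[slot],
                registerForSlot(x86_64ArgumentGPRs, 6, slot),
                registerForSlot(arm64ArgumentGPRs, 8, slot));
        }
        return 0;
    }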

For X86-64 non-Windows platforms, there are 6 argument registers specified in the related ABI.
ARM64 has 8 argument registers. This allows for 4 or 6 parameter values to be placed in
registers on these respective platforms. This patch doesn't implement passing arguments in
registers for 32 bit platforms, since most platforms have at most 4 argument registers
specified and 32 bit platforms use two 32 bit registers/memory locations to store one JSValue.
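
The index arithmetic behind this split is small. Here is a self-contained sketch of the kind of
helpers the patch adds to jit/GPRInfo.h (the names appear in the function list below; the bodies
and the register count used here are assumptions for a System V x86-64 build):

    #include <algorithm>

    // Assumed count of registers left for "this" and the JS arguments once the callee and
    // argument count registers are taken: 4 on x86-64 (System V), 6 on ARM64.
    constexpr unsigned NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS = 4;

    // Argument register index 0 holds the callee and index 1 the argument count, so JS function
    // argument 0 ("this") starts at register index 2.
    constexpr unsigned argumentRegisterIndexForJSFunctionArgument(unsigned argument)
    {
        return argument + 2;
    }

    constexpr unsigned jsFunctionArgumentForArgumentRegister(unsigned registerIndex)
    {
        return registerIndex - 2;
    }

    // How many of the values (including "this") travel in registers; the rest use the stack.
    inline unsigned numberOfRegisterArgumentsFor(unsigned argumentCountIncludingThis)
    {
        return std::min(argumentCountIncludingThis, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS);
    }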

The call frame on the stack is unchanged in format, and the arguments that are passed in
registers use the corresponding call frame location as a spill location. Arguments can
also be passed on the stack. The LLInt, baseline JIT'ed code as well as the initial entry
from C++ code pass arguments on the stack. DFG and FTL generated code pass arguments
via registers. All callees can accept arguments either in registers or on the stack.
The callee is responsible for moving arguments to its preferred location.
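
To make the spill-location idea concrete, here is a simplified, self-contained model, not JSC's
actual AssemblyHelpers (whose real helpers such as spillArgumentRegistersToFrame appear in the file
list below): a register-args entry point stores each register-passed value into the frame slot a
stack-args caller would have written, after which the callee can treat every argument uniformly.

    #include <array>
    #include <cstddef>
    #include <cstdint>

    using EncodedJSValue = uint64_t; // stand-in for JSC's 64-bit encoded JSValue

    // A toy call frame; the member names mirror the description above, not JSC's CallFrameSlot layout.
    struct ModelCallFrame {
        EncodedJSValue callee { 0 };
        EncodedJSValue argumentCountIncludingThis { 0 };
        std::array<EncodedJSValue, 16> thisAndArguments { }; // slot 0 is "this"
    };

    // registerValues[0] holds the callee, [1] the argument count, and [2..] "this" plus the
    // arguments, mirroring the register convention. Each value lands in the slot a stack-args
    // caller would have used, so stack-relative code keeps working.
    void spillArgumentRegistersToFrame(ModelCallFrame& frame, const EncodedJSValue* registerValues, size_t numberOfRegisterValues)
    {
        frame.callee = registerValues[0];
        frame.argumentCountIncludingThis = registerValues[1];
        for (size_t i = 2; i < numberOfRegisterValues; ++i)
            frame.thisAndArguments[i - 2] = registerValues[i];
    }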

The multiple entry points to JavaScript code are now handled via the JITEntryPoints class and
related code. That class now has entries for StackArgsArityCheckNotRequired,
StackArgsMustCheckArity and, for platforms that support register arguments,
RegisterArgsArityCheckNotRequired, RegisterArgsMustCheckArity, as well as an additional
RegisterArgsPossibleExtraArgs entry point for when extra register arguments are passed.
This last case is needed to spill those extra arguments to the corresponding call frame
slots.
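
A sketch of the shape this bookkeeping takes; the entry point type names come from the description
above, while the class outline is only an assumption about jit/JITEntryPoints.h:

    #include <array>

    enum EntryPointType {
        StackArgsArityCheckNotRequired,
        StackArgsMustCheckArity,
        RegisterArgsArityCheckNotRequired,
        RegisterArgsMustCheckArity,
        RegisterArgsPossibleExtraArgs, // extra register arguments must be spilled to their frame slots
        NumberOfEntryPointTypes
    };

    class JITEntryPoints {
    public:
        using EntryAddress = void*; // stand-in for a JIT code pointer

        EntryAddress entryFor(EntryPointType type) const { return m_entries[type]; }
        void setEntryFor(EntryPointType type, EntryAddress entry) { m_entries[type] = entry; }

    private:
        std::array<EntryAddress, NumberOfEntryPointTypes> m_entries { };
    };

A caller picks the entry that matches how it passes arguments (stack or registers) and whether the
callee still needs an arity check.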

  • JavaScriptCore.xcodeproj/project.pbxproj:
  • b3/B3ArgumentRegValue.h:
  • b3/B3Validate.cpp:
  • bytecode/CallLinkInfo.cpp:

(JSC::CallLinkInfo::CallLinkInfo):

  • bytecode/CallLinkInfo.h:

(JSC::CallLinkInfo::setUpCall):
(JSC::CallLinkInfo::argumentsLocation):
(JSC::CallLinkInfo::argumentsInRegisters):

  • bytecode/PolymorphicAccess.cpp:

(JSC::AccessCase::generateImpl):

  • dfg/DFGAbstractInterpreterInlines.h:

(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):

  • dfg/DFGByteCodeParser.cpp:

(JSC::DFG::ByteCodeParser::parseBlock):

  • dfg/DFGCPSRethreadingPhase.cpp:

(JSC::DFG::CPSRethreadingPhase::canonicalizeLocalsInBlock):
(JSC::DFG::CPSRethreadingPhase::specialCaseArguments):
(JSC::DFG::CPSRethreadingPhase::computeIsFlushed):

  • dfg/DFGClobberize.h:

(JSC::DFG::clobberize):

  • dfg/DFGCommon.h:
  • dfg/DFGDCEPhase.cpp:

(JSC::DFG::DCEPhase::run):

  • dfg/DFGDoesGC.cpp:

(JSC::DFG::doesGC):

  • dfg/DFGDriver.cpp:

(JSC::DFG::compileImpl):

  • dfg/DFGFixupPhase.cpp:

(JSC::DFG::FixupPhase::fixupNode):

  • dfg/DFGGenerationInfo.h:

(JSC::DFG::GenerationInfo::initArgumentRegisterValue):

  • dfg/DFGGraph.cpp:

(JSC::DFG::Graph::dump):
(JSC::DFG::Graph::methodOfGettingAValueProfileFor):

  • dfg/DFGGraph.h:

(JSC::DFG::Graph::needsFlushedThis):
(JSC::DFG::Graph::addImmediateShouldSpeculateInt32):

  • dfg/DFGInPlaceAbstractState.cpp:

(JSC::DFG::InPlaceAbstractState::initialize):

  • dfg/DFGJITCompiler.cpp:

(JSC::DFG::JITCompiler::link):
(JSC::DFG::JITCompiler::compile):
(JSC::DFG::JITCompiler::compileFunction):
(JSC::DFG::JITCompiler::compileEntry): Deleted.

  • dfg/DFGJITCompiler.h:

(JSC::DFG::JITCompiler::addJSDirectCall):
(JSC::DFG::JITCompiler::JSDirectCallRecord::JSDirectCallRecord):
(JSC::DFG::JITCompiler::JSDirectCallRecord::hasSlowCall):

  • dfg/DFGJITFinalizer.cpp:

(JSC::DFG::JITFinalizer::JITFinalizer):
(JSC::DFG::JITFinalizer::finalize):
(JSC::DFG::JITFinalizer::finalizeFunction):

  • dfg/DFGJITFinalizer.h:
  • dfg/DFGLiveCatchVariablePreservationPhase.cpp:

(JSC::DFG::LiveCatchVariablePreservationPhase::handleBlock):

  • dfg/DFGMaximalFlushInsertionPhase.cpp:

(JSC::DFG::MaximalFlushInsertionPhase::treatRegularBlock):
(JSC::DFG::MaximalFlushInsertionPhase::treatRootBlock):

  • dfg/DFGMayExit.cpp:
  • dfg/DFGMinifiedNode.cpp:

(JSC::DFG::MinifiedNode::fromNode):

  • dfg/DFGMinifiedNode.h:

(JSC::DFG::belongsInMinifiedGraph):

  • dfg/DFGNode.cpp:

(JSC::DFG::Node::hasVariableAccessData):

  • dfg/DFGNode.h:

(JSC::DFG::Node::accessesStack):
(JSC::DFG::Node::setVariableAccessData):
(JSC::DFG::Node::hasArgumentRegisterIndex):
(JSC::DFG::Node::argumentRegisterIndex):

  • dfg/DFGNodeType.h:
  • dfg/DFGOSRAvailabilityAnalysisPhase.cpp:

(JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):

  • dfg/DFGOSREntrypointCreationPhase.cpp:

(JSC::DFG::OSREntrypointCreationPhase::run):

  • dfg/DFGPlan.cpp:

(JSC::DFG::Plan::compileInThreadImpl):

  • dfg/DFGPreciseLocalClobberize.h:

(JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):

  • dfg/DFGPredictionInjectionPhase.cpp:

(JSC::DFG::PredictionInjectionPhase::run):

  • dfg/DFGPredictionPropagationPhase.cpp:
  • dfg/DFGPutStackSinkingPhase.cpp:
  • dfg/DFGRegisterBank.h:

(JSC::DFG::RegisterBank::iterator::unlock):
(JSC::DFG::RegisterBank::unlockAtIndex):

  • dfg/DFGSSAConversionPhase.cpp:

(JSC::DFG::SSAConversionPhase::run):

  • dfg/DFGSafeToExecute.h:

(JSC::DFG::safeToExecute):

  • dfg/DFGSpeculativeJIT.cpp:

(JSC::DFG::SpeculativeJIT::SpeculativeJIT):
(JSC::DFG::SpeculativeJIT::clearGenerationInfo):
(JSC::DFG::dumpRegisterInfo):
(JSC::DFG::SpeculativeJIT::dump):
(JSC::DFG::SpeculativeJIT::compileCurrentBlock):
(JSC::DFG::SpeculativeJIT::checkArgumentTypes):
(JSC::DFG::SpeculativeJIT::setupArgumentRegistersForEntry):
(JSC::DFG::SpeculativeJIT::compile):

  • dfg/DFGSpeculativeJIT.h:

(JSC::DFG::SpeculativeJIT::allocate):
(JSC::DFG::SpeculativeJIT::spill):
(JSC::DFG::SpeculativeJIT::generationInfoFromVirtualRegister):
(JSC::DFG::JSValueOperand::JSValueOperand):
(JSC::DFG::JSValueOperand::gprUseSpecific):

  • dfg/DFGSpeculativeJIT32_64.cpp:

(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):

  • dfg/DFGSpeculativeJIT64.cpp:

(JSC::DFG::SpeculativeJIT::fillJSValue):
(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):

  • dfg/DFGStrengthReductionPhase.cpp:

(JSC::DFG::StrengthReductionPhase::handleNode):

  • dfg/DFGThunks.cpp:

(JSC::DFG::osrEntryThunkGenerator):

  • dfg/DFGVariableEventStream.cpp:

(JSC::DFG::VariableEventStream::reconstruct):

  • dfg/DFGVirtualRegisterAllocationPhase.cpp:

(JSC::DFG::VirtualRegisterAllocationPhase::allocateRegister):
(JSC::DFG::VirtualRegisterAllocationPhase::run):

  • ftl/FTLCapabilities.cpp:

(JSC::FTL::canCompile):

  • ftl/FTLJITCode.cpp:

(JSC::FTL::JITCode::~JITCode):
(JSC::FTL::JITCode::initializeEntrypointThunk):
(JSC::FTL::JITCode::setEntryFor):
(JSC::FTL::JITCode::addressForCall):
(JSC::FTL::JITCode::executableAddressAtOffset):
(JSC::FTL::JITCode::initializeAddressForCall): Deleted.
(JSC::FTL::JITCode::initializeArityCheckEntrypoint): Deleted.

  • ftl/FTLJITCode.h:
  • ftl/FTLJITFinalizer.cpp:

(JSC::FTL::JITFinalizer::finalizeFunction):

  • ftl/FTLLink.cpp:

(JSC::FTL::link):

  • ftl/FTLLowerDFGToB3.cpp:

(JSC::FTL::DFG::LowerDFGToB3::lower):
(JSC::FTL::DFG::LowerDFGToB3::compileNode):
(JSC::FTL::DFG::LowerDFGToB3::compileGetArgumentRegister):
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstruct):
(JSC::FTL::DFG::LowerDFGToB3::compileDirectCallOrConstruct):
(JSC::FTL::DFG::LowerDFGToB3::compileTailCall):
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
(JSC::FTL::DFG::LowerDFGToB3::compileCallEval):

  • ftl/FTLOSREntry.cpp:

(JSC::FTL::prepareOSREntry):

  • ftl/FTLOutput.cpp:

(JSC::FTL::Output::argumentRegister):
(JSC::FTL::Output::argumentRegisterInt32):

  • ftl/FTLOutput.h:
  • interpreter/ShadowChicken.cpp:

(JSC::ShadowChicken::update):

  • jit/AssemblyHelpers.cpp:

(JSC::AssemblyHelpers::emitDumbVirtualCall):

  • jit/AssemblyHelpers.h:

(JSC::AssemblyHelpers::spillArgumentRegistersToFrameBeforePrologue):
(JSC::AssemblyHelpers::spillArgumentRegistersToFrame):
(JSC::AssemblyHelpers::fillArgumentRegistersFromFrameBeforePrologue):
(JSC::AssemblyHelpers::emitPutArgumentToCallFrameBeforePrologue):
(JSC::AssemblyHelpers::emitPutArgumentToCallFrame):
(JSC::AssemblyHelpers::emitGetFromCallFrameHeaderBeforePrologue):
(JSC::AssemblyHelpers::emitGetFromCallFrameArgumentBeforePrologue):
(JSC::AssemblyHelpers::emitGetPayloadFromCallFrameHeaderBeforePrologue):
(JSC::AssemblyHelpers::incrementCounter):

  • jit/CachedRecovery.cpp:

(JSC::CachedRecovery::addTargetJSValueRegs):

  • jit/CachedRecovery.h:

(JSC::CachedRecovery::gprTargets):
(JSC::CachedRecovery::setWantedFPR):
(JSC::CachedRecovery::wantedJSValueRegs):
(JSC::CachedRecovery::setWantedJSValueRegs): Deleted.

  • jit/CallFrameShuffleData.h:
  • jit/CallFrameShuffler.cpp:

(JSC::CallFrameShuffler::CallFrameShuffler):
(JSC::CallFrameShuffler::dump):
(JSC::CallFrameShuffler::tryWrites):
(JSC::CallFrameShuffler::prepareAny):

  • jit/CallFrameShuffler.h:

(JSC::CallFrameShuffler::snapshot):
(JSC::CallFrameShuffler::addNew):
(JSC::CallFrameShuffler::initDangerFrontier):
(JSC::CallFrameShuffler::updateDangerFrontier):
(JSC::CallFrameShuffler::findDangerFrontierFrom):

  • jit/CallFrameShuffler64.cpp:

(JSC::CallFrameShuffler::emitDisplace):

  • jit/GPRInfo.h:

(JSC::JSValueRegs::operator==):
(JSC::JSValueRegs::operator!=):
(JSC::GPRInfo::toArgumentIndex):
(JSC::argumentRegisterFor):
(JSC::argumentRegisterForCallee):
(JSC::argumentRegisterForArgumentCount):
(JSC::argumentRegisterIndexForJSFunctionArgument):
(JSC::jsFunctionArgumentForArgumentRegister):
(JSC::argumentRegisterForFunctionArgument):
(JSC::numberOfRegisterArgumentsFor):

  • jit/JIT.cpp:

(JSC::JIT::compileWithoutLinking):
(JSC::JIT::link):
(JSC::JIT::compileCTINativeCall): Deleted.

  • jit/JIT.h:

(JSC::JIT::compileNativeCallEntryPoints):

  • jit/JITCall.cpp:

(JSC::JIT::compileSetupVarargsFrame):
(JSC::JIT::compileCallEval):
(JSC::JIT::compileCallEvalSlowCase):
(JSC::JIT::compileOpCall):
(JSC::JIT::compileOpCallSlowCase):

  • jit/JITCall32_64.cpp:

(JSC::JIT::compileCallEvalSlowCase):
(JSC::JIT::compileOpCall):
(JSC::JIT::compileOpCallSlowCase):

  • jit/JITCode.cpp:

(JSC::JITCode::execute):
(JSC::DirectJITCode::DirectJITCode):
(JSC::DirectJITCode::initializeEntryPoints):
(JSC::DirectJITCode::addressForCall):
(JSC::NativeJITCode::addressForCall):
(JSC::DirectJITCode::initializeCodeRef): Deleted.

  • jit/JITCode.h:

(JSC::JITCode::executableAddress): Deleted.

  • jit/JITEntryPoints.h: Added.

(JSC::JITEntryPoints::JITEntryPoints):
(JSC::JITEntryPoints::entryFor):
(JSC::JITEntryPoints::setEntryFor):
(JSC::JITEntryPoints::offsetOfEntryFor):
(JSC::JITEntryPoints::registerEntryTypeForArgumentCount):
(JSC::JITEntryPoints::registerEntryTypeForArgumentType):
(JSC::JITEntryPoints::clearEntries):
(JSC::JITEntryPoints::operator=):
(JSC::JITEntryPointsWithRef::JITEntryPointsWithRef):
(JSC::JITEntryPointsWithRef::codeRef):
(JSC::argumentsLocationFor):
(JSC::registerEntryPointTypeFor):
(JSC::entryPointTypeFor):
(JSC::thunkEntryPointTypeFor):
(JSC::JITJSCallThunkEntryPointsWithRef::JITJSCallThunkEntryPointsWithRef):
(JSC::JITJSCallThunkEntryPointsWithRef::entryFor):
(JSC::JITJSCallThunkEntryPointsWithRef::setEntryFor):
(JSC::JITJSCallThunkEntryPointsWithRef::offsetOfEntryFor):
(JSC::JITJSCallThunkEntryPointsWithRef::clearEntries):
(JSC::JITJSCallThunkEntryPointsWithRef::codeRef):
(JSC::JITJSCallThunkEntryPointsWithRef::operator=):

  • jit/JITOpcodes.cpp:

(JSC::JIT::privateCompileJITEntryNativeCall):
(JSC::JIT::privateCompileCTINativeCall): Deleted.

  • jit/JITOpcodes32_64.cpp:

(JSC::JIT::privateCompileJITEntryNativeCall):
(JSC::JIT::privateCompileCTINativeCall): Deleted.

  • jit/JITOperations.cpp:
  • jit/JITThunks.cpp:

(JSC::JITThunks::jitEntryNativeCall):
(JSC::JITThunks::jitEntryNativeConstruct):
(JSC::JITThunks::jitEntryStub):
(JSC::JITThunks::jitCallThunkEntryStub):
(JSC::JITThunks::hostFunctionStub):
(JSC::JITThunks::ctiNativeCall): Deleted.
(JSC::JITThunks::ctiNativeConstruct): Deleted.

  • jit/JITThunks.h:
  • jit/JSInterfaceJIT.h:

(JSC::JSInterfaceJIT::emitJumpIfNotInt32):
(JSC::JSInterfaceJIT::emitLoadInt32):

  • jit/RegisterSet.cpp:

(JSC::RegisterSet::argumentRegisters):

  • jit/RegisterSet.h:
  • jit/Repatch.cpp:

(JSC::linkSlowFor):
(JSC::revertCall):
(JSC::unlinkFor):
(JSC::linkVirtualFor):
(JSC::linkPolymorphicCall):

  • jit/SpecializedThunkJIT.h:

(JSC::SpecializedThunkJIT::SpecializedThunkJIT):
(JSC::SpecializedThunkJIT::checkJSStringArgument):
(JSC::SpecializedThunkJIT::linkFailureHere):
(JSC::SpecializedThunkJIT::finalize):

  • jit/ThunkGenerator.h:
  • jit/ThunkGenerators.cpp:

(JSC::createRegisterArgumentsSpillEntry):
(JSC::slowPathFor):
(JSC::linkCallThunkGenerator):
(JSC::linkDirectCallThunkGenerator):
(JSC::linkPolymorphicCallThunkGenerator):
(JSC::virtualThunkFor):
(JSC::nativeForGenerator):
(JSC::nativeCallGenerator):
(JSC::nativeTailCallGenerator):
(JSC::nativeTailCallWithoutSavedTagsGenerator):
(JSC::nativeConstructGenerator):
(JSC::stringCharLoadRegCall):
(JSC::charCodeAtThunkGenerator):
(JSC::charAtThunkGenerator):
(JSC::fromCharCodeThunkGenerator):
(JSC::clz32ThunkGenerator):
(JSC::sqrtThunkGenerator):
(JSC::floorThunkGenerator):
(JSC::ceilThunkGenerator):
(JSC::truncThunkGenerator):
(JSC::roundThunkGenerator):
(JSC::expThunkGenerator):
(JSC::logThunkGenerator):
(JSC::absThunkGenerator):
(JSC::imulThunkGenerator):
(JSC::randomThunkGenerator):
(JSC::boundThisNoArgsFunctionCallGenerator):

  • jit/ThunkGenerators.h:
  • jsc.cpp:

(jscmain):

  • llint/LLIntEntrypoint.cpp:

(JSC::LLInt::setFunctionEntrypoint):
(JSC::LLInt::setEvalEntrypoint):
(JSC::LLInt::setProgramEntrypoint):
(JSC::LLInt::setModuleProgramEntrypoint):

  • llint/LLIntSlowPaths.cpp:

(JSC::LLInt::entryOSR):
(JSC::LLInt::setUpCall):

  • llint/LLIntThunks.cpp:

(JSC::LLInt::generateThunkWithJumpTo):
(JSC::LLInt::functionForRegisterCallEntryThunkGenerator):
(JSC::LLInt::functionForStackCallEntryThunkGenerator):
(JSC::LLInt::functionForRegisterConstructEntryThunkGenerator):
(JSC::LLInt::functionForStackConstructEntryThunkGenerator):
(JSC::LLInt::functionForRegisterCallArityCheckThunkGenerator):
(JSC::LLInt::functionForStackCallArityCheckThunkGenerator):
(JSC::LLInt::functionForRegisterConstructArityCheckThunkGenerator):
(JSC::LLInt::functionForStackConstructArityCheckThunkGenerator):
(JSC::LLInt::functionForCallEntryThunkGenerator): Deleted.
(JSC::LLInt::functionForConstructEntryThunkGenerator): Deleted.
(JSC::LLInt::functionForCallArityCheckThunkGenerator): Deleted.
(JSC::LLInt::functionForConstructArityCheckThunkGenerator): Deleted.

  • llint/LLIntThunks.h:
  • runtime/ArityCheckMode.h:
  • runtime/ExecutableBase.cpp:

(JSC::ExecutableBase::clearCode):

  • runtime/ExecutableBase.h:

(JSC::ExecutableBase::entrypointFor):
(JSC::ExecutableBase::offsetOfEntryFor):
(JSC::ExecutableBase::offsetOfJITCodeWithArityCheckFor): Deleted.

  • runtime/JSBoundFunction.cpp:

(JSC::boundThisNoArgsFunctionCall):

  • runtime/NativeExecutable.cpp:

(JSC::NativeExecutable::finishCreation):

  • runtime/ScriptExecutable.cpp:

(JSC::ScriptExecutable::installCode):

  • runtime/VM.cpp:

(JSC::VM::VM):
(JSC::thunkGeneratorForIntrinsic):
(JSC::VM::clearCounters):
(JSC::VM::dumpCounters):

  • runtime/VM.h:

(JSC::VM::getJITEntryStub):
(JSC::VM::getJITCallThunkEntryStub):
(JSC::VM::addressOfCounter):
(JSC::VM::counterFor):

  • wasm/WasmBinding.cpp:

(JSC::Wasm::importStubGenerator):

Source/WTF:

Added a new build option ENABLE_VM_COUNTERS to enable JIT'able counters.
The option is off by default.

  • wtf/Platform.h:

Added ENABLE_VM_COUNTERS
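
The option presumably follows the usual WTF Platform.h pattern, roughly like this sketch (not the
literal diff):

    /* In wtf/Platform.h: compiled out unless a port explicitly enables it. */
    #if !defined(ENABLE_VM_COUNTERS)
    #define ENABLE_VM_COUNTERS 0
    #endif

    /* Call sites, e.g. the new VM::clearCounters()/VM::dumpCounters() plumbing, are then
       guarded with WTF's ENABLE() macro. */
    #if ENABLE(VM_COUNTERS)
    /* ... increment, clear and dump VM counters ... */
    #endif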

Location:
trunk
Files:
12 added
105 edited

  • trunk/JSTests/ChangeLog

    r209652 r209653  
     12016-12-09  Michael Saboff  <[email protected]>
     2
     3        JSVALUE64: Pass arguments in platform argument registers when making JavaScript calls
     4        https://bugs.webkit.org/show_bug.cgi?id=160355
     5
     6        Reviewed by Filip Pizlo.
     7
     8        New microbenchmarks to measure call type performance.
     9
     10        * microbenchmarks/calling-computed-args.js: Added.
     11        * microbenchmarks/calling-many-callees.js: Added.
     12        * microbenchmarks/calling-one-callee-fixed.js: Added.
     13        * microbenchmarks/calling-one-callee.js: Added.
     14        * microbenchmarks/calling-poly-callees.js: Added.
     15        * microbenchmarks/calling-poly-extra-arity-callees.js: Added.
     16        * microbenchmarks/calling-tailcall.js: Added.
     17        * microbenchmarks/calling-virtual-arity-fixup-callees.js: Added.
     18        * microbenchmarks/calling-virtual-arity-fixup-stackargs.js: Added.
     19        * microbenchmarks/calling-virtual-callees.js: Added.
     20        * microbenchmarks/calling-virtual-extra-arity-callees.js: Added.
     21
    1222016-12-09  Keith Miller  <[email protected]>
    223
  • trunk/Source/JavaScriptCore/ChangeLog

    r209652 r209653  
     12016-12-09  Michael Saboff  <[email protected]>
     2
     3        JSVALUE64: Pass arguments in platform argument registers when making JavaScript calls
     4        https://bugs.webkit.org/show_bug.cgi?id=160355
     5
     6        Reviewed by Filip Pizlo.
     7
     8        This patch implements passing JavaScript function arguments in registers for 64 bit platforms.
     9
     10        The implemented convention follows the ABI conventions for the associated platform.
     11        The first two arguments are the callee and argument count, while the rest of the argument registers
     12        contain "this" and the following arguments until all platform argument registers are exhausted.
     13        Arguments beyond what fits in registers are placed on the stack in the same locations as
     14        before this patch.
     15
     16        For X86-64 non-Windows platforms, there are 6 argument registers specified in the related ABI.
     17        ARM64 has 8 argument registers.  This allows for 4 or 6 parameter values to be placed in
     18        registers on these respective platforms.  This patch doesn't implement passing arguments in
     19        registers for 32 bit platforms, since most platforms have at most 4 argument registers
     20        specified and 32 bit platforms use two 32 bit registers/memory locations to store one JSValue.
     21
     22        The call frame on the stack is unchanged in format, and the arguments that are passed in
     23        registers use the corresponding call frame location as a spill location. Arguments can
     24        also be passed on the stack. The LLInt, baseline JIT'ed code as well as the initial entry
     25        from C++ code pass arguments on the stack. DFG and FTL generated code pass arguments
     26        via registers. All callees can accept arguments either in registers or on the stack.
     27        The callee is responsible for moving arguments to its preferred location.
     28
     29        The multiple entry points to JavaScript code are now handled via the JITEntryPoints class and
     30        related code.  That class now has entries for StackArgsArityCheckNotRequired,
     31        StackArgsMustCheckArity and, for platforms that support register arguments,
     32        RegisterArgsArityCheckNotRequired, RegisterArgsMustCheckArity, as well as an additional
     33        RegisterArgsPossibleExtraArgs entry point for when extra register arguments are passed.
     34        This last case is needed to spill those extra arguments to the corresponding call frame
     35        slots.
     36
     37        * JavaScriptCore.xcodeproj/project.pbxproj:
     38        * b3/B3ArgumentRegValue.h:
     39        * b3/B3Validate.cpp:
     40        * bytecode/CallLinkInfo.cpp:
     41        (JSC::CallLinkInfo::CallLinkInfo):
     42        * bytecode/CallLinkInfo.h:
     43        (JSC::CallLinkInfo::setUpCall):
     44        (JSC::CallLinkInfo::argumentsLocation):
     45        (JSC::CallLinkInfo::argumentsInRegisters):
     46        * bytecode/PolymorphicAccess.cpp:
     47        (JSC::AccessCase::generateImpl):
     48        * dfg/DFGAbstractInterpreterInlines.h:
     49        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
     50        * dfg/DFGByteCodeParser.cpp:
     51        (JSC::DFG::ByteCodeParser::parseBlock):
     52        * dfg/DFGCPSRethreadingPhase.cpp:
     53        (JSC::DFG::CPSRethreadingPhase::canonicalizeLocalsInBlock):
     54        (JSC::DFG::CPSRethreadingPhase::specialCaseArguments):
     55        (JSC::DFG::CPSRethreadingPhase::computeIsFlushed):
     56        * dfg/DFGClobberize.h:
     57        (JSC::DFG::clobberize):
     58        * dfg/DFGCommon.h:
     59        * dfg/DFGDCEPhase.cpp:
     60        (JSC::DFG::DCEPhase::run):
     61        * dfg/DFGDoesGC.cpp:
     62        (JSC::DFG::doesGC):
     63        * dfg/DFGDriver.cpp:
     64        (JSC::DFG::compileImpl):
     65        * dfg/DFGFixupPhase.cpp:
     66        (JSC::DFG::FixupPhase::fixupNode):
     67        * dfg/DFGGenerationInfo.h:
     68        (JSC::DFG::GenerationInfo::initArgumentRegisterValue):
     69        * dfg/DFGGraph.cpp:
     70        (JSC::DFG::Graph::dump):
     71        (JSC::DFG::Graph::methodOfGettingAValueProfileFor):
     72        * dfg/DFGGraph.h:
     73        (JSC::DFG::Graph::needsFlushedThis):
     74        (JSC::DFG::Graph::addImmediateShouldSpeculateInt32):
     75        * dfg/DFGInPlaceAbstractState.cpp:
     76        (JSC::DFG::InPlaceAbstractState::initialize):
     77        * dfg/DFGJITCompiler.cpp:
     78        (JSC::DFG::JITCompiler::link):
     79        (JSC::DFG::JITCompiler::compile):
     80        (JSC::DFG::JITCompiler::compileFunction):
     81        (JSC::DFG::JITCompiler::compileEntry): Deleted.
     82        * dfg/DFGJITCompiler.h:
     83        (JSC::DFG::JITCompiler::addJSDirectCall):
     84        (JSC::DFG::JITCompiler::JSDirectCallRecord::JSDirectCallRecord):
     85        (JSC::DFG::JITCompiler::JSDirectCallRecord::hasSlowCall):
     86        * dfg/DFGJITFinalizer.cpp:
     87        (JSC::DFG::JITFinalizer::JITFinalizer):
     88        (JSC::DFG::JITFinalizer::finalize):
     89        (JSC::DFG::JITFinalizer::finalizeFunction):
     90        * dfg/DFGJITFinalizer.h:
     91        * dfg/DFGLiveCatchVariablePreservationPhase.cpp:
     92        (JSC::DFG::LiveCatchVariablePreservationPhase::handleBlock):
     93        * dfg/DFGMaximalFlushInsertionPhase.cpp:
     94        (JSC::DFG::MaximalFlushInsertionPhase::treatRegularBlock):
     95        (JSC::DFG::MaximalFlushInsertionPhase::treatRootBlock):
     96        * dfg/DFGMayExit.cpp:
     97        * dfg/DFGMinifiedNode.cpp:
     98        (JSC::DFG::MinifiedNode::fromNode):
     99        * dfg/DFGMinifiedNode.h:
     100        (JSC::DFG::belongsInMinifiedGraph):
     101        * dfg/DFGNode.cpp:
     102        (JSC::DFG::Node::hasVariableAccessData):
     103        * dfg/DFGNode.h:
     104        (JSC::DFG::Node::accessesStack):
     105        (JSC::DFG::Node::setVariableAccessData):
     106        (JSC::DFG::Node::hasArgumentRegisterIndex):
     107        (JSC::DFG::Node::argumentRegisterIndex):
     108        * dfg/DFGNodeType.h:
     109        * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
     110        (JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
     111        * dfg/DFGOSREntrypointCreationPhase.cpp:
     112        (JSC::DFG::OSREntrypointCreationPhase::run):
     113        * dfg/DFGPlan.cpp:
     114        (JSC::DFG::Plan::compileInThreadImpl):
     115        * dfg/DFGPreciseLocalClobberize.h:
     116        (JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
     117        * dfg/DFGPredictionInjectionPhase.cpp:
     118        (JSC::DFG::PredictionInjectionPhase::run):
     119        * dfg/DFGPredictionPropagationPhase.cpp:
     120        * dfg/DFGPutStackSinkingPhase.cpp:
     121        * dfg/DFGRegisterBank.h:
     122        (JSC::DFG::RegisterBank::iterator::unlock):
     123        (JSC::DFG::RegisterBank::unlockAtIndex):
     124        * dfg/DFGSSAConversionPhase.cpp:
     125        (JSC::DFG::SSAConversionPhase::run):
     126        * dfg/DFGSafeToExecute.h:
     127        (JSC::DFG::safeToExecute):
     128        * dfg/DFGSpeculativeJIT.cpp:
     129        (JSC::DFG::SpeculativeJIT::SpeculativeJIT):
     130        (JSC::DFG::SpeculativeJIT::clearGenerationInfo):
     131        (JSC::DFG::dumpRegisterInfo):
     132        (JSC::DFG::SpeculativeJIT::dump):
     133        (JSC::DFG::SpeculativeJIT::compileCurrentBlock):
     134        (JSC::DFG::SpeculativeJIT::checkArgumentTypes):
     135        (JSC::DFG::SpeculativeJIT::setupArgumentRegistersForEntry):
     136        (JSC::DFG::SpeculativeJIT::compile):
     137        * dfg/DFGSpeculativeJIT.h:
     138        (JSC::DFG::SpeculativeJIT::allocate):
     139        (JSC::DFG::SpeculativeJIT::spill):
     140        (JSC::DFG::SpeculativeJIT::generationInfoFromVirtualRegister):
     141        (JSC::DFG::JSValueOperand::JSValueOperand):
     142        (JSC::DFG::JSValueOperand::gprUseSpecific):
     143        * dfg/DFGSpeculativeJIT32_64.cpp:
     144        (JSC::DFG::SpeculativeJIT::emitCall):
     145        (JSC::DFG::SpeculativeJIT::compile):
     146        * dfg/DFGSpeculativeJIT64.cpp:
     147        (JSC::DFG::SpeculativeJIT::fillJSValue):
     148        (JSC::DFG::SpeculativeJIT::emitCall):
     149        (JSC::DFG::SpeculativeJIT::compile):
     150        * dfg/DFGStrengthReductionPhase.cpp:
     151        (JSC::DFG::StrengthReductionPhase::handleNode):
     152        * dfg/DFGThunks.cpp:
     153        (JSC::DFG::osrEntryThunkGenerator):
     154        * dfg/DFGVariableEventStream.cpp:
     155        (JSC::DFG::VariableEventStream::reconstruct):
     156        * dfg/DFGVirtualRegisterAllocationPhase.cpp:
     157        (JSC::DFG::VirtualRegisterAllocationPhase::allocateRegister):
     158        (JSC::DFG::VirtualRegisterAllocationPhase::run):
     159        * ftl/FTLCapabilities.cpp:
     160        (JSC::FTL::canCompile):
     161        * ftl/FTLJITCode.cpp:
     162        (JSC::FTL::JITCode::~JITCode):
     163        (JSC::FTL::JITCode::initializeEntrypointThunk):
     164        (JSC::FTL::JITCode::setEntryFor):
     165        (JSC::FTL::JITCode::addressForCall):
     166        (JSC::FTL::JITCode::executableAddressAtOffset):
     167        (JSC::FTL::JITCode::initializeAddressForCall): Deleted.
     168        (JSC::FTL::JITCode::initializeArityCheckEntrypoint): Deleted.
     169        * ftl/FTLJITCode.h:
     170        * ftl/FTLJITFinalizer.cpp:
     171        (JSC::FTL::JITFinalizer::finalizeFunction):
     172        * ftl/FTLLink.cpp:
     173        (JSC::FTL::link):
     174        * ftl/FTLLowerDFGToB3.cpp:
     175        (JSC::FTL::DFG::LowerDFGToB3::lower):
     176        (JSC::FTL::DFG::LowerDFGToB3::compileNode):
     177        (JSC::FTL::DFG::LowerDFGToB3::compileGetArgumentRegister):
     178        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstruct):
     179        (JSC::FTL::DFG::LowerDFGToB3::compileDirectCallOrConstruct):
     180        (JSC::FTL::DFG::LowerDFGToB3::compileTailCall):
     181        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
     182        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
     183        (JSC::FTL::DFG::LowerDFGToB3::compileCallEval):
     184        * ftl/FTLOSREntry.cpp:
     185        (JSC::FTL::prepareOSREntry):
     186        * ftl/FTLOutput.cpp:
     187        (JSC::FTL::Output::argumentRegister):
     188        (JSC::FTL::Output::argumentRegisterInt32):
     189        * ftl/FTLOutput.h:
     190        * interpreter/ShadowChicken.cpp:
     191        (JSC::ShadowChicken::update):
     192        * jit/AssemblyHelpers.cpp:
     193        (JSC::AssemblyHelpers::emitDumbVirtualCall):
     194        * jit/AssemblyHelpers.h:
     195        (JSC::AssemblyHelpers::spillArgumentRegistersToFrameBeforePrologue):
     196        (JSC::AssemblyHelpers::spillArgumentRegistersToFrame):
     197        (JSC::AssemblyHelpers::fillArgumentRegistersFromFrameBeforePrologue):
     198        (JSC::AssemblyHelpers::emitPutArgumentToCallFrameBeforePrologue):
     199        (JSC::AssemblyHelpers::emitPutArgumentToCallFrame):
     200        (JSC::AssemblyHelpers::emitGetFromCallFrameHeaderBeforePrologue):
     201        (JSC::AssemblyHelpers::emitGetFromCallFrameArgumentBeforePrologue):
     202        (JSC::AssemblyHelpers::emitGetPayloadFromCallFrameHeaderBeforePrologue):
     203        (JSC::AssemblyHelpers::incrementCounter):
     204        * jit/CachedRecovery.cpp:
     205        (JSC::CachedRecovery::addTargetJSValueRegs):
     206        * jit/CachedRecovery.h:
     207        (JSC::CachedRecovery::gprTargets):
     208        (JSC::CachedRecovery::setWantedFPR):
     209        (JSC::CachedRecovery::wantedJSValueRegs):
     210        (JSC::CachedRecovery::setWantedJSValueRegs): Deleted.
     211        * jit/CallFrameShuffleData.h:
     212        * jit/CallFrameShuffler.cpp:
     213        (JSC::CallFrameShuffler::CallFrameShuffler):
     214        (JSC::CallFrameShuffler::dump):
     215        (JSC::CallFrameShuffler::tryWrites):
     216        (JSC::CallFrameShuffler::prepareAny):
     217        * jit/CallFrameShuffler.h:
     218        (JSC::CallFrameShuffler::snapshot):
     219        (JSC::CallFrameShuffler::addNew):
     220        (JSC::CallFrameShuffler::initDangerFrontier):
     221        (JSC::CallFrameShuffler::updateDangerFrontier):
     222        (JSC::CallFrameShuffler::findDangerFrontierFrom):
     223        * jit/CallFrameShuffler64.cpp:
     224        (JSC::CallFrameShuffler::emitDisplace):
     225        * jit/GPRInfo.h:
     226        (JSC::JSValueRegs::operator==):
     227        (JSC::JSValueRegs::operator!=):
     228        (JSC::GPRInfo::toArgumentIndex):
     229        (JSC::argumentRegisterFor):
     230        (JSC::argumentRegisterForCallee):
     231        (JSC::argumentRegisterForArgumentCount):
     232        (JSC::argumentRegisterIndexForJSFunctionArgument):
     233        (JSC::jsFunctionArgumentForArgumentRegister):
     234        (JSC::argumentRegisterForFunctionArgument):
     235        (JSC::numberOfRegisterArgumentsFor):
     236        * jit/JIT.cpp:
     237        (JSC::JIT::compileWithoutLinking):
     238        (JSC::JIT::link):
     239        (JSC::JIT::compileCTINativeCall): Deleted.
     240        * jit/JIT.h:
     241        (JSC::JIT::compileNativeCallEntryPoints):
     242        * jit/JITCall.cpp:
     243        (JSC::JIT::compileSetupVarargsFrame):
     244        (JSC::JIT::compileCallEval):
     245        (JSC::JIT::compileCallEvalSlowCase):
     246        (JSC::JIT::compileOpCall):
     247        (JSC::JIT::compileOpCallSlowCase):
     248        * jit/JITCall32_64.cpp:
     249        (JSC::JIT::compileCallEvalSlowCase):
     250        (JSC::JIT::compileOpCall):
     251        (JSC::JIT::compileOpCallSlowCase):
     252        * jit/JITCode.cpp:
     253        (JSC::JITCode::execute):
     254        (JSC::DirectJITCode::DirectJITCode):
     255        (JSC::DirectJITCode::initializeEntryPoints):
     256        (JSC::DirectJITCode::addressForCall):
     257        (JSC::NativeJITCode::addressForCall):
     258        (JSC::DirectJITCode::initializeCodeRef): Deleted.
     259        * jit/JITCode.h:
     260        (JSC::JITCode::executableAddress): Deleted.
     261        * jit/JITEntryPoints.h: Added.
     262        (JSC::JITEntryPoints::JITEntryPoints):
     263        (JSC::JITEntryPoints::entryFor):
     264        (JSC::JITEntryPoints::setEntryFor):
     265        (JSC::JITEntryPoints::offsetOfEntryFor):
     266        (JSC::JITEntryPoints::registerEntryTypeForArgumentCount):
     267        (JSC::JITEntryPoints::registerEntryTypeForArgumentType):
     268        (JSC::JITEntryPoints::clearEntries):
     269        (JSC::JITEntryPoints::operator=):
     270        (JSC::JITEntryPointsWithRef::JITEntryPointsWithRef):
     271        (JSC::JITEntryPointsWithRef::codeRef):
     272        (JSC::argumentsLocationFor):
     273        (JSC::registerEntryPointTypeFor):
     274        (JSC::entryPointTypeFor):
     275        (JSC::thunkEntryPointTypeFor):
     276        (JSC::JITJSCallThunkEntryPointsWithRef::JITJSCallThunkEntryPointsWithRef):
     277        (JSC::JITJSCallThunkEntryPointsWithRef::entryFor):
     278        (JSC::JITJSCallThunkEntryPointsWithRef::setEntryFor):
     279        (JSC::JITJSCallThunkEntryPointsWithRef::offsetOfEntryFor):
     280        (JSC::JITJSCallThunkEntryPointsWithRef::clearEntries):
     281        (JSC::JITJSCallThunkEntryPointsWithRef::codeRef):
     282        (JSC::JITJSCallThunkEntryPointsWithRef::operator=):
     283        * jit/JITOpcodes.cpp:
     284        (JSC::JIT::privateCompileJITEntryNativeCall):
     285        (JSC::JIT::privateCompileCTINativeCall): Deleted.
     286        * jit/JITOpcodes32_64.cpp:
     287        (JSC::JIT::privateCompileJITEntryNativeCall):
     288        (JSC::JIT::privateCompileCTINativeCall): Deleted.
     289        * jit/JITOperations.cpp:
     290        * jit/JITThunks.cpp:
     291        (JSC::JITThunks::jitEntryNativeCall):
     292        (JSC::JITThunks::jitEntryNativeConstruct):
     293        (JSC::JITThunks::jitEntryStub):
     294        (JSC::JITThunks::jitCallThunkEntryStub):
     295        (JSC::JITThunks::hostFunctionStub):
     296        (JSC::JITThunks::ctiNativeCall): Deleted.
     297        (JSC::JITThunks::ctiNativeConstruct): Deleted.
     298        * jit/JITThunks.h:
     299        * jit/JSInterfaceJIT.h:
     300        (JSC::JSInterfaceJIT::emitJumpIfNotInt32):
     301        (JSC::JSInterfaceJIT::emitLoadInt32):
     302        * jit/RegisterSet.cpp:
     303        (JSC::RegisterSet::argumentRegisters):
     304        * jit/RegisterSet.h:
     305        * jit/Repatch.cpp:
     306        (JSC::linkSlowFor):
     307        (JSC::revertCall):
     308        (JSC::unlinkFor):
     309        (JSC::linkVirtualFor):
     310        (JSC::linkPolymorphicCall):
     311        * jit/SpecializedThunkJIT.h:
     312        (JSC::SpecializedThunkJIT::SpecializedThunkJIT):
     313        (JSC::SpecializedThunkJIT::checkJSStringArgument):
     314        (JSC::SpecializedThunkJIT::linkFailureHere):
     315        (JSC::SpecializedThunkJIT::finalize):
     316        * jit/ThunkGenerator.h:
     317        * jit/ThunkGenerators.cpp:
     318        (JSC::createRegisterArgumentsSpillEntry):
     319        (JSC::slowPathFor):
     320        (JSC::linkCallThunkGenerator):
     321        (JSC::linkDirectCallThunkGenerator):
     322        (JSC::linkPolymorphicCallThunkGenerator):
     323        (JSC::virtualThunkFor):
     324        (JSC::nativeForGenerator):
     325        (JSC::nativeCallGenerator):
     326        (JSC::nativeTailCallGenerator):
     327        (JSC::nativeTailCallWithoutSavedTagsGenerator):
     328        (JSC::nativeConstructGenerator):
     329        (JSC::stringCharLoadRegCall):
     330        (JSC::charCodeAtThunkGenerator):
     331        (JSC::charAtThunkGenerator):
     332        (JSC::fromCharCodeThunkGenerator):
     333        (JSC::clz32ThunkGenerator):
     334        (JSC::sqrtThunkGenerator):
     335        (JSC::floorThunkGenerator):
     336        (JSC::ceilThunkGenerator):
     337        (JSC::truncThunkGenerator):
     338        (JSC::roundThunkGenerator):
     339        (JSC::expThunkGenerator):
     340        (JSC::logThunkGenerator):
     341        (JSC::absThunkGenerator):
     342        (JSC::imulThunkGenerator):
     343        (JSC::randomThunkGenerator):
     344        (JSC::boundThisNoArgsFunctionCallGenerator):
     345        * jit/ThunkGenerators.h:
     346        * jsc.cpp:
     347        (jscmain):
     348        * llint/LLIntEntrypoint.cpp:
     349        (JSC::LLInt::setFunctionEntrypoint):
     350        (JSC::LLInt::setEvalEntrypoint):
     351        (JSC::LLInt::setProgramEntrypoint):
     352        (JSC::LLInt::setModuleProgramEntrypoint):
     353        * llint/LLIntSlowPaths.cpp:
     354        (JSC::LLInt::entryOSR):
     355        (JSC::LLInt::setUpCall):
     356        * llint/LLIntThunks.cpp:
     357        (JSC::LLInt::generateThunkWithJumpTo):
     358        (JSC::LLInt::functionForRegisterCallEntryThunkGenerator):
     359        (JSC::LLInt::functionForStackCallEntryThunkGenerator):
     360        (JSC::LLInt::functionForRegisterConstructEntryThunkGenerator):
     361        (JSC::LLInt::functionForStackConstructEntryThunkGenerator):
     362        (JSC::LLInt::functionForRegisterCallArityCheckThunkGenerator):
     363        (JSC::LLInt::functionForStackCallArityCheckThunkGenerator):
     364        (JSC::LLInt::functionForRegisterConstructArityCheckThunkGenerator):
     365        (JSC::LLInt::functionForStackConstructArityCheckThunkGenerator):
     366        (JSC::LLInt::functionForCallEntryThunkGenerator): Deleted.
     367        (JSC::LLInt::functionForConstructEntryThunkGenerator): Deleted.
     368        (JSC::LLInt::functionForCallArityCheckThunkGenerator): Deleted.
     369        (JSC::LLInt::functionForConstructArityCheckThunkGenerator): Deleted.
     370        * llint/LLIntThunks.h:
     371        * runtime/ArityCheckMode.h:
     372        * runtime/ExecutableBase.cpp:
     373        (JSC::ExecutableBase::clearCode):
     374        * runtime/ExecutableBase.h:
     375        (JSC::ExecutableBase::entrypointFor):
     376        (JSC::ExecutableBase::offsetOfEntryFor):
     377        (JSC::ExecutableBase::offsetOfJITCodeWithArityCheckFor): Deleted.
     378        * runtime/JSBoundFunction.cpp:
     379        (JSC::boundThisNoArgsFunctionCall):
     380        * runtime/NativeExecutable.cpp:
     381        (JSC::NativeExecutable::finishCreation):
     382        * runtime/ScriptExecutable.cpp:
     383        (JSC::ScriptExecutable::installCode):
     384        * runtime/VM.cpp:
     385        (JSC::VM::VM):
     386        (JSC::thunkGeneratorForIntrinsic):
     387        (JSC::VM::clearCounters):
     388        (JSC::VM::dumpCounters):
     389        * runtime/VM.h:
     390        (JSC::VM::getJITEntryStub):
     391        (JSC::VM::getJITCallThunkEntryStub):
     392        (JSC::VM::addressOfCounter):
     393        (JSC::VM::counterFor):
     394        * wasm/WasmBinding.cpp:
     395        (JSC::Wasm::importStubGenerator):
     396
    13972016-12-09  Keith Miller  <[email protected]>
    2398
  • trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj

    r209630 r209653  
    13511351                65C0285C1717966800351E35 /* ARMv7DOpcode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65C0285A1717966800351E35 /* ARMv7DOpcode.cpp */; };
    13521352                65C0285D1717966800351E35 /* ARMv7DOpcode.h in Headers */ = {isa = PBXBuildFile; fileRef = 65C0285B1717966800351E35 /* ARMv7DOpcode.h */; };
     1353                65DBF3021D93392B003AF4B0 /* JITEntryPoints.h in Headers */ = {isa = PBXBuildFile; fileRef = 650300F21C50274600D786D7 /* JITEntryPoints.h */; settings = {ATTRIBUTES = (Private, ); }; };
    13531354                65FB5117184EEE7000C12B70 /* ProtoCallFrame.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65FB5116184EE9BC00C12B70 /* ProtoCallFrame.cpp */; };
    13541355                65FB63A41C8EA09C0020719B /* YarrCanonicalizeUnicode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65A946141C8E9F6F00A7209A /* YarrCanonicalizeUnicode.cpp */; };
     
    37213722                62EC9BB41B7EB07C00303AD1 /* CallFrameShuffleData.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallFrameShuffleData.cpp; sourceTree = "<group>"; };
    37223723                62EC9BB51B7EB07C00303AD1 /* CallFrameShuffleData.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallFrameShuffleData.h; sourceTree = "<group>"; };
     3724                650300F21C50274600D786D7 /* JITEntryPoints.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITEntryPoints.h; sourceTree = "<group>"; };
    37233725                6507D2970E871E4A00D7D896 /* JSTypeInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSTypeInfo.h; sourceTree = "<group>"; };
    37243726                651122E5140469BA002B101D /* testRegExp.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = testRegExp.cpp; sourceTree = "<group>"; };
     
    55595561                                FE187A0A1C0229230038BBCA /* JITDivGenerator.cpp */,
    55605562                                FE187A0B1C0229230038BBCA /* JITDivGenerator.h */,
     5563                                650300F21C50274600D786D7 /* JITEntryPoints.h */,
    55615564                                0F46807F14BA572700BFE272 /* JITExceptions.cpp */,
    55625565                                0F46808014BA572700BFE272 /* JITExceptions.h */,
     
    77157718                                53D444DC1DAF08AB00B92784 /* B3WasmAddressValue.h in Headers */,
    77167719                                990DA67F1C8E316A00295159 /* generate_objc_protocol_type_conversions_implementation.py in Headers */,
     7720                                65DBF3021D93392B003AF4B0 /* JITEntryPoints.h in Headers */,
    77177721                                DC17E8191C9C91DB008A6AB3 /* ShadowChickenInlines.h in Headers */,
    77187722                                DC17E8181C9C91D9008A6AB3 /* ShadowChicken.h in Headers */,
  • trunk/Source/JavaScriptCore/b3/B3ArgumentRegValue.h

    r206595 r209653  
    5656    }
    5757
     58    ArgumentRegValue(Origin origin, Reg reg, Type type)
     59        : Value(CheckedOpcode, ArgumentReg, type, origin)
     60        , m_reg(reg)
     61    {
     62        ASSERT(reg.isSet());
     63    }
     64
    5865    Reg m_reg;
    5966};
  • trunk/Source/JavaScriptCore/b3/B3Validate.cpp

    r208848 r209653  
    183183                VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
    184184                VALIDATE(!value->numChildren(), ("At ", *value));
    185                 VALIDATE(
    186                     (value->as<ArgumentRegValue>()->argumentReg().isGPR() ? pointerType() : Double)
    187                     == value->type(), ("At ", *value));
     185                // FIXME: https://bugs.webkit.org/show_bug.cgi?id=165717
     186                // We need to handle Int32 arguments and Int64 arguments
     187                // for the same register distinctly.
     188                VALIDATE((value->as<ArgumentRegValue>()->argumentReg().isGPR()
     189                    ? (value->type() == pointerType() || value->type() == Int32)
     190                    : value->type() == Double), ("At ", *value));
    188191                break;
    189192            case Add:
  • trunk/Source/JavaScriptCore/bytecode/CallLinkInfo.cpp

    r208309 r209653  
    6161    , m_clearedByGC(false)
    6262    , m_allowStubs(true)
     63    , m_argumentsLocation(static_cast<unsigned>(ArgumentsLocation::StackArgs))
    6364    , m_isLinked(false)
    6465    , m_callType(None)
  • trunk/Source/JavaScriptCore/bytecode/CallLinkInfo.h

    r207475 r209653  
    2929#include "CodeLocation.h"
    3030#include "CodeSpecializationKind.h"
     31#include "JITEntryPoints.h"
    3132#include "PolymorphicCallStubRoutine.h"
    3233#include "WriteBarrier.h"
     
    158159    void unlink(VM&);
    159160
    160     void setUpCall(CallType callType, CodeOrigin codeOrigin, unsigned calleeGPR)
    161     {
     161    void setUpCall(CallType callType, ArgumentsLocation argumentsLocation, CodeOrigin codeOrigin, unsigned calleeGPR)
     162    {
     163        ASSERT(!isVarargsCallType(callType) || (argumentsLocation == StackArgs));
     164
    162165        m_callType = callType;
     166        m_argumentsLocation = static_cast<unsigned>(argumentsLocation);
    163167        m_codeOrigin = codeOrigin;
    164168        m_calleeGPR = calleeGPR;
     
    274278    {
    275279        return static_cast<CallType>(m_callType);
     280    }
     281
     282    ArgumentsLocation argumentsLocation()
     283    {
     284        return static_cast<ArgumentsLocation>(m_argumentsLocation);
     285    }
     286
     287    bool argumentsInRegisters()
     288    {
     289        return m_argumentsLocation != StackArgs;
    276290    }
    277291
     
    340354    bool m_clearedByGC : 1;
    341355    bool m_allowStubs : 1;
     356    unsigned m_argumentsLocation : 4;
    342357    bool m_isLinked : 1;
    343358    unsigned m_callType : 4; // CallType
  • trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp

    r209594 r209653  
    10331033           
    10341034            m_rareData->callLinkInfo->setUpCall(
    1035                 CallLinkInfo::Call, stubInfo.codeOrigin, loadedValueGPR);
     1035                CallLinkInfo::Call, StackArgs, stubInfo.codeOrigin, loadedValueGPR);
    10361036
    10371037            CCallHelpers::JumpList done;
     
    11061106            jit.move(CCallHelpers::TrustedImm32(JSValue::CellTag), GPRInfo::regT1);
    11071107#endif
    1108             jit.move(CCallHelpers::TrustedImmPtr(m_rareData->callLinkInfo.get()), GPRInfo::regT2);
     1108            jit.move(CCallHelpers::TrustedImmPtr(m_rareData->callLinkInfo.get()), GPRInfo::nonArgGPR0);
    11091109            slowPathCall = jit.nearCall();
    11101110            if (m_type == Getter)
     
    11321132                    linkBuffer.link(
    11331133                        slowPathCall,
    1134                         CodeLocationLabel(vm.getCTIStub(linkCallThunkGenerator).code()));
     1134                        CodeLocationLabel(vm.getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(StackArgs)));
    11351135                });
    11361136        } else {
  • trunk/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h

    r209638 r209653  
    272272        ASSERT(!m_state.variables().operand(node->local()).isClear());
    273273        break;
    274        
     274
     275    case GetArgumentRegister:
     276        ASSERT(!m_state.variables().operand(node->local()).isClear());
     277        if (node->variableAccessData()->flushFormat() == FlushedJSValue) {
     278            forNode(node).makeBytecodeTop();
     279            break;
     280        }
     281
     282        forNode(node).setType(m_graph, typeFilterFor(node->variableAccessData()->flushFormat()));
     283        break;
     284
    275285    case LoadVarargs:
    276286    case ForwardVarargs: {
  • trunk/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp

    r209638 r209653  
    36983698    // opposed to using a value we set explicitly.
    36993699    if (m_currentBlock == m_graph.block(0) && !inlineCallFrame()) {
    3700         m_graph.m_arguments.resize(m_numArguments);
    3701         // We will emit SetArgument nodes. They don't exit, but we're at the top of an op_enter so
    3702         // exitOK = true.
     3700        m_graph.m_argumentsOnStack.resize(m_numArguments);
     3701        m_graph.m_argumentsForChecking.resize(m_numArguments);
     3702        // Create all GetArgumentRegister nodes first and then the corresponding MovHint nodes,
     3703        // followed by the corresponding SetLocal nodes and finally any SetArgument nodes for
     3704        // the remaining arguments.
     3705        // We do this to make the exit processing correct. We start with m_exitOK = true since
     3706        // GetArgumentRegister nodes can exit, even though they don't. The MovHint's technically could
     3707        // exit but won't. The SetLocals can exit and therefore we want all the MovHints
     3708        // before the first SetLocal so that the register state is consistent.
     3709        // We do all this processing before creating any SetArgument nodes since they are
     3710        // morally equivalent to the SetLocals for GetArgumentRegister nodes.
    37033711        m_exitOK = true;
    3704         for (unsigned argument = 0; argument < m_numArguments; ++argument) {
     3712       
     3713        unsigned numRegisterArguments = std::min(m_numArguments, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS);
     3714
     3715        Vector<Node*, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS> getArgumentRegisterNodes;
     3716
     3717        // First create GetArgumentRegister nodes.
     3718        for (unsigned argument = 0; argument < numRegisterArguments; ++argument) {
     3719            getArgumentRegisterNodes.append(
     3720                addToGraph(GetArgumentRegister, OpInfo(0),
     3721                    OpInfo(argumentRegisterIndexForJSFunctionArgument(argument))));
     3722        }
     3723
     3724        // Create all the MovHint's for the GetArgumentRegister nodes created above.
     3725        for (unsigned i = 0; i < getArgumentRegisterNodes.size(); ++i) {
     3726            Node* getArgumentRegister = getArgumentRegisterNodes[i];
     3727            addToGraph(MovHint, OpInfo(virtualRegisterForArgument(i).offset()), getArgumentRegister);
     3728            // We can't exit anymore.
     3729            m_exitOK = false;
     3730        }
     3731
     3732        // Exit is now okay, but we need to fence with an ExitOK node.
     3733        m_exitOK = true;
     3734        addToGraph(ExitOK);
     3735
     3736        // Create all the SetLocals's for the GetArgumentRegister nodes created above.
     3737        for (unsigned i = 0; i < getArgumentRegisterNodes.size(); ++i) {
     3738            Node* getArgumentRegister = getArgumentRegisterNodes[i];
     3739            VariableAccessData* variableAccessData = newVariableAccessData(virtualRegisterForArgument(i));
     3740            variableAccessData->mergeStructureCheckHoistingFailed(
     3741                m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadCache));
     3742            variableAccessData->mergeCheckArrayHoistingFailed(
     3743                m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadIndexingType));
     3744            Node* setLocal = addToGraph(SetLocal, OpInfo(variableAccessData), getArgumentRegister);
     3745            m_currentBlock->variablesAtTail.argument(i) = setLocal;
     3746            getArgumentRegister->setVariableAccessData(setLocal->variableAccessData());
     3747            m_graph.m_argumentsOnStack[i] = setLocal;
     3748            m_graph.m_argumentsForChecking[i] = getArgumentRegister;
     3749        }
     3750
     3751        // Finally create any SetArgument nodes.
     3752        for (unsigned argument = NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argument < m_numArguments; ++argument) {
    37053753            VariableAccessData* variable = newVariableAccessData(
    37063754                virtualRegisterForArgument(argument));
     
    37113759           
    37123760            Node* setArgument = addToGraph(SetArgument, OpInfo(variable));
    3713             m_graph.m_arguments[argument] = setArgument;
     3761            m_graph.m_argumentsOnStack[argument] = setArgument;
     3762            m_graph.m_argumentsForChecking[argument] = setArgument;
    37143763            m_currentBlock->variablesAtTail.setArgumentFirstTime(argument, setArgument);
    37153764        }
     
    48214870            // done by the arguments object creation node as that node may not exist.
    48224871            noticeArgumentsUse();
     4872            Terminality terminality = handleVarargsCall(currentInstruction, TailCallForwardVarargs, CallMode::Tail);
     4873            // We need to insert flush nodes for our arguments after the TailCallForwardVarargs
     4874            // node so that they will be flushed to the stack and kept alive.
    48234875            flushForReturn();
    4824             Terminality terminality = handleVarargsCall(currentInstruction, TailCallForwardVarargs, CallMode::Tail);
    48254876            ASSERT_WITH_MESSAGE(m_currentInstruction == currentInstruction, "handleVarargsCall, which may have inlined the callee, trashed m_currentInstruction");
    48264877            // If the call is terminal then we should not parse any further bytecodes as the TailCall will exit the function.
  • trunk/Source/JavaScriptCore/dfg/DFGCPSRethreadingPhase.cpp

    r203808 r209653  
    300300            //
    301301            // Head variable: describes what is live at the head of the basic block.
    302             // Head variable links may refer to Flush, PhantomLocal, Phi, or SetArgument.
    303             // SetArgument may only appear in the root block.
     302            // Head variable links may refer to Flush, PhantomLocal, Phi, GetArgumentRegister
     303            // or SetArgument.
     304            // GetArgumentRegister and SetArgument may only appear in the root block.
    304305            //
    305306            // Tail variable: the last thing that happened to the variable in the block.
    306             // It may be a Flush, PhantomLocal, GetLocal, SetLocal, SetArgument, or Phi.
    307             // SetArgument may only appear in the root block. Note that if there ever
    308             // was a GetLocal to the variable, and it was followed by PhantomLocals and
    309             // Flushes but not SetLocals, then the tail variable will be the GetLocal.
     307            // It may be a Flush, PhantomLocal, GetLocal, SetLocal, GetArgumentRegister,
     308            // SetArgument, or Phi. GetArgumentRegister and SetArgument may only appear
     309            // in the root block. Note that if there ever was a GetLocal to the variable,
     310            // and it was followed by PhantomLocals and Flushes but not SetLocals, then
     311            // the tail variable will be the GetLocal.
    310312            // This reflects the fact that you only care that the tail variable is a
    311313            // Flush or PhantomLocal if nothing else interesting happened. Likewise, if
     
    368370    void specialCaseArguments()
    369371    {
    370         // Normally, a SetArgument denotes the start of a live range for a local's value on the stack.
    371         // But those SetArguments used for the actual arguments to the machine CodeBlock get
    372         // special-cased. We could have instead used two different node types - one for the arguments
    373         // at the prologue case, and another for the other uses. But this seemed like IR overkill.
    374         for (unsigned i = m_graph.m_arguments.size(); i--;)
    375             m_graph.block(0)->variablesAtHead.setArgumentFirstTime(i, m_graph.m_arguments[i]);
     372        // Normally, a SetArgument or SetLocal denotes the start of a live range for
     373        // a local's value on the stack. But those SetArguments and SetLocals used
     374        // for the actual arguments to the machine CodeBlock get special-cased. We could have
     375        // instead used two different node types - one for the arguments at the prologue case,
     376        // and another for the other uses. But this seemed like IR overkill.
     377        for (unsigned i = m_graph.m_argumentsOnStack.size(); i--;)
     378            m_graph.block(0)->variablesAtHead.setArgumentFirstTime(i, m_graph.m_argumentsOnStack[i]);
    376379    }
    377380   
     
    481484            case SetLocal:
    482485            case SetArgument:
     486            case GetArgumentRegister:
    483487                break;
    484488               
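
As a rough illustration of the head/tail-link rule the updated comments describe (toy types, not the real DFG classes): GetArgumentRegister joins SetArgument as a node kind that may anchor a variable link, and both are restricted to the root block. A minimal sketch, assuming a simplified NodeOp enum:

    // Hypothetical illustration of the head-variable rule; the enum and
    // checker are toy stand-ins, not JavaScriptCore types.
    #include <cassert>

    enum class NodeOp { Flush, PhantomLocal, Phi, GetArgumentRegister, SetArgument, GetLocal, SetLocal };

    static bool mayAppearAtHead(NodeOp op, bool isRootBlock)
    {
        switch (op) {
        case NodeOp::Flush:
        case NodeOp::PhantomLocal:
        case NodeOp::Phi:
            return true;
        case NodeOp::GetArgumentRegister:
        case NodeOp::SetArgument:
            // These describe the machine code block's own arguments, so they
            // are only meaningful at the head of the root block.
            return isRootBlock;
        default:
            return false;
        }
    }

    int main()
    {
        assert(mayAppearAtHead(NodeOp::GetArgumentRegister, /* isRootBlock */ true));
        assert(!mayAppearAtHead(NodeOp::GetArgumentRegister, /* isRootBlock */ false));
        return 0;
    }
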
  • trunk/Source/JavaScriptCore/dfg/DFGClobberize.h

    r209638 r209653  
    407407    case PhantomLocal:
    408408    case SetArgument:
     409    case GetArgumentRegister:
    409410    case Jump:
    410411    case Branch:
     
    471472        // DFG backend requires that the locals that this reads are flushed. FTL backend can handle those
    472473        // locals being promoted.
    473         if (!isFTL(graph.m_plan.mode))
     474        if (!isFTL(graph.m_plan.mode) && !node->origin.semantic.inlineCallFrame)
    474475            read(Stack);
    475476       
     
    560561    case DirectTailCall:
    561562    case TailCallVarargs:
    562     case TailCallForwardVarargs:
    563563        read(World);
    564564        write(SideState);
    565565        return;
    566566       
     567    case TailCallForwardVarargs:
     568        // We read all arguments after "this".
     569        for (unsigned arg = 1; arg < graph.m_argumentsOnStack.size(); arg++)
     570            read(AbstractHeap(Stack, virtualRegisterForArgument(arg)));
     571        read(World);
     572        write(SideState);
     573        return;
     574
    567575    case GetGetter:
    568576        read(GetterSetter_getter);
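
The new TailCallForwardVarargs case reports an explicit stack read for every caller argument slot after "this", so those slots are kept alive for the forwarded call (matching the flushForReturn comment earlier in this changeset). A minimal sketch of that read set, with placeholder offsets rather than JavaScriptCore's real call-frame layout:

    // Hypothetical model of the per-argument stack reads added above.
    #include <cstdio>
    #include <vector>

    struct StackRead { int virtualRegisterOffset; };

    static std::vector<StackRead> readsForTailCallForwardVarargs(unsigned argumentsOnStack)
    {
        std::vector<StackRead> reads;
        const int firstArgumentOffset = 1; // placeholder for virtualRegisterForArgument(1)
        for (unsigned arg = 1; arg < argumentsOnStack; ++arg)
            reads.push_back({ firstArgumentOffset + static_cast<int>(arg) - 1 });
        return reads;
    }

    int main()
    {
        // Four entries on the stack: "this" plus three arguments; only the
        // three arguments after "this" are reported as reads.
        for (const StackRead& read : readsForTailCallForwardVarargs(4))
            std::printf("read(Stack, argument offset %d)\n", read.virtualRegisterOffset);
        return 0;
    }
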
  • trunk/Source/JavaScriptCore/dfg/DFGCommon.h

    r206899 r209653  
    153153enum OptimizationFixpointState { BeforeFixpoint, FixpointNotConverged, FixpointConverged };
    154154
     155enum StrengthReduceArgumentFlushes { DontOptimizeArgumentFlushes, OptimizeArgumentFlushes };
     156
    155157// Describes the form you can expect the entire graph to be in.
    156158enum GraphForm {
  • trunk/Source/JavaScriptCore/dfg/DFGDCEPhase.cpp

    r203808 r209653  
    5454            fixupBlock(block);
    5555       
    56         cleanVariables(m_graph.m_arguments);
     56        cleanVariables(m_graph.m_argumentsOnStack);
     57        cleanVariables(m_graph.m_argumentsForChecking);
    5758
    5859        // Just do a basic Phantom/Check clean-up.
  • trunk/Source/JavaScriptCore/dfg/DFGDoesGC.cpp

    r209638 r209653  
    262262    case GetFromArguments:
    263263    case PutToArguments:
     264    case GetArgumentRegister:
    264265    case GetArgument:
    265266    case LogShadowChickenPrologue:
  • trunk/Source/JavaScriptCore/dfg/DFGDriver.cpp

    r208777 r209653  
    9191    vm.getCTIStub(osrExitGenerationThunkGenerator);
    9292    vm.getCTIStub(throwExceptionFromCallSlowPathGenerator);
    93     vm.getCTIStub(linkCallThunkGenerator);
    94     vm.getCTIStub(linkPolymorphicCallThunkGenerator);
     93    vm.getJITCallThunkEntryStub(linkCallThunkGenerator);
     94    vm.getJITCallThunkEntryStub(linkDirectCallThunkGenerator);
     95    vm.getJITCallThunkEntryStub(linkPolymorphicCallThunkGenerator);
    9596   
    9697    if (vm.typeProfiler())
  • trunk/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp

    r209638 r209653  
    17921792        case GetLocal:
    17931793        case GetCallee:
     1794        case GetArgumentRegister:
    17941795        case GetArgumentCountIncludingThis:
    17951796        case GetRestLength:
  • trunk/Source/JavaScriptCore/dfg/DFGGenerationInfo.h

    r206525 r209653  
    105105        initGPR(node, useCount, gpr, format);
    106106    }
     107
     108    void initArgumentRegisterValue(Node* node, uint32_t useCount, GPRReg gpr, DataFormat registerFormat =  DataFormatJS)
     109    {
     110        m_node = node;
     111        m_useCount = useCount;
     112        m_registerFormat = registerFormat;
     113        m_spillFormat = DataFormatNone;
     114        m_canFill = false;
     115        u.gpr = gpr;
     116        m_bornForOSR = false;
     117        m_isConstant = false;
     118        ASSERT(m_useCount);
     119    }
    107120#elif USE(JSVALUE32_64)
    108121    void initJSValue(Node* node, uint32_t useCount, GPRReg tagGPR, GPRReg payloadGPR, DataFormat format = DataFormatJS)
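
The new initArgumentRegisterValue() records a value that is born in an argument register with no stack backing: the spill format stays empty and the value cannot be refilled from the frame. A toy mirror of that state (simplified fields, not the real GenerationInfo class):

    // Toy stand-in for the register-born argument state added above; field
    // names follow the diff, types are simplified placeholders.
    #include <cassert>
    #include <cstdint>

    enum class DataFormat { None, JS };

    struct ToyGenerationInfo {
        uint32_t useCount = 0;
        DataFormat registerFormat = DataFormat::None;
        DataFormat spillFormat = DataFormat::None;
        bool canFill = false;
        int gpr = -1;

        void initArgumentRegisterValue(uint32_t uses, int argumentGPR, DataFormat format = DataFormat::JS)
        {
            useCount = uses;
            registerFormat = format;        // live in a register right away
            spillFormat = DataFormat::None; // nothing on the stack yet
            canFill = false;                // so it cannot be refilled from the frame
            gpr = argumentGPR;
            assert(useCount);
        }
    };

    int main()
    {
        ToyGenerationInfo info;
        info.initArgumentRegisterValue(2, /* hypothetical GPR index */ 0);
        assert(info.registerFormat == DataFormat::JS && !info.canFill);
        return 0;
    }
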
  • trunk/Source/JavaScriptCore/dfg/DFGGraph.cpp

    r208761 r209653  
    295295            out.print(comma, inContext(data.variants[i], context));
    296296    }
    297     ASSERT(node->hasVariableAccessData(*this) == node->accessesStack(*this));
    298297    if (node->hasVariableAccessData(*this)) {
    299298        VariableAccessData* variableAccessData = node->tryGetVariableAccessData();
     
    374373        out.print(comma, "default:", data->fallThrough);
    375374    }
     375    if (node->hasArgumentRegisterIndex())
     376        out.print(comma, node->argumentRegisterIndex(), "(", GPRInfo::toArgumentRegister(node->argumentRegisterIndex()), ")");
    376377    ClobberSet reads;
    377378    ClobberSet writes;
     
    397398    out.print(")");
    398399
    399     if (node->accessesStack(*this) && node->tryGetVariableAccessData())
     400    if ((node->accessesStack(*this) || node->op() == GetArgumentRegister) && node->tryGetVariableAccessData())
    400401        out.print("  predicting ", SpeculationDump(node->tryGetVariableAccessData()->prediction()));
    401402    else if (node->hasHeapPrediction())
     
    507508    if (m_form == SSA)
    508509        out.print("  Argument formats: ", listDump(m_argumentFormats), "\n");
    509     else
    510         out.print("  Arguments: ", listDump(m_arguments), "\n");
     510    else {
     511        out.print("  Arguments for checking: ", listDump(m_argumentsForChecking), "\n");
     512        out.print("  Arguments on stack: ", listDump(m_argumentsOnStack), "\n");
     513    }
    511514    out.print("\n");
    512515   
     
    16211624            CodeBlock* profiledBlock = baselineCodeBlockFor(node->origin.semantic);
    16221625
    1623             if (node->accessesStack(*this)) {
     1626            if (node->accessesStack(*this) || node->op() == GetArgumentRegister) {
    16241627                ValueProfile* result = [&] () -> ValueProfile* {
    16251628                    if (!node->local().isArgument())
    16261629                        return nullptr;
    16271630                    int argument = node->local().toArgument();
    1628                     Node* argumentNode = m_arguments[argument];
    1629                     if (!argumentNode)
     1631                    Node* argumentNode = m_argumentsOnStack[argument];
     1632                    if (!argumentNode || !argumentNode->accessesStack(*this))
    16301633                        return nullptr;
    16311634                    if (node->variableAccessData() != argumentNode->variableAccessData())
  • trunk/Source/JavaScriptCore/dfg/DFGGraph.h

    r208637 r209653  
    860860   
    861861    bool needsScopeRegister() const { return m_hasDebuggerEnabled || m_codeBlock->usesEval(); }
    862     bool needsFlushedThis() const { return m_codeBlock->usesEval(); }
     862    bool needsFlushedThis() const { return m_hasDebuggerEnabled || m_codeBlock->usesEval(); }
    863863
    864864    VM& m_vm;
     
    879879    Bag<StorageAccessData> m_storageAccessData;
    880880   
    881     // In CPS, this is all of the SetArgument nodes for the arguments in the machine code block
    882     // that survived DCE. All of them except maybe "this" will survive DCE, because of the Flush
    883     // nodes.
     881    // In CPS, this is all of the GetArgumentRegister and SetArgument nodes for the arguments in
     882    // the machine code block that survived DCE. All of them except maybe "this" will survive DCE,
     883    // because of the Flush nodes.
    884884    //
    885885    // In SSA, this is all of the GetStack nodes for the arguments in the machine code block that
     
    904904    // If we DCE the ArithAdd and we remove the int check on x, then this won't do the side
    905905    // effects.
    906     Vector<Node*, 8> m_arguments;
     906    Vector<Node*, 8> m_argumentsOnStack;
     907    Vector<Node*, 8> m_argumentsForChecking;
    907908   
    908909    // In CPS, this is meaningless. In SSA, this is the argument speculation that we've locked in.
     
    955956    UnificationState m_unificationState;
    956957    PlanStage m_planStage { PlanStage::Initial };
     958    StrengthReduceArgumentFlushes m_strengthReduceArguments = { StrengthReduceArgumentFlushes::DontOptimizeArgumentFlushes };
    957959    RefCountState m_refCountState;
    958960    bool m_hasDebuggerEnabled;
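
With arguments arriving either in registers or on the stack, the single m_arguments vector splits in two: m_argumentsForChecking roughly holds the node whose speculation is checked at entry (a GetArgumentRegister for register-passed arguments), while m_argumentsOnStack holds the node that defines the argument's call-frame slot (a SetLocal spilling the register, or a SetArgument). A toy sketch of that pairing, under an assumed register budget:

    // Hypothetical sketch of the two per-argument vectors introduced above.
    // NodeKind and the register budget are placeholders, not JSC definitions.
    #include <cstdio>
    #include <vector>

    enum class NodeKind { GetArgumentRegister, SetLocal, SetArgument };

    struct ArgumentNodes {
        NodeKind forChecking; // what gets type-checked at entry
        NodeKind onStack;     // what defines the argument's call-frame slot
    };

    static std::vector<ArgumentNodes> buildArgumentNodes(unsigned numArguments, unsigned numArgumentRegisters)
    {
        std::vector<ArgumentNodes> arguments;
        for (unsigned i = 0; i < numArguments; ++i) {
            if (i < numArgumentRegisters)
                arguments.push_back({ NodeKind::GetArgumentRegister, NodeKind::SetLocal });
            else
                arguments.push_back({ NodeKind::SetArgument, NodeKind::SetArgument });
        }
        return arguments;
    }

    int main()
    {
        // e.g. six arguments with an assumed budget of four argument registers
        for (const ArgumentNodes& arg : buildArgumentNodes(6, 4))
            std::printf("checking=%d stack=%d\n", static_cast<int>(arg.forChecking), static_cast<int>(arg.onStack));
        return 0;
    }
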
  • trunk/Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.cpp

    r208373 r209653  
    107107            format = m_graph.m_argumentFormats[i];
    108108        else {
    109             Node* node = m_graph.m_arguments[i];
     109            Node* node = m_graph.m_argumentsOnStack[i];
    110110            if (!node)
    111111                format = FlushedJSValue;
    112112            else {
    113                 ASSERT(node->op() == SetArgument);
     113                ASSERT(node->op() == SetArgument || node->op() == SetLocal);
    114114                format = node->variableAccessData()->flushFormat();
    115115            }
  • trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp

    r208560 r209653  
    100100}
    101101
    102 void JITCompiler::compileEntry()
    103 {
    104     // This code currently matches the old JIT. In the function header we need to
    105     // save return address and call frame via the prologue and perform a fast stack check.
    106     // FIXME: https://bugs.webkit.org/show_bug.cgi?id=56292
    107     // We'll need to convert the remaining cti_ style calls (specifically the stack
    108     // check) which will be dependent on stack layout. (We'd need to account for this in
    109     // both normal return code and when jumping to an exception handler).
    110     emitFunctionPrologue();
    111     emitPutToCallFrameHeader(m_codeBlock, CallFrameSlot::codeBlock);
    112 }
    113 
    114102void JITCompiler::compileSetupRegistersForEntry()
    115103{
     
    278266        JSCallRecord& record = m_jsCalls[i];
    279267        CallLinkInfo& info = *record.info;
    280         linkBuffer.link(record.slowCall, FunctionPtr(m_vm->getCTIStub(linkCallThunkGenerator).code().executableAddress()));
     268        linkBuffer.link(record.slowCall, FunctionPtr(m_vm->getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(info.argumentsLocation()).executableAddress()));
    281269        info.setCallLocations(
    282270            CodeLocationLabel(linkBuffer.locationOfNearCall(record.slowCall)),
     
    288276        CallLinkInfo& info = *record.info;
    289277        linkBuffer.link(record.call, linkBuffer.locationOf(record.slowPath));
     278        if (record.hasSlowCall())
     279            linkBuffer.link(record.slowCall, FunctionPtr(m_vm->getJITCallThunkEntryStub(linkDirectCallThunkGenerator).entryFor(info.argumentsLocation()).executableAddress()));
    290280        info.setCallLocations(
    291281            CodeLocationLabel(),
     
    355345void JITCompiler::compile()
    356346{
     347    Label mainEntry(this);
     348
    357349    setStartOfCode();
    358     compileEntry();
     350    emitFunctionPrologue();
     351
     352    Label entryPoint(this);
     353    emitPutToCallFrameHeader(m_codeBlock, CallFrameSlot::codeBlock);
     354
    359355    m_speculative = std::make_unique<SpeculativeJIT>(*this);
    360356
     
    383379    m_speculative->callOperationWithCallFrameRollbackOnException(operationThrowStackOverflowError, m_codeBlock);
    384380
     381#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     382    m_stackArgsArityOKEntry = label();
     383    emitFunctionPrologue();
     384
     385    // Load argument values into argument registers
     386    loadPtr(addressFor(CallFrameSlot::callee), argumentRegisterForCallee());
     387    load32(payloadFor(CallFrameSlot::argumentCount), argumentRegisterForArgumentCount());
     388   
     389    for (unsigned argIndex = 0; argIndex < static_cast<unsigned>(m_codeBlock->numParameters()) && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++)
     390        load64(Address(GPRInfo::callFrameRegister, (CallFrameSlot::thisArgument + argIndex) * static_cast<int>(sizeof(Register))), argumentRegisterForFunctionArgument(argIndex));
     391   
     392    jump(entryPoint);
     393#endif
     394
    385395    // Generate slow path code.
    386396    m_speculative->runSlowPathGenerators(m_pcToCodeOriginMapBuilder);
     
    407417
    408418    disassemble(*linkBuffer);
    409    
     419
     420    JITEntryPoints entrypoints;
     421#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     422    entrypoints.setEntryFor(RegisterArgsArityCheckNotRequired, linkBuffer->locationOf(mainEntry));
     423    entrypoints.setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(m_stackArgsArityOKEntry));
     424#else
     425    entrypoints.setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(mainEntry));
     426#endif
     427
    410428    m_graph.m_plan.finalizer = std::make_unique<JITFinalizer>(
    411         m_graph.m_plan, WTFMove(m_jitCode), WTFMove(linkBuffer));
     429        m_graph.m_plan, WTFMove(m_jitCode), WTFMove(linkBuffer), entrypoints);
    412430}
    413431
     
    415433{
    416434    setStartOfCode();
    417     compileEntry();
     435
     436#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     437    unsigned numParameters = static_cast<unsigned>(m_codeBlock->numParameters());
     438    GPRReg argCountReg = argumentRegisterForArgumentCount();
     439    JumpList continueRegisterEntry;
     440    Label registerArgumentsEntrypoints[NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS + 1];
     441
     442    if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
     443        // Spill any extra register arguments passed to function onto the stack.
     444        for (unsigned extraRegisterArgumentIndex = NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS - 1;
     445            extraRegisterArgumentIndex >= numParameters; extraRegisterArgumentIndex--) {
     446            registerArgumentsEntrypoints[extraRegisterArgumentIndex + 1] = label();
     447            emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(extraRegisterArgumentIndex), extraRegisterArgumentIndex);
     448        }
     449    }
     450    incrementCounter(this, VM::RegArgsExtra);
     451
     452    continueRegisterEntry.append(jump());
     453
     454    m_registerArgsWithArityCheck = label();
     455    incrementCounter(this, VM::RegArgsArity);
     456
     457    Label registerArgsCheckArity(this);
     458
     459    Jump registerCheckArity;
     460
     461    if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
     462        registerCheckArity = branch32(NotEqual, argCountReg, TrustedImm32(numParameters));
     463    else {
     464        registerCheckArity = branch32(Below, argCountReg, TrustedImm32(numParameters));
     465        m_registerArgsWithPossibleExtraArgs = label();
     466    }
     467   
     468    Label registerEntryNoArity(this);
     469
     470    if (numParameters <= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
     471        registerArgumentsEntrypoints[numParameters] = registerEntryNoArity;
     472
     473    incrementCounter(this, VM::RegArgsNoArity);
     474
     475    continueRegisterEntry.link(this);
     476#endif // NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     477
     478    Label mainEntry(this);
     479
     480    emitFunctionPrologue();
    418481
    419482    // === Function header code generation ===
     
    422485    // so enter after this.
    423486    Label fromArityCheck(this);
     487
     488#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     489    storePtr(argumentRegisterForCallee(), addressFor(CallFrameSlot::callee));
     490    store32(argCountReg, payloadFor(CallFrameSlot::argumentCount));
     491
     492    Label fromStackEntry(this);
     493#endif
     494   
     495    emitPutToCallFrameHeader(m_codeBlock, CallFrameSlot::codeBlock);
     496
    424497    // Plant a check that sufficient space is available in the JSStack.
    425     addPtr(TrustedImm32(virtualRegisterForLocal(m_graph.requiredRegisterCountForExecutionAndExit() - 1).offset() * sizeof(Register)), GPRInfo::callFrameRegister, GPRInfo::regT1);
    426     Jump stackOverflow = branchPtr(Above, AbsoluteAddress(m_vm->addressOfSoftStackLimit()), GPRInfo::regT1);
     498    addPtr(TrustedImm32(virtualRegisterForLocal(m_graph.requiredRegisterCountForExecutionAndExit() - 1).offset() * sizeof(Register)), GPRInfo::callFrameRegister, GPRInfo::nonArgGPR0);
     499    Jump stackOverflow = branchPtr(Above, AbsoluteAddress(m_vm->addressOfSoftStackLimit()), GPRInfo::nonArgGPR0);
    427500
    428501    // Move the stack pointer down to accommodate locals
     
    453526
    454527    m_speculative->callOperationWithCallFrameRollbackOnException(operationThrowStackOverflowError, m_codeBlock);
    455    
    456     // The fast entry point into a function does not check the correct number of arguments
    457     // have been passed to the call (we only use the fast entry point where we can statically
    458     // determine the correct number of arguments have been passed, or have already checked).
    459     // In cases where an arity check is necessary, we enter here.
    460     // FIXME: change this from a cti call to a DFG style operation (normal C calling conventions).
    461     m_arityCheck = label();
    462     compileEntry();
     528
     529    JumpList arityOK;
     530   
     531#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     532    jump(registerArgsCheckArity);
     533
     534    JumpList registerArityNeedsFixup;
     535    if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
     536        registerCheckArity.link(this);
     537        registerArityNeedsFixup.append(branch32(Below, argCountReg, TrustedImm32(m_codeBlock->numParameters())));
     538
     539        // We have extra register arguments.
     540
     541        // The fast entry point into a function does not check that the correct number of arguments
     542        // have been passed to the call (we only use the fast entry point where we can statically
     543        // determine the correct number of arguments have been passed, or have already checked).
     544        // In cases where an arity check is necessary, we enter here.
     545        m_registerArgsWithPossibleExtraArgs = label();
     546
     547        incrementCounter(this, VM::RegArgsExtra);
     548
     549        // Spill extra args passed to function
     550        for (unsigned argIndex = static_cast<unsigned>(m_codeBlock->numParameters()); argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) {
     551            branch32(MacroAssembler::BelowOrEqual, argCountReg, MacroAssembler::TrustedImm32(argIndex)).linkTo(mainEntry, this);
     552            emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex);
     553        }
     554        jump(mainEntry);
     555    }
     556
     557    // Fall through
     558    if (numParameters > 0) {
     559        // There should always be a "this" parameter.
     560        unsigned registerArgumentFixupCount = std::min(numParameters - 1, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS);
     561        Label registerArgumentsNeedArityFixup = label();
     562
     563        for (unsigned argIndex = 1; argIndex <= registerArgumentFixupCount; argIndex++)
     564            registerArgumentsEntrypoints[argIndex] = registerArgumentsNeedArityFixup;
     565    }
     566
     567    incrementCounter(this, VM::RegArgsArity);
     568
     569    registerArityNeedsFixup.link(this);
     570
     571    if (numParameters >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
     572        registerCheckArity.link(this);
     573
     574    spillArgumentRegistersToFrameBeforePrologue();
     575
     576#if ENABLE(VM_COUNTERS)
     577    Jump continueToStackArityFixup = jump();
     578#endif
     579#endif // NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     580
     581    m_stackArgsWithArityCheck = label();
     582    incrementCounter(this, VM::StackArgsArity);
     583
     584#if ENABLE(VM_COUNTERS)
     585    continueToStackArityFixup.link(this);
     586#endif
     587
     588    emitFunctionPrologue();
    463589
    464590    load32(AssemblyHelpers::payloadFor((VirtualRegister)CallFrameSlot::argumentCount), GPRInfo::regT1);
    465     branch32(AboveOrEqual, GPRInfo::regT1, TrustedImm32(m_codeBlock->numParameters())).linkTo(fromArityCheck, this);
     591    arityOK.append(branch32(AboveOrEqual, GPRInfo::regT1, TrustedImm32(m_codeBlock->numParameters())));
     592
     593    incrementCounter(this, VM::ArityFixupRequired);
     594
    466595    emitStoreCodeOrigin(CodeOrigin(0));
    467596    if (maxFrameExtentForSlowPathCall)
     
    470599    if (maxFrameExtentForSlowPathCall)
    471600        addPtr(TrustedImm32(maxFrameExtentForSlowPathCall), stackPointerRegister);
    472     branchTest32(Zero, GPRInfo::returnValueGPR).linkTo(fromArityCheck, this);
     601    arityOK.append(branchTest32(Zero, GPRInfo::returnValueGPR));
     602
    473603    emitStoreCodeOrigin(CodeOrigin(0));
    474604    move(GPRInfo::returnValueGPR, GPRInfo::argumentGPR0);
    475605    m_callArityFixup = call();
     606
     607#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     608    Jump toFillRegisters = jump();
     609
     610    m_stackArgsArityOKEntry = label();
     611
     612    incrementCounter(this, VM::StackArgsNoArity);
     613    emitFunctionPrologue();
     614
     615    arityOK.link(this);
     616    toFillRegisters.link(this);
     617
     618    // Load argument values into argument registers
     619    for (unsigned argIndex = 0; argIndex < static_cast<unsigned>(m_codeBlock->numParameters()) && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++)
     620        load64(Address(GPRInfo::callFrameRegister, (CallFrameSlot::thisArgument + argIndex) * static_cast<int>(sizeof(Register))), argumentRegisterForFunctionArgument(argIndex));
     621
     622    jump(fromStackEntry);
     623#else
     624    arityOK.linkTo(fromArityCheck, this);
    476625    jump(fromArityCheck);
     626#endif
    477627   
    478628    // Generate slow path code.
     
    503653    disassemble(*linkBuffer);
    504654
    505     MacroAssemblerCodePtr withArityCheck = linkBuffer->locationOf(m_arityCheck);
     655    JITEntryPoints entrypoints;
     656#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     657#if ENABLE(VM_COUNTERS)
     658    MacroAssemblerCodePtr mainEntryCodePtr = linkBuffer->locationOf(registerEntryNoArity);
     659#else
     660    MacroAssemblerCodePtr mainEntryCodePtr = linkBuffer->locationOf(mainEntry);
     661#endif
     662    entrypoints.setEntryFor(RegisterArgsArityCheckNotRequired, mainEntryCodePtr);
     663    entrypoints.setEntryFor(RegisterArgsPossibleExtraArgs, linkBuffer->locationOf(m_registerArgsWithPossibleExtraArgs));
     664    entrypoints.setEntryFor(RegisterArgsMustCheckArity, linkBuffer->locationOf(m_registerArgsWithArityCheck));
     665
     666    for (unsigned argCount = 1; argCount <= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argCount++) {
     667        MacroAssemblerCodePtr entry;
     668        if (argCount == numParameters)
     669            entry = mainEntryCodePtr;
     670        else if (registerArgumentsEntrypoints[argCount].isSet())
     671            entry = linkBuffer->locationOf(registerArgumentsEntrypoints[argCount]);
     672        else
     673            entry = linkBuffer->locationOf(m_registerArgsWithArityCheck);
     674        entrypoints.setEntryFor(JITEntryPoints::registerEntryTypeForArgumentCount(argCount), entry);
     675    }
     676    entrypoints.setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(m_stackArgsArityOKEntry));
     677#else
     678    entrypoints.setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(mainEntry));
     679#endif
     680    entrypoints.setEntryFor(StackArgsMustCheckArity, linkBuffer->locationOf(m_stackArgsWithArityCheck));
    506681
    507682    m_graph.m_plan.finalizer = std::make_unique<JITFinalizer>(
    508         m_graph.m_plan, WTFMove(m_jitCode), WTFMove(linkBuffer), withArityCheck);
     683        m_graph.m_plan, WTFMove(m_jitCode), WTFMove(linkBuffer), entrypoints);
    509684}
    510685
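
In the hunks above, the stack-argument entries fill the platform argument registers from the call frame before falling into the register-argument body, and the arity paths spill registers back to the frame. A toy model of the register-filling step, with an assumed register budget and a simplified slot layout (the real code indexes from CallFrameSlot::thisArgument):

    // Hypothetical sketch of loading argument values into argument registers
    // at a stack-args entry point. Layout and register count are placeholders.
    #include <algorithm>
    #include <array>
    #include <cstdint>
    #include <cstdio>

    constexpr unsigned kNumberOfJSFunctionArgumentRegisters = 4; // assumed budget

    struct ToyFrame {
        uint64_t callee;
        uint64_t argumentCount;
        std::array<uint64_t, 8> thisAndArguments; // this, arg1, arg2, ...
    };

    struct ToyRegisterFile {
        uint64_t calleeReg = 0;
        uint64_t argumentCountReg = 0;
        std::array<uint64_t, kNumberOfJSFunctionArgumentRegisters> functionArgumentRegs {};
    };

    static void fillArgumentRegistersFromFrame(const ToyFrame& frame, unsigned numParameters, ToyRegisterFile& regs)
    {
        regs.calleeReg = frame.callee;
        regs.argumentCountReg = frame.argumentCount;
        unsigned count = std::min(numParameters, kNumberOfJSFunctionArgumentRegisters);
        for (unsigned argIndex = 0; argIndex < count; ++argIndex)
            regs.functionArgumentRegs[argIndex] = frame.thisAndArguments[argIndex];
    }

    int main()
    {
        ToyFrame frame { 0x1, 3, { 0x10 /* this */, 0x20, 0x30 } };
        ToyRegisterFile regs;
        fillArgumentRegistersFromFrame(frame, 3, regs);
        std::printf("count reg = %llu, this reg = 0x%llx\n",
            static_cast<unsigned long long>(regs.argumentCountReg),
            static_cast<unsigned long long>(regs.functionArgumentRegs[0]));
        return 0;
    }
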
  • trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.h

    r207475 r209653  
    218218    }
    219219   
     220    void addJSDirectCall(Call call, Call slowCall, Label slowPath, CallLinkInfo* info)
     221    {
     222        m_jsDirectCalls.append(JSDirectCallRecord(call, slowCall, slowPath, info));
     223    }
     224   
    220225    void addJSDirectTailCall(PatchableJump patchableJump, Call call, Label slowPath, CallLinkInfo* info)
    221226    {
     
    268273   
    269274    // Internal implementation to compile.
    270     void compileEntry();
    271275    void compileSetupRegistersForEntry();
    272276    void compileEntryExecutionFlag();
     
    319323        }
    320324       
     325        JSDirectCallRecord(Call call, Call slowCall, Label slowPath, CallLinkInfo* info)
     326            : call(call)
     327            , slowCall(slowCall)
     328            , slowPath(slowPath)
     329            , info(info)
     330        {
     331        }
     332
     333        bool hasSlowCall() { return slowCall.m_label.isSet(); }
     334
    321335        Call call;
     336        Call slowCall;
    322337        Label slowPath;
    323338        CallLinkInfo* info;
     
    356371   
    357372    Call m_callArityFixup;
    358     Label m_arityCheck;
     373#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     374    Label m_registerArgsWithPossibleExtraArgs;
     375    Label m_registerArgsWithArityCheck;
     376    Label m_stackArgsArityOKEntry;
     377#endif
     378    Label m_stackArgsWithArityCheck;
    359379    std::unique_ptr<SpeculativeJIT> m_speculative;
    360380    PCToCodeOriginMapBuilder m_pcToCodeOriginMapBuilder;
  • trunk/Source/JavaScriptCore/dfg/DFGJITFinalizer.cpp

    r200933 r209653  
    3838namespace JSC { namespace DFG {
    3939
    40 JITFinalizer::JITFinalizer(Plan& plan, PassRefPtr<JITCode> jitCode, std::unique_ptr<LinkBuffer> linkBuffer, MacroAssemblerCodePtr withArityCheck)
     40JITFinalizer::JITFinalizer(Plan& plan, PassRefPtr<JITCode> jitCode,
     41    std::unique_ptr<LinkBuffer> linkBuffer, JITEntryPoints& entrypoints)
    4142    : Finalizer(plan)
    4243    , m_jitCode(jitCode)
    4344    , m_linkBuffer(WTFMove(linkBuffer))
    44     , m_withArityCheck(withArityCheck)
     45    , m_entrypoints(entrypoints)
    4546{
    4647}
     
    5758bool JITFinalizer::finalize()
    5859{
    59     m_jitCode->initializeCodeRef(
    60         FINALIZE_DFG_CODE(*m_linkBuffer, ("DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data())),
    61         MacroAssemblerCodePtr());
     60    MacroAssemblerCodeRef codeRef = FINALIZE_DFG_CODE(*m_linkBuffer, ("DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data()));
     61    m_jitCode->initializeEntryPoints(JITEntryPointsWithRef(codeRef, m_entrypoints));
    6262   
    6363    m_plan.codeBlock->setJITCode(m_jitCode);
     
    7070bool JITFinalizer::finalizeFunction()
    7171{
    72     RELEASE_ASSERT(!m_withArityCheck.isEmptyValue());
    73     m_jitCode->initializeCodeRef(
    74         FINALIZE_DFG_CODE(*m_linkBuffer, ("DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data())),
    75         m_withArityCheck);
     72    RELEASE_ASSERT(!m_entrypoints.entryFor(StackArgsMustCheckArity).isEmptyValue());
     73    MacroAssemblerCodeRef codeRef = FINALIZE_DFG_CODE(*m_linkBuffer, ("DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data()));
     74
     75    m_jitCode->initializeEntryPoints(JITEntryPointsWithRef(codeRef, m_entrypoints));
     76
    7677    m_plan.codeBlock->setJITCode(m_jitCode);
    7778   
  • trunk/Source/JavaScriptCore/dfg/DFGJITFinalizer.h

    r206525 r209653  
    3737class JITFinalizer : public Finalizer {
    3838public:
    39     JITFinalizer(Plan&, PassRefPtr<JITCode>, std::unique_ptr<LinkBuffer>, MacroAssemblerCodePtr withArityCheck = MacroAssemblerCodePtr(MacroAssemblerCodePtr::EmptyValue));
     39    JITFinalizer(Plan&, PassRefPtr<JITCode>, std::unique_ptr<LinkBuffer>, JITEntryPoints&);
    4040    virtual ~JITFinalizer();
    4141   
     
    4949    RefPtr<JITCode> m_jitCode;
    5050    std::unique_ptr<LinkBuffer> m_linkBuffer;
    51     MacroAssemblerCodePtr m_withArityCheck;
     51    JITEntryPoints m_entrypoints;
    5252};
    5353
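
JITFinalizer now carries a set of entry points rather than a single arity-checking code pointer. A toy stand-in for that container and the finalizeFunction() sanity check, using the entry names from this changeset (everything else is a placeholder):

    // Hypothetical sketch of a JITEntryPoints-style table; not the real class.
    #include <array>
    #include <cassert>

    enum EntryType {
        StackArgsArityCheckNotRequired,
        StackArgsMustCheckArity,
        RegisterArgsArityCheckNotRequired,
        RegisterArgsPossibleExtraArgs,
        RegisterArgsMustCheckArity,
        NumberOfEntryTypes
    };

    using CodePtr = const void*;

    struct ToyJITEntryPoints {
        std::array<CodePtr, NumberOfEntryTypes> entries {};
        void setEntryFor(EntryType type, CodePtr ptr) { entries[type] = ptr; }
        CodePtr entryFor(EntryType type) const { return entries[type]; }
    };

    static bool finalizeFunction(const ToyJITEntryPoints& entrypoints)
    {
        // Mirrors the RELEASE_ASSERT above: a function must at least expose a
        // stack-args entry that checks arity.
        assert(entrypoints.entryFor(StackArgsMustCheckArity));
        return true;
    }

    int main()
    {
        static const char code = 0;
        ToyJITEntryPoints entrypoints;
        entrypoints.setEntryFor(StackArgsMustCheckArity, &code);
        return finalizeFunction(entrypoints) ? 0 : 1;
    }
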
  • trunk/Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp

    r205794 r209653  
    102102            for (unsigned i = 0; i < block->size(); i++) {
    103103                Node* node = block->at(i);
    104                 bool isPrimordialSetArgument = node->op() == SetArgument && node->local().isArgument() && node == m_graph.m_arguments[node->local().toArgument()];
     104                bool isPrimordialSetArgument = node->op() == SetArgument && node->local().isArgument() && node == m_graph.m_argumentsOnStack[node->local().toArgument()];
    105105                InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame;
    106106                if (inlineCallFrame)
  • trunk/Source/JavaScriptCore/dfg/DFGMaximalFlushInsertionPhase.cpp

    r203923 r209653  
    6868            for (unsigned i = 0; i < block->size(); i++) {
    6969                Node* node = block->at(i);
    70                 bool isPrimordialSetArgument = node->op() == SetArgument && node->local().isArgument() && node == m_graph.m_arguments[node->local().toArgument()];
    71                 if (node->op() == SetLocal || (node->op() == SetArgument && !isPrimordialSetArgument)) {
     70                if ((node->op() == SetArgument || node->op() == SetLocal)
     71                    && (!node->local().isArgument() || node != m_graph.m_argumentsOnStack[node->local().toArgument()])) {
    7272                    VirtualRegister operand = node->local();
    7373                    VariableAccessData* flushAccessData = currentBlockAccessData.operand(operand);
     
    118118                continue;
    119119
    120             DFG_ASSERT(m_graph, node, node->op() != SetLocal); // We should have inserted a Flush before this!
    121120            initialAccessData.operand(operand) = node->variableAccessData();
    122121            initialAccessNodes.operand(operand) = node;
  • trunk/Source/JavaScriptCore/dfg/DFGMayExit.cpp

    r209638 r209653  
    7373    case GetCallee:
    7474    case GetArgumentCountIncludingThis:
     75    case GetArgumentRegister:
    7576    case GetRestLength:
    7677    case GetScope:
  • trunk/Source/JavaScriptCore/dfg/DFGMinifiedNode.cpp

    r181993 r209653  
    11/*
    2  * Copyright (C) 2012-2015 Apple Inc. All rights reserved.
     2 * Copyright (C) 2012-2016 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    4242    if (hasConstant(node->op()))
    4343        result.m_info = JSValue::encode(node->asJSValue());
     44    else if (node->op() == GetArgumentRegister)
     45        result.m_info = jsFunctionArgumentForArgumentRegisterIndex(node->argumentRegisterIndex());
    4446    else {
    4547        ASSERT(node->op() == PhantomDirectArguments || node->op() == PhantomClonedArguments);
  • trunk/Source/JavaScriptCore/dfg/DFGMinifiedNode.h

    r206525 r209653  
    11/*
    2  * Copyright (C) 2012, 2014, 2015 Apple Inc. All rights reserved.
     2 * Copyright (C) 2012, 2014-2016 Apple Inc. All rights reserved.
    33 *
    44 * Redistribution and use in source and binary forms, with or without
     
    4444    case PhantomDirectArguments:
    4545    case PhantomClonedArguments:
     46    case GetArgumentRegister:
    4647        return true;
    4748    default:
     
    7273        return bitwise_cast<InlineCallFrame*>(static_cast<uintptr_t>(m_info));
    7374    }
     75
     76    bool hasArgumentIndex() const { return hasArgumentIndex(m_op); }
     77
     78    unsigned argumentIndex() const { return m_info; }
    7479   
    7580    static MinifiedID getID(MinifiedNode* node) { return node->id(); }
     
    8994        return type == PhantomDirectArguments || type == PhantomClonedArguments;
    9095    }
     96
     97    static bool hasArgumentIndex(NodeType type)
     98    {
     99        return type == GetArgumentRegister;
     100    }
    91101   
    92102    MinifiedID m_id;
  • trunk/Source/JavaScriptCore/dfg/DFGNode.cpp

    r208320 r209653  
    7272    case SetLocal:
    7373    case SetArgument:
     74    case GetArgumentRegister:
    7475    case Flush:
    7576    case PhantomLocal:
  • trunk/Source/JavaScriptCore/dfg/DFGNode.h

    r209121 r209653  
    829829    bool accessesStack(Graph& graph)
    830830    {
     831        if (op() == GetArgumentRegister)
     832            return false;
     833
    831834        return hasVariableAccessData(graph);
    832835    }
     
    845848    {
    846849        return m_opInfo.as<VariableAccessData*>()->find();
     850    }
     851   
     852    void setVariableAccessData(VariableAccessData* variable)
     853    {
     854        m_opInfo = variable;
    847855    }
    848856   
     
    12131221    {
    12141222        return speculationFromJSType(queriedType());
     1223    }
     1224   
     1225    bool hasArgumentRegisterIndex()
     1226    {
     1227        return op() == GetArgumentRegister;
     1228    }
     1229   
     1230    unsigned argumentRegisterIndex()
     1231    {
     1232        ASSERT(hasArgumentRegisterIndex());
     1233        return m_opInfo2.as<unsigned>();
    12151234    }
    12161235   
  • trunk/Source/JavaScriptCore/dfg/DFGNodeType.h

    r209638 r209653  
    5454    macro(GetCallee, NodeResultJS) \
    5555    macro(GetArgumentCountIncludingThis, NodeResultInt32) \
     56    macro(GetArgumentRegister, NodeResultJS /* | NodeMustGenerate */) \
    5657    \
    5758    /* Nodes for local variable access. These nodes are linked together using Phi nodes. */\
  • trunk/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp

    r209121 r209653  
    145145    }
    146146
     147    case GetArgumentRegister: {
     148        m_availability.m_locals.operand(node->local()).setNode(node);
     149        break;
     150    }
     151
    147152    case MovHint: {
    148153        m_availability.m_locals.operand(node->unlinkedLocal()).setNode(node->child1().node());
  • trunk/Source/JavaScriptCore/dfg/DFGOSREntrypointCreationPhase.cpp

    r198364 r209653  
    113113        origin = target->at(0)->origin;
    114114       
    115         for (int argument = 0; argument < baseline->numParameters(); ++argument) {
     115        for (unsigned argument = 0; argument < static_cast<unsigned>(baseline->numParameters()); ++argument) {
    116116            Node* oldNode = target->variablesAtHead.argument(argument);
    117117            if (!oldNode) {
    118                 // Just for sanity, always have a SetArgument even if it's not needed.
    119                 oldNode = m_graph.m_arguments[argument];
     118                // Just for sanity, always have an argument node even if it's not needed.
     119                oldNode = m_graph.m_argumentsForChecking[argument];
    120120            }
    121             Node* node = newRoot->appendNode(
    122                 m_graph, SpecNone, SetArgument, origin,
    123                 OpInfo(oldNode->variableAccessData()));
    124             m_graph.m_arguments[argument] = node;
     121            Node* node;
     122            Node* stackNode;
     123            if (argument < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
     124                node = newRoot->appendNode(
     125                    m_graph, SpecNone, GetArgumentRegister, origin,
     126                    OpInfo(oldNode->variableAccessData()),
     127                    OpInfo(argumentRegisterIndexForJSFunctionArgument(argument)));
     128                stackNode = newRoot->appendNode(
     129                    m_graph, SpecNone, SetLocal, origin,
     130                    OpInfo(oldNode->variableAccessData()),
     131                    Edge(node));
     132            } else {
     133                node = newRoot->appendNode(
     134                    m_graph, SpecNone, SetArgument, origin,
     135                    OpInfo(oldNode->variableAccessData()));
     136                stackNode = node;
     137            }
     138
     139            m_graph.m_argumentsForChecking[argument] = node;
     140            m_graph.m_argumentsOnStack[argument] = stackNode;
    125141        }
    126142
  • trunk/Source/JavaScriptCore/dfg/DFGPlan.cpp

    r208720 r209653  
    315315    performConstantFolding(dfg);
    316316    bool changed = false;
     317    dfg.m_strengthReduceArguments = OptimizeArgumentFlushes;
    317318    changed |= performCFGSimplification(dfg);
     319    changed |= performStrengthReduction(dfg);
    318320    changed |= performLocalCSE(dfg);
    319321   
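
The new StrengthReduceArgumentFlushes state gates an extra strength reduction of argument flushes; the plan flips the graph's flag to OptimizeArgumentFlushes and reruns strength reduction during the fixpoint. A schematic of that gating (toy pass driver, not the real Plan or StrengthReductionPhase):

    // Hypothetical pass driver showing the two-state flag from DFGCommon.h
    // being flipped before rerunning strength reduction.
    #include <cstdio>

    enum StrengthReduceArgumentFlushes { DontOptimizeArgumentFlushes, OptimizeArgumentFlushes };

    struct ToyGraph {
        StrengthReduceArgumentFlushes strengthReduceArguments = DontOptimizeArgumentFlushes;
    };

    static bool performStrengthReduction(ToyGraph& graph)
    {
        // Only rewrite argument flushes once the plan has opted in.
        if (graph.strengthReduceArguments != OptimizeArgumentFlushes)
            return false;
        std::puts("strength-reducing argument flushes");
        return true;
    }

    int main()
    {
        ToyGraph graph;
        bool changed = performStrengthReduction(graph); // no-op while DontOptimizeArgumentFlushes
        graph.strengthReduceArguments = OptimizeArgumentFlushes;
        changed |= performStrengthReduction(graph);
        return changed ? 0 : 1;
    }
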
  • trunk/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h

    r209121 r209653  
    198198           
    199199        default: {
    200             // All of the outermost arguments, except this, are definitely read.
     200            // All of the outermost stack arguments, except this, are definitely read.
    201201            for (unsigned i = m_graph.m_codeBlock->numParameters(); i-- > 1;)
    202202                m_read(virtualRegisterForArgument(i));
  • trunk/Source/JavaScriptCore/dfg/DFGPredictionInjectionPhase.cpp

    r208761 r209653  
    5757                    continue;
    5858           
    59                 m_graph.m_arguments[arg]->variableAccessData()->predict(
     59                m_graph.m_argumentsForChecking[arg]->variableAccessData()->predict(
    6060                    profile->computeUpdatedPrediction(locker));
    6161            }
     
    7575                if (!node)
    7676                    continue;
    77                 ASSERT(node->accessesStack(m_graph));
     77                ASSERT(node->accessesStack(m_graph) || node->op() == GetArgumentRegister);
    7878                node->variableAccessData()->predict(
    7979                    speculationFromValue(m_graph.m_plan.mustHandleValues[i]));
  • trunk/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp

    r209638 r209653  
    169169        }
    170170
     171        case GetArgumentRegister: {
     172            VariableAccessData* variable = node->variableAccessData();
     173            SpeculatedType prediction = variable->prediction();
     174            if (!variable->couldRepresentInt52() && (prediction & SpecInt52Only))
     175                prediction = (prediction | SpecAnyIntAsDouble) & ~SpecInt52Only;
     176            if (prediction)
     177                changed |= mergePrediction(prediction);
     178            break;
     179        }
     180           
    171181        case UInt32ToNumber: {
    172182            if (node->canSpeculateInt32(m_pass))
     
    969979        case GetLocal:
    970980        case SetLocal:
     981        case GetArgumentRegister:
    971982        case UInt32ToNumber:
    972983        case ValueAdd:
  • trunk/Source/JavaScriptCore/dfg/DFGPutStackSinkingPhase.cpp

    r198364 r209653  
    148148        } while (changed);
    149149       
    150         // All of the arguments should be live at head of root. Note that we may find that some
     150        // All of the stack arguments should be live at head of root. Note that we may find that some
    151151        // locals are live at head of root. This seems wrong but isn't. This will happen for example
    152152        // if the function accesses closure variable #42 for some other function and we either don't
     
    158158        // For our purposes here, the imprecision in the aliasing is harmless. It just means that we
    159159        // may not do as much Phi pruning as we wanted.
    160         for (size_t i = liveAtHead.atIndex(0).numberOfArguments(); i--;)
    161             DFG_ASSERT(m_graph, nullptr, liveAtHead.atIndex(0).argument(i));
     160        for (size_t i = liveAtHead.atIndex(0).numberOfArguments(); i--;) {
     161            if (i >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
     162                // Stack arguments are live at the head of root.
     163                DFG_ASSERT(m_graph, nullptr, liveAtHead.atIndex(0).argument(i));
     164            }
     165        }
    162166       
    163167        // Next identify where we would want to sink PutStacks to. We say that there is a deferred
     
    359363                switch (node->op()) {
    360364                case PutStack:
    361                     putStacksToSink.add(node);
     365                    if (!m_graph.m_argumentsOnStack.contains(node))
     366                        putStacksToSink.add(node);
    362367                    ssaCalculator.newDef(
    363368                        operandToVariable.operand(node->stackAccessData()->local),
     
    484489                        }
    485490                   
     491                        Node* incoming = mapping.operand(operand);
     492                        // Since we don't delete argument PutStacks, no need to add one back.
     493                        if (m_graph.m_argumentsOnStack.contains(incoming))
     494                            return;
     495
    486496                        // Gotta insert a PutStack.
    487497                        if (verbose)
    488498                            dataLog("Inserting a PutStack for ", operand, " at ", node, "\n");
    489499
    490                         Node* incoming = mapping.operand(operand);
    491500                        DFG_ASSERT(m_graph, node, incoming);
    492501                   
     
    539548                    if (isConcrete(deferred.operand(operand))) {
    540549                        incoming = mapping.operand(operand);
     550                        if (m_graph.m_argumentsOnStack.contains(incoming))
     551                            continue;
    541552                        DFG_ASSERT(m_graph, phiNode, incoming);
    542553                    } else {
  • trunk/Source/JavaScriptCore/dfg/DFGRegisterBank.h

    r206525 r209653  
    237237        }
    238238
     239        void unlock() const
     240        {
     241            return m_bank->unlockAtIndex(m_index);
     242        }
     243       
    239244        void release() const
    240245        {
     
    297302        ASSERT(index < NUM_REGS);
    298303        return m_data[index].lockCount;
     304    }
     305
     306    void unlockAtIndex(unsigned index)
     307    {
     308        ASSERT(index < NUM_REGS);
     309        ASSERT(m_data[index].lockCount);
     310        --m_data[index].lockCount;
    299311    }
    300312
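
unlock()/unlockAtIndex() give the bank a way to drop a lock without releasing the binding; the entry setup later in this changeset uses it so argument registers are retained but not left locked. A minimal lock-count model (toy class, not the real RegisterBank):

    // Toy lock-count model of the unlock() addition above.
    #include <cassert>

    class ToyRegisterBank {
    public:
        void lock() { ++m_lockCount; }
        void unlock()
        {
            assert(m_lockCount);
            --m_lockCount;
        }
        bool isLocked() const { return m_lockCount; }

    private:
        unsigned m_lockCount { 0 };
    };

    int main()
    {
        ToyRegisterBank reg;
        reg.lock();     // e.g. a specific allocation takes a lock
        assert(reg.isLocked());
        reg.unlock();   // drop the lock but keep whatever binding exists
        assert(!reg.isLocked());
        return 0;
    }
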
  • trunk/Source/JavaScriptCore/dfg/DFGSSAConversionPhase.cpp

    r203808 r209653  
    7474       
    7575        // Find all SetLocals and create Defs for them. We handle SetArgument by creating a
    76         // GetLocal, and recording the flush format.
     76        // GetStack, and recording the flush format. We handle GetArgumentRegister by directly
      77        // adding the node to the m_argumentMapping hash map.
    7778        for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) {
    7879            BasicBlock* block = m_graph.block(blockIndex);
     
    8485            for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
    8586                Node* node = block->at(nodeIndex);
    86                 if (node->op() != SetLocal && node->op() != SetArgument)
     87                if (node->op() != SetLocal && node->op() != SetArgument && node->op() != GetArgumentRegister)
    8788                    continue;
    8889               
    8990                VariableAccessData* variable = node->variableAccessData();
    9091               
    91                 Node* childNode;
     92                Node* childNode = nullptr;
    9293                if (node->op() == SetLocal)
    9394                    childNode = node->child1().node();
     95                else if (node->op() == GetArgumentRegister)
     96                    m_argumentMapping.add(node, node);
    9497                else {
    9598                    ASSERT(node->op() == SetArgument);
     
    102105                    m_argumentMapping.add(node, childNode);
    103106                }
    104                
    105                 m_calculator.newDef(
    106                     m_ssaVariableForVariable.get(variable), block, childNode);
     107
     108                if (childNode) {
     109                    m_calculator.newDef(
     110                        m_ssaVariableForVariable.get(variable), block, childNode);
     111                }
    107112            }
    108113           
     
    295300                    break;
    296301                }
    297                    
     302
     303                case GetArgumentRegister: {
     304                    VariableAccessData* variable = node->variableAccessData();
     305                    valueForOperand.operand(variable->local()) = node;
     306                    break;
     307                }
     308
    298309                case GetStack: {
    299310                    ASSERT(m_argumentGetters.contains(node));
     
    383394        }
    384395       
    385         m_graph.m_argumentFormats.resize(m_graph.m_arguments.size());
    386         for (unsigned i = m_graph.m_arguments.size(); i--;) {
     396        m_graph.m_argumentFormats.resize(m_graph.m_argumentsForChecking.size());
     397        for (unsigned i = m_graph.m_argumentsForChecking.size(); i--;) {
    387398            FlushFormat format = FlushedJSValue;
    388399
    389             Node* node = m_argumentMapping.get(m_graph.m_arguments[i]);
     400            Node* node = m_argumentMapping.get(m_graph.m_argumentsForChecking[i]);
    390401           
    391402            RELEASE_ASSERT(node);
    392             format = node->stackAccessData()->format;
     403            if (node->op() == GetArgumentRegister) {
     404                VariableAccessData* variable = node->variableAccessData();
     405                format = variable->flushFormat();
     406            } else
     407                format = node->stackAccessData()->format;
    393408           
    394409            m_graph.m_argumentFormats[i] = format;
    395             m_graph.m_arguments[i] = node; // Record the load that loads the arguments for the benefit of exit profiling.
     410            m_graph.m_argumentsForChecking[i] = node; // Record the load that loads the arguments for the benefit of exit profiling.
    396411        }
    397412       
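
When computing m_argumentFormats, a GetArgumentRegister contributes its variable's flush format, while the stack-based mapping still takes the format from the GetStack's StackAccessData. Schematically (toy types, simplified formats):

    // Toy sketch of the per-argument format selection in the hunk above.
    #include <cstdio>
    #include <vector>

    enum class FlushFormat { JSValue, Int32, Double };
    enum class NodeKind { GetArgumentRegister, GetStack };

    struct ToyNode {
        NodeKind kind;
        FlushFormat variableFlushFormat; // used for GetArgumentRegister
        FlushFormat stackAccessFormat;   // used for GetStack
    };

    static FlushFormat argumentFormatFor(const ToyNode& node)
    {
        if (node.kind == NodeKind::GetArgumentRegister)
            return node.variableFlushFormat;
        return node.stackAccessFormat;
    }

    int main()
    {
        std::vector<ToyNode> argumentsForChecking = {
            { NodeKind::GetArgumentRegister, FlushFormat::Int32, FlushFormat::JSValue },
            { NodeKind::GetStack, FlushFormat::JSValue, FlushFormat::Double },
        };
        for (const ToyNode& node : argumentsForChecking)
            std::printf("format = %d\n", static_cast<int>(argumentFormatFor(node)));
        return 0;
    }
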
  • trunk/Source/JavaScriptCore/dfg/DFGSafeToExecute.h

    r209638 r209653  
    148148    case GetCallee:
    149149    case GetArgumentCountIncludingThis:
     150    case GetArgumentRegister:
    150151    case GetRestLength:
    151152    case GetLocal:
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp

    r209638 r209653  
    7575    , m_indexInBlock(0)
    7676    , m_generationInfo(m_jit.graph().frameRegisterCount())
     77    , m_argumentGenerationInfo(CallFrameSlot::callee + GPRInfo::numberOfArgumentRegisters)
    7778    , m_state(m_jit.graph())
    7879    , m_interpreter(m_jit.graph(), m_state)
     
    408409    for (unsigned i = 0; i < m_generationInfo.size(); ++i)
    409410        m_generationInfo[i] = GenerationInfo();
     411    for (unsigned i = 0; i < m_argumentGenerationInfo.size(); ++i)
     412        m_argumentGenerationInfo[i] = GenerationInfo();
    410413    m_gprs = RegisterBank<GPRInfo>();
    411414    m_fprs = RegisterBank<FPRInfo>();
     
    12001203}
    12011204
     1205static void dumpRegisterInfo(GenerationInfo& info, unsigned index)
     1206{
     1207    if (info.alive())
     1208        dataLogF("    % 3d:%s%s", index, dataFormatString(info.registerFormat()), dataFormatString(info.spillFormat()));
     1209    else
     1210        dataLogF("    % 3d:[__][__]", index);
     1211    if (info.registerFormat() == DataFormatDouble)
     1212        dataLogF(":fpr%d\n", info.fpr());
     1213    else if (info.registerFormat() != DataFormatNone
     1214#if USE(JSVALUE32_64)
     1215        && !(info.registerFormat() & DataFormatJS)
     1216#endif
     1217        ) {
     1218        ASSERT(info.gpr() != InvalidGPRReg);
     1219        dataLogF(":%s\n", GPRInfo::debugName(info.gpr()));
     1220    } else
     1221        dataLogF("\n");
     1222}
     1223
    12021224void SpeculativeJIT::dump(const char* label)
    12031225{
     
    12091231    dataLogF("  fprs:\n");
    12101232    m_fprs.dump();
    1211     dataLogF("  VirtualRegisters:\n");
    1212     for (unsigned i = 0; i < m_generationInfo.size(); ++i) {
    1213         GenerationInfo& info = m_generationInfo[i];
    1214         if (info.alive())
    1215             dataLogF("    % 3d:%s%s", i, dataFormatString(info.registerFormat()), dataFormatString(info.spillFormat()));
    1216         else
    1217             dataLogF("    % 3d:[__][__]", i);
    1218         if (info.registerFormat() == DataFormatDouble)
    1219             dataLogF(":fpr%d\n", info.fpr());
    1220         else if (info.registerFormat() != DataFormatNone
    1221 #if USE(JSVALUE32_64)
    1222             && !(info.registerFormat() & DataFormatJS)
    1223 #endif
    1224             ) {
    1225             ASSERT(info.gpr() != InvalidGPRReg);
    1226             dataLogF(":%s\n", GPRInfo::debugName(info.gpr()));
    1227         } else
    1228             dataLogF("\n");
    1229     }
     1233
     1234    dataLogF("  Argument VirtualRegisters:\n");
     1235    for (unsigned i = 0; i < m_argumentGenerationInfo.size(); ++i)
     1236        dumpRegisterInfo(m_argumentGenerationInfo[i], i);
     1237
     1238    dataLogF("  Local VirtualRegisters:\n");
     1239    for (unsigned i = 0; i < m_generationInfo.size(); ++i)
     1240        dumpRegisterInfo(m_generationInfo[i], i);
     1241
    12301242    if (label)
    12311243        dataLogF("</%s>\n", label);
     
    16781690    m_jit.blockHeads()[m_block->index] = m_jit.label();
    16791691
     1692    if (!m_block->index)
     1693        checkArgumentTypes();
     1694
    16801695    if (!m_block->intersectionOfCFAHasVisited) {
    16811696        // Don't generate code for basic blocks that are unreachable according to CFA.
     
    16881703    m_stream->appendAndLog(VariableEvent::reset());
    16891704   
     1705    if (!m_block->index)
     1706        setupArgumentRegistersForEntry();
     1707   
    16901708    m_jit.jitAssertHasValidCallFrame();
    16911709    m_jit.jitAssertTagsInPlace();
     
    16971715    for (size_t i = m_block->variablesAtHead.size(); i--;) {
    16981716        int operand = m_block->variablesAtHead.operandForIndex(i);
     1717        if (!m_block->index && operandIsArgument(operand)) {
     1718            unsigned argument = m_block->variablesAtHead.argumentForIndex(i);
     1719            Node* argumentNode = m_jit.graph().m_argumentsForChecking[argument];
     1720           
     1721            if (argumentNode && argumentNode->op() == GetArgumentRegister) {
     1722                if (!argumentNode->refCount())
     1723                    continue; // No need to record dead GetArgumentRegisters's.
     1724                m_stream->appendAndLog(
     1725                    VariableEvent::movHint(
     1726                        MinifiedID(argumentNode),
     1727                        argumentNode->local()));
     1728                continue;
     1729            }
     1730        }
     1731
    16991732        Node* node = m_block->variablesAtHead[i];
    17001733        if (!node)
     
    17831816
    17841817    for (int i = 0; i < m_jit.codeBlock()->numParameters(); ++i) {
    1785         Node* node = m_jit.graph().m_arguments[i];
     1818        Node* node = m_jit.graph().m_argumentsForChecking[i];
    17861819        if (!node) {
    17871820            // The argument is dead. We don't do any checks for such arguments.
     
    17891822        }
    17901823       
    1791         ASSERT(node->op() == SetArgument);
     1824        ASSERT(node->op() == SetArgument
     1825            || (node->op() == SetLocal && node->child1()->op() == GetArgumentRegister)
     1826            || node->op() == GetArgumentRegister);
    17921827        ASSERT(node->shouldGenerate());
    17931828
     
    18001835        VirtualRegister virtualRegister = variableAccessData->local();
    18011836
    1802         JSValueSource valueSource = JSValueSource(JITCompiler::addressFor(virtualRegister));
    1803        
     1837        JSValueSource valueSource;
     1838
     1839#if USE(JSVALUE64)
     1840        GPRReg argumentRegister = InvalidGPRReg;
     1841
     1842#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     1843        if (static_cast<unsigned>(i) < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
     1844            argumentRegister = argumentRegisterForFunctionArgument(i);
     1845            valueSource = JSValueSource(argumentRegister);
     1846        } else
     1847#endif
     1848#endif
     1849            valueSource = JSValueSource(JITCompiler::addressFor(virtualRegister));
     1850
    18041851#if USE(JSVALUE64)
    18051852        switch (format) {
    18061853        case FlushedInt32: {
    1807             speculationCheck(BadType, valueSource, node, m_jit.branch64(MacroAssembler::Below, JITCompiler::addressFor(virtualRegister), GPRInfo::tagTypeNumberRegister));
     1854            if (argumentRegister != InvalidGPRReg)
     1855                speculationCheck(BadType, valueSource, node, m_jit.branch64(MacroAssembler::Below, argumentRegister, GPRInfo::tagTypeNumberRegister));
     1856            else
     1857                speculationCheck(BadType, valueSource, node, m_jit.branch64(MacroAssembler::Below, JITCompiler::addressFor(virtualRegister), GPRInfo::tagTypeNumberRegister));
    18081858            break;
    18091859        }
    18101860        case FlushedBoolean: {
    18111861            GPRTemporary temp(this);
    1812             m_jit.load64(JITCompiler::addressFor(virtualRegister), temp.gpr());
     1862            if (argumentRegister != InvalidGPRReg)
     1863                m_jit.move(argumentRegister, temp.gpr());
     1864            else
     1865                m_jit.load64(JITCompiler::addressFor(virtualRegister), temp.gpr());
    18131866            m_jit.xor64(TrustedImm32(static_cast<int32_t>(ValueFalse)), temp.gpr());
    18141867            speculationCheck(BadType, valueSource, node, m_jit.branchTest64(MacroAssembler::NonZero, temp.gpr(), TrustedImm32(static_cast<int32_t>(~1))));
     
    18161869        }
    18171870        case FlushedCell: {
    1818             speculationCheck(BadType, valueSource, node, m_jit.branchTest64(MacroAssembler::NonZero, JITCompiler::addressFor(virtualRegister), GPRInfo::tagMaskRegister));
     1871            if (argumentRegister != InvalidGPRReg)
     1872                speculationCheck(BadType, valueSource, node, m_jit.branchTest64(MacroAssembler::NonZero, argumentRegister, GPRInfo::tagMaskRegister));
     1873            else
     1874                speculationCheck(BadType, valueSource, node, m_jit.branchTest64(MacroAssembler::NonZero, JITCompiler::addressFor(virtualRegister), GPRInfo::tagMaskRegister));
    18191875            break;
    18201876        }
     
    18471903}
    18481904
     1905void SpeculativeJIT::setupArgumentRegistersForEntry()
     1906{
     1907#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     1908    BasicBlock* firstBlock = m_jit.graph().block(0);
     1909
     1910    // FIXME: https://bugs.webkit.org/show_bug.cgi?id=165720
     1911    // We should scan m_argumentsForChecking instead of looking for GetArgumentRegister
     1912    // nodes in the root block.
     1913    for (size_t indexInBlock = 0; indexInBlock < firstBlock->size(); ++indexInBlock) {
     1914        Node* node = firstBlock->at(indexInBlock);
     1915
     1916        if (node->op() == GetArgumentRegister) {
     1917            VirtualRegister virtualRegister = node->virtualRegister();
     1918            GenerationInfo& info = generationInfoFromVirtualRegister(virtualRegister);
     1919            GPRReg argumentReg = GPRInfo::toArgumentRegister(node->argumentRegisterIndex());
     1920           
     1921            ASSERT(argumentReg != InvalidGPRReg);
     1922           
     1923            ASSERT(!m_gprs.isLocked(argumentReg));
     1924            m_gprs.allocateSpecific(argumentReg);
     1925            m_gprs.retain(argumentReg, virtualRegister, SpillOrderJS);
     1926            info.initArgumentRegisterValue(node, node->refCount(), argumentReg, DataFormatJS);
     1927            info.noticeOSRBirth(*m_stream, node, virtualRegister);
     1928            // Don't leave argument registers locked.
     1929            m_gprs.unlock(argumentReg);
     1930        }
     1931    }
     1932#endif
     1933}
     1934
    18491935bool SpeculativeJIT::compile()
    18501936{
    1851     checkArgumentTypes();
    1852    
    18531937    ASSERT(!m_currentNode);
    18541938    for (BlockIndex blockIndex = 0; blockIndex < m_jit.graph().numBlocks(); ++blockIndex) {
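
The checkArgumentTypes() and setupArgumentRegistersForEntry() changes above boil down to picking a value source per incoming argument: the platform argument register when one exists, the call-frame slot otherwise. Here is a minimal, self-contained sketch of that decision; the names below (ArgumentSource, sourceForArgument, kNumArgumentRegisters) are hypothetical stand-ins, not JSC classes.

```cpp
#include <cstddef>
#include <optional>

constexpr std::size_t kNumArgumentRegisters = 6; // assumption: an x86-64 SysV-style count

struct ArgumentSource {
    std::optional<unsigned> argumentRegister; // engaged when the argument arrives in a register
    std::size_t frameSlot;                    // matching call-frame slot, used as the spill location
};

ArgumentSource sourceForArgument(std::size_t argumentIndex)
{
    ArgumentSource source;
    source.frameSlot = argumentIndex; // simplified: slot index == argument index
    if (argumentIndex < kNumArgumentRegisters)
        source.argumentRegister = static_cast<unsigned>(argumentIndex);
    return source;
}

// A caller would emit a register-based speculation check when argumentRegister is
// engaged, and an address-based check against frameSlot otherwise, mirroring the
// FlushedInt32 / FlushedBoolean / FlushedCell cases in the hunk above.
```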
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h

    r209638 r209653  
    129129   
    130130#if USE(JSVALUE64)
    131     GPRReg fillJSValue(Edge);
     131    GPRReg fillJSValue(Edge, GPRReg gprToUse = InvalidGPRReg);
    132132#elif USE(JSVALUE32_64)
    133133    bool fillJSValue(Edge, GPRReg&, GPRReg&, FPRReg&);
     
    201201        m_jit.addRegisterAllocationAtOffset(m_jit.debugOffset());
    202202#endif
     203        if (specific == InvalidGPRReg)
     204            return allocate();
     205
    203206        VirtualRegister spillMe = m_gprs.allocateSpecific(specific);
    204207        if (spillMe.isValid()) {
     
    315318
    316319    void checkArgumentTypes();
     320
     321    void setupArgumentRegistersForEntry();
    317322
    318323    void clearGenerationInfo();
     
    486491    void spill(VirtualRegister spillMe)
    487492    {
     493        if (spillMe.isArgument() && m_block->index > 0)
     494            return;
     495
    488496        GenerationInfo& info = generationInfoFromVirtualRegister(spillMe);
    489497
     
    28742882    GenerationInfo& generationInfoFromVirtualRegister(VirtualRegister virtualRegister)
    28752883    {
    2876         return m_generationInfo[virtualRegister.toLocal()];
     2884        if (virtualRegister.isLocal())
     2885            return m_generationInfo[virtualRegister.toLocal()];
     2886        ASSERT(virtualRegister.isArgument());
     2887        return m_argumentGenerationInfo[virtualRegister.offset()];
    28772888    }
    28782889   
     
    28972908    // Virtual and physical register maps.
    28982909    Vector<GenerationInfo, 32> m_generationInfo;
     2910    Vector<GenerationInfo, 8> m_argumentGenerationInfo;
    28992911    RegisterBank<GPRInfo> m_gprs;
    29002912    RegisterBank<FPRInfo> m_fprs;
     
    29953007    }
    29963008
     3009#if USE(JSVALUE64)
     3010    explicit JSValueOperand(SpeculativeJIT* jit, Edge edge, GPRReg regToUse)
     3011        : m_jit(jit)
     3012        , m_edge(edge)
     3013        , m_gprOrInvalid(InvalidGPRReg)
     3014    {
     3015        ASSERT(m_jit);
     3016        if (!edge)
     3017            return;
     3018        if (jit->isFilled(node()) || regToUse != InvalidGPRReg)
     3019            gprUseSpecific(regToUse);
     3020    }
     3021#endif
     3022   
    29973023    ~JSValueOperand()
    29983024    {
     
    30313057        return m_gprOrInvalid;
    30323058    }
     3059    GPRReg gprUseSpecific(GPRReg regToUse)
     3060    {
     3061        if (m_gprOrInvalid == InvalidGPRReg)
     3062            m_gprOrInvalid = m_jit->fillJSValue(m_edge, regToUse);
     3063        return m_gprOrInvalid;
     3064    }
    30333065    JSValueRegs jsValueRegs()
    30343066    {
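
The DFGSpeculativeJIT.h changes above split GenerationInfo lookup between locals and arguments. As a rough illustration only (simplified types, not the real VirtualRegister or GenerationInfo), the dispatch amounts to:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct GenerationInfo { /* per-value codegen state, elided */ };

// Toy model of a virtual register: non-negative offsets stand for arguments,
// negative offsets for locals (the real encoding differs).
struct VirtualRegisterModel {
    int offset;
    bool isLocal() const { return offset < 0; }
    bool isArgument() const { return offset >= 0; }
    std::size_t toLocal() const { return static_cast<std::size_t>(-offset - 1); }
};

class GenerationInfoTable {
public:
    // Both tables are assumed to have been sized before lookup.
    GenerationInfo& infoFor(VirtualRegisterModel reg)
    {
        if (reg.isLocal())
            return m_locals[reg.toLocal()];
        assert(reg.isArgument());
        return m_arguments[static_cast<std::size_t>(reg.offset)];
    }

private:
    std::vector<GenerationInfo> m_locals;    // parallels m_generationInfo
    std::vector<GenerationInfo> m_arguments; // parallels m_argumentGenerationInfo
};
```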
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp

    r209647 r209653  
    933933   
    934934    CallLinkInfo* info = m_jit.codeBlock()->addCallLinkInfo();
    935     info->setUpCall(callType, node->origin.semantic, calleePayloadGPR);
     935    info->setUpCall(callType, StackArgs, node->origin.semantic, calleePayloadGPR);
    936936   
    937937    auto setResultAndResetStack = [&] () {
     
    10821082    }
    10831083
    1084     m_jit.move(MacroAssembler::TrustedImmPtr(info), GPRInfo::regT2);
     1084    m_jit.move(MacroAssembler::TrustedImmPtr(info), GPRInfo::nonArgGPR0);
    10851085    JITCompiler::Call slowCall = m_jit.nearCall();
    10861086
     
    56255625    case GetStack:
    56265626    case GetMyArgumentByVal:
     5627    case GetArgumentRegister:
    56275628    case GetMyArgumentByValOutOfBounds:
    56285629    case PhantomCreateRest:
  • trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp

    r209638 r209653  
    8181}
    8282
    83 GPRReg SpeculativeJIT::fillJSValue(Edge edge)
     83GPRReg SpeculativeJIT::fillJSValue(Edge edge, GPRReg gprToUse)
    8484{
    8585    VirtualRegister virtualRegister = edge->virtualRegister();
     
    8888    switch (info.registerFormat()) {
    8989    case DataFormatNone: {
    90         GPRReg gpr = allocate();
     90        GPRReg gpr = allocate(gprToUse);
    9191
    9292        if (edge->hasConstant()) {
     
    121121        // If not, we'll zero extend in place, so mark on the info that this is now type DataFormatInt32, not DataFormatJSInt32.
    122122        if (m_gprs.isLocked(gpr)) {
    123             GPRReg result = allocate();
     123            GPRReg result = allocate(gprToUse);
     124            m_jit.or64(GPRInfo::tagTypeNumberRegister, gpr, result);
     125            return result;
     126        }
     127        if (gprToUse != InvalidGPRReg && gpr != gprToUse) {
     128            GPRReg result = allocate(gprToUse);
    124129            m_jit.or64(GPRInfo::tagTypeNumberRegister, gpr, result);
    125130            return result;
     
    139144    case DataFormatJSBoolean: {
    140145        GPRReg gpr = info.gpr();
     146        if (gprToUse != InvalidGPRReg && gpr != gprToUse) {
     147            GPRReg result = allocate(gprToUse);
     148            m_jit.move(gpr, result);
     149            return result;
     150        }
    141151        m_gprs.lock(gpr);
    142152        return gpr;
     
    633643{
    634644    CallLinkInfo::CallType callType;
     645    ArgumentsLocation argumentsLocation = StackArgs;
    635646    bool isVarargs = false;
    636647    bool isForwardVarargs = false;
     
    715726    GPRReg calleeGPR = InvalidGPRReg;
    716727    CallFrameShuffleData shuffleData;
    717    
     728    std::optional<JSValueOperand> tailCallee;
     729    std::optional<GPRTemporary> calleeGPRTemporary;
     730
     731    incrementCounter(&m_jit, VM::DFGCaller);
     732
    718733    ExecutableBase* executable = nullptr;
    719734    FunctionExecutable* functionExecutable = nullptr;
     
    734749        unsigned numUsedStackSlots = m_jit.graph().m_nextMachineLocal;
    735750       
     751        incrementCounter(&m_jit, VM::CallVarargs);
    736752        if (isForwardVarargs) {
    737753            flushRegisters();
     
    842858
    843859        if (isTail) {
     860            incrementCounter(&m_jit, VM::TailCall);
    844861            Edge calleeEdge = m_jit.graph().child(node, 0);
    845             JSValueOperand callee(this, calleeEdge);
    846             calleeGPR = callee.gpr();
      862            // We can't ask for a specific register for the callee, since that would just move it
      863            // from whichever register currently holds it. When we silently fill in the slow path,
      864            // we'd refill the original register and wouldn't have the callee in the right register.
      865            // Therefore we allocate a temp register for the callee and do the move ourselves.
     866            tailCallee.emplace(this, calleeEdge);
     867            GPRReg tailCalleeGPR = tailCallee->gpr();
     868            calleeGPR = argumentRegisterForCallee();
     869            if (tailCalleeGPR != calleeGPR)
     870                calleeGPRTemporary = GPRTemporary(this, calleeGPR);
    847871            if (!isDirect)
    848                 callee.use();
    849 
     872                tailCallee->use();
     873
     874            argumentsLocation = argumentsLocationFor(numAllocatedArgs);
     875            shuffleData.argumentsInRegisters = argumentsLocation != StackArgs;
    850876            shuffleData.tagTypeNumber = GPRInfo::tagTypeNumberRegister;
    851877            shuffleData.numLocals = m_jit.graph().frameRegisterCount();
    852             shuffleData.callee = ValueRecovery::inGPR(calleeGPR, DataFormatJS);
     878            shuffleData.callee = ValueRecovery::inGPR(tailCalleeGPR, DataFormatJS);
    853879            shuffleData.args.resize(numAllocatedArgs);
    854880
     
    865891
    866892            shuffleData.setupCalleeSaveRegisters(m_jit.codeBlock());
    867         } else {
     893        } else if (node->op() == CallEval) {
     894            // CallEval is handled with the arguments on the stack.
    868895            m_jit.store32(MacroAssembler::TrustedImm32(numPassedArgs), JITCompiler::calleeFramePayloadSlot(CallFrameSlot::argumentCount));
    869896
     
    879906            for (unsigned i = numPassedArgs; i < numAllocatedArgs; ++i)
    880907                m_jit.storeTrustedValue(jsUndefined(), JITCompiler::calleeArgumentSlot(i));
     908
     909            incrementCounter(&m_jit, VM::CallEval);
     910        } else {
     911            for (unsigned i = numPassedArgs; i-- > 0;) {
     912                GPRReg platformArgGPR = argumentRegisterForFunctionArgument(i);
     913                Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
     914                JSValueOperand arg(this, argEdge, platformArgGPR);
     915                GPRReg argGPR = arg.gpr();
     916                ASSERT(argGPR == platformArgGPR || platformArgGPR == InvalidGPRReg);
     917
     918                // Only free the non-argument registers at this point.
     919                if (platformArgGPR == InvalidGPRReg) {
     920                    use(argEdge);
     921                    m_jit.store64(argGPR, JITCompiler::calleeArgumentSlot(i));
     922                }
     923            }
     924
     925            // Use the argument edges for arguments passed in registers.
     926            for (unsigned i = numPassedArgs; i-- > 0;) {
     927                GPRReg argGPR = argumentRegisterForFunctionArgument(i);
     928                if (argGPR != InvalidGPRReg) {
     929                    Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
     930                    use(argEdge);
     931                }
     932            }
     933
     934            GPRTemporary argCount(this, argumentRegisterForArgumentCount());
     935            GPRReg argCountGPR = argCount.gpr();
     936            m_jit.move(TrustedImm32(numPassedArgs), argCountGPR);
     937            argumentsLocation = argumentsLocationFor(numAllocatedArgs);
     938
     939            for (unsigned i = numPassedArgs; i < numAllocatedArgs; ++i) {
     940                GPRReg platformArgGPR = argumentRegisterForFunctionArgument(i);
     941
     942                if (platformArgGPR == InvalidGPRReg)
     943                    m_jit.storeTrustedValue(jsUndefined(), JITCompiler::calleeArgumentSlot(i));
     944                else {
     945                    GPRTemporary argumentTemp(this, platformArgGPR);
     946                    m_jit.move(TrustedImm64(JSValue::encode(jsUndefined())), argumentTemp.gpr());
     947                }
     948            }
    881949        }
    882950    }
     
    884952    if (!isTail || isVarargs || isForwardVarargs) {
    885953        Edge calleeEdge = m_jit.graph().child(node, 0);
    886         JSValueOperand callee(this, calleeEdge);
     954        JSValueOperand callee(this, calleeEdge, argumentRegisterForCallee());
    887955        calleeGPR = callee.gpr();
    888956        callee.use();
    889         m_jit.store64(calleeGPR, JITCompiler::calleeFrameSlot(CallFrameSlot::callee));
     957        if (argumentsLocation == StackArgs)
     958            m_jit.store64(calleeGPR, JITCompiler::calleeFrameSlot(CallFrameSlot::callee));
    890959
    891960        flushRegisters();
     
    914983   
    915984    CallLinkInfo* callLinkInfo = m_jit.codeBlock()->addCallLinkInfo();
    916     callLinkInfo->setUpCall(callType, m_currentNode->origin.semantic, calleeGPR);
     985    callLinkInfo->setUpCall(callType, argumentsLocation, m_currentNode->origin.semantic, calleeGPR);
    917986
    918987    if (node->op() == CallEval) {
     
    9551024            RELEASE_ASSERT(node->op() == DirectTailCall);
    9561025           
     1026            if (calleeGPRTemporary != std::nullopt)
     1027                m_jit.move(tailCallee->gpr(), calleeGPRTemporary->gpr());
     1028
    9571029            JITCompiler::PatchableJump patchableJump = m_jit.patchableJump();
    9581030            JITCompiler::Label mainPath = m_jit.label();
     1031
     1032            incrementCounter(&m_jit, VM::TailCall);
     1033            incrementCounter(&m_jit, VM::DirectCall);
    9591034           
    9601035            m_jit.emitStoreCallSiteIndex(callSite);
     
    9721047            silentFillAllRegisters(InvalidGPRReg);
    9731048            m_jit.exceptionCheck();
     1049            if (calleeGPRTemporary != std::nullopt)
     1050                m_jit.move(tailCallee->gpr(), calleeGPRTemporary->gpr());
    9741051            m_jit.jump().linkTo(mainPath, &m_jit);
    9751052           
     
    9821059        JITCompiler::Label mainPath = m_jit.label();
    9831060       
     1061        incrementCounter(&m_jit, VM::DirectCall);
     1062
    9841063        m_jit.emitStoreCallSiteIndex(callSite);
    9851064       
     
    9891068        JITCompiler::Label slowPath = m_jit.label();
    9901069        if (isX86())
    991             m_jit.pop(JITCompiler::selectScratchGPR(calleeGPR));
    992 
    993         callOperation(operationLinkDirectCall, callLinkInfo, calleeGPR);
     1070            m_jit.pop(GPRInfo::nonArgGPR0);
     1071
     1072        m_jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); // Link info needs to be in nonArgGPR0
     1073        JITCompiler::Call slowCall = m_jit.nearCall();
     1074
    9941075        m_jit.exceptionCheck();
    9951076        m_jit.jump().linkTo(mainPath, &m_jit);
     
    9981079       
    9991080        setResultAndResetStack();
    1000        
    1001         m_jit.addJSDirectCall(call, slowPath, callLinkInfo);
     1081
     1082        m_jit.addJSDirectCall(call, slowCall, slowPath, callLinkInfo);
    10021083        return;
    10031084    }
    1004    
     1085
     1086    if (isTail && calleeGPRTemporary != std::nullopt)
     1087        m_jit.move(tailCallee->gpr(), calleeGPRTemporary->gpr());
     1088
    10051089    m_jit.emitStoreCallSiteIndex(callSite);
    10061090   
     
    10261110    if (node->op() == TailCall) {
    10271111        CallFrameShuffler callFrameShuffler(m_jit, shuffleData);
    1028         callFrameShuffler.setCalleeJSValueRegs(JSValueRegs(GPRInfo::regT0));
     1112        if (argumentsLocation == StackArgs)
     1113            callFrameShuffler.setCalleeJSValueRegs(JSValueRegs(argumentRegisterForCallee()));
    10291114        callFrameShuffler.prepareForSlowPath();
    1030     } else {
    1031         m_jit.move(calleeGPR, GPRInfo::regT0); // Callee needs to be in regT0
    1032 
    1033         if (isTail)
    1034             m_jit.emitRestoreCalleeSaves(); // This needs to happen after we moved calleeGPR to regT0
    1035     }
    1036 
    1037     m_jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::regT2); // Link info needs to be in regT2
     1115    } else if (isTail)
     1116        m_jit.emitRestoreCalleeSaves();
     1117
     1118    m_jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); // Link info needs to be in nonArgGPR0
    10381119    JITCompiler::Call slowCall = m_jit.nearCall();
    10391120
    10401121    done.link(&m_jit);
    10411122
    1042     if (isTail)
     1123    if (isTail) {
     1124        tailCallee = std::nullopt;
     1125        calleeGPRTemporary = std::nullopt;
    10431126        m_jit.abortWithReason(JITDidReturnFromTailCall);
    1044     else
     1127    } else
    10451128        setResultAndResetStack();
    10461129
     
    41674250    }
    41684251
     4252    case GetArgumentRegister:
     4253        break;
     4254           
    41694255    case GetRestLength: {
    41704256        compileGetRestLength(node);
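
The DFGSpeculativeJIT64.cpp call path above now materializes the first few arguments directly in platform argument registers and stores only the overflow, plus any padding undefined values, to the call frame. A hedged sketch of that partition, using made-up emit helpers rather than JSC's assembler:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr unsigned kNumArgumentRegisters = 6; // assumption for a SysV x86-64-like target

// Hypothetical emitters standing in for MacroAssembler moves/stores.
static void emitMoveToArgumentRegister(unsigned argIndex, std::uint64_t value)
{
    std::printf("move %#llx -> arg reg %u\n", static_cast<unsigned long long>(value), argIndex);
}

static void emitStoreToCalleeArgumentSlot(unsigned argIndex, std::uint64_t value)
{
    std::printf("store %#llx -> callee arg slot %u\n", static_cast<unsigned long long>(value), argIndex);
}

// Passed arguments go to registers while registers last; everything else, including
// the undefined padding up to numAllocatedArgs, goes to the call-frame slots.
static void emitOutgoingArguments(const std::vector<std::uint64_t>& passedArgs,
    unsigned numAllocatedArgs, std::uint64_t encodedUndefined)
{
    for (unsigned i = 0; i < numAllocatedArgs; ++i) {
        std::uint64_t value = i < passedArgs.size() ? passedArgs[i] : encodedUndefined;
        if (i < kNumArgumentRegisters)
            emitMoveToArgumentRegister(i, value);
        else
            emitStoreToCalleeArgumentSlot(i, value);
    }
}

int main()
{
    emitOutgoingArguments({ 0x10, 0x20, 0x30 }, 8, 0xa); // 3 passed, 8 allocated
    return 0;
}
```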
  • trunk/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp

    r208985 r209653  
    277277            VirtualRegister local = m_node->local();
    278278           
     279            if (local.isArgument() && m_graph.m_strengthReduceArguments != OptimizeArgumentFlushes)
     280                break;
     281
    279282            for (unsigned i = m_nodeIndex; i--;) {
    280283                Node* node = m_block->at(i);
  • trunk/Source/JavaScriptCore/dfg/DFGThunks.cpp

    r203006 r209653  
    131131    jit.branchPtr(MacroAssembler::NotEqual, GPRInfo::regT1, MacroAssembler::TrustedImmPtr(bitwise_cast<void*>(-static_cast<intptr_t>(CallFrame::headerSizeInRegisters)))).linkTo(loop, &jit);
    132132   
    133     jit.loadPtr(MacroAssembler::Address(GPRInfo::regT0, offsetOfTargetPC), GPRInfo::regT1);
    134     MacroAssembler::Jump ok = jit.branchPtr(MacroAssembler::Above, GPRInfo::regT1, MacroAssembler::TrustedImmPtr(bitwise_cast<void*>(static_cast<intptr_t>(1000))));
     133    jit.loadPtr(MacroAssembler::Address(GPRInfo::regT0, offsetOfTargetPC), GPRInfo::nonArgGPR0);
     134    MacroAssembler::Jump ok = jit.branchPtr(MacroAssembler::Above, GPRInfo::nonArgGPR0, MacroAssembler::TrustedImmPtr(bitwise_cast<void*>(static_cast<intptr_t>(1000))));
    135135    jit.abortWithReason(DFGUnreasonableOSREntryJumpDestination);
    136136
    137137    ok.link(&jit);
     138
     139#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     140    // Load argument values into argument registers
     141    jit.loadPtr(MacroAssembler::Address(GPRInfo::callFrameRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register))), argumentRegisterForCallee());
     142    GPRReg argCountReg = argumentRegisterForArgumentCount();
     143    jit.load32(AssemblyHelpers::payloadFor(CallFrameSlot::argumentCount), argCountReg);
     144   
     145    MacroAssembler::JumpList doneLoadingArgs;
     146   
     147    for (unsigned argIndex = 0; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++)
     148        jit.load64(MacroAssembler::Address(GPRInfo::callFrameRegister, (CallFrameSlot::thisArgument + argIndex) * static_cast<int>(sizeof(Register))), argumentRegisterForFunctionArgument(argIndex));
     149   
     150    doneLoadingArgs.link(&jit);
     151#endif
     152   
    138153    jit.restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer();
    139154    jit.emitMaterializeTagCheckRegisters();
    140155
    141     jit.jump(GPRInfo::regT1);
     156    jit.jump(GPRInfo::nonArgGPR0);
    142157   
    143158    LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
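
The DFGThunks.cpp change above makes the OSR entry thunk reload the callee, the argument count, and the first argument values from their call-frame slots into argument registers before jumping to the target PC. A standalone model of that reload (plain structs, not the real CallFrame or GPRInfo):

```cpp
#include <array>
#include <cstdint>

constexpr unsigned kNumArgumentRegisters = 6; // assumption

struct CallFrameModel {
    std::uint64_t callee;
    std::uint32_t argumentCountIncludingThis;
    std::array<std::uint64_t, 16> arguments; // "this" followed by the remaining arguments
};

struct ArgumentRegisterFile {
    std::uint64_t calleeRegister;
    std::uint32_t argumentCountRegister;
    std::array<std::uint64_t, kNumArgumentRegisters> argumentRegisters;
};

// Mirrors the idea of the thunk: every argument register is refilled from the
// frame so register-argument code can be entered from an OSR transition.
ArgumentRegisterFile loadArgumentRegistersFromFrame(const CallFrameModel& frame)
{
    ArgumentRegisterFile regs {};
    regs.calleeRegister = frame.callee;
    regs.argumentCountRegister = frame.argumentCountIncludingThis;
    for (unsigned i = 0; i < kNumArgumentRegisters; ++i)
        regs.argumentRegisters[i] = frame.arguments[i];
    return regs;
}
```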
  • trunk/Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp

    r198154 r209653  
    134134        valueRecoveries = Operands<ValueRecovery>(codeBlock->numParameters(), numVariables);
    135135        for (size_t i = 0; i < valueRecoveries.size(); ++i) {
    136             valueRecoveries[i] = ValueRecovery::displacedInJSStack(
    137                 VirtualRegister(valueRecoveries.operandForIndex(i)), DataFormatJS);
     136            if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
     137                valueRecoveries[i] = ValueRecovery::inGPR(
     138                    argumentRegisterForFunctionArgument(i), DataFormatJS);
     139            } else {
     140                valueRecoveries[i] = ValueRecovery::displacedInJSStack(
     141                    VirtualRegister(valueRecoveries.operandForIndex(i)), DataFormatJS);
     142            }
    138143        }
    139144        return;
     
    162167            info.update(event);
    163168            generationInfos.add(event.id(), info);
     169            MinifiedNode* node = graph.at(event.id());
     170            if (node && node->hasArgumentIndex()) {
     171                unsigned argument = node->argumentIndex();
     172                VirtualRegister argumentReg = virtualRegisterForArgument(argument);
     173                operandSources.setOperand(argumentReg, ValueSource(event.id()));
     174            }
    164175            break;
    165176        }
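
The DFGVariableEventStream.cpp change above picks a different ValueRecovery for arguments that travel in registers. A simplified sketch of the same decision, using a plain enum rather than JSC's ValueRecovery:

```cpp
#include <cstddef>
#include <vector>

constexpr std::size_t kNumArgumentRegisters = 6; // assumption

enum class ArgumentRecovery { InArgumentRegister, DisplacedInJSStack };

std::vector<ArgumentRecovery> recoveriesForArguments(std::size_t numParameters)
{
    std::vector<ArgumentRecovery> recoveries(numParameters);
    for (std::size_t i = 0; i < numParameters; ++i) {
        recoveries[i] = i < kNumArgumentRegisters
            ? ArgumentRecovery::InArgumentRegister
            : ArgumentRecovery::DisplacedInJSStack;
    }
    return recoveries;
}
```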
  • trunk/Source/JavaScriptCore/dfg/DFGVirtualRegisterAllocationPhase.cpp

    r198364 r209653  
    4343    {
    4444    }
    45    
     45
     46    void allocateRegister(ScoreBoard& scoreBoard, Node* node)
     47    {
     48        // First, call use on all of the current node's children, then
     49        // allocate a VirtualRegister for this node. We do so in this
     50        // order so that if a child is on its last use, and a
     51        // VirtualRegister is freed, then it may be reused for node.
     52        if (node->flags() & NodeHasVarArgs) {
     53            for (unsigned childIdx = node->firstChild(); childIdx < node->firstChild() + node->numChildren(); childIdx++)
     54                scoreBoard.useIfHasResult(m_graph.m_varArgChildren[childIdx]);
     55        } else {
     56            scoreBoard.useIfHasResult(node->child1());
     57            scoreBoard.useIfHasResult(node->child2());
     58            scoreBoard.useIfHasResult(node->child3());
     59        }
     60
     61        if (!node->hasResult())
     62            return;
     63
     64        VirtualRegister virtualRegister = scoreBoard.allocate();
     65        node->setVirtualRegister(virtualRegister);
     66        // 'mustGenerate' nodes have their useCount artificially elevated,
     67        // call use now to account for this.
     68        if (node->mustGenerate())
     69            scoreBoard.use(node);
     70    }
     71
    4672    bool run()
    4773    {
     
    6086                scoreBoard.sortFree();
    6187            }
     88
     89            // Handle GetArgumentRegister Nodes first as the register is alive on entry
     90            // to the function and may need to be spilled before any use.
     91            if (!blockIndex) {
     92                for (size_t indexInBlock = 0; indexInBlock < block->size(); ++indexInBlock) {
     93                    Node* node = block->at(indexInBlock);
     94                    if (node->op() == GetArgumentRegister)
     95                        allocateRegister(scoreBoard, node);
     96                }
     97            }
     98
    6299            for (size_t indexInBlock = 0; indexInBlock < block->size(); ++indexInBlock) {
    63100                Node* node = block->at(indexInBlock);
     
    74111                    ASSERT(!node->child1()->hasResult());
    75112                    break;
     113                case GetArgumentRegister:
     114                    ASSERT(!blockIndex);
     115                    continue;
    76116                default:
    77117                    break;
    78118                }
    79                
    80                 // First, call use on all of the current node's children, then
    81                 // allocate a VirtualRegister for this node. We do so in this
    82                 // order so that if a child is on its last use, and a
    83                 // VirtualRegister is freed, then it may be reused for node.
    84                 if (node->flags() & NodeHasVarArgs) {
    85                     for (unsigned childIdx = node->firstChild(); childIdx < node->firstChild() + node->numChildren(); childIdx++)
    86                         scoreBoard.useIfHasResult(m_graph.m_varArgChildren[childIdx]);
    87                 } else {
    88                     scoreBoard.useIfHasResult(node->child1());
    89                     scoreBoard.useIfHasResult(node->child2());
    90                     scoreBoard.useIfHasResult(node->child3());
    91                 }
    92119
    93                 if (!node->hasResult())
    94                     continue;
    95 
    96                 VirtualRegister virtualRegister = scoreBoard.allocate();
    97                 node->setVirtualRegister(virtualRegister);
    98                 // 'mustGenerate' nodes have their useCount artificially elevated,
    99                 // call use now to account for this.
    100                 if (node->mustGenerate())
    101                     scoreBoard.use(node);
     120                allocateRegister(scoreBoard, node);
    102121            }
    103122            scoreBoard.assertClear();
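
The DFGVirtualRegisterAllocationPhase.cpp change above gives GetArgumentRegister nodes in the root block their virtual registers before any other node, since those registers are live on entry and may need spilling before their first use. A toy two-pass version of that loop (stand-in Node and allocator types, not the real phase):

```cpp
#include <vector>

enum class Op { GetArgumentRegister, Other };

struct Node {
    Op op;
    int virtualRegister = -1;
};

struct Allocator {
    int next = 0;
    int allocate() { return next++; }
};

// Pass 1: GetArgumentRegister nodes in the first block. Pass 2: everything else,
// skipping the nodes already handled, as in the real phase.
void allocateRootBlock(std::vector<Node>& rootBlock, Allocator& allocator)
{
    for (Node& node : rootBlock) {
        if (node.op == Op::GetArgumentRegister)
            node.virtualRegister = allocator.allocate();
    }
    for (Node& node : rootBlock) {
        if (node.op == Op::GetArgumentRegister)
            continue; // already allocated in the first pass
        node.virtualRegister = allocator.allocate();
    }
}
```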
  • trunk/Source/JavaScriptCore/ftl/FTLCapabilities.cpp

    r209638 r209653  
    173173    case GetScope:
    174174    case GetCallee:
     175    case GetArgumentRegister:
    175176    case GetArgumentCountIncludingThis:
    176177    case ToNumber:
  • trunk/Source/JavaScriptCore/ftl/FTLJITCode.cpp

    r208985 r209653  
    4646        CommaPrinter comma;
    4747        dataLog(comma, m_b3Code);
    48         dataLog(comma, m_arityCheckEntrypoint);
     48        dataLog(comma, m_registerArgsPossibleExtraArgsEntryPoint);
     49        dataLog(comma, m_registerArgsCheckArityEntryPoint);
    4950        dataLog("\n");
    5051    }
     
    6162}
    6263
    63 void JITCode::initializeAddressForCall(CodePtr address)
     64void JITCode::initializeEntrypointThunk(CodeRef entrypointThunk)
    6465{
    65     m_addressForCall = address;
     66    m_entrypointThunk = entrypointThunk;
    6667}
    6768
    68 void JITCode::initializeArityCheckEntrypoint(CodeRef entrypoint)
     69void JITCode::setEntryFor(EntryPointType type, CodePtr entry)
    6970{
    70     m_arityCheckEntrypoint = entrypoint;
     71    m_entrypoints.setEntryFor(type, entry);
    7172}
    72 
    73 JITCode::CodePtr JITCode::addressForCall(ArityCheckMode arityCheck)
     73   
     74JITCode::CodePtr JITCode::addressForCall(EntryPointType entryType)
    7475{
    75     switch (arityCheck) {
    76     case ArityCheckNotRequired:
    77         return m_addressForCall;
    78     case MustCheckArity:
    79         return m_arityCheckEntrypoint.code();
    80     }
    81     RELEASE_ASSERT_NOT_REACHED();
    82     return CodePtr();
     76    CodePtr entry = m_entrypoints.entryFor(entryType);
     77    RELEASE_ASSERT(entry);
     78    return entry;
    8379}
    8480
    8581void* JITCode::executableAddressAtOffset(size_t offset)
    8682{
    87     return reinterpret_cast<char*>(m_addressForCall.executableAddress()) + offset;
     83#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     84    return reinterpret_cast<char*>(addressForCall(RegisterArgsArityCheckNotRequired).executableAddress()) + offset;
     85#else
     86    return reinterpret_cast<char*>(addressForCall(StackArgsArityCheckNotRequired).executableAddress()) + offset;
     87#endif
    8888}
    8989
  • trunk/Source/JavaScriptCore/ftl/FTLJITCode.h

    r208985 r209653  
    4545    ~JITCode();
    4646
    47     CodePtr addressForCall(ArityCheckMode) override;
     47    CodePtr addressForCall(EntryPointType) override;
    4848    void* executableAddressAtOffset(size_t offset) override;
    4949    void* dataAddressAtOffset(size_t offset) override;
     
    5454    void initializeB3Code(CodeRef);
    5555    void initializeB3Byproducts(std::unique_ptr<B3::OpaqueByproducts>);
    56     void initializeAddressForCall(CodePtr);
    57     void initializeArityCheckEntrypoint(CodeRef);
    58    
     56    void initializeEntrypointThunk(CodeRef);
     57    void setEntryFor(EntryPointType, CodePtr);
     58
    5959    void validateReferences(const TrackedReferences&) override;
    6060
     
    7878    CodeRef m_b3Code;
    7979    std::unique_ptr<B3::OpaqueByproducts> m_b3Byproducts;
    80     CodeRef m_arityCheckEntrypoint;
     80    CodeRef m_entrypointThunk;
     81    JITEntryPoints m_entrypoints;
     82    CodePtr m_registerArgsPossibleExtraArgsEntryPoint;
     83    CodePtr m_registerArgsCheckArityEntryPoint;
     84    CodePtr m_stackArgsArityOKEntryPoint;
     85    CodePtr m_stackArgsCheckArityEntrypoint;
    8186};
    8287
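
FTLJITCode now answers addressForCall() by entry-point type rather than by arity-check mode. A simplified table with the same shape; the enumerators follow the names used in this changeset, but the class itself is a stand-in, not the real JITEntryPoints:

```cpp
#include <array>
#include <cassert>
#include <cstddef>

enum EntryPointType {
    StackArgsArityCheckNotRequired,
    StackArgsMustCheckArity,
    RegisterArgsArityCheckNotRequired,
    RegisterArgsPossibleExtraArgs,
    RegisterArgsMustCheckArity,
    NumberOfEntryPointTypes
};

using CodePtr = void*;

class EntryPointTable {
public:
    void setEntryFor(EntryPointType type, CodePtr entry) { m_entries[type] = entry; }

    CodePtr entryFor(EntryPointType type) const
    {
        CodePtr entry = m_entries[type];
        assert(entry); // parallels the RELEASE_ASSERT in JITCode::addressForCall()
        return entry;
    }

private:
    std::array<CodePtr, NumberOfEntryPointTypes> m_entries {};
};
```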
  • trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp

    r205462 r209653  
    7777            ("FTL B3 code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::FTLJIT)).data())));
    7878
    79     jitCode->initializeArityCheckEntrypoint(
     79    jitCode->initializeEntrypointThunk(
    8080        FINALIZE_CODE_IF(
    8181            dumpDisassembly, *entrypointLinkBuffer,
  • trunk/Source/JavaScriptCore/ftl/FTLLink.cpp

    r203990 r209653  
    128128    switch (graph.m_plan.mode) {
    129129    case FTLMode: {
    130         CCallHelpers::JumpList mainPathJumps;
    131    
    132         jit.load32(
    133             frame.withOffset(sizeof(Register) * CallFrameSlot::argumentCount),
    134             GPRInfo::regT1);
    135         mainPathJumps.append(jit.branch32(
    136             CCallHelpers::AboveOrEqual, GPRInfo::regT1,
    137             CCallHelpers::TrustedImm32(codeBlock->numParameters())));
     130        CCallHelpers::JumpList fillRegistersAndContinueMainPath;
     131        CCallHelpers::JumpList toMainPath;
     132
     133        unsigned numParameters = static_cast<unsigned>(codeBlock->numParameters());
     134        unsigned maxRegisterArgumentCount = std::min(numParameters, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS);
     135
     136        GPRReg argCountReg = argumentRegisterForArgumentCount();
     137
     138        CCallHelpers::Label registerArgumentsEntrypoints[NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS + 1];
     139
     140        if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
     141            // Spill any extra register arguments passed to function onto the stack.
     142            for (unsigned argIndex = NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS - 1; argIndex >= numParameters; argIndex--) {
     143                registerArgumentsEntrypoints[argIndex + 1] = jit.label();
     144                jit.emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex);
     145            }
     146            incrementCounter(&jit, VM::RegArgsExtra);
     147            toMainPath.append(jit.jump());
     148        }
     149
     150        CCallHelpers::JumpList continueToArityFixup;
     151
     152        CCallHelpers::Label stackArgsCheckArityEntry = jit.label();
     153        incrementCounter(&jit, VM::StackArgsArity);
     154        jit.load32(frame.withOffset(sizeof(Register) * CallFrameSlot::argumentCount), GPRInfo::regT1);
     155        continueToArityFixup.append(jit.branch32(
     156            CCallHelpers::Below, GPRInfo::regT1,
     157            CCallHelpers::TrustedImm32(numParameters)));
     158
     159#if ENABLE(VM_COUNTERS)
     160        CCallHelpers::Jump continueToStackArityOk = jit.jump();
     161#endif
     162
     163        CCallHelpers::Label stackArgsArityOKEntry = jit.label();
     164
     165        incrementCounter(&jit, VM::StackArgsArity);
     166
     167#if ENABLE(VM_COUNTERS)
     168        continueToStackArityOk.link(&jit);
     169#endif
     170
     171        // Load argument values into argument registers
     172
     173        // FIXME: Would like to eliminate these loads, but we currently can't jump into
     174        // the B3 compiled code at an arbitrary point from the slow entry where the
     175        // registers are stored to the stack.
     176        jit.emitGetFromCallFrameHeaderBeforePrologue(CallFrameSlot::callee, argumentRegisterForCallee());
     177        jit.emitGetPayloadFromCallFrameHeaderBeforePrologue(CallFrameSlot::argumentCount, argumentRegisterForArgumentCount());
     178
     179        for (unsigned argIndex = 0; argIndex < maxRegisterArgumentCount; argIndex++)
     180            jit.emitGetFromCallFrameArgumentBeforePrologue(argIndex, argumentRegisterForFunctionArgument(argIndex));
     181
     182        toMainPath.append(jit.jump());
     183
     184        CCallHelpers::Label registerArgsCheckArityEntry = jit.label();
     185        incrementCounter(&jit, VM::RegArgsArity);
     186
     187        CCallHelpers::JumpList continueToRegisterArityFixup;
     188        CCallHelpers::Label checkForExtraRegisterArguments;
     189
     190        if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
     191            toMainPath.append(jit.branch32(
     192                CCallHelpers::Equal, argCountReg, CCallHelpers::TrustedImm32(numParameters)));
     193            continueToRegisterArityFixup.append(jit.branch32(
     194                CCallHelpers::Below, argCountReg, CCallHelpers::TrustedImm32(numParameters)));
     195            //  Fall through to the "extra register arity" case.
     196
     197            checkForExtraRegisterArguments = jit.label();
     198            // Spill any extra register arguments passed to function onto the stack.
     199            for (unsigned argIndex = numParameters; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) {
     200                toMainPath.append(jit.branch32(CCallHelpers::BelowOrEqual, argCountReg, CCallHelpers::TrustedImm32(argIndex)));
     201                jit.emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex);
     202            }
     203
     204            incrementCounter(&jit, VM::RegArgsExtra);
     205            toMainPath.append(jit.jump());
     206        } else
     207            toMainPath.append(jit.branch32(
     208                CCallHelpers::AboveOrEqual, argCountReg, CCallHelpers::TrustedImm32(numParameters)));
     209
     210#if ENABLE(VM_COUNTERS)
     211        continueToRegisterArityFixup.append(jit.jump());
     212#endif
     213
     214        if (numParameters > 0) {
     215            //  There should always be a "this" parameter.
     216            CCallHelpers::Label registerArgumentsNeedArityFixup = jit.label();
     217
     218            for (unsigned argIndex = 1; argIndex < numParameters && argIndex <= maxRegisterArgumentCount; argIndex++)
     219                registerArgumentsEntrypoints[argIndex] = registerArgumentsNeedArityFixup;
     220        }
     221
     222#if ENABLE(VM_COUNTERS)
     223        incrementCounter(&jit, VM::RegArgsArity);
     224#endif
     225
     226        continueToRegisterArityFixup.link(&jit);
     227
     228        jit.spillArgumentRegistersToFrameBeforePrologue(maxRegisterArgumentCount);
     229
     230        continueToArityFixup.link(&jit);
     231
     232        incrementCounter(&jit, VM::ArityFixupRequired);
     233
    138234        jit.emitFunctionPrologue();
    139235        jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
     
    156252        jit.move(GPRInfo::returnValueGPR, GPRInfo::argumentGPR0);
    157253        jit.emitFunctionEpilogue();
    158         mainPathJumps.append(jit.branchTest32(CCallHelpers::Zero, GPRInfo::argumentGPR0));
     254        fillRegistersAndContinueMainPath.append(jit.branchTest32(CCallHelpers::Zero, GPRInfo::argumentGPR0));
    159255        jit.emitFunctionPrologue();
    160256        CCallHelpers::Call callArityFixup = jit.call();
    161257        jit.emitFunctionEpilogue();
    162         mainPathJumps.append(jit.jump());
     258
     259        fillRegistersAndContinueMainPath.append(jit.jump());
     260
     261        fillRegistersAndContinueMainPath.linkTo(stackArgsArityOKEntry, &jit);
     262
     263#if ENABLE(VM_COUNTERS)
     264        CCallHelpers::Label registerEntryNoArity = jit.label();
     265        incrementCounter(&jit, VM::RegArgsNoArity);
     266        toMainPath.append(jit.jump());
     267#endif
    163268
    164269        linkBuffer = std::make_unique<LinkBuffer>(vm, jit, codeBlock, JITCompilationCanFail);
     
    170275        linkBuffer->link(callLookupExceptionHandlerFromCallerFrame, lookupExceptionHandlerFromCallerFrame);
    171276        linkBuffer->link(callArityFixup, FunctionPtr((vm.getCTIStub(arityFixupGenerator)).code().executableAddress()));
    172         linkBuffer->link(mainPathJumps, CodeLocationLabel(bitwise_cast<void*>(state.generatedFunction)));
    173 
    174         state.jitCode->initializeAddressForCall(MacroAssemblerCodePtr(bitwise_cast<void*>(state.generatedFunction)));
     277        linkBuffer->link(toMainPath, CodeLocationLabel(bitwise_cast<void*>(state.generatedFunction)));
     278
     279        state.jitCode->setEntryFor(StackArgsMustCheckArity, linkBuffer->locationOf(stackArgsCheckArityEntry));
     280        state.jitCode->setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(stackArgsArityOKEntry));
     281
     282#if ENABLE(VM_COUNTERS)
     283        MacroAssemblerCodePtr mainEntry = linkBuffer->locationOf(registerEntryNoArity);
     284#else
     285        MacroAssemblerCodePtr mainEntry = MacroAssemblerCodePtr(bitwise_cast<void*>(state.generatedFunction));
     286#endif
     287        state.jitCode->setEntryFor(RegisterArgsArityCheckNotRequired, mainEntry);
     288
     289        if (checkForExtraRegisterArguments.isSet())
     290            state.jitCode->setEntryFor(RegisterArgsPossibleExtraArgs, linkBuffer->locationOf(checkForExtraRegisterArguments));
     291        else
     292            state.jitCode->setEntryFor(RegisterArgsPossibleExtraArgs, mainEntry);
     293                                                                             
     294        state.jitCode->setEntryFor(RegisterArgsMustCheckArity, linkBuffer->locationOf(registerArgsCheckArityEntry));
     295
     296        for (unsigned argCount = 1; argCount <= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argCount++) {
     297            MacroAssemblerCodePtr entry;
     298            if (argCount == numParameters)
     299                entry = mainEntry;
     300            else if (registerArgumentsEntrypoints[argCount].isSet())
     301                entry = linkBuffer->locationOf(registerArgumentsEntrypoints[argCount]);
     302            else
     303                entry = linkBuffer->locationOf(registerArgsCheckArityEntry);
     304            state.jitCode->setEntryFor(JITEntryPoints::registerEntryTypeForArgumentCount(argCount), entry);
     305        }
    175306        break;
    176307    }
     
    182313        // call to the B3-generated code.
    183314        CCallHelpers::Label start = jit.label();
     315
    184316        jit.emitFunctionEpilogue();
     317
     318        // Load argument values into argument registers
     319
     320        // FIXME: Would like to eliminate these loads, but we currently can't jump into
     321        // the B3 compiled code at an arbitrary point from the slow entry where the
     322        // registers are stored to the stack.
     323        jit.emitGetFromCallFrameHeaderBeforePrologue(CallFrameSlot::callee, argumentRegisterForCallee());
     324        jit.emitGetPayloadFromCallFrameHeaderBeforePrologue(CallFrameSlot::argumentCount, argumentRegisterForArgumentCount());
     325
     326        for (unsigned argIndex = 0; argIndex < static_cast<unsigned>(codeBlock->numParameters()) && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++)
     327            jit.emitGetFromCallFrameArgumentBeforePrologue(argIndex, argumentRegisterForFunctionArgument(argIndex));
     328
    185329        CCallHelpers::Jump mainPathJump = jit.jump();
    186330       
     
    192336        linkBuffer->link(mainPathJump, CodeLocationLabel(bitwise_cast<void*>(state.generatedFunction)));
    193337
    194         state.jitCode->initializeAddressForCall(linkBuffer->locationOf(start));
     338        state.jitCode->setEntryFor(RegisterArgsArityCheckNotRequired, linkBuffer->locationOf(start));
    195339        break;
    196340    }
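
The FTLLink.cpp thunks above thread three outcomes for a register-arguments entry when the callee declares fewer parameters than there are argument registers: fall straight into the main path when the counts match, spill the extra register arguments and continue when the caller passed more, or run arity fixup when it passed fewer. A plain-function sketch of that decision (not generated code, and only covering the numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS case):

```cpp
enum class RegisterArgsEntryAction {
    ContinueToMainPath,
    SpillExtraRegisterArgsThenMainPath,
    RunArityFixup
};

// argumentCountIncludingThis is what the caller put in the argument-count register.
RegisterArgsEntryAction actionForRegisterArgsEntry(unsigned argumentCountIncludingThis,
    unsigned numParameters)
{
    if (argumentCountIncludingThis == numParameters)
        return RegisterArgsEntryAction::ContinueToMainPath;
    if (argumentCountIncludingThis < numParameters)
        return RegisterArgsEntryAction::RunArityFixup;
    // Caller passed more arguments than declared parameters: the extra ones that
    // arrived in registers must be spilled to their call-frame slots first.
    return RegisterArgsEntryAction::SpillExtraRegisterArgsThenMainPath;
}
```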
  • trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp

    r209638 r209653  
    197197        m_proc.addFastConstant(m_tagMask->key());
    198198       
     199        // Store out callee and argument count for possible OSR exit.
     200        m_out.store64(m_out.argumentRegister(argumentRegisterForCallee()), addressFor(CallFrameSlot::callee));
     201        m_out.store32(m_out.argumentRegisterInt32(argumentRegisterForArgumentCount()), payloadFor(CallFrameSlot::argumentCount));
     202
    199203        m_out.storePtr(m_out.constIntPtr(codeBlock()), addressFor(CallFrameSlot::codeBlock));
    200204
     
    248252        availabilityMap().clear();
    249253        availabilityMap().m_locals = Operands<Availability>(codeBlock()->numParameters(), 0);
     254
     255        Vector<Node*, 8> argumentNodes;
     256        Vector<LValue, 8> argumentValues;
     257
     258        argumentNodes.resize(codeBlock()->numParameters());
     259        argumentValues.resize(codeBlock()->numParameters());
     260
     261        m_highBlock = m_graph.block(0);
     262
    250263        for (unsigned i = codeBlock()->numParameters(); i--;) {
    251             availabilityMap().m_locals.argument(i) =
    252                 Availability(FlushedAt(FlushedJSValue, virtualRegisterForArgument(i)));
    253         }
    254         m_node = nullptr;
    255         m_origin = NodeOrigin(CodeOrigin(0), CodeOrigin(0), true);
    256         for (unsigned i = codeBlock()->numParameters(); i--;) {
    257             Node* node = m_graph.m_arguments[i];
     264            Node* node = m_graph.m_argumentsForChecking[i];
    258265            VirtualRegister operand = virtualRegisterForArgument(i);
    259266           
    260             LValue jsValue = m_out.load64(addressFor(operand));
    261            
     267            LValue jsValue = nullptr;
     268
    262269            if (node) {
    263                 DFG_ASSERT(m_graph, node, operand == node->stackAccessData()->machineLocal);
     270                if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
     271                    availabilityMap().m_locals.argument(i) = Availability(node);
     272                    jsValue = m_out.argumentRegister(GPRInfo::toArgumentRegister(node->argumentRegisterIndex()));
     273
     274                    setJSValue(node, jsValue);
     275                } else {
     276                    availabilityMap().m_locals.argument(i) =
     277                        Availability(FlushedAt(FlushedJSValue, operand));
     278                    jsValue = m_out.load64(addressFor(virtualRegisterForArgument(i)));
     279                }
     280           
     281                DFG_ASSERT(m_graph, node, node->hasArgumentRegisterIndex() || operand == node->stackAccessData()->machineLocal);
    264282               
    265283                // This is a hack, but it's an effective one. It allows us to do CSE on the
     
    269287                m_loadedArgumentValues.add(node, jsValue);
    270288            }
    271            
     289
     290            argumentNodes[i] = node;
     291            argumentValues[i] = jsValue;
     292        }
     293
     294        m_node = nullptr;
     295        m_origin = NodeOrigin(CodeOrigin(0), CodeOrigin(0), true);
     296        for (unsigned i = codeBlock()->numParameters(); i--;) {
     297            Node* node = argumentNodes[i];
     298           
     299            if (!node)
     300                continue;
     301
     302            LValue jsValue = argumentValues[i];
     303
    272304            switch (m_graph.m_argumentFormats[i]) {
    273305            case FlushedInt32:
     
    813845        case GetArgumentCountIncludingThis:
    814846            compileGetArgumentCountIncludingThis();
     847            break;
     848        case GetArgumentRegister:
     849            compileGetArgumentRegister();
    815850            break;
    816851        case GetScope:
     
    54035438    }
    54045439   
     5440    void compileGetArgumentRegister()
     5441    {
     5442        // We might already have a value for this node.
     5443        if (LValue value = m_loadedArgumentValues.get(m_node)) {
     5444            setJSValue(value);
     5445            return;
     5446        }
     5447        setJSValue(m_out.argumentRegister(GPRInfo::toArgumentRegister(m_node->argumentRegisterIndex())));
     5448    }
     5449   
    54055450    void compileGetScope()
    54065451    {
     
    58155860        Vector<ConstrainedValue> arguments;
    58165861
    5817         // Make sure that the callee goes into GPR0 because that's where the slow path thunks expect the
    5818         // callee to be.
    5819         arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(GPRInfo::regT0)));
     5862        // Make sure that the callee goes into argumentRegisterForCallee() because that's where
     5863        // the slow path thunks expect the callee to be.
     5864        GPRReg calleeReg = argumentRegisterForCallee();
     5865        arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg)));
    58205866
    58215867        auto addArgument = [&] (LValue value, VirtualRegister reg, int offset) {
     
    58255871        };
    58265872
    5827         addArgument(jsCallee, VirtualRegister(CallFrameSlot::callee), 0);
    5828         addArgument(m_out.constInt32(numArgs), VirtualRegister(CallFrameSlot::argumentCount), PayloadOffset);
    5829         for (unsigned i = 0; i < numArgs; ++i)
    5830             addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0);
     5873        ArgumentsLocation argLocation = argumentsLocationFor(numArgs);
     5874        arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg)));
     5875        arguments.append(ConstrainedValue(m_out.constInt32(numArgs), ValueRep::reg(argumentRegisterForArgumentCount())));
     5876
     5877        for (unsigned i = 0; i < numArgs; ++i) {
     5878            if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
     5879                arguments.append(ConstrainedValue(lowJSValue(m_graph.varArgChild(node, 1 + i)), ValueRep::reg(argumentRegisterForFunctionArgument(i))));
     5880            else
     5881                addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0);
     5882        }
    58315883
    58325884        PatchpointValue* patchpoint = m_out.patchpoint(Int64);
     
    58575909                CallLinkInfo* callLinkInfo = jit.codeBlock()->addCallLinkInfo();
    58585910
     5911                incrementCounter(&jit, VM::FTLCaller);
     5912
    58595913                CCallHelpers::DataLabelPtr targetToCheck;
    58605914                CCallHelpers::Jump slowPath = jit.branchPtrWithPatch(
    5861                     CCallHelpers::NotEqual, GPRInfo::regT0, targetToCheck,
     5915                    CCallHelpers::NotEqual, calleeReg, targetToCheck,
    58625916                    CCallHelpers::TrustedImmPtr(0));
    58635917
     
    58675921                slowPath.link(&jit);
    58685922
    5869                 jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
     5923                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0);
    58705924                CCallHelpers::Call slowCall = jit.nearCall();
    58715925                done.link(&jit);
     
    58735927                callLinkInfo->setUpCall(
    58745928                    node->op() == Construct ? CallLinkInfo::Construct : CallLinkInfo::Call,
    5875                     node->origin.semantic, GPRInfo::regT0);
     5929                    argLocation, node->origin.semantic, argumentRegisterForCallee());
    58765930
    58775931                jit.addPtr(
     
    58825936                    [=] (LinkBuffer& linkBuffer) {
    58835937                        MacroAssemblerCodePtr linkCall =
    5884                             linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
     5938                            linkBuffer.vm().getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation());
    58855939                        linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
    58865940
     
    59265980        Vector<ConstrainedValue> arguments;
    59275981       
    5928         arguments.append(ConstrainedValue(jsCallee, ValueRep::SomeRegister));
     5982        // Make sure that the callee goes into argumentRegisterForCallee() because that's where
     5983        // the slow path thunks expect the callee to be.
     5984        GPRReg calleeReg = argumentRegisterForCallee();
     5985        arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg)));
    59295986        if (!isTail) {
    59305987            auto addArgument = [&] (LValue value, VirtualRegister reg, int offset) {
     
    59335990                arguments.append(ConstrainedValue(value, ValueRep::stackArgument(offsetFromSP)));
    59345991            };
    5935            
     5992
     5993            arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg)));
     5994#if ENABLE(CALLER_SPILLS_CALLEE)
    59365995            addArgument(jsCallee, VirtualRegister(CallFrameSlot::callee), 0);
     5996#endif
     5997            arguments.append(ConstrainedValue(m_out.constInt32(numPassedArgs), ValueRep::reg(argumentRegisterForArgumentCount())));
     5998#if ENABLE(CALLER_SPILLS_ARGCOUNT)
    59375999            addArgument(m_out.constInt32(numPassedArgs), VirtualRegister(CallFrameSlot::argumentCount), PayloadOffset);
    5938             for (unsigned i = 0; i < numPassedArgs; ++i)
    5939                 addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0);
    5940             for (unsigned i = numPassedArgs; i < numAllocatedArgs; ++i)
    5941                 addArgument(m_out.constInt64(JSValue::encode(jsUndefined())), virtualRegisterForArgument(i), 0);
     6000#endif
     6001           
     6002            for (unsigned i = 0; i < numPassedArgs; ++i) {
     6003                if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
     6004                    arguments.append(ConstrainedValue(lowJSValue(m_graph.varArgChild(node, 1 + i)), ValueRep::reg(argumentRegisterForFunctionArgument(i))));
     6005                else
     6006                    addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0);
     6007            }
     6008            for (unsigned i = numPassedArgs; i < numAllocatedArgs; ++i) {
     6009                if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
     6010                    arguments.append(ConstrainedValue(m_out.constInt64(JSValue::encode(jsUndefined())), ValueRep::reg(argumentRegisterForFunctionArgument(i))));
     6011                else
     6012                    addArgument(m_out.constInt64(JSValue::encode(jsUndefined())), virtualRegisterForArgument(i), 0);
     6013            }
    59426014        } else {
    59436015            for (unsigned i = 0; i < numPassedArgs; ++i)
     
    59816053                   
    59826054                    RegisterSet toSave = params.unavailableRegisters();
     6055                    shuffleData.argumentsInRegisters = true;
    59836056                    shuffleData.callee = ValueRecovery::inGPR(calleeGPR, DataFormatCell);
    59846057                    toSave.set(calleeGPR);
     
    59996072                    CCallHelpers::PatchableJump patchableJump = jit.patchableJump();
    60006073                    CCallHelpers::Label mainPath = jit.label();
    6001                    
     6074
     6075                    incrementCounter(&jit, VM::FTLCaller);
     6076                    incrementCounter(&jit, VM::TailCall);
     6077                    incrementCounter(&jit, VM::DirectCall);
     6078
    60026079                    jit.store32(
    60036080                        CCallHelpers::TrustedImm32(callSiteIndex.bits()),
     
    60206097                   
    60216098                    callLinkInfo->setUpCall(
    6022                         CallLinkInfo::DirectTailCall, node->origin.semantic, InvalidGPRReg);
     6099                        CallLinkInfo::DirectTailCall, argumentsLocationFor(numPassedArgs), node->origin.semantic, InvalidGPRReg);
    60236100                    callLinkInfo->setExecutableDuringCompilation(executable);
    60246101                    if (numAllocatedArgs > numPassedArgs)
     
    60436120                CCallHelpers::Label mainPath = jit.label();
    60446121
     6122                incrementCounter(&jit, VM::FTLCaller);
     6123                incrementCounter(&jit, VM::DirectCall);
     6124
    60456125                jit.store32(
    60466126                    CCallHelpers::TrustedImm32(callSiteIndex.bits()),
     
    60546134                callLinkInfo->setUpCall(
    60556135                    isConstruct ? CallLinkInfo::DirectConstruct : CallLinkInfo::DirectCall,
    6056                     node->origin.semantic, InvalidGPRReg);
     6136                    argumentsLocationFor(numPassedArgs), node->origin.semantic, InvalidGPRReg);
    60576137                callLinkInfo->setExecutableDuringCompilation(executable);
    60586138                if (numAllocatedArgs > numPassedArgs)
     
    60656145                        CCallHelpers::Label slowPath = jit.label();
    60666146                        if (isX86())
    6067                             jit.pop(CCallHelpers::selectScratchGPR(calleeGPR));
    6068                        
    6069                         callOperation(
    6070                             *state, params.unavailableRegisters(), jit,
    6071                             node->origin.semantic, exceptions.get(), operationLinkDirectCall,
    6072                             InvalidGPRReg, CCallHelpers::TrustedImmPtr(callLinkInfo),
    6073                             calleeGPR).call();
     6147                            jit.pop(GPRInfo::nonArgGPR0);
     6148
     6149                        jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); // Link info needs to be in nonArgGPR0
     6150                        CCallHelpers::Call slowCall = jit.nearCall();
     6151                        exceptions->append(jit.emitExceptionCheck(AssemblyHelpers::NormalExceptionCheck, AssemblyHelpers::FarJumpWidth));
    60746152                        jit.jump().linkTo(mainPath, &jit);
    60756153                       
     
    60806158                               
    60816159                                linkBuffer.link(call, slowPathLocation);
     6160                                MacroAssemblerCodePtr linkCall =
     6161                                    linkBuffer.vm().getJITCallThunkEntryStub(linkDirectCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation());
     6162                                linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
    60826163                               
    60836164                                callLinkInfo->setCallLocations(
     
    61116192        Vector<ConstrainedValue> arguments;
    61126193
    6113         arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(GPRInfo::regT0)));
     6194        GPRReg calleeReg = argumentRegisterForCallee();
     6195        arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg)));
    61146196
    61156197        for (unsigned i = 0; i < numArgs; ++i) {
     
    61456227                CallSiteIndex callSiteIndex = state->jitCode->common.addUniqueCallSiteIndex(codeOrigin);
    61466228
     6229                incrementCounter(&jit, VM::FTLCaller);
     6230                incrementCounter(&jit, VM::TailCall);
     6231
    61476232                CallFrameShuffleData shuffleData;
     6233                shuffleData.argumentsInRegisters = true;
    61486234                shuffleData.numLocals = state->jitCode->common.frameRegisterCount;
    6149                 shuffleData.callee = ValueRecovery::inGPR(GPRInfo::regT0, DataFormatJS);
     6235                shuffleData.callee = ValueRecovery::inGPR(calleeReg, DataFormatJS);
    61506236
    61516237                for (unsigned i = 0; i < numArgs; ++i)
     
    61586244                CCallHelpers::DataLabelPtr targetToCheck;
    61596245                CCallHelpers::Jump slowPath = jit.branchPtrWithPatch(
    6160                     CCallHelpers::NotEqual, GPRInfo::regT0, targetToCheck,
     6246                    CCallHelpers::NotEqual, calleeReg, targetToCheck,
    61616247                    CCallHelpers::TrustedImmPtr(0));
    61626248
     
    61766262
    61776263                CallFrameShuffler slowPathShuffler(jit, shuffleData);
    6178                 slowPathShuffler.setCalleeJSValueRegs(JSValueRegs(GPRInfo::regT0));
    61796264                slowPathShuffler.prepareForSlowPath();
    61806265
    6181                 jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
     6266                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0);
    61826267                CCallHelpers::Call slowCall = jit.nearCall();
    61836268
    61846269                jit.abortWithReason(JITDidReturnFromTailCall);
    61856270
    6186                 callLinkInfo->setUpCall(CallLinkInfo::TailCall, codeOrigin, GPRInfo::regT0);
     6271                callLinkInfo->setUpCall(CallLinkInfo::TailCall, argumentsLocationFor(numArgs), codeOrigin, calleeReg);
    61876272
    61886273                jit.addLinkTask(
    61896274                    [=] (LinkBuffer& linkBuffer) {
    61906275                        MacroAssemblerCodePtr linkCall =
    6191                             linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
     6276                            linkBuffer.vm().getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation());
    61926277                        linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
    61936278
     
    62796364
    62806365                CallLinkInfo* callLinkInfo = jit.codeBlock()->addCallLinkInfo();
     6366                ArgumentsLocation argumentsLocation = StackArgs;
    62816367
    62826368                RegisterSet usedRegisters = RegisterSet::allRegisters();
     
    64286514                    jit.emitRestoreCalleeSaves();
    64296515                ASSERT(!usedRegisters.get(GPRInfo::regT2));
    6430                 jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
     6516                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0);
    64316517                CCallHelpers::Call slowCall = jit.nearCall();
    64326518               
     
    64366522                    done.link(&jit);
    64376523               
    6438                 callLinkInfo->setUpCall(callType, node->origin.semantic, GPRInfo::regT0);
     6524                callLinkInfo->setUpCall(callType, argumentsLocation, node->origin.semantic, GPRInfo::regT0);
    64396525
    64406526                jit.addPtr(
     
    64456531                    [=] (LinkBuffer& linkBuffer) {
    64466532                        MacroAssemblerCodePtr linkCall =
    6447                             linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
     6533                            linkBuffer.vm().getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(StackArgs);
    64486534                        linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
    64496535                       
     
    65466632                exceptionHandle->scheduleExitCreationForUnwind(params, callSiteIndex);
    65476633
     6634                incrementCounter(&jit, VM::FTLCaller);
     6635                incrementCounter(&jit, VM::CallVarargs);
     6636               
    65486637                jit.store32(
    65496638                    CCallHelpers::TrustedImm32(callSiteIndex.bits()),
     
    65516640
    65526641                CallLinkInfo* callLinkInfo = jit.codeBlock()->addCallLinkInfo();
     6642                ArgumentsLocation argumentsLocation = StackArgs;
    65536643                CallVarargsData* data = node->callVarargsData();
    65546644
     
    67116801                if (isTailCall)
    67126802                    jit.emitRestoreCalleeSaves();
    6713                 jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
     6803                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0);
    67146804                CCallHelpers::Call slowCall = jit.nearCall();
    67156805               
     
    67196809                    done.link(&jit);
    67206810               
    6721                 callLinkInfo->setUpCall(callType, node->origin.semantic, GPRInfo::regT0);
     6811                callLinkInfo->setUpCall(callType, argumentsLocation, node->origin.semantic, GPRInfo::regT0);
    67226812               
    67236813                jit.addPtr(
     
    67286818                    [=] (LinkBuffer& linkBuffer) {
    67296819                        MacroAssemblerCodePtr linkCall =
    6730                             linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
     6820                            linkBuffer.vm().getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(StackArgs);
    67316821                        linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
    67326822                       
     
    67976887               
    67986888                exceptionHandle->scheduleExitCreationForUnwind(params, callSiteIndex);
    6799                
     6889
     6890                incrementCounter(&jit, VM::FTLCaller);
     6891                incrementCounter(&jit, VM::CallEval);
     6892
    68006893                jit.store32(
    68016894                    CCallHelpers::TrustedImm32(callSiteIndex.bits()),
     
    68036896               
    68046897                CallLinkInfo* callLinkInfo = jit.codeBlock()->addCallLinkInfo();
    6805                 callLinkInfo->setUpCall(CallLinkInfo::Call, node->origin.semantic, GPRInfo::regT0);
     6898                callLinkInfo->setUpCall(CallLinkInfo::Call, StackArgs, node->origin.semantic, GPRInfo::regT0);
    68066899               
    68076900                jit.addPtr(CCallHelpers::TrustedImm32(-static_cast<ptrdiff_t>(sizeof(CallerFrameAndPC))), CCallHelpers::stackPointerRegister, GPRInfo::regT1);
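
The FTL call sites above now thread an ArgumentsLocation through every setUpCall(), keep the callee in the platform's first JS argument register rather than regT0, and park the CallLinkInfo* in nonArgGPR0 so it cannot clobber an argument register. A minimal sketch of the location decision those argumentsLocationFor(numPassedArgs) calls express follows; the RegisterArgs1..RegisterArgs4 names, the register count of 4, and the helper body are all simplified assumptions, not the WebKit definitions.

    // Illustrative sketch only: stand-ins for the ArgumentsLocation plumbing above.
    #include <algorithm>

    enum ArgumentsLocation {
        StackArgs,
        RegisterArgs1, RegisterArgs2, RegisterArgs3, RegisterArgs4 // hypothetical names
    };

    constexpr unsigned jsFunctionArgumentRegisters = 4; // assumption: x86-64 SysV (6 ABI registers minus callee and argument count)

    // Everything on the stack when the platform has no JS argument registers,
    // otherwise a value describing how many leading arguments (starting with
    // 'this') travel in registers.
    static ArgumentsLocation argumentsLocationFor(unsigned numArgsIncludingThis)
    {
        unsigned inRegisters = std::min(numArgsIncludingThis, jsFunctionArgumentRegisters);
        return static_cast<ArgumentsLocation>(static_cast<unsigned>(StackArgs) + inRegisters);
    }

    int main()
    {
        return argumentsLocationFor(3) == RegisterArgs3 ? 0 : 1; // 'this' + 2 arguments -> three register arguments
    }
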
  • trunk/Source/JavaScriptCore/ftl/FTLOSREntry.cpp

    r203081 r209653  
    7272        dataLog("    Values at entry: ", values, "\n");
    7373   
    74     for (int argument = values.numberOfArguments(); argument--;) {
     74    for (unsigned argument = values.numberOfArguments(); argument--;) {
     75#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     76        if (argument < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
     77            break;
     78#endif
    7579        JSValue valueOnStack = exec->r(virtualRegisterForArgument(argument).offset()).asanUnsafeJSValue();
    7680        JSValue reconstructedValue = values.argument(argument);
     
    100104   
    101105    exec->setCodeBlock(entryCodeBlock);
    102    
    103     void* result = entryCode->addressForCall(ArityCheckNotRequired).executableAddress();
     106
     107#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     108    void* result = entryCode->addressForCall(RegisterArgsArityCheckNotRequired).executableAddress();
     109#else
     110    void* result = entryCode->addressForCall(StackArgsArityCheckNotRequired).executableAddress();
     111#endif
    104112    if (Options::verboseOSR())
    105113        dataLog("    Entry will succeed, going to address", RawPointer(result), "\n");
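
The OSR-entry hunk above stops cross-checking arguments that no longer have a meaningful stack slot: on platforms with JS argument registers, the verification loop bails out once it reaches the register-passed range, and entry happens through the register-args, arity-check-free address. A toy model of that guard, with plain vectors standing in for the entry values and the frame, and the register count of 4 assumed:

    #include <vector>

    constexpr unsigned jsFunctionArgumentRegisters = 4; // assumption: x86-64 SysV / ARM64 style

    bool stackArgumentsMatch(const std::vector<long>& valuesAtEntry, const std::vector<long>& valuesOnStack)
    {
        for (unsigned argument = valuesAtEntry.size(); argument--;) {
            if (argument < jsFunctionArgumentRegisters)
                break; // passed in registers; nothing on the stack to compare
            if (valuesOnStack[argument] != valuesAtEntry[argument])
                return false;
        }
        return true;
    }

    int main()
    {
        std::vector<long> entry { 1, 2, 3, 4, 5, 6 }, stack { 0, 0, 0, 0, 5, 6 };
        return stackArgumentsMatch(entry, stack) ? 0 : 1; // only indices 4 and 5 are checked
    }
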
  • trunk/Source/JavaScriptCore/ftl/FTLOutput.cpp

    r208720 r209653  
    9090}
    9191
     92LValue Output::argumentRegister(Reg reg)
     93{
     94    return m_block->appendNew<ArgumentRegValue>(m_proc, origin(), reg);
     95}
     96
     97LValue Output::argumentRegisterInt32(Reg reg)
     98{
     99    return m_block->appendNew<ArgumentRegValue>(m_proc, origin(), reg, Int32);
     100}
     101
    92102LValue Output::framePointer()
    93103{
  • trunk/Source/JavaScriptCore/ftl/FTLOutput.h

    r208720 r209653  
    9999    B3::Origin origin() { return B3::Origin(m_origin); }
    100100
     101    LValue argumentRegister(Reg reg);
     102    LValue argumentRegisterInt32(Reg reg);
    101103    LValue framePointer();
    102104
  • trunk/Source/JavaScriptCore/interpreter/ShadowChicken.cpp

    r209229 r209653  
    285285            bool isTailDeleted = false;
    286286            JSScope* scope = nullptr;
     287            JSValue thisValue = jsUndefined();
    287288            CodeBlock* codeBlock = callFrame->codeBlock();
    288             if (codeBlock && codeBlock->wasCompiledWithDebuggingOpcodes() && codeBlock->scopeRegister().isValid()) {
    289                 scope = callFrame->scope(codeBlock->scopeRegister().offset());
    290                 RELEASE_ASSERT(scope->inherits(JSScope::info()));
     289            if (codeBlock && codeBlock->wasCompiledWithDebuggingOpcodes()) {
     290                if (codeBlock->scopeRegister().isValid()) {
     291                    scope = callFrame->scope(codeBlock->scopeRegister().offset());
     292                    RELEASE_ASSERT(scope->inherits(JSScope::info()));
     293                }
     294                thisValue = callFrame->thisValue();
    291295            } else if (foundFrame) {
    292                 scope = m_log[indexInLog].scope;
    293                 if (scope)
    294                     RELEASE_ASSERT(scope->inherits(JSScope::info()));
    295             }
    296             toPush.append(Frame(visitor->callee(), callFrame, isTailDeleted, callFrame->thisValue(), scope, codeBlock, callFrame->callSiteIndex()));
     296                if (!scope) {
     297                    scope = m_log[indexInLog].scope;
     298                    if (scope)
     299                        RELEASE_ASSERT(scope->inherits(JSScope::info()));
     300                }
     301                if (thisValue.isUndefined())
     302                    thisValue = m_log[indexInLog].thisValue;
     303            }
     304            toPush.append(Frame(visitor->callee(), callFrame, isTailDeleted, thisValue, scope, codeBlock, callFrame->callSiteIndex()));
    297305
    298306            if (indexInLog < logCursorIndex
  • trunk/Source/JavaScriptCore/jit/AssemblyHelpers.cpp

    r208720 r209653  
    617617void AssemblyHelpers::emitDumbVirtualCall(CallLinkInfo* info)
    618618{
    619     move(TrustedImmPtr(info), GPRInfo::regT2);
     619    move(TrustedImmPtr(info), GPRInfo::nonArgGPR0);
    620620    Call call = nearCall();
    621621    addLinkTask(
    622622        [=] (LinkBuffer& linkBuffer) {
    623             MacroAssemblerCodeRef virtualThunk = virtualThunkFor(&linkBuffer.vm(), *info);
    624             info->setSlowStub(createJITStubRoutine(virtualThunk, linkBuffer.vm(), nullptr, true));
    625             linkBuffer.link(call, CodeLocationLabel(virtualThunk.code()));
     623            JITJSCallThunkEntryPointsWithRef virtualThunk = virtualThunkFor(&linkBuffer.vm(), *info);
     624            info->setSlowStub(createJITStubRoutine(virtualThunk.codeRef(), linkBuffer.vm(), nullptr, true));
     625            linkBuffer.link(call, CodeLocationLabel(virtualThunk.entryFor(StackArgs)));
    626626        });
    627627}
  • trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h

    r209594 r209653  
    415415    }
    416416
     417    enum SpillRegisterType { SpillAll, SpillExactly };
     418
     419    void spillArgumentRegistersToFrameBeforePrologue(unsigned minimumArgsToSpill = 0, SpillRegisterType spillType = SpillAll)
     420    {
     421#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     422        JumpList doneStoringArgs;
     423
     424        emitPutToCallFrameHeaderBeforePrologue(argumentRegisterForCallee(), CallFrameSlot::callee);
     425        GPRReg argCountReg = argumentRegisterForArgumentCount();
     426        emitPutToCallFrameHeaderBeforePrologue(argCountReg, CallFrameSlot::argumentCount);
     427
     428        unsigned argIndex = 0;
     429        // Always spill "this"
     430        minimumArgsToSpill = std::max(minimumArgsToSpill, 1U);
     431
     432        for (; argIndex < minimumArgsToSpill && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++)
     433            emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex);
     434
     435        if (spillType == SpillAll) {
     436            // Spill extra args passed to function
     437            for (; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) {
     438                doneStoringArgs.append(branch32(MacroAssembler::BelowOrEqual, argCountReg, MacroAssembler::TrustedImm32(argIndex)));
     439                emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex);
     440            }
     441        }
     442
     443        doneStoringArgs.link(this);
     444#else
     445        UNUSED_PARAM(minimumArgsToSpill);
     446        UNUSED_PARAM(spillType);
     447#endif
     448    }
     449
     450    void spillArgumentRegistersToFrame(unsigned minimumArgsToSpill = 0, SpillRegisterType spillType = SpillAll)
     451    {
     452#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     453        JumpList doneStoringArgs;
     454
     455        emitPutToCallFrameHeader(argumentRegisterForCallee(), CallFrameSlot::callee);
     456        GPRReg argCountReg = argumentRegisterForArgumentCount();
     457        emitPutToCallFrameHeader(argCountReg, CallFrameSlot::argumentCount);
     458       
     459        unsigned argIndex = 0;
     460        // Always spill "this"
     461        minimumArgsToSpill = std::max(minimumArgsToSpill, 1U);
     462       
     463        for (; argIndex < minimumArgsToSpill && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++)
     464            emitPutArgumentToCallFrame(argumentRegisterForFunctionArgument(argIndex), argIndex);
     465       
     466        if (spillType == SpillAll) {
     467            // Spill extra args passed to function
     468            for (; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) {
     469                doneStoringArgs.append(branch32(MacroAssembler::BelowOrEqual, argCountReg, MacroAssembler::TrustedImm32(argIndex)));
     470                emitPutArgumentToCallFrame(argumentRegisterForFunctionArgument(argIndex), argIndex);
     471            }
     472        }
     473       
     474        doneStoringArgs.link(this);
     475#else
     476        UNUSED_PARAM(minimumArgsToSpill);
     477        UNUSED_PARAM(spillType);
     478#endif
     479    }
     480   
     481    void fillArgumentRegistersFromFrameBeforePrologue()
     482    {
     483#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     484        JumpList doneLoadingArgs;
     485
     486        emitGetFromCallFrameHeaderBeforePrologue(CallFrameSlot::callee, argumentRegisterForCallee());
     487        GPRReg argCountReg = argumentRegisterForArgumentCount();
     488        emitGetPayloadFromCallFrameHeaderBeforePrologue(CallFrameSlot::argumentCount, argCountReg);
     489       
     490        for (unsigned argIndex = 0; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) {
     491            if (argIndex) // Always load "this"
     492                doneLoadingArgs.append(branch32(MacroAssembler::BelowOrEqual, argCountReg, MacroAssembler::TrustedImm32(argIndex)));
     493            emitGetFromCallFrameArgumentBeforePrologue(argIndex, argumentRegisterForFunctionArgument(argIndex));
     494        }
     495       
     496        doneLoadingArgs.link(this);
     497#endif
     498    }
     499
    417500#if CPU(X86_64) || CPU(X86)
    418501    static size_t prologueStackPointerDelta()
     
    624707    {
    625708        storePtr(from, Address(stackPointerRegister, entry * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta()));
     709    }
     710
     711    void emitPutArgumentToCallFrameBeforePrologue(GPRReg from, unsigned argument)
     712    {
     713        storePtr(from, Address(stackPointerRegister, (CallFrameSlot::thisArgument + argument) * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta()));
     714    }
     715
     716    void emitPutArgumentToCallFrame(GPRReg from, unsigned argument)
     717    {
     718        emitPutToCallFrameHeader(from, CallFrameSlot::thisArgument + argument);
     719    }
     720
     721    void emitGetFromCallFrameHeaderBeforePrologue(const int entry, GPRReg to)
     722    {
     723        loadPtr(Address(stackPointerRegister, entry * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta()), to);
     724    }
     725   
     726    void emitGetFromCallFrameArgumentBeforePrologue(unsigned argument, GPRReg to)
     727    {
     728        loadPtr(Address(stackPointerRegister, (CallFrameSlot::thisArgument + argument) * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta()), to);
     729    }
     730   
     731    void emitGetPayloadFromCallFrameHeaderBeforePrologue(const int entry, GPRReg to)
     732    {
     733        load32(Address(stackPointerRegister, entry * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta() + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)), to);
    626734    }
    627735#else
     
    16611769    void wangsInt64Hash(GPRReg inputAndResult, GPRReg scratch);
    16621770#endif
    1663    
     1771
     1772#if ENABLE(VM_COUNTERS)
     1773    void incrementCounter(VM::VMCounterType counterType)
     1774    {
     1775        addPtr(TrustedImm32(1), AbsoluteAddress(vm()->addressOfCounter(counterType)));
     1776    }
     1777#endif
     1778
    16641779protected:
    16651780    VM* m_vm;
     
    16701785};
    16711786
     1787#if ENABLE(VM_COUNTERS)
     1788#define incrementCounter(jit, counterType) (jit)->incrementCounter(counterType)
     1789#else
     1790#define incrementCounter(jit, counterType) ((void)0)
     1791#endif
     1792
    16721793} // namespace JSC
    16731794
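
The spill helpers added to AssemblyHelpers.h above always store the callee, the argument count, and "this", then store the remaining register arguments only while the dynamic argument count says they were actually passed. A plain-C++ model of that logic follows, with arrays in place of machine registers and call-frame slots; the Frame layout, the register count of 4, and the loop in place of emitted branches are simplifying assumptions.

    #include <algorithm>
    #include <array>

    constexpr unsigned jsArgumentRegisters = 4; // assumption: ARM64 / x86-64 SysV style

    struct Frame { long callee; long argumentCount; std::array<long, 8> args; };

    void spillArgumentRegistersToFrame(Frame& frame, long calleeReg, long argCountReg,
                                       const std::array<long, jsArgumentRegisters>& argRegs,
                                       unsigned minimumArgsToSpill, bool spillAll)
    {
        frame.callee = calleeReg;
        frame.argumentCount = argCountReg;

        // "this" (argument 0) is always spilled.
        minimumArgsToSpill = std::max(minimumArgsToSpill, 1u);

        unsigned argIndex = 0;
        for (; argIndex < minimumArgsToSpill && argIndex < jsArgumentRegisters; argIndex++)
            frame.args[argIndex] = argRegs[argIndex];

        if (spillAll) {
            // Extra register arguments are stored only while the argument count
            // covers them (the BelowOrEqual branches in the real helper).
            for (; argIndex < jsArgumentRegisters; argIndex++) {
                if (static_cast<unsigned>(argCountReg) <= argIndex)
                    break;
                frame.args[argIndex] = argRegs[argIndex];
            }
        }
    }

    int main()
    {
        Frame frame {};
        spillArgumentRegistersToFrame(frame, /*callee*/ 100, /*argumentCount*/ 2, { 10, 11, 12, 13 }, /*minimumArgsToSpill*/ 0, /*spillAll*/ true);
        return frame.args[0] == 10 && frame.args[1] == 11 && frame.args[2] == 0 ? 0 : 1;
    }
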
  • trunk/Source/JavaScriptCore/jit/CachedRecovery.cpp

    r189999 r209653  
    3030
    3131namespace JSC {
     32
     33void CachedRecovery::addTargetJSValueRegs(JSValueRegs jsValueRegs)
     34{
     35    ASSERT(m_wantedFPR == InvalidFPRReg);
     36    size_t existing = m_gprTargets.find(jsValueRegs);
     37    if (existing == WTF::notFound) {
     38#if USE(JSVALUE64)
     39        if (m_gprTargets.size() > 0 && m_recovery.isSet() && m_recovery.isInGPR()) {
     40            // If we are recovering to the same GPR, make that GPR the first target.
     41            GPRReg sourceGPR = m_recovery.gpr();
     42            if (jsValueRegs.gpr() == sourceGPR) {
     43                // Append the current first GPR below.
     44                jsValueRegs = JSValueRegs(m_gprTargets[0].gpr());
     45                m_gprTargets[0] = JSValueRegs(sourceGPR);
     46            }
     47        }
     48#endif
     49        m_gprTargets.append(jsValueRegs);
     50    }
     51}
    3252
    3353// We prefer loading doubles and undetermined JSValues into FPRs
  • trunk/Source/JavaScriptCore/jit/CachedRecovery.h

    r206525 r209653  
    5151
    5252    const Vector<VirtualRegister, 1>& targets() const { return m_targets; }
     53    const Vector<JSValueRegs, 1>& gprTargets() const { return m_gprTargets; }
    5354
    5455    void addTarget(VirtualRegister reg)
     
    6970    }
    7071
    71     void setWantedJSValueRegs(JSValueRegs jsValueRegs)
    72     {
    73         ASSERT(m_wantedFPR == InvalidFPRReg);
    74         m_wantedJSValueRegs = jsValueRegs;
    75     }
     72    void addTargetJSValueRegs(JSValueRegs);
    7673
    7774    void setWantedFPR(FPRReg fpr)
    7875    {
    79         ASSERT(!m_wantedJSValueRegs);
     76        ASSERT(m_gprTargets.isEmpty());
    8077        m_wantedFPR = fpr;
    8178    }
     
    120117    void setRecovery(ValueRecovery recovery) { m_recovery = recovery; }
    121118
    122     JSValueRegs wantedJSValueRegs() const { return m_wantedJSValueRegs; }
     119    JSValueRegs wantedJSValueRegs() const
     120    {
     121        if (m_gprTargets.isEmpty())
     122            return JSValueRegs();
     123
     124        return m_gprTargets[0];
     125    }
    123126
    124127    FPRReg wantedFPR() const { return m_wantedFPR; }
    125128private:
    126129    ValueRecovery m_recovery;
    127     JSValueRegs m_wantedJSValueRegs;
    128130    FPRReg m_wantedFPR { InvalidFPRReg };
    129131    Vector<VirtualRegister, 1> m_targets;
     132    Vector<JSValueRegs, 1> m_gprTargets;
    130133};
    131134
  • trunk/Source/JavaScriptCore/jit/CallFrameShuffleData.h

    r206525 r209653  
    4040    Vector<ValueRecovery> args;
    4141#if USE(JSVALUE64)
     42    bool argumentsInRegisters { false };
    4243    RegisterMap<ValueRecovery> registers;
    4344    GPRReg tagTypeNumber { InvalidGPRReg };
  • trunk/Source/JavaScriptCore/jit/CallFrameShuffler.cpp

    r203006 r209653  
    4343    , m_alignedNewFrameSize(CallFrame::headerSizeInRegisters
    4444        + roundArgumentCountToAlignFrame(data.args.size()))
     45#if USE(JSVALUE64)
     46    , m_argumentsInRegisters(data.argumentsInRegisters)
     47#endif
    4548    , m_frameDelta(m_alignedNewFrameSize - m_alignedOldFrameSize)
    4649    , m_lockedRegisters(RegisterSet::allRegisters())
     
    5558
    5659    ASSERT(!data.callee.isInJSStack() || data.callee.virtualRegister().isLocal());
    57     addNew(VirtualRegister(CallFrameSlot::callee), data.callee);
    58 
     60#if USE(JSVALUE64)
     61    if (data.argumentsInRegisters)
     62        addNew(JSValueRegs(argumentRegisterForCallee()), data.callee);
     63    else
     64#endif
     65        addNew(VirtualRegister(CallFrameSlot::callee), data.callee);
     66   
    5967    for (size_t i = 0; i < data.args.size(); ++i) {
    6068        ASSERT(!data.args[i].isInJSStack() || data.args[i].virtualRegister().isLocal());
    61         addNew(virtualRegisterForArgument(i), data.args[i]);
     69#if USE(JSVALUE64)
     70        if (data.argumentsInRegisters && i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
     71            addNew(JSValueRegs(argumentRegisterForFunctionArgument(i)), data.args[i]);
     72        else
     73#endif
     74            addNew(virtualRegisterForArgument(i), data.args[i]);
    6275    }
    6376
     
    186199        }
    187200#else
    188         if (newCachedRecovery)
     201        if (newCachedRecovery) {
    189202            out.print("         ", reg, " <- ", newCachedRecovery->recovery());
     203            if (newCachedRecovery->gprTargets().size() > 1) {
     204                for (size_t i = 1; i < newCachedRecovery->gprTargets().size(); i++)
     205                    out.print(", ", newCachedRecovery->gprTargets()[i].gpr(), " <- ", newCachedRecovery->recovery());
     206            }
     207        }
    190208#endif
    191209        out.print("\n");
     
    497515        || cachedRecovery.recovery().isConstant());
    498516
    499     if (verbose)
     517    if (verbose && cachedRecovery.targets().size())
    500518        dataLog("   * Storing ", cachedRecovery.recovery());
    501519    for (size_t i = 0; i < cachedRecovery.targets().size(); ++i) {
     
    506524        emitStore(cachedRecovery, addressForNew(target));
    507525        setNew(target, nullptr);
    508     }
    509     if (verbose)
    510         dataLog("\n");
     526        if (verbose)
     527            dataLog("\n");
     528    }
    511529    cachedRecovery.clearTargets();
    512530    if (!cachedRecovery.wantedJSValueRegs() && cachedRecovery.wantedFPR() == InvalidFPRReg)
     
    607625    ASSERT(!isUndecided());
    608626
    609     updateDangerFrontier();
     627    initDangerFrontier();
    610628
    611629    // First, we try to store any value that goes above the danger
     
    703721    }
    704722
    705 #if USE(JSVALUE64)
    706     if (m_tagTypeNumber != InvalidGPRReg && m_newRegisters[m_tagTypeNumber])
    707         releaseGPR(m_tagTypeNumber);
    708 #endif
    709 
    710723    // Handle 2) by loading all registers. We don't have to do any
    711724    // writes, since they have been taken care of above.
     725    // Note that we need m_tagTypeNumber to remain locked to box wanted registers.
    712726    if (verbose)
    713727        dataLog("  Loading wanted registers into registers\n");
     
    743757    // We need to handle 4) first because it implies releasing
    744758    // m_newFrameBase, which could be a wanted register.
     759    // Note that we delay setting the argument count register as it needs to be released in step 3.
    745760    if (verbose)
    746761        dataLog("   * Storing the argument count into ", VirtualRegister { CallFrameSlot::argumentCount }, "\n");
    747     m_jit.store32(MacroAssembler::TrustedImm32(0),
    748         addressForNew(VirtualRegister { CallFrameSlot::argumentCount }).withOffset(TagOffset));
    749     m_jit.store32(MacroAssembler::TrustedImm32(argCount()),
    750         addressForNew(VirtualRegister { CallFrameSlot::argumentCount }).withOffset(PayloadOffset));
     762#if USE(JSVALUE64)
     763    if (!m_argumentsInRegisters) {
     764#endif
     765        m_jit.store32(MacroAssembler::TrustedImm32(0),
     766            addressForNew(VirtualRegister { CallFrameSlot::argumentCount }).withOffset(TagOffset));
     767        m_jit.store32(MacroAssembler::TrustedImm32(argCount()),
     768            addressForNew(VirtualRegister { CallFrameSlot::argumentCount }).withOffset(PayloadOffset));
     769#if USE(JSVALUE64)
     770    }
     771#endif
    751772
    752773    if (!isSlowPath()) {
     
    768789        emitDisplace(*cachedRecovery);
    769790    }
     791
     792#if USE(JSVALUE64)
     793    // For recoveries with multiple register targets, copy the contents of the first target to the
     794    // remaining targets.
     795    for (Reg reg = Reg::first(); reg <= Reg::last(); reg = reg.next()) {
     796        CachedRecovery* cachedRecovery { m_newRegisters[reg] };
     797        if (!cachedRecovery || cachedRecovery->gprTargets().size() < 2)
     798            continue;
     799
     800        GPRReg sourceGPR = cachedRecovery->gprTargets()[0].gpr();
     801        for (size_t i = 1; i < cachedRecovery->gprTargets().size(); i++)
     802            m_jit.move(sourceGPR, cachedRecovery->gprTargets()[i].gpr());
     803    }
     804
     805    if (m_argumentsInRegisters)
     806        m_jit.move(MacroAssembler::TrustedImm32(argCount()), argumentRegisterForArgumentCount());
     807#endif
    770808}
    771809
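
The shuffler changes above let one recovery have several register targets: the shuffler materializes the value into the first target, and a fix-up pass afterwards copies it into the remaining targets (with the argument count finally moved into its own register when arguments are passed in registers). A toy version of that fix-up pass, using an integer-indexed register file and a stripped-down Recovery as stand-ins:

    #include <vector>

    struct Recovery { std::vector<int> gprTargets; }; // indices into a toy register file

    void copySecondaryTargets(std::vector<long>& registerFile, const std::vector<Recovery>& recoveries)
    {
        for (const Recovery& recovery : recoveries) {
            if (recovery.gprTargets.size() < 2)
                continue;
            long value = registerFile[recovery.gprTargets[0]]; // first target was filled by the shuffler
            for (size_t i = 1; i < recovery.gprTargets.size(); i++)
                registerFile[recovery.gprTargets[i]] = value;
        }
    }

    int main()
    {
        std::vector<long> registers { 42, 0, 0, 7 };
        Recovery r;
        r.gprTargets = { 0, 1, 2 }; // value in register 0 also wanted in registers 1 and 2
        std::vector<Recovery> recoveries { r };
        copySecondaryTargets(registers, recoveries);
        return registers[1] == 42 && registers[2] == 42 ? 0 : 1;
    }
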
  • trunk/Source/JavaScriptCore/jit/CallFrameShuffler.h

    r206525 r209653  
    9797    // arguments/callee/callee-save registers are by taking into
    9898    // account any spilling that acquireGPR() could have done.
    99     CallFrameShuffleData snapshot() const
     99    CallFrameShuffleData snapshot(ArgumentsLocation argumentsLocation) const
    100100    {
    101101        ASSERT(isUndecided());
     
    103103        CallFrameShuffleData data;
    104104        data.numLocals = numLocals();
    105         data.callee = getNew(VirtualRegister { CallFrameSlot::callee })->recovery();
     105#if USE(JSVALUE64)
     106        data.argumentsInRegisters = argumentsLocation != StackArgs;
     107#endif
     108        if (argumentsLocation == StackArgs)
     109            data.callee = getNew(VirtualRegister { CallFrameSlot::callee })->recovery();
     110        else {
     111            Reg reg { argumentRegisterForCallee() };
     112            CachedRecovery* cachedRecovery { m_newRegisters[reg] };
     113            data.callee = cachedRecovery->recovery();
     114        }
    106115        data.args.resize(argCount());
    107         for (size_t i = 0; i < argCount(); ++i)
    108             data.args[i] = getNew(virtualRegisterForArgument(i))->recovery();
     116        for (size_t i = 0; i < argCount(); ++i) {
     117            if (argumentsLocation == StackArgs || i >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
     118                data.args[i] = getNew(virtualRegisterForArgument(i))->recovery();
     119            else {
     120                Reg reg { argumentRegisterForFunctionArgument(i) };
     121                CachedRecovery* cachedRecovery { m_newRegisters[reg] };
     122                data.args[i] = cachedRecovery->recovery();
     123            }
     124        }
    109125        for (Reg reg = Reg::first(); reg <= Reg::last(); reg = reg.next()) {
     126            if (reg.isGPR() && argumentsLocation != StackArgs
     127                && GPRInfo::toArgumentIndex(reg.gpr()) < argumentRegisterIndexForJSFunctionArgument(argCount()))
     128                continue;
     129
    110130            CachedRecovery* cachedRecovery { m_newRegisters[reg] };
    111131            if (!cachedRecovery)
     
    377397    int m_alignedOldFrameSize;
    378398    int m_alignedNewFrameSize;
     399#if USE(JSVALUE64)
     400    bool m_argumentsInRegisters;
     401#endif
    379402
    380403    // This is the distance, in slots, between the base of the new
     
    642665        CachedRecovery* cachedRecovery = addCachedRecovery(recovery);
    643666#if USE(JSVALUE64)
    644         if (cachedRecovery->wantedJSValueRegs())
    645             m_newRegisters[cachedRecovery->wantedJSValueRegs().gpr()] = nullptr;
    646         m_newRegisters[jsValueRegs.gpr()] = cachedRecovery;
     667        if (cachedRecovery->wantedJSValueRegs()) {
     668            if (recovery.isInGPR() && jsValueRegs.gpr() == recovery.gpr()) {
     669                m_newRegisters[cachedRecovery->wantedJSValueRegs().gpr()] = nullptr;
     670                m_newRegisters[jsValueRegs.gpr()] = cachedRecovery;
     671            }
     672        } else
     673            m_newRegisters[jsValueRegs.gpr()] = cachedRecovery;
    647674#else
    648675        if (JSValueRegs oldRegs { cachedRecovery->wantedJSValueRegs() }) {
     
    657684            m_newRegisters[jsValueRegs.tagGPR()] = cachedRecovery;
    658685#endif
    659         ASSERT(!cachedRecovery->wantedJSValueRegs());
    660         cachedRecovery->setWantedJSValueRegs(jsValueRegs);
     686        cachedRecovery->addTargetJSValueRegs(jsValueRegs);
    661687    }
    662688
     
    756782    }
    757783
     784    void initDangerFrontier()
     785    {
     786        findDangerFrontierFrom(lastNew());
     787    }
     788
    758789    void updateDangerFrontier()
    759790    {
     791        findDangerFrontierFrom(m_dangerFrontier - 1);
     792    }
     793
     794    void findDangerFrontierFrom(VirtualRegister nextReg)
     795    {
    760796        ASSERT(!isUndecided());
    761797
    762798        m_dangerFrontier = firstNew() - 1;
    763         for (VirtualRegister reg = lastNew(); reg >= firstNew(); reg -= 1) {
    764             if (!getNew(reg) || !isValidOld(newAsOld(reg)) || !getOld(newAsOld(reg)))
     799        for (VirtualRegister reg = nextReg; reg >= firstNew(); reg -= 1) {
     800            if (!isValidOld(newAsOld(reg)) || !getOld(newAsOld(reg)))
    765801                continue;
    766802
  • trunk/Source/JavaScriptCore/jit/CallFrameShuffler64.cpp

    r196756 r209653  
    324324        else
    325325            m_jit.move64ToDouble(cachedRecovery.recovery().gpr(), wantedReg.fpr());
    326         RELEASE_ASSERT(cachedRecovery.recovery().dataFormat() == DataFormatJS);
     326        DataFormat format = cachedRecovery.recovery().dataFormat();
     327        RELEASE_ASSERT(format == DataFormatJS || format == DataFormatCell);
    327328        updateRecovery(cachedRecovery,
    328329            ValueRecovery::inRegister(wantedReg, DataFormatJS));
  • trunk/Source/JavaScriptCore/jit/GPRInfo.h

    r206899 r209653  
    7070    explicit operator bool() const { return m_gpr != InvalidGPRReg; }
    7171
    72     bool operator==(JSValueRegs other) { return m_gpr == other.m_gpr; }
    73     bool operator!=(JSValueRegs other) { return !(*this == other); }
     72    bool operator==(JSValueRegs other) const { return m_gpr == other.m_gpr; }
     73    bool operator!=(JSValueRegs other) const { return !(*this == other); }
    7474   
    7575    GPRReg gpr() const { return m_gpr; }
     
    332332#if CPU(X86)
    333333#define NUMBER_OF_ARGUMENT_REGISTERS 0u
     334#define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u
    334335#define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u
    335336
     
    354355    static const GPRReg argumentGPR3 = X86Registers::ebx; // regT3
    355356    static const GPRReg nonArgGPR0 = X86Registers::esi; // regT4
     357    static const GPRReg nonArgGPR1 = X86Registers::edi; // regT5
    356358    static const GPRReg returnValueGPR = X86Registers::eax; // regT0
    357359    static const GPRReg returnValueGPR2 = X86Registers::edx; // regT1
     
    378380        unsigned result = indexForRegister[reg];
    379381        return result;
     382    }
     383
     384    static unsigned toArgumentIndex(GPRReg reg)
     385    {
     386        ASSERT(reg != InvalidGPRReg);
     387        ASSERT(static_cast<int>(reg) < 8);
     388        static const unsigned indexForArgumentRegister[8] = { 2, 0, 1, 3, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex };
     389        return indexForArgumentRegister[reg];
    380390    }
    381391
     
    400410#define NUMBER_OF_ARGUMENT_REGISTERS 6u
    401411#define NUMBER_OF_CALLEE_SAVES_REGISTERS 5u
     412#define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS (NUMBER_OF_ARGUMENT_REGISTERS - 2u)
    402413#else
    403414#define NUMBER_OF_ARGUMENT_REGISTERS 4u
    404415#define NUMBER_OF_CALLEE_SAVES_REGISTERS 7u
     416#define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u
    405417#endif
    406418
     
    465477#endif
    466478    static const GPRReg nonArgGPR0 = X86Registers::r10; // regT5 (regT4 on Windows)
     479    static const GPRReg nonArgGPR1 = X86Registers::eax; // regT0
    467480    static const GPRReg returnValueGPR = X86Registers::eax; // regT0
    468481    static const GPRReg returnValueGPR2 = X86Registers::edx; // regT1 or regT2
     
    509522    }
    510523
     524    static unsigned toArgumentIndex(GPRReg reg)
     525    {
     526        ASSERT(reg != InvalidGPRReg);
     527        ASSERT(static_cast<int>(reg) < 16);
     528#if !OS(WINDOWS)
     529        static const unsigned indexForArgumentRegister[16] = { InvalidIndex, 3, 2, InvalidIndex, InvalidIndex, InvalidIndex, 1, 0, 4, 5, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex };
     530#else
     531        static const unsigned indexForArgumentRegister[16] = { InvalidIndex, 0, 1, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, 2, 3, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex };
     532#endif
     533        return indexForArgumentRegister[reg];
     534    }
     535   
    511536    static const char* debugName(GPRReg reg)
    512537    {
     
    539564#if CPU(ARM)
    540565#define NUMBER_OF_ARGUMENT_REGISTERS 4u
     566#define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u
    541567#define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u
    542568
     
    602628    }
    603629
     630    static unsigned toArgumentIndex(GPRReg reg)
     631    {
     632        ASSERT(reg != InvalidGPRReg);
     633        ASSERT(static_cast<int>(reg) < 16);
     634        if (reg > argumentGPR3)
     635            return InvalidIndex;
     636        return (unsigned)reg;
     637    }
     638   
    604639    static const char* debugName(GPRReg reg)
    605640    {
     
    622657#if CPU(ARM64)
    623658#define NUMBER_OF_ARGUMENT_REGISTERS 8u
     659#define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS (NUMBER_OF_ARGUMENT_REGISTERS - 2u)
    624660// Callee Saves includes x19..x28 and FP registers q8..q15
    625661#define NUMBER_OF_CALLEE_SAVES_REGISTERS 18u
     
    699735    COMPILE_ASSERT(ARM64Registers::q14 == 14, q14_is_14);
    700736    COMPILE_ASSERT(ARM64Registers::q15 == 15, q15_is_15);
     737
    701738    static GPRReg toRegister(unsigned index)
    702739    {
     
    714751        ASSERT(index < numberOfArgumentRegisters);
    715752        return toRegister(index);
     753    }
     754
     755    static unsigned toArgumentIndex(GPRReg reg)
     756    {
     757        ASSERT(reg != InvalidGPRReg);
     758        if (reg > argumentGPR7)
     759            return InvalidIndex;
     760        return (unsigned)reg;
    716761    }
    717762
     
    747792#if CPU(MIPS)
    748793#define NUMBER_OF_ARGUMENT_REGISTERS 4u
     794#define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u
    749795#define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u
    750796
     
    774820    static const GPRReg argumentGPR3 = MIPSRegisters::a3;
    775821    static const GPRReg nonArgGPR0 = regT4;
     822    static const GPRReg nonArgGPR1 = regT5;
    776823    static const GPRReg returnValueGPR = regT0;
    777824    static const GPRReg returnValueGPR2 = regT1;
     
    826873#if CPU(SH4)
    827874#define NUMBER_OF_ARGUMENT_REGISTERS 4u
     875#define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u
    828876#define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u
    829877
     
    856904    static const GPRReg argumentGPR3 = SH4Registers::r7; // regT3
    857905    static const GPRReg nonArgGPR0 = regT4;
     906    static const GPRReg nonArgGPR1 = regT5;
    858907    static const GPRReg returnValueGPR = regT0;
    859908    static const GPRReg returnValueGPR2 = regT1;
     
    892941#endif // CPU(SH4)
    893942
     943inline GPRReg argumentRegisterFor(unsigned argumentIndex)
     944{
     945#if NUMBER_OF_ARGUMENT_REGISTERS
     946    if (argumentIndex >= NUMBER_OF_ARGUMENT_REGISTERS)
     947        return InvalidGPRReg;
     948    return GPRInfo::toArgumentRegister(argumentIndex);
     949#else
     950    UNUSED_PARAM(argumentIndex);
     951    RELEASE_ASSERT_NOT_REACHED();
     952    return InvalidGPRReg;
     953#endif
     954}
     955
     956inline GPRReg argumentRegisterForCallee()
     957{
     958#if NUMBER_OF_ARGUMENT_REGISTERS
     959    return argumentRegisterFor(0);
     960#else
     961    return GPRInfo::regT0;
     962#endif
     963}
     964
     965inline GPRReg argumentRegisterForArgumentCount()
     966{
     967    return argumentRegisterFor(1);
     968}
     969
     970inline unsigned argumentRegisterIndexForJSFunctionArgument(unsigned argument)
     971{
     972    return argument + 2;
     973}
     974
     975inline unsigned jsFunctionArgumentForArgumentRegisterIndex(unsigned index)
     976{
     977#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS > 0
     978    ASSERT(index >= 2);
     979    return index - 2;
     980#else
     981    UNUSED_PARAM(index);
     982    RELEASE_ASSERT_NOT_REACHED();
     983    return 0;
     984#endif
     985}
     986
     987inline unsigned jsFunctionArgumentForArgumentRegister(GPRReg gpr)
     988{
     989#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS > 0
     990    unsigned argumentRegisterIndex = GPRInfo::toArgumentIndex(gpr);
     991    ASSERT(argumentRegisterIndex != GPRInfo::InvalidIndex);
     992    return jsFunctionArgumentForArgumentRegisterIndex(argumentRegisterIndex);
     993#else
     994    UNUSED_PARAM(gpr);
     995    RELEASE_ASSERT_NOT_REACHED();
     996    return 0;
     997#endif
     998}
     999
     1000inline GPRReg argumentRegisterForFunctionArgument(unsigned argumentIndex)
     1001{
     1002    return argumentRegisterFor(argumentRegisterIndexForJSFunctionArgument(argumentIndex));
     1003}
     1004
     1005inline unsigned numberOfRegisterArgumentsFor(unsigned argumentCount)
     1006{
     1007    return std::min(argumentCount, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS);
     1008}
     1009
    8941010// The baseline JIT uses "accumulator" style execution with regT0 (for 64-bit)
    8951011// and regT0 + regT1 (for 32-bit) serving as the accumulator register(s) for
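
The GPRInfo.h additions above pin down the JS calling convention used by this patch: ABI argument slot 0 carries the callee, slot 1 the argument count, and JS argument i (with i == 0 being 'this') rides in ABI argument register i + 2. For x86-64 System V (non-Windows) the toArgumentIndex table above corresponds to rdi, rsi, rdx, rcx, r8, r9. The sketch below spells that mapping out with register names purely for illustration; the real code works in terms of GPRReg values, not strings.

    #include <cstdio>

    static const char* const x64SysVArgumentRegisters[6] = { "rdi", "rsi", "rdx", "rcx", "r8", "r9" };

    const char* registerForCallee()        { return x64SysVArgumentRegisters[0]; }
    const char* registerForArgumentCount() { return x64SysVArgumentRegisters[1]; }

    const char* registerForJSArgument(unsigned i) // i == 0 is 'this'
    {
        unsigned abiIndex = i + 2;
        return abiIndex < 6 ? x64SysVArgumentRegisters[abiIndex] : nullptr; // nullptr => passed on the stack
    }

    int main()
    {
        std::printf("callee: %s, argument count: %s\n", registerForCallee(), registerForArgumentCount());
        for (unsigned i = 0; i < 6; ++i) {
            const char* reg = registerForJSArgument(i);
            std::printf("js argument %u (%s): %s\n", i, i ? "arg" : "this", reg ? reg : "stack");
        }
        return 0;
    }
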
  • trunk/Source/JavaScriptCore/jit/JIT.cpp

    r208761 r209653  
    6565        CodeLocationCall(MacroAssemblerCodePtr(returnAddress)),
    6666        newCalleeFunction);
    67 }
    68 
    69 JIT::CodeRef JIT::compileCTINativeCall(VM* vm, NativeFunction func)
    70 {
    71     if (!vm->canUseJIT())
    72         return CodeRef::createLLIntCodeRef(llint_native_call_trampoline);
    73     JIT jit(vm, 0);
    74     return jit.privateCompileCTINativeCall(vm, func);
    7567}
    7668
     
    580572        nop();
    581573
     574#if USE(JSVALUE64)
     575    spillArgumentRegistersToFrameBeforePrologue(static_cast<unsigned>(m_codeBlock->numParameters()));
     576    incrementCounter(this, VM::RegArgsNoArity);
     577#if ENABLE(VM_COUNTERS)
     578    Jump continueStackEntry = jump();
     579#endif
     580#endif
     581    m_stackArgsArityOKEntry = label();
     582    incrementCounter(this, VM::StackArgsNoArity);
     583
     584#if USE(JSVALUE64) && ENABLE(VM_COUNTERS)
     585    continueStackEntry.link(this);
     586#endif
     587
    582588    emitFunctionPrologue();
    583589    emitPutToCallFrameHeader(m_codeBlock, CallFrameSlot::codeBlock);
     
    636642
    637643    if (m_codeBlock->codeType() == FunctionCode) {
    638         m_arityCheck = label();
     644        m_registerArgsWithArityCheck = label();
     645
     646        incrementCounter(this, VM::RegArgsArity);
     647
     648        spillArgumentRegistersToFrameBeforePrologue();
     649
     650#if ENABLE(VM_COUNTERS)
     651        Jump continueStackArityEntry = jump();
     652#endif
     653
     654        m_stackArgsWithArityCheck = label();
     655        incrementCounter(this, VM::StackArgsArity);
     656#if ENABLE(VM_COUNTERS)
     657        continueStackArityEntry.link(this);
     658#endif
    639659        store8(TrustedImm32(0), &m_codeBlock->m_shouldAlwaysBeInlined);
    640660        emitFunctionPrologue();
     
    643663        load32(payloadFor(CallFrameSlot::argumentCount), regT1);
    644664        branch32(AboveOrEqual, regT1, TrustedImm32(m_codeBlock->m_numParameters)).linkTo(beginLabel, this);
     665
     666        incrementCounter(this, VM::ArityFixupRequired);
    645667
    646668        m_bytecodeOffset = 0;
     
    779801    m_codeBlock->setJITCodeMap(jitCodeMapEncoder.finish());
    780802
    781     MacroAssemblerCodePtr withArityCheck;
    782     if (m_codeBlock->codeType() == FunctionCode)
    783         withArityCheck = patchBuffer.locationOf(m_arityCheck);
     803    MacroAssemblerCodePtr stackEntryArityOKPtr = patchBuffer.locationOf(m_stackArgsArityOKEntry);
     804   
     805    MacroAssemblerCodePtr registerEntryWithArityCheckPtr;
     806    MacroAssemblerCodePtr stackEntryWithArityCheckPtr;
     807    if (m_codeBlock->codeType() == FunctionCode) {
     808        registerEntryWithArityCheckPtr = patchBuffer.locationOf(m_registerArgsWithArityCheck);
     809        stackEntryWithArityCheckPtr = patchBuffer.locationOf(m_stackArgsWithArityCheck);
     810    }
    784811
    785812    if (Options::dumpDisassembly()) {
     
    805832
    806833    m_codeBlock->shrinkToFit(CodeBlock::LateShrink);
     834    JITEntryPoints entrypoints(result.code(), registerEntryWithArityCheckPtr, registerEntryWithArityCheckPtr, stackEntryArityOKPtr, stackEntryWithArityCheckPtr);
     835
     836    unsigned numParameters = static_cast<unsigned>(m_codeBlock->numParameters());
     837    for (unsigned argCount = 1; argCount <= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argCount++) {
     838        MacroAssemblerCodePtr entry;
     839        if (argCount == numParameters)
     840            entry = result.code();
     841        else
     842            entry = registerEntryWithArityCheckPtr;
     843        entrypoints.setEntryFor(JITEntryPoints::registerEntryTypeForArgumentCount(argCount), entry);
     844    }
     845
    807846    m_codeBlock->setJITCode(
    808         adoptRef(new DirectJITCode(result, withArityCheck, JITCode::BaselineJIT)));
     847        adoptRef(new DirectJITCode(JITEntryPointsWithRef(result, entrypoints), JITCode::BaselineJIT)));
    809848
    810849#if ENABLE(JIT_VERBOSE)
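
The baseline JIT changes above publish one register-args entry per possible argument count: a count that exactly matches the compiled parameter count goes straight to the arity-check-free code, every other count goes through the register entry that performs the arity check. A simplified picture of that table construction, with strings standing in for MacroAssemblerCodePtr and the register count of 4 assumed:

    #include <array>
    #include <cstdio>

    constexpr unsigned jsArgumentRegisters = 4; // assumption, as in the other sketches

    using EntryPoint = const char*; // stand-in for MacroAssemblerCodePtr

    std::array<EntryPoint, jsArgumentRegisters + 1> buildRegisterEntries(
        unsigned numParameters, EntryPoint arityOKEntry, EntryPoint arityCheckEntry)
    {
        std::array<EntryPoint, jsArgumentRegisters + 1> entries {}; // index = argument count including 'this'
        for (unsigned argCount = 1; argCount <= jsArgumentRegisters; argCount++)
            entries[argCount] = (argCount == numParameters) ? arityOKEntry : arityCheckEntry;
        return entries;
    }

    int main()
    {
        auto entries = buildRegisterEntries(/*numParameters*/ 2, "arityOK", "arityCheck");
        std::printf("1 arg: %s, 2 args: %s, 3 args: %s\n", entries[1], entries[2], entries[3]);
        return 0;
    }
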
  • trunk/Source/JavaScriptCore/jit/JIT.h

    r208637 r209653  
    4444#include "JITMathIC.h"
    4545#include "JSInterfaceJIT.h"
     46#include "LowLevelInterpreter.h"
    4647#include "PCToCodeOriginMap.h"
    4748#include "UnusedPointer.h"
     
    247248        }
    248249
    249         static CodeRef compileCTINativeCall(VM*, NativeFunction);
     250        static JITEntryPointsWithRef compileNativeCallEntryPoints(VM* vm, NativeFunction func)
     251        {
     252            if (!vm->canUseJIT()) {
     253                CodeRef nativeCallRef = CodeRef::createLLIntCodeRef(llint_native_call_trampoline);
     254                return JITEntryPointsWithRef(nativeCallRef, nativeCallRef.code(), nativeCallRef.code());
     255            }
     256            JIT jit(vm, 0);
     257            return jit.privateCompileJITEntryNativeCall(vm, func);
     258        }
    250259
    251260        static unsigned frameRegisterCountFor(CodeBlock*);
     
    267276        void privateCompileHasIndexedProperty(ByValInfo*, ReturnAddressPtr, JITArrayMode);
    268277
    269         Label privateCompileCTINativeCall(VM*, bool isConstruct = false);
    270         CodeRef privateCompileCTINativeCall(VM*, NativeFunction);
     278        JITEntryPointsWithRef privateCompileJITEntryNativeCall(VM*, NativeFunction);
    271279        void privateCompilePatchGetArrayLength(ReturnAddressPtr returnAddress);
    272280
     
    950958        unsigned m_byValInstructionIndex;
    951959        unsigned m_callLinkInfoIndex;
    952        
    953         Label m_arityCheck;
     960
     961        Label m_stackArgsArityOKEntry;
     962        Label m_stackArgsWithArityCheck;
     963        Label m_registerArgsWithArityCheck;
    954964        std::unique_ptr<LinkBuffer> m_linkBuffer;
    955965
  • trunk/Source/JavaScriptCore/jit/JITCall.cpp

    r207475 r209653  
    9292
    9393    addPtr(TrustedImm32(sizeof(CallerFrameAndPC)), regT1, stackPointerRegister);
     94    incrementCounter(this, VM::BaselineCaller);
     95    incrementCounter(this, VM::CallVarargs);
    9496}
    9597
     
    99101    storePtr(callFrameRegister, Address(regT1, CallFrame::callerFrameOffset()));
    100102
     103    incrementCounter(this, VM::BaselineCaller);
     104    incrementCounter(this, VM::CallEval);
     105
    101106    addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister);
    102107    checkStackPointerAlignment();
     
    114119{
    115120    CallLinkInfo* info = m_codeBlock->addCallLinkInfo();
    116     info->setUpCall(CallLinkInfo::Call, CodeOrigin(m_bytecodeOffset), regT0);
     121    info->setUpCall(CallLinkInfo::Call, StackArgs, CodeOrigin(m_bytecodeOffset), regT0);
    117122
    118123    linkSlowCase(iter);
     
    155160
    156161    CallLinkInfo* info = nullptr;
     162    ArgumentsLocation argumentsLocation = StackArgs;
     163
    157164    if (opcodeID != op_call_eval)
    158165        info = m_codeBlock->addCallLinkInfo();
     
    160167        compileSetupVarargsFrame(opcodeID, instruction, info);
    161168    else {
    162         int argCount = instruction[3].u.operand;
     169        unsigned argCount = instruction[3].u.unsignedValue;
    163170        int registerOffset = -instruction[4].u.operand;
    164171
     
    172179   
    173180        addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister);
     181        if (argumentsLocation != StackArgs) {
     182            move(TrustedImm32(argCount), argumentRegisterForArgumentCount());
     183            unsigned registerArgs = std::min(argCount, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS);
     184            for (unsigned arg = 0; arg < registerArgs; arg++)
     185                load64(Address(stackPointerRegister, (CallFrameSlot::thisArgument + arg) * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC)), argumentRegisterForFunctionArgument(arg));
     186        }
    174187        store32(TrustedImm32(argCount), Address(stackPointerRegister, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC)));
    175188    } // SP holds newCallFrame + sizeof(CallerFrameAndPC), with ArgumentCount initialized.
     189
     190    incrementCounter(this, VM::BaselineCaller);
    176191   
    177192    uint32_t bytecodeOffset = instruction - m_codeBlock->instructions().begin();
     
    179194    store32(TrustedImm32(locationBits), Address(callFrameRegister, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + TagOffset));
    180195
    181     emitGetVirtualRegister(callee, regT0); // regT0 holds callee.
    182     store64(regT0, Address(stackPointerRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC)));
     196    GPRReg calleeRegister = argumentRegisterForCallee();
     197
     198    emitGetVirtualRegister(callee, calleeRegister);
     199    store64(calleeRegister, Address(stackPointerRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC)));
    183200
    184201    if (opcodeID == op_call_eval) {
     
    188205
    189206    DataLabelPtr addressOfLinkedFunctionCheck;
    190     Jump slowCase = branchPtrWithPatch(NotEqual, regT0, addressOfLinkedFunctionCheck, TrustedImmPtr(0));
     207    Jump slowCase = branchPtrWithPatch(NotEqual, calleeRegister, addressOfLinkedFunctionCheck, TrustedImmPtr(0));
    191208    addSlowCase(slowCase);
    192209
    193210    ASSERT(m_callCompilationInfo.size() == callLinkInfoIndex);
    194     info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), CodeOrigin(m_bytecodeOffset), regT0);
     211    info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), argumentsLocation, CodeOrigin(m_bytecodeOffset), calleeRegister);
    195212    m_callCompilationInfo.append(CallCompilationInfo());
    196213    m_callCompilationInfo[callLinkInfoIndex].hotPathBegin = addressOfLinkedFunctionCheck;
     
    198215
    199216    if (opcodeID == op_tail_call) {
     217        incrementCounter(this, VM::TailCall);
     218
    200219        CallFrameShuffleData shuffleData;
    201220        shuffleData.tagTypeNumber = GPRInfo::tagTypeNumberRegister;
     
    210229        }
    211230        shuffleData.callee =
    212             ValueRecovery::inGPR(regT0, DataFormatJS);
     231            ValueRecovery::inGPR(calleeRegister, DataFormatJS);
    213232        shuffleData.setupCalleeSaveRegisters(m_codeBlock);
    214233        info->setFrameShuffleData(shuffleData);
     
    247266        emitRestoreCalleeSaves();
    248267
    249     move(TrustedImmPtr(m_callCompilationInfo[callLinkInfoIndex].callLinkInfo), regT2);
    250 
    251     m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getCTIStub(linkCallThunkGenerator).code());
     268    CallLinkInfo* callLinkInfo = m_callCompilationInfo[callLinkInfoIndex].callLinkInfo;
     269    move(TrustedImmPtr(callLinkInfo), nonArgGPR0);
     270
     271    m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation()));
    252272
    253273    if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) {
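
In the baseline call path above, the outgoing frame is still built on the stack; when the call site uses register arguments, the argument count and the leading arguments are then reloaded into their registers right before the call (the load64 loop over CallFrameSlot::thisArgument + arg). A toy model of that reload, with simplified state in place of the real frame layout and register file:

    #include <algorithm>
    #include <array>
    #include <vector>

    constexpr unsigned jsArgumentRegisters = 4; // assumption

    struct OutgoingState {
        long argumentCountRegister;
        std::array<long, jsArgumentRegisters> argumentRegisters;
    };

    void loadRegisterArgumentsFromFrame(OutgoingState& state, const std::vector<long>& frameArguments)
    {
        unsigned argCount = static_cast<unsigned>(frameArguments.size());
        state.argumentCountRegister = argCount;
        unsigned inRegisters = std::min(argCount, jsArgumentRegisters);
        for (unsigned arg = 0; arg < inRegisters; arg++)
            state.argumentRegisters[arg] = frameArguments[arg]; // frameArguments[0] is 'this'
    }

    int main()
    {
        OutgoingState state {};
        loadRegisterArgumentsFromFrame(state, { 1, 2, 3, 4, 5, 6 }); // 'this' plus five arguments
        return state.argumentCountRegister == 6 && state.argumentRegisters[3] == 4 ? 0 : 1;
    }
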
  • trunk/Source/JavaScriptCore/jit/JITCall32_64.cpp

    r207475 r209653  
    204204{
    205205    CallLinkInfo* info = m_codeBlock->addCallLinkInfo();
    206     info->setUpCall(CallLinkInfo::Call, CodeOrigin(m_bytecodeOffset), regT0);
     206    info->setUpCall(CallLinkInfo::Call, StackArgs, CodeOrigin(m_bytecodeOffset), regT0);
    207207
    208208    linkSlowCase(iter);
     
    212212    addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister);
    213213
    214     move(TrustedImmPtr(info), regT2);
     214    move(TrustedImmPtr(info), nonArgGPR0);
    215215
    216216    emitLoad(CallFrameSlot::callee, regT1, regT0);
    217     MacroAssemblerCodeRef virtualThunk = virtualThunkFor(m_vm, *info);
    218     info->setSlowStub(createJITStubRoutine(virtualThunk, *m_vm, nullptr, true));
    219     emitNakedCall(virtualThunk.code());
     217    JITJSCallThunkEntryPointsWithRef virtualThunk = virtualThunkFor(m_vm, *info);
     218    info->setSlowStub(createJITStubRoutine(virtualThunk.codeRef(), *m_vm, nullptr, true));
     219    emitNakedCall(virtualThunk.entryFor(StackArgs));
    220220    addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister);
    221221    checkStackPointerAlignment();
     
    287287
    288288    ASSERT(m_callCompilationInfo.size() == callLinkInfoIndex);
    289     info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), CodeOrigin(m_bytecodeOffset), regT0);
     289    info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), StackArgs, CodeOrigin(m_bytecodeOffset), regT0);
    290290    m_callCompilationInfo.append(CallCompilationInfo());
    291291    m_callCompilationInfo[callLinkInfoIndex].hotPathBegin = addressOfLinkedFunctionCheck;
     
    318318    linkSlowCase(iter);
    319319
    320     move(TrustedImmPtr(m_callCompilationInfo[callLinkInfoIndex].callLinkInfo), regT2);
     320    CallLinkInfo* callLinkInfo = m_callCompilationInfo[callLinkInfoIndex].callLinkInfo;
     321    move(TrustedImmPtr(callLinkInfo), nonArgGPR0);
    321322
    322323    if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs)
    323324        emitRestoreCalleeSaves();
    324325
    325     m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getCTIStub(linkCallThunkGenerator).code());
     326    m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation()));
    326327
    327328    if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) {
  • trunk/Source/JavaScriptCore/jit/JITCode.cpp

    r205569 r209653  
    7676    if (!function || !protoCallFrame->needArityCheck()) {
    7777        ASSERT(!protoCallFrame->needArityCheck());
    78         entryAddress = executableAddress();
     78        entryAddress = addressForCall(StackArgsArityCheckNotRequired).executableAddress();
    7979    } else
    80         entryAddress = addressForCall(MustCheckArity).executableAddress();
     80        entryAddress = addressForCall(StackArgsMustCheckArity).executableAddress();
    8181    JSValue result = JSValue::decode(vmEntryToJavaScript(entryAddress, vm, protoCallFrame));
    8282    return scope.exception() ? jsNull() : result;
     
    163163}
    164164
    165 DirectJITCode::DirectJITCode(JITCode::CodeRef ref, JITCode::CodePtr withArityCheck, JITType jitType)
    166     : JITCodeWithCodeRef(ref, jitType)
    167     , m_withArityCheck(withArityCheck)
     165DirectJITCode::DirectJITCode(JITEntryPointsWithRef entries, JITType jitType)
     166    : JITCodeWithCodeRef(entries.codeRef(), jitType)
     167    , m_entryPoints(entries)
    168168{
    169169}
     
    173173}
    174174
    175 void DirectJITCode::initializeCodeRef(JITCode::CodeRef ref, JITCode::CodePtr withArityCheck)
     175void DirectJITCode::initializeEntryPoints(JITEntryPointsWithRef entries)
    176176{
    177177    RELEASE_ASSERT(!m_ref);
    178     m_ref = ref;
    179     m_withArityCheck = withArityCheck;
    180 }
    181 
    182 JITCode::CodePtr DirectJITCode::addressForCall(ArityCheckMode arity)
    183 {
    184     switch (arity) {
    185     case ArityCheckNotRequired:
    186         RELEASE_ASSERT(m_ref);
    187         return m_ref.code();
    188     case MustCheckArity:
    189         RELEASE_ASSERT(m_withArityCheck);
    190         return m_withArityCheck;
    191     }
    192     RELEASE_ASSERT_NOT_REACHED();
    193     return CodePtr();
     178    m_ref = entries.codeRef();
     179    m_entryPoints = entries;
     180}
     181
     182JITCode::CodePtr DirectJITCode::addressForCall(EntryPointType type)
     183{
     184    return m_entryPoints.entryFor(type);
    194185}
    195186
     
    214205}
    215206
    216 JITCode::CodePtr NativeJITCode::addressForCall(ArityCheckMode)
     207JITCode::CodePtr NativeJITCode::addressForCall(EntryPointType)
    217208{
    218209    RELEASE_ASSERT(!!m_ref);
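The DirectJITCode changes above replace the single code pointer plus with-arity-check pointer with a JITEntryPoints table indexed by EntryPointType. The following is a minimal, self-contained sketch of that shape using simplified stand-in types (a raw pointer for MacroAssemblerCodePtr, a fixed-size array for the real JITEntryPoints class); it is illustrative only, not the actual JavaScriptCore declarations.

    #include <array>

    // Stand-in enum mirroring the entry point kinds named in the change log.
    enum EntryPointType {
        StackArgsArityCheckNotRequired,
        StackArgsMustCheckArity,
        RegisterArgsArityCheckNotRequired,
        RegisterArgsMustCheckArity,
        RegisterArgsPossibleExtraArgs,
        NumberOfEntryPointTypes
    };

    using CodePtr = void*; // stand-in for MacroAssemblerCodePtr

    struct JITEntryPointsSketch {
        std::array<CodePtr, NumberOfEntryPointTypes> entries {};
        CodePtr entryFor(EntryPointType type) const { return entries[type]; }
        void setEntryFor(EntryPointType type, CodePtr ptr) { entries[type] = ptr; }
    };

    struct DirectJITCodeSketch {
        JITEntryPointsSketch m_entryPoints;
        // One table lookup replaces the old ArityCheckNotRequired/MustCheckArity
        // switch; entry points that were never materialized simply stay null.
        CodePtr addressForCall(EntryPointType type) const { return m_entryPoints.entryFor(type); }
    };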
  • trunk/Source/JavaScriptCore/jit/JITCode.h

    r208985 r209653  
    2626#pragma once
    2727
    28 #include "ArityCheckMode.h"
    2928#include "CallFrame.h"
    3029#include "CodeOrigin.h"
    3130#include "Disassembler.h"
     31#include "JITEntryPoints.h"
    3232#include "JSCJSValue.h"
    3333#include "MacroAssemblerCodeRef.h"
     
    174174    }
    175175   
    176     virtual CodePtr addressForCall(ArityCheckMode) = 0;
     176    virtual CodePtr addressForCall(EntryPointType) = 0;
    177177    virtual void* executableAddressAtOffset(size_t offset) = 0;
    178     void* executableAddress() { return executableAddressAtOffset(0); }
    179178    virtual void* dataAddressAtOffset(size_t offset) = 0;
    180179    virtual unsigned offsetOf(void* pointerIntoCode) = 0;
     
    225224public:
    226225    DirectJITCode(JITType);
    227     DirectJITCode(CodeRef, CodePtr withArityCheck, JITType);
     226    DirectJITCode(JITEntryPointsWithRef, JITType);
    228227    virtual ~DirectJITCode();
    229228   
    230     void initializeCodeRef(CodeRef, CodePtr withArityCheck);
    231 
    232     CodePtr addressForCall(ArityCheckMode) override;
     229    void initializeEntryPoints(JITEntryPointsWithRef);
     230
     231    CodePtr addressForCall(EntryPointType) override;
    233232
    234233private:
    235     CodePtr m_withArityCheck;
     234    JITEntryPoints m_entryPoints;
    236235};
    237236
     
    244243    void initializeCodeRef(CodeRef);
    245244
    246     CodePtr addressForCall(ArityCheckMode) override;
     245    CodePtr addressForCall(EntryPointType) override;
    247246};
    248247
  • trunk/Source/JavaScriptCore/jit/JITOpcodes.cpp

    r209570 r209653  
    5050#if USE(JSVALUE64)
    5151
    52 JIT::CodeRef JIT::privateCompileCTINativeCall(VM* vm, NativeFunction)
    53 {
    54     return vm->getCTIStub(nativeCallGenerator);
     52JITEntryPointsWithRef JIT::privateCompileJITEntryNativeCall(VM* vm, NativeFunction)
     53{
     54    return vm->getJITEntryStub(nativeCallGenerator);
    5555}
    5656
  • trunk/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp

    r209647 r209653  
    4747namespace JSC {
    4848
    49 JIT::CodeRef JIT::privateCompileCTINativeCall(VM* vm, NativeFunction func)
     49JITEntryPointsWithRef JIT::privateCompileJITEntryNativeCall(VM* vm, NativeFunction func)
    5050{
    5151    // FIXME: This should be able to log ShadowChicken prologue packets.
     
    130130
    131131    patchBuffer.link(nativeCall, FunctionPtr(func));
    132     return FINALIZE_CODE(patchBuffer, ("JIT CTI native call"));
     132    JIT::CodeRef codeRef = FINALIZE_CODE(patchBuffer, ("JIT CTI native call"));
     133   
     134    return JITEntryPointsWithRef(codeRef, codeRef.code(), codeRef.code());
    133135}
    134136
  • trunk/Source/JavaScriptCore/jit/JITOperations.cpp

    r209570 r209653  
    891891    ExecutableBase* executable = callee->executable();
    892892
    893     MacroAssemblerCodePtr codePtr;
     893    MacroAssemblerCodePtr codePtr, codePtrForLinking;
    894894    CodeBlock* codeBlock = 0;
    895895    if (executable->isHostFunction()) {
    896         codePtr = executable->entrypointFor(kind, MustCheckArity);
     896        codePtr = executable->entrypointFor(kind, StackArgsMustCheckArity);
     897#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     898        if (callLinkInfo->argumentsInRegisters())
     899            codePtrForLinking = executable->entrypointFor(kind, RegisterArgsMustCheckArity);
     900#endif
    897901    } else {
    898902        FunctionExecutable* functionExecutable = static_cast<FunctionExecutable*>(executable);
     
    915919        }
    916920        codeBlock = *codeBlockSlot;
    917         ArityCheckMode arity;
    918         if (execCallee->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo->isVarargs())
    919             arity = MustCheckArity;
    920         else
    921             arity = ArityCheckNotRequired;
    922         codePtr = functionExecutable->entrypointFor(kind, arity);
     921        EntryPointType entryType;
     922        size_t callerArgumentCount = execCallee->argumentCountIncludingThis();
     923        size_t calleeArgumentCount = static_cast<size_t>(codeBlock->numParameters());
     924        if (callerArgumentCount < calleeArgumentCount || callLinkInfo->isVarargs()) {
     925#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     926            if (callLinkInfo->argumentsInRegisters()) {
     927                codePtrForLinking = functionExecutable->entrypointFor(kind, JITEntryPoints::registerEntryTypeForArgumentCount(callerArgumentCount));
     928                if (!codePtrForLinking)
     929                    codePtrForLinking = functionExecutable->entrypointFor(kind, RegisterArgsMustCheckArity);
     930            }
     931#endif
     932            entryType = StackArgsMustCheckArity;
     933            (void) functionExecutable->entrypointFor(kind, entryPointTypeFor(callLinkInfo->argumentsLocation()));
     934#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     935        } else if (callLinkInfo->argumentsInRegisters()) {
     936            if (callerArgumentCount == calleeArgumentCount || calleeArgumentCount >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
     937                codePtrForLinking = functionExecutable->entrypointFor(kind, RegisterArgsArityCheckNotRequired);
     938            else {
     939                codePtrForLinking = functionExecutable->entrypointFor(kind, JITEntryPoints::registerEntryTypeForArgumentCount(callerArgumentCount));
     940                if (!codePtrForLinking)
     941                    codePtrForLinking = functionExecutable->entrypointFor(kind, RegisterArgsPossibleExtraArgs);
     942            }
     943            //  Prepopulate the entry points the virtual thunk might use.
     944            (void) functionExecutable->entrypointFor(kind, entryPointTypeFor(callLinkInfo->argumentsLocation()));
     945
     946            entryType = StackArgsArityCheckNotRequired;
     947#endif
     948        } else
     949            entryType = StackArgsArityCheckNotRequired;
     950        codePtr = functionExecutable->entrypointFor(kind, entryType);
    923951    }
    924952    if (!callLinkInfo->seenOnce())
    925953        callLinkInfo->setSeen();
    926954    else
    927         linkFor(execCallee, *callLinkInfo, codeBlock, callee, codePtr);
     955        linkFor(execCallee, *callLinkInfo, codeBlock, callee, codePtrForLinking ? codePtrForLinking : codePtr);
    928956   
    929957    return encodeResult(codePtr.executableAddress(), reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? ReuseTheFrame : KeepTheFrame));
     
    960988    CodeBlock* codeBlock = nullptr;
    961989    if (executable->isHostFunction())
    962         codePtr = executable->entrypointFor(kind, MustCheckArity);
     990#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     991        codePtr = executable->entrypointFor(kind, callLinkInfo->argumentsInRegisters() ? RegisterArgsMustCheckArity : StackArgsMustCheckArity);
     992#else
     993    codePtr = executable->entrypointFor(kind, StackArgsMustCheckArity);
     994#endif
    963995    else {
    964996        FunctionExecutable* functionExecutable = static_cast<FunctionExecutable*>(executable);
     
    9721004            return;
    9731005        }
    974         ArityCheckMode arity;
     1006        EntryPointType entryType;
    9751007        unsigned argumentStackSlots = callLinkInfo->maxNumArguments();
    976         if (argumentStackSlots < static_cast<size_t>(codeBlock->numParameters()))
    977             arity = MustCheckArity;
     1008        size_t codeBlockParameterCount = static_cast<size_t>(codeBlock->numParameters());
     1009#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     1010        if (callLinkInfo->argumentsInRegisters()) {
     1011            // This logic could probably be simplified!
     1012            if (argumentStackSlots < codeBlockParameterCount)
     1013                entryType = entryPointTypeFor(callLinkInfo->argumentsLocation());
     1014            else if (argumentStackSlots > NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
     1015                if (codeBlockParameterCount < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
     1016                    entryType = RegisterArgsPossibleExtraArgs;
     1017                else
     1018                    entryType = RegisterArgsArityCheckNotRequired;
     1019            } else
     1020                entryType = registerEntryPointTypeFor(argumentStackSlots);
     1021        } else if (argumentStackSlots < codeBlockParameterCount)
     1022#else
     1023        if (argumentStackSlots < codeBlockParameterCount)
     1024#endif
     1025            entryType = StackArgsMustCheckArity;
    9781026        else
    979             arity = ArityCheckNotRequired;
    980         codePtr = functionExecutable->entrypointFor(kind, arity);
     1027            entryType = StackArgsArityCheckNotRequired;
     1028        codePtr = functionExecutable->entrypointFor(kind, entryType);
    9811029    }
    9821030   
     
    10211069        }
    10221070    }
     1071#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     1072    if (callLinkInfo->argumentsInRegisters()) {
     1073        // Pull into the cache the arity check register entry if the caller wants a register entry.
     1074        // This will be used by the generic virtual call thunk.
     1075        (void) executable->entrypointFor(kind, RegisterArgsMustCheckArity);
     1076        (void) executable->entrypointFor(kind, entryPointTypeFor(callLinkInfo->argumentsLocation()));
     1077
     1078    }
     1079#endif
    10231080    return encodeResult(executable->entrypointFor(
    1024         kind, MustCheckArity).executableAddress(),
     1081        kind, StackArgsMustCheckArity).executableAddress(),
    10251082        reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? ReuseTheFrame : KeepTheFrame));
    10261083}
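Stripped of the count-specific and extra-argument cases, the entry-point selection in JITOperations.cpp above reduces to: callers that pass arguments in registers get a register entry, everyone else keeps using the stack entries, and an arity mismatch (or varargs call) forces the arity-checking variant either way. A small sketch of that core decision, with simplified stand-in names rather than the real CallLinkInfo API:

    #include <cstddef>

    enum EntryPointType {
        StackArgsArityCheckNotRequired,
        StackArgsMustCheckArity,
        RegisterArgsArityCheckNotRequired,
        RegisterArgsMustCheckArity
    };

    EntryPointType selectEntryType(bool argumentsInRegisters, std::size_t callerArgumentCount,
        std::size_t calleeParameterCount, bool isVarargs)
    {
        // Too few arguments (or a varargs call) means the callee must run its arity check.
        bool needsArityCheck = isVarargs || callerArgumentCount < calleeParameterCount;
        if (argumentsInRegisters)
            return needsArityCheck ? RegisterArgsMustCheckArity : RegisterArgsArityCheckNotRequired;
        return needsArityCheck ? StackArgsMustCheckArity : StackArgsArityCheckNotRequired;
    }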
  • trunk/Source/JavaScriptCore/jit/JITThunks.cpp

    r208320 r209653  
    4545}
    4646
    47 MacroAssemblerCodePtr JITThunks::ctiNativeCall(VM* vm)
     47JITEntryPointsWithRef JITThunks::jitEntryNativeCall(VM* vm)
    4848{
    49     if (!vm->canUseJIT())
    50         return MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_call_trampoline);
    51     return ctiStub(vm, nativeCallGenerator).code();
     49    if (!vm->canUseJIT()) {
     50        MacroAssemblerCodePtr nativeCallStub = MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_call_trampoline);
     51        return JITEntryPointsWithRef(MacroAssemblerCodeRef::createSelfManagedCodeRef(nativeCallStub), nativeCallStub, nativeCallStub);
     52    }
     53    return jitEntryStub(vm, nativeCallGenerator);
    5254}
    5355
    54 MacroAssemblerCodePtr JITThunks::ctiNativeConstruct(VM* vm)
     56JITEntryPointsWithRef JITThunks::jitEntryNativeConstruct(VM* vm)
    5557{
    56     if (!vm->canUseJIT())
    57         return MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_construct_trampoline);
    58     return ctiStub(vm, nativeConstructGenerator).code();
     58    if (!vm->canUseJIT()) {
     59        MacroAssemblerCodePtr nativeConstructStub = MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_construct_trampoline);
     60        return JITEntryPointsWithRef(MacroAssemblerCodeRef::createSelfManagedCodeRef(nativeConstructStub), nativeConstructStub, nativeConstructStub);
     61    }
     62    return jitEntryStub(vm, nativeConstructGenerator);
    5963}
    6064
     
    8387}
    8488
     89JITEntryPointsWithRef JITThunks::jitEntryStub(VM* vm, JITEntryGenerator generator)
     90{
     91    LockHolder locker(m_lock);
     92    JITEntryStubMap::AddResult entry = m_jitEntryStubMap.add(generator, JITEntryPointsWithRef());
     93    if (entry.isNewEntry) {
     94        // Compilation thread can only retrieve existing entries.
     95        ASSERT(!isCompilationThread());
     96        entry.iterator->value = generator(vm);
     97    }
     98    return entry.iterator->value;
     99}
     100
     101JITJSCallThunkEntryPointsWithRef JITThunks::jitCallThunkEntryStub(VM* vm, JITCallThunkEntryGenerator generator)
     102{
     103    LockHolder locker(m_lock);
     104    JITCallThunkEntryStubMap::AddResult entry = m_jitCallThunkEntryStubMap.add(generator, JITJSCallThunkEntryPointsWithRef());
     105    if (entry.isNewEntry) {
     106        // Compilation thread can only retrieve existing entries.
     107        ASSERT(!isCompilationThread());
     108        entry.iterator->value = generator(vm);
     109    }
     110    return entry.iterator->value;
     111}
     112
    85113void JITThunks::finalize(Handle<Unknown> handle, void*)
    86114{
     
    94122}
    95123
    96 NativeExecutable* JITThunks::hostFunctionStub(VM* vm, NativeFunction function, NativeFunction constructor, ThunkGenerator generator, Intrinsic intrinsic, const DOMJIT::Signature* signature, const String& name)
     124NativeExecutable* JITThunks::hostFunctionStub(VM* vm, NativeFunction function, NativeFunction constructor, JITEntryGenerator generator, Intrinsic intrinsic, const DOMJIT::Signature* signature, const String& name)
    97125{
    98126    ASSERT(!isCompilationThread());   
     
    104132    RefPtr<JITCode> forCall;
    105133    if (generator) {
    106         MacroAssemblerCodeRef entry = generator(vm);
    107         forCall = adoptRef(new DirectJITCode(entry, entry.code(), JITCode::HostCallThunk));
     134        JITEntryPointsWithRef entry = generator(vm);
     135        forCall = adoptRef(new DirectJITCode(entry, JITCode::HostCallThunk));
    108136    } else
    109         forCall = adoptRef(new NativeJITCode(JIT::compileCTINativeCall(vm, function), JITCode::HostCallThunk));
     137        forCall = adoptRef(new DirectJITCode(JIT::compileNativeCallEntryPoints(vm, function), JITCode::HostCallThunk));
    110138   
    111     RefPtr<JITCode> forConstruct = adoptRef(new NativeJITCode(MacroAssemblerCodeRef::createSelfManagedCodeRef(ctiNativeConstruct(vm)), JITCode::HostCallThunk));
     139    RefPtr<JITCode> forConstruct = adoptRef(new DirectJITCode(jitEntryNativeConstruct(vm), JITCode::HostCallThunk));
    112140   
    113141    NativeExecutable* nativeExecutable = NativeExecutable::create(*vm, forCall, function, forConstruct, constructor, intrinsic, signature, name);
     
    116144}
    117145
    118 NativeExecutable* JITThunks::hostFunctionStub(VM* vm, NativeFunction function, ThunkGenerator generator, Intrinsic intrinsic, const String& name)
     146NativeExecutable* JITThunks::hostFunctionStub(VM* vm, NativeFunction function, JITEntryGenerator generator, Intrinsic intrinsic, const String& name)
    119147{
    120148    return hostFunctionStub(vm, function, callHostFunctionAsConstructor, generator, intrinsic, nullptr, name);
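The new jitEntryStub() and jitCallThunkEntryStub() above follow the same pattern as the existing ctiStub(): a lock-protected map keyed by the generator function, filled the first time a stub is requested. A generic sketch of that caching pattern, with std::map and std::mutex standing in for WTF::HashMap and the JITThunks lock:

    #include <map>
    #include <mutex>
    #include <utility>

    template<typename GeneratorPtr, typename Stub>
    class ThunkStubCacheSketch {
    public:
        template<typename... Args>
        Stub get(GeneratorPtr generator, Args&&... args)
        {
            std::lock_guard<std::mutex> locker(m_lock);
            auto added = m_map.emplace(generator, Stub());
            if (added.second) {
                // First request for this generator: compile the stub now and cache it.
                // The real code also asserts this never happens on a compilation thread.
                added.first->second = generator(std::forward<Args>(args)...);
            }
            return added.first->second;
        }

    private:
        std::mutex m_lock;
        std::map<GeneratorPtr, Stub> m_map;
    };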
  • trunk/Source/JavaScriptCore/jit/JITThunks.h

    r208320 r209653  
    3030#include "CallData.h"
    3131#include "Intrinsic.h"
     32#include "JITEntryPoints.h"
    3233#include "MacroAssemblerCodeRef.h"
    3334#include "ThunkGenerator.h"
     
    5354    virtual ~JITThunks();
    5455
    55     MacroAssemblerCodePtr ctiNativeCall(VM*);
    56     MacroAssemblerCodePtr ctiNativeConstruct(VM*);
     56    JITEntryPointsWithRef jitEntryNativeCall(VM*);
     57    JITEntryPointsWithRef jitEntryNativeConstruct(VM*);
    5758    MacroAssemblerCodePtr ctiNativeTailCall(VM*);   
    5859    MacroAssemblerCodePtr ctiNativeTailCallWithoutSavedTags(VM*);   
    5960
    6061    MacroAssemblerCodeRef ctiStub(VM*, ThunkGenerator);
     62    JITEntryPointsWithRef jitEntryStub(VM*, JITEntryGenerator);
     63    JITJSCallThunkEntryPointsWithRef jitCallThunkEntryStub(VM*, JITCallThunkEntryGenerator);
    6164
    6265    NativeExecutable* hostFunctionStub(VM*, NativeFunction, NativeFunction constructor, const String& name);
    63     NativeExecutable* hostFunctionStub(VM*, NativeFunction, NativeFunction constructor, ThunkGenerator, Intrinsic, const DOMJIT::Signature*, const String& name);
    64     NativeExecutable* hostFunctionStub(VM*, NativeFunction, ThunkGenerator, Intrinsic, const String& name);
     66    NativeExecutable* hostFunctionStub(VM*, NativeFunction, NativeFunction constructor, JITEntryGenerator, Intrinsic, const DOMJIT::Signature*, const String& name);
     67    NativeExecutable* hostFunctionStub(VM*, NativeFunction, JITEntryGenerator, Intrinsic, const String& name);
    6568
    6669    void clearHostFunctionStubs();
     
    7174    typedef HashMap<ThunkGenerator, MacroAssemblerCodeRef> CTIStubMap;
    7275    CTIStubMap m_ctiStubMap;
     76    typedef HashMap<JITEntryGenerator, JITEntryPointsWithRef> JITEntryStubMap;
     77    JITEntryStubMap m_jitEntryStubMap;
     78    typedef HashMap<JITCallThunkEntryGenerator, JITJSCallThunkEntryPointsWithRef> JITCallThunkEntryStubMap;
     79    JITCallThunkEntryStubMap m_jitCallThunkEntryStubMap;
    7380
    7481    typedef std::tuple<NativeFunction, NativeFunction, String> HostFunctionKey;
  • trunk/Source/JavaScriptCore/jit/JSInterfaceJIT.h

    r206525 r209653  
    6464        Jump emitJumpIfNumber(RegisterID);
    6565        Jump emitJumpIfNotNumber(RegisterID);
     66        Jump emitJumpIfNotInt32(RegisterID reg);
    6667        void emitTagInt(RegisterID src, RegisterID dest);
    6768#endif
     
    164165    }
    165166   
     167    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitJumpIfNotInt32(RegisterID reg)
     168    {
     169        Jump result = branch64(Below, reg, tagTypeNumberRegister);
     170        zeroExtend32ToPtr(reg, reg);
     171        return result;
     172    }
     173
    166174    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadInt32(unsigned virtualRegisterIndex, RegisterID dst)
    167175    {
    168176        load64(addressFor(virtualRegisterIndex), dst);
    169         Jump result = branch64(Below, dst, tagTypeNumberRegister);
    170         zeroExtend32ToPtr(dst, dst);
    171         return result;
     177        return emitJumpIfNotInt32(dst);
    172178    }
    173179
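The new emitJumpIfNotInt32() factors the JSVALUE64 int32 test out of emitLoadInt32() so it can be applied to a value that is already in a register, as the register-argument thunks do. In the 64-bit value encoding an int32 has all of the TagTypeNumber bits set, so anything numerically below TagTypeNumber is not a boxed int32. A plain C++ rendering of that test, with the constant shown for illustration:

    #include <cstdint>

    // JSVALUE64 numeric tag; treat the exact constant as illustrative.
    constexpr uint64_t TagTypeNumber = 0xffff000000000000ull;

    bool isBoxedInt32(uint64_t encodedJSValue)
    {
        // Mirrors branch64(Below, reg, tagTypeNumberRegister): "below" means not an int32.
        return encodedJSValue >= TagTypeNumber;
    }

    int32_t unboxInt32(uint64_t encodedJSValue)
    {
        // Mirrors zeroExtend32ToPtr(): the low 32 bits hold the int32 payload.
        return static_cast<int32_t>(encodedJSValue & 0xffffffffu);
    }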
  • trunk/Source/JavaScriptCore/jit/RegisterSet.cpp

    r209560 r209653  
    160160}
    161161
     162RegisterSet RegisterSet::argumentRegisters()
     163{
     164    RegisterSet result;
     165#if USE(JSVALUE64)
     166    for (unsigned argumentIndex = 0; argumentIndex < NUMBER_OF_ARGUMENT_REGISTERS; argumentIndex++) {
     167        GPRReg argumentReg = argumentRegisterFor(argumentIndex);
     168
     169        if (argumentReg != InvalidGPRReg)
     170            result.set(argumentReg);
     171    }
     172#endif
     173    return result;
     174}
     175
    162176RegisterSet RegisterSet::vmCalleeSaveRegisters()
    163177{
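RegisterSet::argumentRegisters() above just collects the valid JS-call argument registers into a set. The same loop as a stand-alone sketch, with std::set and a made-up register table in place of RegisterSet and the port-specific GPRInfo mapping (the ids and the register count are assumptions for the sketch):

    #include <set>

    constexpr int InvalidGPRReg = -1;
    constexpr unsigned NUMBER_OF_ARGUMENT_REGISTERS = 6; // e.g. x86-64 SysV

    int argumentRegisterFor(unsigned index)
    {
        // Hypothetical register ids; the real mapping comes from GPRInfo.
        static const int registers[NUMBER_OF_ARGUMENT_REGISTERS] = { 0, 1, 2, 3, 4, 5 };
        return index < NUMBER_OF_ARGUMENT_REGISTERS ? registers[index] : InvalidGPRReg;
    }

    std::set<int> argumentRegistersSketch()
    {
        std::set<int> result;
        for (unsigned argumentIndex = 0; argumentIndex < NUMBER_OF_ARGUMENT_REGISTERS; argumentIndex++) {
            int argumentReg = argumentRegisterFor(argumentIndex);
            if (argumentReg != InvalidGPRReg)
                result.insert(argumentReg);
        }
        return result;
    }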
  • trunk/Source/JavaScriptCore/jit/RegisterSet.h

    r207434 r209653  
    5050    static RegisterSet specialRegisters(); // The union of stack, reserved hardware, and runtime registers.
    5151    JS_EXPORT_PRIVATE static RegisterSet calleeSaveRegisters();
     52    static RegisterSet argumentRegisters(); // Registers used to pass arguments when making JS Calls
    5253    static RegisterSet vmCalleeSaveRegisters(); // Callee save registers that might be saved and used by any tier.
    5354    static RegisterSet llintBaselineCalleeSaveRegisters(); // Registers saved and used by the LLInt.
  • trunk/Source/JavaScriptCore/jit/Repatch.cpp

    r209597 r209653  
    541541}
    542542
    543 static void linkSlowFor(VM*, CallLinkInfo& callLinkInfo, MacroAssemblerCodeRef codeRef)
    544 {
    545     MacroAssembler::repatchNearCall(callLinkInfo.callReturnLocation(), CodeLocationLabel(codeRef.code()));
    546 }
    547 
    548 static void linkSlowFor(VM* vm, CallLinkInfo& callLinkInfo, ThunkGenerator generator)
    549 {
    550     linkSlowFor(vm, callLinkInfo, vm->getCTIStub(generator));
     543static void linkSlowFor(VM*, CallLinkInfo& callLinkInfo, JITJSCallThunkEntryPointsWithRef thunkEntryPoints)
     544{
     545    MacroAssembler::repatchNearCall(callLinkInfo.callReturnLocation(), CodeLocationLabel(thunkEntryPoints.entryFor(callLinkInfo.argumentsLocation())));
     546}
     547
     548static void linkSlowFor(VM* vm, CallLinkInfo& callLinkInfo, JITCallThunkEntryGenerator generator)
     549{
     550    linkSlowFor(vm, callLinkInfo, vm->getJITCallThunkEntryStub(generator));
    551551}
    552552
    553553static void linkSlowFor(VM* vm, CallLinkInfo& callLinkInfo)
    554554{
    555     MacroAssemblerCodeRef virtualThunk = virtualThunkFor(vm, callLinkInfo);
     555    JITJSCallThunkEntryPointsWithRef virtualThunk = virtualThunkFor(vm, callLinkInfo);
    556556    linkSlowFor(vm, callLinkInfo, virtualThunk);
    557     callLinkInfo.setSlowStub(createJITStubRoutine(virtualThunk, *vm, nullptr, true));
     557    callLinkInfo.setSlowStub(createJITStubRoutine(virtualThunk.codeRef(), *vm, nullptr, true));
    558558}
    559559
     
    645645}
    646646
    647 static void revertCall(VM* vm, CallLinkInfo& callLinkInfo, MacroAssemblerCodeRef codeRef)
     647static void revertCall(VM* vm, CallLinkInfo& callLinkInfo, JITJSCallThunkEntryPointsWithRef codeRef)
    648648{
    649649    if (callLinkInfo.isDirect()) {
     
    672672        dataLog("Unlinking call at ", callLinkInfo.hotPathOther(), "\n");
    673673   
    674     revertCall(&vm, callLinkInfo, vm.getCTIStub(linkCallThunkGenerator));
     674    revertCall(&vm, callLinkInfo, vm.getJITCallThunkEntryStub(linkCallThunkGenerator));
    675675}
    676676
     
    684684        dataLog("Linking virtual call at ", *callerCodeBlock, " ", callerFrame->codeOrigin(), "\n");
    685685
    686     MacroAssemblerCodeRef virtualThunk = virtualThunkFor(&vm, callLinkInfo);
     686    JITJSCallThunkEntryPointsWithRef virtualThunk = virtualThunkFor(&vm, callLinkInfo);
    687687    revertCall(&vm, callLinkInfo, virtualThunk);
    688     callLinkInfo.setSlowStub(createJITStubRoutine(virtualThunk, vm, nullptr, true));
     688    callLinkInfo.setSlowStub(createJITStubRoutine(virtualThunk.codeRef(), vm, nullptr, true));
    689689}
    690690
     
    741741   
    742742    Vector<PolymorphicCallCase> callCases;
     743    size_t callerArgumentCount = exec->argumentCountIncludingThis();
    743744   
    744745    // Figure out what our cases are.
     
    752753            // If we cannot handle a callee, either because we don't have a CodeBlock or because arity mismatch,
    753754            // assume that it's better for this whole thing to be a virtual call.
    754             if (!codeBlock || exec->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo.isVarargs()) {
     755            if (!codeBlock || callerArgumentCount < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo.isVarargs()) {
    755756                linkVirtualFor(exec, callLinkInfo);
    756757                return;
     
    776777   
    777778    GPRReg calleeGPR = static_cast<GPRReg>(callLinkInfo.calleeGPR());
    778    
     779
     780    if (callLinkInfo.argumentsInRegisters())
     781        ASSERT(calleeGPR == argumentRegisterForCallee());
     782
    779783    CCallHelpers stubJit(&vm, callerCodeBlock);
    780784   
     
    798802        if (frameShuffler)
    799803            scratchGPR = frameShuffler->acquireGPR();
     804        else if (callLinkInfo.argumentsInRegisters())
     805            scratchGPR = GPRInfo::nonArgGPR0;
    800806        else
    801807            scratchGPR = AssemblyHelpers::selectScratchGPR(calleeGPR);
     
    863869    if (frameShuffler)
    864870        fastCountsBaseGPR = frameShuffler->acquireGPR();
     871    else if (callLinkInfo.argumentsInRegisters())
     872#if CPU(ARM64)
     873        fastCountsBaseGPR = GPRInfo::nonArgGPR1;
     874#else
     875        fastCountsBaseGPR = GPRInfo::regT0;
     876#endif
    865877    else {
    866878        fastCountsBaseGPR =
    867879            AssemblyHelpers::selectScratchGPR(calleeGPR, comparisonValueGPR, GPRInfo::regT3);
    868880    }
    869     stubJit.move(CCallHelpers::TrustedImmPtr(fastCounts.get()), fastCountsBaseGPR);
     881    if (fastCounts)
     882        stubJit.move(CCallHelpers::TrustedImmPtr(fastCounts.get()), fastCountsBaseGPR);
    870883    if (!frameShuffler && callLinkInfo.isTailCall())
    871884        stubJit.emitRestoreCalleeSaves();
     885
     886    incrementCounter(&stubJit, VM::PolymorphicCall);
     887
    872888    BinarySwitch binarySwitch(comparisonValueGPR, caseValues, BinarySwitch::IntPtr);
    873889    CCallHelpers::JumpList done;
     
    878894       
    879895        ASSERT(variant.executable()->hasJITCodeForCall());
     896
     897        EntryPointType entryType = StackArgsArityCheckNotRequired;
     898#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     899        if (callLinkInfo.argumentsInRegisters()) {
     900            CodeBlock* codeBlock = callCases[caseIndex].codeBlock();
     901            if (codeBlock) {
     902                size_t calleeArgumentCount = static_cast<size_t>(codeBlock->numParameters());
     903                if (calleeArgumentCount == callerArgumentCount || calleeArgumentCount >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
     904                    entryType = RegisterArgsArityCheckNotRequired;
     905                else {
     906                    EntryPointType entryForArgCount = JITEntryPoints::registerEntryTypeForArgumentCount(callerArgumentCount);
     907                    MacroAssemblerCodePtr codePtr =
     908                        variant.executable()->generatedJITCodeForCall()->addressForCall(entryForArgCount);
     909                    if (codePtr)
     910                        entryType = entryForArgCount;
     911                    else
     912                        entryType = RegisterArgsPossibleExtraArgs;
     913                }
     914            } else
     915                entryType = RegisterArgsPossibleExtraArgs;
     916        }
     917#endif
     918
    880919        MacroAssemblerCodePtr codePtr =
    881             variant.executable()->generatedJITCodeForCall()->addressForCall(ArityCheckNotRequired);
     920            variant.executable()->generatedJITCodeForCall()->addressForCall(entryType);
     921        ASSERT(codePtr);
    882922       
    883923        if (fastCounts) {
     
    887927        }
    888928        if (frameShuffler) {
    889             CallFrameShuffler(stubJit, frameShuffler->snapshot()).prepareForTailCall();
     929            CallFrameShuffler(stubJit, frameShuffler->snapshot(callLinkInfo.argumentsLocation())).prepareForTailCall();
    890930            calls[caseIndex].call = stubJit.nearTailCall();
    891931        } else if (callLinkInfo.isTailCall()) {
     
    908948        frameShuffler->setCalleeJSValueRegs(JSValueRegs(GPRInfo::regT1, GPRInfo::regT0));
    909949#else
    910         frameShuffler->setCalleeJSValueRegs(JSValueRegs(GPRInfo::regT0));
     950        if (callLinkInfo.argumentsLocation() == StackArgs)
     951            frameShuffler->setCalleeJSValueRegs(JSValueRegs(argumentRegisterForCallee()));
    911952#endif
    912953        frameShuffler->prepareForSlowPath();
    913954    } else {
    914         stubJit.move(calleeGPR, GPRInfo::regT0);
    915955#if USE(JSVALUE32_64)
    916956        stubJit.move(CCallHelpers::TrustedImm32(JSValue::CellTag), GPRInfo::regT1);
    917957#endif
    918958    }
    919     stubJit.move(CCallHelpers::TrustedImmPtr(&callLinkInfo), GPRInfo::regT2);
    920     stubJit.move(CCallHelpers::TrustedImmPtr(callLinkInfo.callReturnLocation().executableAddress()), GPRInfo::regT4);
    921    
    922     stubJit.restoreReturnAddressBeforeReturn(GPRInfo::regT4);
     959    stubJit.move(CCallHelpers::TrustedImmPtr(callLinkInfo.callReturnLocation().executableAddress()), GPRInfo::nonArgGPR1);
     960    stubJit.restoreReturnAddressBeforeReturn(GPRInfo::nonArgGPR1);
     961
     962    stubJit.move(CCallHelpers::TrustedImmPtr(&callLinkInfo), GPRInfo::nonArgGPR0);
    923963    AssemblyHelpers::Jump slow = stubJit.jump();
    924964       
     
    941981    else
    942982        patchBuffer.link(done, callLinkInfo.hotPathOther().labelAtOffset(0));
    943     patchBuffer.link(slow, CodeLocationLabel(vm.getCTIStub(linkPolymorphicCallThunkGenerator).code()));
     983    patchBuffer.link(slow, CodeLocationLabel(vm.getJITCallThunkEntryStub(linkPolymorphicCallThunkGenerator).entryFor(callLinkInfo.argumentsLocation())));
    944984   
    945985    auto stubRoutine = adoptRef(*new PolymorphicCallStubRoutine(
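The per-case entry-type choice in the polymorphic call stub above can be summarized as: exact-arity callees, and callees with at least as many parameters as there are argument registers, take the no-arity-check register entry; otherwise the stub tries the entry specialized for the caller's argument count and falls back to the possible-extra-args entry; callees without a CodeBlock and stack-args callers use their generic entries. A compressed sketch of that decision with illustrative names (RegisterArgsWithExactArgCount stands in for whatever count-specific entry registerEntryTypeForArgumentCount() would return):

    #include <cstddef>

    enum EntryPointType {
        StackArgsArityCheckNotRequired,
        RegisterArgsArityCheckNotRequired,
        RegisterArgsPossibleExtraArgs,
        RegisterArgsWithExactArgCount
    };

    constexpr std::size_t NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS = 4; // assumption for the sketch

    EntryPointType entryTypeForCase(bool argumentsInRegisters, bool hasCodeBlock,
        std::size_t calleeParameterCount, std::size_t callerArgumentCount, bool haveCountSpecificEntry)
    {
        if (!argumentsInRegisters)
            return StackArgsArityCheckNotRequired;
        if (!hasCodeBlock)
            return RegisterArgsPossibleExtraArgs;
        if (calleeParameterCount == callerArgumentCount
            || calleeParameterCount >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
            return RegisterArgsArityCheckNotRequired;
        return haveCountSpecificEntry ? RegisterArgsWithExactArgCount : RegisterArgsPossibleExtraArgs;
    }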
  • trunk/Source/JavaScriptCore/jit/SpecializedThunkJIT.h

    r208063 r209653  
    2929
    3030#include "JIT.h"
     31#include "JITEntryPoints.h"
    3132#include "JITInlines.h"
    3233#include "JSInterfaceJIT.h"
     
    3839    public:
    3940        static const int ThisArgument = -1;
    40         SpecializedThunkJIT(VM* vm, int expectedArgCount)
     41        enum ArgLocation { OnStack, InRegisters };
     42
     43        SpecializedThunkJIT(VM* vm, int expectedArgCount, AssemblyHelpers::SpillRegisterType spillType = AssemblyHelpers::SpillExactly, ArgLocation argLocation = OnStack)
    4144            : JSInterfaceJIT(vm)
    4245        {
    43             emitFunctionPrologue();
    44             emitSaveThenMaterializeTagRegisters();
    45             // Check that we have the expected number of arguments
    46             m_failures.append(branch32(NotEqual, payloadFor(CallFrameSlot::argumentCount), TrustedImm32(expectedArgCount + 1)));
     46#if !NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     47            UNUSED_PARAM(spillType);
     48            UNUSED_PARAM(argLocation);
     49#else
     50            if (argLocation == InRegisters) {
     51                m_stackArgumentsEntry = label();
     52                fillArgumentRegistersFromFrameBeforePrologue();
     53                m_registerArgumentsEntry = label();
     54                emitFunctionPrologue();
     55                emitSaveThenMaterializeTagRegisters();
     56                // Check that we have the expected number of arguments
     57                m_failures.append(branch32(NotEqual, argumentRegisterForArgumentCount(), TrustedImm32(expectedArgCount + 1)));
     58            } else {
     59                spillArgumentRegistersToFrameBeforePrologue(expectedArgCount + 1, spillType);
     60                m_stackArgumentsEntry = label();
     61#endif
     62                emitFunctionPrologue();
     63                emitSaveThenMaterializeTagRegisters();
     64                // Check that we have the expected number of arguments
     65                m_failures.append(branch32(NotEqual, payloadFor(CallFrameSlot::argumentCount), TrustedImm32(expectedArgCount + 1)));
     66#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     67                }
     68#endif
    4769        }
    4870       
     
    5072            : JSInterfaceJIT(vm)
    5173        {
     74#if USE(JSVALUE64)
     75            spillArgumentRegistersToFrameBeforePrologue();
     76            m_stackArgumentsEntry = Label();
     77#endif
    5278            emitFunctionPrologue();
    5379            emitSaveThenMaterializeTagRegisters();
     
    95121            m_failures.append(conversionFailed);
    96122        }
     123
     124        void checkJSStringArgument(VM& vm, RegisterID argument)
     125        {
     126            m_failures.append(emitJumpIfNotJSCell(argument));
     127            m_failures.append(branchStructure(NotEqual,
     128                Address(argument, JSCell::structureIDOffset()),
     129                vm.stringStructure.get()));
     130        }
    97131       
    98132        void appendFailure(const Jump& failure)
     
    100134            m_failures.append(failure);
    101135        }
     136
     137        void linkFailureHere()
     138        {
     139            m_failures.link(this);
     140            m_failures.clear();
     141        }
     142
    102143#if USE(JSVALUE64)
    103144        void returnJSValue(RegisterID src)
     
    165206        }
    166207       
    167         MacroAssemblerCodeRef finalize(MacroAssemblerCodePtr fallback, const char* thunkKind)
     208        JITEntryPointsWithRef finalize(MacroAssemblerCodePtr fallback, const char* thunkKind)
    168209        {
    169210            LinkBuffer patchBuffer(*m_vm, *this, GLOBAL_THUNK_ID);
     
    171212            for (unsigned i = 0; i < m_calls.size(); i++)
    172213                patchBuffer.link(m_calls[i].first, m_calls[i].second);
    173             return FINALIZE_CODE(patchBuffer, ("Specialized thunk for %s", thunkKind));
     214
     215            MacroAssemblerCodePtr stackEntry;
     216            if (m_stackArgumentsEntry.isSet())
     217                stackEntry = patchBuffer.locationOf(m_stackArgumentsEntry);
     218            MacroAssemblerCodePtr registerEntry;
     219            if (m_registerArgumentsEntry.isSet())
     220                registerEntry = patchBuffer.locationOf(m_registerArgumentsEntry);
     221
     222            MacroAssemblerCodeRef entry = FINALIZE_CODE(patchBuffer, ("Specialized thunk for %s", thunkKind));
     223
     224            if (m_stackArgumentsEntry.isSet()) {
     225                if (m_registerArgumentsEntry.isSet())
     226                    return JITEntryPointsWithRef(entry, registerEntry, registerEntry, registerEntry, stackEntry, stackEntry);
     227                return JITEntryPointsWithRef(entry, entry.code(), entry.code(), entry.code(), stackEntry, stackEntry);
     228            }
     229
     230            return JITEntryPointsWithRef(entry, entry.code(), entry.code());
    174231        }
    175232
     
    208265       
    209266        MacroAssembler::JumpList m_failures;
     267        MacroAssembler::Label m_registerArgumentsEntry;
     268        MacroAssembler::Label m_stackArgumentsEntry;
    210269        Vector<std::pair<Call, FunctionPtr>> m_calls;
    211270    };
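finalize() above now returns a JITEntryPointsWithRef that carries separate register-args and stack-args entry pointers alongside the code reference, with missing entries reusing the start of the code. A simplified stand-in for what gets packaged (not the real JITEntryPointsWithRef layout):

    struct EntryPointsSketch {
        void* codeStart;          // stand-in for the MacroAssemblerCodeRef
        void* registerArgsEntry;  // entry used when arguments arrive in registers
        void* stackArgsEntry;     // entry used when arguments arrive on the stack
    };

    EntryPointsSketch makeEntryPoints(void* codeStart, void* registerEntry, void* stackEntry)
    {
        // A thunk without a dedicated register or stack entry just reuses the start
        // of its code, matching the constructions above that pass entry.code().
        if (!registerEntry)
            registerEntry = codeStart;
        if (!stackEntry)
            stackEntry = codeStart;
        return { codeStart, registerEntry, stackEntry };
    }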
  • trunk/Source/JavaScriptCore/jit/ThunkGenerator.h

    r206525 r209653  
    3131class VM;
    3232class MacroAssemblerCodeRef;
     33class JITEntryPointsWithRef;
     34class JITJSCallThunkEntryPointsWithRef;
    3335
    3436typedef MacroAssemblerCodeRef (*ThunkGenerator)(VM*);
     37typedef JITEntryPointsWithRef (*JITEntryGenerator)(VM*);
     38typedef JITJSCallThunkEntryPointsWithRef (*JITCallThunkEntryGenerator)(VM*);
    3539
    3640} // namespace JSC
  • trunk/Source/JavaScriptCore/jit/ThunkGenerators.cpp

    r203081 r209653  
    7878}
    7979
     80static void createRegisterArgumentsSpillEntry(CCallHelpers& jit, MacroAssembler::Label entryPoints[ThunkEntryPointTypeCount])
     81{
     82#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     83    for (unsigned argIndex = NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex-- > 0;) {
     84        entryPoints[thunkEntryPointTypeFor(argIndex + 1)] = jit.label();
     85        jit.emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex);
     86    }
     87
     88    jit.emitPutToCallFrameHeaderBeforePrologue(argumentRegisterForCallee(), CallFrameSlot::callee);
     89    jit.emitPutToCallFrameHeaderBeforePrologue(argumentRegisterForArgumentCount(), CallFrameSlot::argumentCount);
     90#else
     91    UNUSED_PARAM(jit);
     92    UNUSED_PARAM(entryPoints);
     93#endif
     94    entryPoints[StackArgs] = jit.label();
     95}
     96
    8097static void slowPathFor(
    8198    CCallHelpers& jit, VM* vm, Sprt_JITOperation_ECli slowPathFunction)
     
    89106    // and space for the 16 byte return area.
    90107    jit.addPtr(CCallHelpers::TrustedImm32(-maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister);
    91     jit.move(GPRInfo::regT2, GPRInfo::argumentGPR2);
     108    jit.move(GPRInfo::nonArgGPR0, GPRInfo::argumentGPR2);
    92109    jit.addPtr(CCallHelpers::TrustedImm32(32), CCallHelpers::stackPointerRegister, GPRInfo::argumentGPR0);
    93110    jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
     
    101118    if (maxFrameExtentForSlowPathCall)
    102119        jit.addPtr(CCallHelpers::TrustedImm32(-maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister);
    103     jit.setupArgumentsWithExecState(GPRInfo::regT2);
     120    jit.setupArgumentsWithExecState(GPRInfo::nonArgGPR0);
    104121    jit.move(CCallHelpers::TrustedImmPtr(bitwise_cast<void*>(slowPathFunction)), GPRInfo::nonArgGPR0);
    105122    emitPointerValidation(jit, GPRInfo::nonArgGPR0);
     
    128145}
    129146
    130 MacroAssemblerCodeRef linkCallThunkGenerator(VM* vm)
     147JITJSCallThunkEntryPointsWithRef linkCallThunkGenerator(VM* vm)
    131148{
    132149    // The return address is on the stack or in the link register. We will hence
     
    136153    // been adjusted, and all other registers to be available for use.
    137154    CCallHelpers jit(vm);
    138    
     155
     156    MacroAssembler::Label entryPoints[ThunkEntryPointTypeCount];
     157
     158    createRegisterArgumentsSpillEntry(jit, entryPoints);
    139159    slowPathFor(jit, vm, operationLinkCall);
    140160   
    141161    LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
    142     return FINALIZE_CODE(patchBuffer, ("Link call slow path thunk"));
      162    MacroAssemblerCodeRef codeRef = FINALIZE_CODE(patchBuffer, ("Link call slow path thunk"));
     163    JITJSCallThunkEntryPointsWithRef callEntryPoints = JITJSCallThunkEntryPointsWithRef(codeRef);
     164
     165    for (unsigned entryIndex = StackArgs; entryIndex <  ThunkEntryPointTypeCount; entryIndex++) {
     166        callEntryPoints.setEntryFor(static_cast<ThunkEntryPointType>(entryIndex),
     167            patchBuffer.locationOf(entryPoints[entryIndex]));
     168    }
     169
     170    return callEntryPoints;
     171}
     172
     173JITJSCallThunkEntryPointsWithRef linkDirectCallThunkGenerator(VM* vm)
     174{
     175    // The return address is on the stack or in the link register. We will hence
     176    // save the return address to the call frame while we make a C++ function call
     177    // to perform linking and lazy compilation if necessary. We expect the CallLinkInfo
     178    // to be in GPRInfo::nonArgGPR0, the callee to be in argumentRegisterForCallee(),
     179    // the CallFrame to have already been adjusted, and arguments in argument registers
     180    // and/or in the stack as appropriate.
     181    CCallHelpers jit(vm);
     182   
     183    MacroAssembler::Label entryPoints[ThunkEntryPointTypeCount];
     184
     185    createRegisterArgumentsSpillEntry(jit, entryPoints);
     186
     187    jit.move(GPRInfo::callFrameRegister, GPRInfo::nonArgGPR1); // Save callee's frame pointer
     188    jit.emitFunctionPrologue();
     189    jit.storePtr(GPRInfo::callFrameRegister, &vm->topCallFrame);
     190
     191    if (maxFrameExtentForSlowPathCall)
     192        jit.addPtr(CCallHelpers::TrustedImm32(-maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister);
     193    jit.setupArguments(GPRInfo::nonArgGPR1, GPRInfo::nonArgGPR0, argumentRegisterForCallee());
     194    jit.move(CCallHelpers::TrustedImmPtr(bitwise_cast<void*>(operationLinkDirectCall)), GPRInfo::nonArgGPR0);
     195    emitPointerValidation(jit, GPRInfo::nonArgGPR0);
     196    jit.call(GPRInfo::nonArgGPR0);
     197    if (maxFrameExtentForSlowPathCall)
     198        jit.addPtr(CCallHelpers::TrustedImm32(maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister);
     199   
     200    jit.emitFunctionEpilogue();
     201
     202#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     203    jit.emitGetFromCallFrameHeaderBeforePrologue(CallFrameSlot::callee, argumentRegisterForCallee());
     204    GPRReg argCountReg = argumentRegisterForArgumentCount();
     205    jit.emitGetPayloadFromCallFrameHeaderBeforePrologue(CallFrameSlot::argumentCount, argCountReg);
     206
     207    // load "this"
     208    jit.emitGetFromCallFrameArgumentBeforePrologue(0, argumentRegisterForFunctionArgument(0));
     209
     210    CCallHelpers::Jump fillUndefined[NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS];
     211   
     212    for (unsigned argIndex = 1; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) {
     213        fillUndefined[argIndex] = jit.branch32(MacroAssembler::BelowOrEqual, argCountReg, MacroAssembler::TrustedImm32(argIndex));
     214        jit.emitGetFromCallFrameArgumentBeforePrologue(argIndex, argumentRegisterForFunctionArgument(argIndex));
     215    }
     216
     217    CCallHelpers::Jump doneFilling = jit.jump();
     218
     219    for (unsigned argIndex = 1; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) {
     220        fillUndefined[argIndex].link(&jit);
     221        jit.move(CCallHelpers::TrustedImm64(JSValue::encode(jsUndefined())), argumentRegisterForFunctionArgument(argIndex));
     222    }
     223
     224    doneFilling.link(&jit);
     225#endif
     226
     227
     228    jit.ret();
     229
     230    LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
      231    MacroAssemblerCodeRef codeRef = FINALIZE_CODE(patchBuffer, ("Link direct call thunk"));
     232    JITJSCallThunkEntryPointsWithRef callEntryPoints = JITJSCallThunkEntryPointsWithRef(codeRef);
     233   
     234    for (unsigned entryIndex = StackArgs; entryIndex <  ThunkEntryPointTypeCount; entryIndex++) {
     235        callEntryPoints.setEntryFor(static_cast<ThunkEntryPointType>(entryIndex),
     236            patchBuffer.locationOf(entryPoints[entryIndex]));
     237    }
     238   
     239    return callEntryPoints;
    143240}
    144241
    145242// For closure optimizations, we only include calls, since if you're using closures for
    146243// object construction then you're going to lose big time anyway.
    147 MacroAssemblerCodeRef linkPolymorphicCallThunkGenerator(VM* vm)
     244JITJSCallThunkEntryPointsWithRef linkPolymorphicCallThunkGenerator(VM* vm)
    148245{
    149246    CCallHelpers jit(vm);
    150247   
     248    MacroAssembler::Label entryPoints[ThunkEntryPointTypeCount];
     249
     250    createRegisterArgumentsSpillEntry(jit, entryPoints);
     251
    151252    slowPathFor(jit, vm, operationLinkPolymorphicCall);
    152253   
    153254    LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
    154     return FINALIZE_CODE(patchBuffer, ("Link polymorphic call slow path thunk"));
      255    MacroAssemblerCodeRef codeRef = FINALIZE_CODE(patchBuffer, ("Link polymorphic call slow path thunk"));
     256    JITJSCallThunkEntryPointsWithRef callEntryPoints = JITJSCallThunkEntryPointsWithRef(codeRef);
     257   
     258    for (unsigned entryIndex = StackArgs; entryIndex <  ThunkEntryPointTypeCount; entryIndex++) {
     259        callEntryPoints.setEntryFor(static_cast<ThunkEntryPointType>(entryIndex),
     260            patchBuffer.locationOf(entryPoints[entryIndex]));
     261    }
     262   
     263    return callEntryPoints;
    155264}
    156265
     
    159268// virtual calls by using the shuffler.
    160269// https://bugs.webkit.org/show_bug.cgi?id=148831
    161 MacroAssemblerCodeRef virtualThunkFor(VM* vm, CallLinkInfo& callLinkInfo)
    162 {
    163     // The callee is in regT0 (for JSVALUE32_64, the tag is in regT1).
    164     // The return address is on the stack, or in the link register. We will hence
    165     // jump to the callee, or save the return address to the call frame while we
    166     // make a C++ function call to the appropriate JIT operation.
     270JITJSCallThunkEntryPointsWithRef virtualThunkFor(VM* vm, CallLinkInfo& callLinkInfo)
     271{
     272    // The callee is in argumentRegisterForCallee() (for JSVALUE32_64, it is in regT1:regT0).
     273    // The CallLinkInfo is in GPRInfo::nonArgGPR0.
     274    // The return address is on the stack, or in the link register.
      275    // We will hence jump to the callee, or save the return address to the call
     276    // frame while we make a C++ function call to the appropriate JIT operation.
    167277
    168278    CCallHelpers jit(vm);
    169279   
    170280    CCallHelpers::JumpList slowCase;
    171    
    172     // This is a slow path execution, and regT2 contains the CallLinkInfo. Count the
    173     // slow path execution for the profiler.
     281
     282    GPRReg calleeReg = argumentRegisterForCallee();
     283#if USE(JSVALUE32_64)
     284    GPRReg calleeTagReg = GPRInfo::regT1;
     285#endif
     286    GPRReg targetReg = GPRInfo::nonArgGPR1;
     287    // This is the CallLinkInfo* on entry and used later as a temp.
     288    GPRReg callLinkInfoAndTempReg = GPRInfo::nonArgGPR0;
     289
     290    jit.fillArgumentRegistersFromFrameBeforePrologue();
     291
     292#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     293    MacroAssembler::Label registerEntry = jit.label();
     294#endif
     295
     296    incrementCounter(&jit, VM::VirtualCall);
     297
     298    // This is a slow path execution. Count the slow path execution for the profiler.
    174299    jit.add32(
    175300        CCallHelpers::TrustedImm32(1),
    176         CCallHelpers::Address(GPRInfo::regT2, CallLinkInfo::offsetOfSlowPathCount()));
     301        CCallHelpers::Address(callLinkInfoAndTempReg, CallLinkInfo::offsetOfSlowPathCount()));
    177302
    178303    // FIXME: we should have a story for eliminating these checks. In many cases,
     
    182307    slowCase.append(
    183308        jit.branchTest64(
    184             CCallHelpers::NonZero, GPRInfo::regT0, GPRInfo::tagMaskRegister));
     309            CCallHelpers::NonZero, calleeReg, GPRInfo::tagMaskRegister));
    185310#else
    186311    slowCase.append(
    187312        jit.branch32(
    188             CCallHelpers::NotEqual, GPRInfo::regT1,
     313            CCallHelpers::NotEqual, calleeTagReg,
    189314            CCallHelpers::TrustedImm32(JSValue::CellTag)));
    190315#endif
    191     slowCase.append(jit.branchIfNotType(GPRInfo::regT0, JSFunctionType));
     316    slowCase.append(jit.branchIfNotType(calleeReg, JSFunctionType));
    192317   
    193318    // Now we know we have a JSFunction.
    194319   
    195320    jit.loadPtr(
    196         CCallHelpers::Address(GPRInfo::regT0, JSFunction::offsetOfExecutable()),
    197         GPRInfo::regT4);
     321        CCallHelpers::Address(calleeReg, JSFunction::offsetOfExecutable()),
     322        targetReg);
    198323    jit.loadPtr(
    199324        CCallHelpers::Address(
    200             GPRInfo::regT4, ExecutableBase::offsetOfJITCodeWithArityCheckFor(
    201                 callLinkInfo.specializationKind())),
    202         GPRInfo::regT4);
    203     slowCase.append(jit.branchTestPtr(CCallHelpers::Zero, GPRInfo::regT4));
     325            targetReg, ExecutableBase::offsetOfEntryFor(
     326                callLinkInfo.specializationKind(),
     327                entryPointTypeFor(callLinkInfo.argumentsLocation()))),
     328        targetReg);
     329    slowCase.append(jit.branchTestPtr(CCallHelpers::Zero, targetReg));
    204330   
    205331    // Now we know that we have a CodeBlock, and we're committed to making a fast
     
    207333   
    208334    // Make a tail call. This will return back to JIT code.
    209     emitPointerValidation(jit, GPRInfo::regT4);
     335    emitPointerValidation(jit, targetReg);
    210336    if (callLinkInfo.isTailCall()) {
    211         jit.preserveReturnAddressAfterCall(GPRInfo::regT0);
    212         jit.prepareForTailCallSlow(GPRInfo::regT4);
     337        jit.spillArgumentRegistersToFrameBeforePrologue();
     338        jit.preserveReturnAddressAfterCall(callLinkInfoAndTempReg);
     339        jit.prepareForTailCallSlow(targetReg);
    213340    }
    214     jit.jump(GPRInfo::regT4);
    215 
     341    jit.jump(targetReg);
    216342    slowCase.link(&jit);
    217    
     343
     344    incrementCounter(&jit, VM::VirtualSlowCall);
     345
    218346    // Here we don't know anything, so revert to the full slow path.
     347    jit.spillArgumentRegistersToFrameBeforePrologue();
    219348   
    220349    slowPathFor(jit, vm, operationVirtualCall);
    221350   
    222351    LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
    223     return FINALIZE_CODE(
    224         patchBuffer,
      352    MacroAssemblerCodeRef codeRef = FINALIZE_CODE(patchBuffer,
    225353        ("Virtual %s slow path thunk",
    226354        callLinkInfo.callMode() == CallMode::Regular ? "call" : callLinkInfo.callMode() == CallMode::Tail ? "tail call" : "construct"));
     355    JITJSCallThunkEntryPointsWithRef callEntryPoints = JITJSCallThunkEntryPointsWithRef(codeRef);
     356
     357    callEntryPoints.setEntryFor(StackArgsEntry, codeRef.code());
     358
     359#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     360    MacroAssemblerCodePtr registerEntryPtr = patchBuffer.locationOf(registerEntry);
     361
     362    for (unsigned entryIndex = Register1ArgEntry; entryIndex <  ThunkEntryPointTypeCount; entryIndex++)
     363        callEntryPoints.setEntryFor(static_cast<ThunkEntryPointType>(entryIndex), registerEntryPtr);
     364#endif
     365
     366    return callEntryPoints;
    227367}
    228368
    229369enum ThunkEntryType { EnterViaCall, EnterViaJumpWithSavedTags, EnterViaJumpWithoutSavedTags };
    230370
    231 static MacroAssemblerCodeRef nativeForGenerator(VM* vm, CodeSpecializationKind kind, ThunkEntryType entryType = EnterViaCall)
     371static JITEntryPointsWithRef nativeForGenerator(VM* vm, CodeSpecializationKind kind, ThunkEntryType entryType = EnterViaCall)
    232372{
    233373    // FIXME: This should be able to log ShadowChicken prologue packets.
     
    238378    JSInterfaceJIT jit(vm);
    239379
     380    MacroAssembler::Label stackArgsEntry;
     381
    240382    switch (entryType) {
    241383    case EnterViaCall:
     384        jit.spillArgumentRegistersToFrameBeforePrologue();
     385
     386        stackArgsEntry = jit.label();
     387
    242388        jit.emitFunctionPrologue();
    243389        break;
     
    380526
    381527    LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
    382     return FINALIZE_CODE(patchBuffer, ("native %s%s trampoline", entryType == EnterViaJumpWithSavedTags ? "Tail With Saved Tags " : entryType == EnterViaJumpWithoutSavedTags ? "Tail Without Saved Tags " : "", toCString(kind).data()));
    383 }
    384 
    385 MacroAssemblerCodeRef nativeCallGenerator(VM* vm)
      528    MacroAssemblerCodeRef codeRef = FINALIZE_CODE(patchBuffer, ("native %s%s trampoline", entryType == EnterViaJumpWithSavedTags ? "Tail With Saved Tags " : entryType == EnterViaJumpWithoutSavedTags ? "Tail Without Saved Tags " : "", toCString(kind).data()));
     529    if (entryType == EnterViaCall) {
     530        MacroAssemblerCodePtr stackEntryPtr = patchBuffer.locationOf(stackArgsEntry);
     531
     532        return JITEntryPointsWithRef(codeRef, codeRef.code(), codeRef.code(), codeRef.code(), stackEntryPtr, stackEntryPtr);
     533    }
     534
     535    return JITEntryPointsWithRef(codeRef, codeRef.code(), codeRef.code());
     536
     537}
     538
     539JITEntryPointsWithRef nativeCallGenerator(VM* vm)
    386540{
    387541    return nativeForGenerator(vm, CodeForCall);
     
    390544MacroAssemblerCodeRef nativeTailCallGenerator(VM* vm)
    391545{
    392     return nativeForGenerator(vm, CodeForCall, EnterViaJumpWithSavedTags);
     546    return nativeForGenerator(vm, CodeForCall, EnterViaJumpWithSavedTags).codeRef();
    393547}
    394548
    395549MacroAssemblerCodeRef nativeTailCallWithoutSavedTagsGenerator(VM* vm)
    396550{
    397     return nativeForGenerator(vm, CodeForCall, EnterViaJumpWithoutSavedTags);
    398 }
    399 
    400 MacroAssemblerCodeRef nativeConstructGenerator(VM* vm)
     551    return nativeForGenerator(vm, CodeForCall, EnterViaJumpWithoutSavedTags).codeRef();
     552}
     553
     554JITEntryPointsWithRef nativeConstructGenerator(VM* vm)
    401555{
    402556    return nativeForGenerator(vm, CodeForConstruct);
     
    537691}
    538692
     693#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     694static void stringCharLoadRegCall(SpecializedThunkJIT& jit, VM* vm)
     695{
     696    // load string
     697    GPRReg thisReg = argumentRegisterForFunctionArgument(0);
     698    GPRReg indexReg = argumentRegisterForFunctionArgument(2);
     699    GPRReg lengthReg = argumentRegisterForFunctionArgument(3);
     700    GPRReg tempReg = SpecializedThunkJIT::nonArgGPR0;
     701
     702    jit.checkJSStringArgument(*vm, thisReg);
     703
     704    // Load string length to regT2, and start the process of loading the data pointer into regT0
     705    jit.load32(MacroAssembler::Address(thisReg, ThunkHelpers::jsStringLengthOffset()), lengthReg);
     706    jit.loadPtr(MacroAssembler::Address(thisReg, ThunkHelpers::jsStringValueOffset()), tempReg);
     707    jit.appendFailure(jit.branchTest32(MacroAssembler::Zero, tempReg));
     708
     709    // load index
     710    jit.move(argumentRegisterForFunctionArgument(1), indexReg);
     711    jit.appendFailure(jit.emitJumpIfNotInt32(indexReg));
     712
     713    // Do an unsigned compare to simultaneously filter negative indices as well as indices that are too large
     714    jit.appendFailure(jit.branch32(MacroAssembler::AboveOrEqual, indexReg, lengthReg));
     715
     716    // Load the character
     717    SpecializedThunkJIT::JumpList is16Bit;
     718    SpecializedThunkJIT::JumpList cont8Bit;
     719    // Load the string flags
     720    jit.loadPtr(MacroAssembler::Address(tempReg, StringImpl::flagsOffset()), lengthReg);
     721    jit.loadPtr(MacroAssembler::Address(tempReg, StringImpl::dataOffset()), tempReg);
     722    is16Bit.append(jit.branchTest32(MacroAssembler::Zero, lengthReg, MacroAssembler::TrustedImm32(StringImpl::flagIs8Bit())));
     723    jit.load8(MacroAssembler::BaseIndex(tempReg, indexReg, MacroAssembler::TimesOne, 0), tempReg);
     724    cont8Bit.append(jit.jump());
     725    is16Bit.link(&jit);
     726    jit.load16(MacroAssembler::BaseIndex(tempReg, indexReg, MacroAssembler::TimesTwo, 0), tempReg);
     727    cont8Bit.link(&jit);
     728}
     729#else
    539730static void stringCharLoad(SpecializedThunkJIT& jit, VM* vm)
    540731{
     
    566757    cont8Bit.link(&jit);
    567758}
     759#endif
    568760
    569761static void charToString(SpecializedThunkJIT& jit, VM* vm, MacroAssembler::RegisterID src, MacroAssembler::RegisterID dst, MacroAssembler::RegisterID scratch)
     
    575767}
    576768
    577 MacroAssemblerCodeRef charCodeAtThunkGenerator(VM* vm)
    578 {
     769JITEntryPointsWithRef charCodeAtThunkGenerator(VM* vm)
     770{
     771#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     772    SpecializedThunkJIT jit(vm, 1, AssemblyHelpers::SpillExactly, SpecializedThunkJIT::InRegisters);
     773    stringCharLoadRegCall(jit, vm);
     774    jit.returnInt32(SpecializedThunkJIT::nonArgGPR0);
     775    jit.linkFailureHere();
     776    jit.spillArgumentRegistersToFrame(2, AssemblyHelpers::SpillExactly);
     777    jit.appendFailure(jit.jump());
     778    return jit.finalize(vm->jitStubs->ctiNativeTailCall(vm), "charCodeAt");
     779#else
    579780    SpecializedThunkJIT jit(vm, 1);
    580781    stringCharLoad(jit, vm);
    581782    jit.returnInt32(SpecializedThunkJIT::regT0);
    582783    return jit.finalize(vm->jitStubs->ctiNativeTailCall(vm), "charCodeAt");
    583 }
    584 
    585 MacroAssemblerCodeRef charAtThunkGenerator(VM* vm)
    586 {
     784#endif
     785}
     786
     787JITEntryPointsWithRef charAtThunkGenerator(VM* vm)
     788{
     789#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     790    SpecializedThunkJIT jit(vm, 1, AssemblyHelpers::SpillExactly, SpecializedThunkJIT::InRegisters);
     791    stringCharLoadRegCall(jit, vm);
     792    charToString(jit, vm, SpecializedThunkJIT::nonArgGPR0, SpecializedThunkJIT::returnValueGPR, argumentRegisterForFunctionArgument(3));
     793    jit.returnJSCell(SpecializedThunkJIT::returnValueGPR);
     794    jit.linkFailureHere();
     795    jit.spillArgumentRegistersToFrame(2, AssemblyHelpers::SpillExactly);
     796    jit.appendFailure(jit.jump());
     797    return jit.finalize(vm->jitStubs->ctiNativeTailCall(vm), "charAt");
     798#else
    587799    SpecializedThunkJIT jit(vm, 1);
    588800    stringCharLoad(jit, vm);
     
    590802    jit.returnJSCell(SpecializedThunkJIT::regT0);
    591803    return jit.finalize(vm->jitStubs->ctiNativeTailCall(vm), "charAt");
    592 }
    593 
    594 MacroAssemblerCodeRef fromCharCodeThunkGenerator(VM* vm)
    595 {
    596     SpecializedThunkJIT jit(vm, 1);
     804#endif
     805}
     806
     807JITEntryPointsWithRef fromCharCodeThunkGenerator(VM* vm)
     808{
     809#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
     810    SpecializedThunkJIT jit(vm, 1, AssemblyHelpers::SpillExactly, SpecializedThunkJIT::InRegisters);
     811    // load char code
     812    jit.move(argumentRegisterForFunctionArgument(1), SpecializedThunkJIT::nonArgGPR0);
     813    jit.appendFailure(jit.emitJumpIfNotInt32(SpecializedThunkJIT::nonArgGPR0));
     814
     815    charToString(jit, vm, SpecializedThunkJIT::nonArgGPR0, SpecializedThunkJIT::returnValueGPR, argumentRegisterForFunctionArgument(3));
     816    jit.returnJSCell(SpecializedThunkJIT::returnValueGPR);
     817    jit.linkFailureHere();
     818    jit.spillArgumentRegistersToFrame(2, AssemblyHelpers::SpillAll);
     819    jit.appendFailure(jit.jump());
     820    return jit.finalize(vm->jitStubs->ctiNativeTailCall(vm), "fromCharCode");
     821#else
     822    SpecializedThunkJIT jit(vm, 1, AssemblyHelpers::SpillAll);
    597823    // load char code
    598824    jit.loadInt32Argument(0, SpecializedThunkJIT::regT0);
     
    600826    jit.returnJSCell(SpecializedThunkJIT::regT0);
    601827    return jit.finalize(vm->jitStubs->ctiNativeTailCall(vm), "fromCharCode");
    602 }
    603 
    604 MacroAssemblerCodeRef clz32ThunkGenerator(VM* vm)
     828#endif
     829}
     830
     831JITEntryPointsWithRef clz32ThunkGenerator(VM* vm)
    605832{
    606833    SpecializedThunkJIT jit(vm, 1);
     
    623850}
    624851
    625 MacroAssemblerCodeRef sqrtThunkGenerator(VM* vm)
     852JITEntryPointsWithRef sqrtThunkGenerator(VM* vm)
    626853{
    627854    SpecializedThunkJIT jit(vm, 1);
    628855    if (!jit.supportsFloatingPointSqrt())
    629         return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));
     856        return vm->jitStubs->jitEntryNativeCall(vm);
    630857
    631858    jit.loadDoubleArgument(0, SpecializedThunkJIT::fpRegT0, SpecializedThunkJIT::regT0);
     
    7831010static const double halfConstant = 0.5;
    7841011   
    785 MacroAssemblerCodeRef floorThunkGenerator(VM* vm)
     1012JITEntryPointsWithRef floorThunkGenerator(VM* vm)
    7861013{
    7871014    SpecializedThunkJIT jit(vm, 1);
    7881015    MacroAssembler::Jump nonIntJump;
    7891016    if (!UnaryDoubleOpWrapper(floor) || !jit.supportsFloatingPoint())
    790         return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));
     1017        return vm->jitStubs->jitEntryNativeCall(vm);
    7911018    jit.loadInt32Argument(0, SpecializedThunkJIT::regT0, nonIntJump);
    7921019    jit.returnInt32(SpecializedThunkJIT::regT0);
     
    8261053}
    8271054
    828 MacroAssemblerCodeRef ceilThunkGenerator(VM* vm)
     1055JITEntryPointsWithRef ceilThunkGenerator(VM* vm)
    8291056{
    8301057    SpecializedThunkJIT jit(vm, 1);
    8311058    if (!UnaryDoubleOpWrapper(ceil) || !jit.supportsFloatingPoint())
    832         return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));
     1059        return vm->jitStubs->jitEntryNativeCall(vm);
    8331060    MacroAssembler::Jump nonIntJump;
    8341061    jit.loadInt32Argument(0, SpecializedThunkJIT::regT0, nonIntJump);
     
    8491076}
    8501077
    851 MacroAssemblerCodeRef truncThunkGenerator(VM* vm)
     1078JITEntryPointsWithRef truncThunkGenerator(VM* vm)
    8521079{
    8531080    SpecializedThunkJIT jit(vm, 1);
    8541081    if (!UnaryDoubleOpWrapper(trunc) || !jit.supportsFloatingPoint())
    855         return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));
     1082        return vm->jitStubs->jitEntryNativeCall(vm);
    8561083    MacroAssembler::Jump nonIntJump;
    8571084    jit.loadInt32Argument(0, SpecializedThunkJIT::regT0, nonIntJump);
     
    8721099}
    8731100
    874 MacroAssemblerCodeRef roundThunkGenerator(VM* vm)
     1101JITEntryPointsWithRef roundThunkGenerator(VM* vm)
    8751102{
    8761103    SpecializedThunkJIT jit(vm, 1);
    8771104    if (!UnaryDoubleOpWrapper(jsRound) || !jit.supportsFloatingPoint())
    878         return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));
     1105        return vm->jitStubs->jitEntryNativeCall(vm);
    8791106    MacroAssembler::Jump nonIntJump;
    8801107    jit.loadInt32Argument(0, SpecializedThunkJIT::regT0, nonIntJump);
     
    9061133}
    9071134
    908 MacroAssemblerCodeRef expThunkGenerator(VM* vm)
     1135JITEntryPointsWithRef expThunkGenerator(VM* vm)
    9091136{
    9101137    if (!UnaryDoubleOpWrapper(exp))
    911         return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));
     1138        return vm->jitStubs->jitEntryNativeCall(vm);
    9121139    SpecializedThunkJIT jit(vm, 1);
    9131140    if (!jit.supportsFloatingPoint())
    914         return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));
     1141        return vm->jitStubs->jitEntryNativeCall(vm);
    9151142    jit.loadDoubleArgument(0, SpecializedThunkJIT::fpRegT0, SpecializedThunkJIT::regT0);
    9161143    jit.callDoubleToDoublePreservingReturn(UnaryDoubleOpWrapper(exp));
     
    9191146}
    9201147
    921 MacroAssemblerCodeRef logThunkGenerator(VM* vm)
     1148JITEntryPointsWithRef logThunkGenerator(VM* vm)
    9221149{
    9231150    if (!UnaryDoubleOpWrapper(log))
    924         return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));
     1151        return vm->jitStubs->jitEntryNativeCall(vm);
    9251152    SpecializedThunkJIT jit(vm, 1);
    9261153    if (!jit.supportsFloatingPoint())
    927         return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));
     1154        return vm->jitStubs->jitEntryNativeCall(vm);
    9281155    jit.loadDoubleArgument(0, SpecializedThunkJIT::fpRegT0, SpecializedThunkJIT::regT0);
    9291156    jit.callDoubleToDoublePreservingReturn(UnaryDoubleOpWrapper(log));
     
    9321159}
    9331160
    934 MacroAssemblerCodeRef absThunkGenerator(VM* vm)
     1161JITEntryPointsWithRef absThunkGenerator(VM* vm)
    9351162{
    9361163    SpecializedThunkJIT jit(vm, 1);
    9371164    if (!jit.supportsFloatingPointAbs())
    938         return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));
     1165        return vm->jitStubs->jitEntryNativeCall(vm);
    9391166
    9401167#if USE(JSVALUE64)
     
    9891216}
    9901217
    991 MacroAssemblerCodeRef imulThunkGenerator(VM* vm)
     1218JITEntryPointsWithRef imulThunkGenerator(VM* vm)
    9921219{
    9931220    SpecializedThunkJIT jit(vm, 2);
     
    10201247}
    10211248
    1022 MacroAssemblerCodeRef randomThunkGenerator(VM* vm)
     1249JITEntryPointsWithRef randomThunkGenerator(VM* vm)
    10231250{
    10241251    SpecializedThunkJIT jit(vm, 0);
    10251252    if (!jit.supportsFloatingPoint())
    1026         return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));
     1253        return vm->jitStubs->jitEntryNativeCall(vm);
    10271254
    10281255#if USE(JSVALUE64)
     
    10321259    return jit.finalize(vm->jitStubs->ctiNativeTailCall(vm), "random");
    10331260#else
    1034     return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));
    1035 #endif
    1036 }
    1037 
    1038 MacroAssemblerCodeRef boundThisNoArgsFunctionCallGenerator(VM* vm)
    1039 {
    1040     CCallHelpers jit(vm);
     1261    return vm->jitStubs->jitEntryNativeCall(vm);
     1262#endif
     1263}
     1264
     1265JITEntryPointsWithRef boundThisNoArgsFunctionCallGenerator(VM* vm)
     1266{
     1267    JSInterfaceJIT jit(vm);
     1268
     1269    MacroAssembler::JumpList failures;
     1270
     1271    jit.spillArgumentRegistersToFrameBeforePrologue();
     1272   
     1273    SpecializedThunkJIT::Label stackArgsEntry(&jit);
    10411274   
    10421275    jit.emitFunctionPrologue();
    1043    
     1276
    10441277    // Set up our call frame.
    10451278    jit.storePtr(CCallHelpers::TrustedImmPtr(nullptr), CCallHelpers::addressFor(CallFrameSlot::codeBlock));
     
    11111344    jit.loadPtr(
    11121345        CCallHelpers::Address(
    1113             GPRInfo::regT0, ExecutableBase::offsetOfJITCodeWithArityCheckFor(CodeForCall)),
     1346            GPRInfo::regT0, ExecutableBase::offsetOfEntryFor(CodeForCall, StackArgsMustCheckArity)),
    11141347        GPRInfo::regT0);
    1115     CCallHelpers::Jump noCode = jit.branchTestPtr(CCallHelpers::Zero, GPRInfo::regT0);
     1348    failures.append(jit.branchTestPtr(CCallHelpers::Zero, GPRInfo::regT0));
    11161349   
    11171350    emitPointerValidation(jit, GPRInfo::regT0);
     
    11201353    jit.emitFunctionEpilogue();
    11211354    jit.ret();
    1122    
    1123     LinkBuffer linkBuffer(*vm, jit, GLOBAL_THUNK_ID);
    1124     linkBuffer.link(noCode, CodeLocationLabel(vm->jitStubs->ctiNativeTailCallWithoutSavedTags(vm)));
    1125     return FINALIZE_CODE(
    1126         linkBuffer, ("Specialized thunk for bound function calls with no arguments"));
     1355
     1356    LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
     1357    patchBuffer.link(failures, CodeLocationLabel(vm->jitStubs->ctiNativeTailCallWithoutSavedTags(vm)));
     1358
      1359    MacroAssemblerCodeRef codeRef = FINALIZE_CODE(patchBuffer, ("Specialized thunk for bound function calls with no arguments"));
     1360    MacroAssemblerCodePtr stackEntryPtr = patchBuffer.locationOf(stackArgsEntry);
     1361
     1362    return JITEntryPointsWithRef(codeRef, codeRef.code(), codeRef.code(), codeRef.code(), stackEntryPtr, stackEntryPtr);
    11271363}
    11281364
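
The six-argument JITEntryPointsWithRef constructions above (and the matching ones in LLIntEntrypoint.cpp below) pair one owning code ref with one code pointer per entry point. The class itself is declared in the new JITEntryPoints.h, which is not part of the hunks shown here, so the prototype below is only an assumed reading of the argument order, inferred from which thunk pointers are passed in which positions:

    // Assumed shape of the constructor used above; parameter names are illustrative only.
    JITEntryPointsWithRef(
        MacroAssemblerCodeRef codeRef,                      // owning code ref
        MacroAssemblerCodePtr registerArgsNoArityEntry,     // register args, arity known to match
        MacroAssemblerCodePtr registerArgsExtraArgsEntry,   // register args, possible extra args
        MacroAssemblerCodePtr registerArgsArityCheckEntry,  // register args, must check arity
        MacroAssemblerCodePtr stackArgsNoArityEntry,        // stack args, arity known to match
        MacroAssemblerCodePtr stackArgsArityCheckEntry);    // stack args, must check arity

Under that reading, boundThisNoArgsFunctionCallGenerator publishes its register entry (codeRef.code()) for all three register-args slots and the spilled stackArgsEntry label for both stack-args slots.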
  • trunk/Source/JavaScriptCore/jit/ThunkGenerators.h

    r206525 r209653  
    2727
    2828#include "CodeSpecializationKind.h"
     29#include "JITEntryPoints.h"
    2930#include "ThunkGenerator.h"
    3031
     
    3738
    3839MacroAssemblerCodeRef linkCallThunk(VM*, CallLinkInfo&, CodeSpecializationKind);
    39 MacroAssemblerCodeRef linkCallThunkGenerator(VM*);
    40 MacroAssemblerCodeRef linkPolymorphicCallThunkGenerator(VM*);
     40JITJSCallThunkEntryPointsWithRef linkCallThunkGenerator(VM*);
     41JITJSCallThunkEntryPointsWithRef linkDirectCallThunkGenerator(VM*);
     42JITJSCallThunkEntryPointsWithRef linkPolymorphicCallThunkGenerator(VM*);
    4143
    42 MacroAssemblerCodeRef virtualThunkFor(VM*, CallLinkInfo&);
     44JITJSCallThunkEntryPointsWithRef virtualThunkFor(VM*, CallLinkInfo&);
    4345
    44 MacroAssemblerCodeRef nativeCallGenerator(VM*);
    45 MacroAssemblerCodeRef nativeConstructGenerator(VM*);
     46JITEntryPointsWithRef nativeCallGenerator(VM*);
     47JITEntryPointsWithRef nativeConstructGenerator(VM*);
    4648MacroAssemblerCodeRef nativeTailCallGenerator(VM*);
    4749MacroAssemblerCodeRef nativeTailCallWithoutSavedTagsGenerator(VM*);
     
    4951MacroAssemblerCodeRef unreachableGenerator(VM*);
    5052
    51 MacroAssemblerCodeRef charCodeAtThunkGenerator(VM*);
    52 MacroAssemblerCodeRef charAtThunkGenerator(VM*);
    53 MacroAssemblerCodeRef clz32ThunkGenerator(VM*);
    54 MacroAssemblerCodeRef fromCharCodeThunkGenerator(VM*);
    55 MacroAssemblerCodeRef absThunkGenerator(VM*);
    56 MacroAssemblerCodeRef ceilThunkGenerator(VM*);
    57 MacroAssemblerCodeRef expThunkGenerator(VM*);
    58 MacroAssemblerCodeRef floorThunkGenerator(VM*);
    59 MacroAssemblerCodeRef logThunkGenerator(VM*);
    60 MacroAssemblerCodeRef roundThunkGenerator(VM*);
    61 MacroAssemblerCodeRef sqrtThunkGenerator(VM*);
    62 MacroAssemblerCodeRef imulThunkGenerator(VM*);
    63 MacroAssemblerCodeRef randomThunkGenerator(VM*);
    64 MacroAssemblerCodeRef truncThunkGenerator(VM*);
     53JITEntryPointsWithRef charCodeAtThunkGenerator(VM*);
     54JITEntryPointsWithRef charAtThunkGenerator(VM*);
     55JITEntryPointsWithRef clz32ThunkGenerator(VM*);
     56JITEntryPointsWithRef fromCharCodeThunkGenerator(VM*);
     57JITEntryPointsWithRef absThunkGenerator(VM*);
     58JITEntryPointsWithRef ceilThunkGenerator(VM*);
     59JITEntryPointsWithRef expThunkGenerator(VM*);
     60JITEntryPointsWithRef floorThunkGenerator(VM*);
     61JITEntryPointsWithRef logThunkGenerator(VM*);
     62JITEntryPointsWithRef roundThunkGenerator(VM*);
     63JITEntryPointsWithRef sqrtThunkGenerator(VM*);
     64JITEntryPointsWithRef imulThunkGenerator(VM*);
     65JITEntryPointsWithRef randomThunkGenerator(VM*);
     66JITEntryPointsWithRef truncThunkGenerator(VM*);
    6567
    66 MacroAssemblerCodeRef boundThisNoArgsFunctionCallGenerator(VM* vm);
     68JITEntryPointsWithRef boundThisNoArgsFunctionCallGenerator(VM*);
    6769
    6870}
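
With these signature changes, callers no longer fetch a bare MacroAssemblerCodeRef for a thunk; they fetch an entry-point bundle and then select the entry they want. A minimal usage sketch, assuming the getJITEntryStub / getJITCallThunkEntryStub accessors added to VM.h later in this changeset:

    // Sketch only: pick concrete entry points out of the new bundles.
    JITEntryPointsWithRef charCodeAt = vm->getJITEntryStub(charCodeAtThunkGenerator);
    JITJSCallThunkEntryPointsWithRef linkCall = vm->getJITCallThunkEntryStub(linkCallThunkGenerator);

    // The Wasm-to-JS stub below links its slow call against the stack-args entry:
    FunctionPtr slowTarget(linkCall.entryFor(StackArgs).executableAddress());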
  • trunk/Source/JavaScriptCore/jsc.cpp

    r209630 r209653  
    32523252    result = runJSC(vm, options);
    32533253
     3254#if ENABLE(VM_COUNTERS)
     3255    vm->dumpCounters();
     3256#endif
    32543257    if (Options::gcAtEnd()) {
    32553258        // We need to hold the API lock to do a GC.
  • trunk/Source/JavaScriptCore/llint/LLIntEntrypoint.cpp

    r192937 r209653  
    4747        if (kind == CodeForCall) {
    4848            codeBlock->setJITCode(
    49                 adoptRef(new DirectJITCode(vm.getCTIStub(functionForCallEntryThunkGenerator), vm.getCTIStub(functionForCallArityCheckThunkGenerator).code(), JITCode::InterpreterThunk)));
     49                adoptRef(new DirectJITCode(
     50                    JITEntryPointsWithRef(vm.getCTIStub(functionForRegisterCallEntryThunkGenerator),
     51                        vm.getCTIStub(functionForRegisterCallEntryThunkGenerator).code(),
     52                        vm.getCTIStub(functionForRegisterCallEntryThunkGenerator).code(),
     53                        vm.getCTIStub(functionForRegisterCallArityCheckThunkGenerator).code(),
     54                        vm.getCTIStub(functionForStackCallEntryThunkGenerator).code(),
     55                        vm.getCTIStub(functionForStackCallArityCheckThunkGenerator).code()),
     56                    JITCode::InterpreterThunk)));
    5057            return;
    5158        }
    5259        ASSERT(kind == CodeForConstruct);
    5360        codeBlock->setJITCode(
    54             adoptRef(new DirectJITCode(vm.getCTIStub(functionForConstructEntryThunkGenerator), vm.getCTIStub(functionForConstructArityCheckThunkGenerator).code(), JITCode::InterpreterThunk)));
     61            adoptRef(new DirectJITCode(
      62                JITEntryPointsWithRef(vm.getCTIStub(functionForRegisterConstructEntryThunkGenerator),
     63                    vm.getCTIStub(functionForRegisterConstructEntryThunkGenerator).code(),
     64                    vm.getCTIStub(functionForRegisterConstructEntryThunkGenerator).code(),
     65                    vm.getCTIStub(functionForRegisterConstructArityCheckThunkGenerator).code(),
     66                    vm.getCTIStub(functionForStackConstructEntryThunkGenerator).code(),
     67                    vm.getCTIStub(functionForStackConstructArityCheckThunkGenerator).code()),
     68                JITCode::InterpreterThunk)));
    5569        return;
    5670    }
     
    6074    if (kind == CodeForCall) {
    6175        codeBlock->setJITCode(
    62             adoptRef(new DirectJITCode(MacroAssemblerCodeRef::createLLIntCodeRef(llint_function_for_call_prologue), MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_call_arity_check), JITCode::InterpreterThunk)));
     76            adoptRef(new DirectJITCode(
     77                JITEntryPointsWithRef(MacroAssemblerCodeRef::createLLIntCodeRef(llint_function_for_call_prologue),
     78                    MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_call_prologue),
     79                    MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_call_prologue),
     80                    MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_call_prologue),
     81                    MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_call_arity_check),
     82                    MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_call_arity_check)),
     83                JITCode::InterpreterThunk)));
    6384        return;
    6485    }
    6586    ASSERT(kind == CodeForConstruct);
    6687    codeBlock->setJITCode(
    67         adoptRef(new DirectJITCode(MacroAssemblerCodeRef::createLLIntCodeRef(llint_function_for_construct_prologue), MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_construct_arity_check), JITCode::InterpreterThunk)));
     88        adoptRef(new DirectJITCode(
     89            JITEntryPointsWithRef(MacroAssemblerCodeRef::createLLIntCodeRef(llint_function_for_construct_prologue),
     90                MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_construct_prologue),
     91                MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_construct_prologue),
     92                MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_construct_prologue),
     93                MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_construct_arity_check),
     94                MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_construct_arity_check)),
     95            JITCode::InterpreterThunk)));
    6896}
    6997
     
    73101    if (vm.canUseJIT()) {
    74102        codeBlock->setJITCode(
    75             adoptRef(new DirectJITCode(vm.getCTIStub(evalEntryThunkGenerator), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
    76         return;
    77     }
    78 #endif // ENABLE(JIT)
    79 
    80     UNUSED_PARAM(vm);
    81     codeBlock->setJITCode(
    82         adoptRef(new DirectJITCode(MacroAssemblerCodeRef::createLLIntCodeRef(llint_eval_prologue), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
     103            adoptRef(new DirectJITCode(
     104                JITEntryPointsWithRef(vm.getCTIStub(evalEntryThunkGenerator),
     105                    MacroAssemblerCodePtr(),
     106                    MacroAssemblerCodePtr(),
     107                    MacroAssemblerCodePtr(),
     108                    vm.getCTIStub(evalEntryThunkGenerator).code(),
     109                    vm.getCTIStub(evalEntryThunkGenerator).code()),
     110                JITCode::InterpreterThunk)));
     111        return;
     112    }
     113#endif // ENABLE(JIT)
     114
     115    UNUSED_PARAM(vm);
     116    codeBlock->setJITCode(
     117        adoptRef(new DirectJITCode(
     118            JITEntryPointsWithRef(MacroAssemblerCodeRef::createLLIntCodeRef(llint_eval_prologue),
     119                MacroAssemblerCodePtr(),
     120                MacroAssemblerCodePtr(),
     121                MacroAssemblerCodePtr(),
     122                MacroAssemblerCodeRef::createLLIntCodeRef(llint_eval_prologue).code(),
     123                MacroAssemblerCodeRef::createLLIntCodeRef(llint_eval_prologue).code()),
     124            JITCode::InterpreterThunk)));
    83125}
    84126
     
    88130    if (vm.canUseJIT()) {
    89131        codeBlock->setJITCode(
    90             adoptRef(new DirectJITCode(vm.getCTIStub(programEntryThunkGenerator), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
    91         return;
    92     }
    93 #endif // ENABLE(JIT)
    94 
    95     UNUSED_PARAM(vm);
    96     codeBlock->setJITCode(
    97         adoptRef(new DirectJITCode(MacroAssemblerCodeRef::createLLIntCodeRef(llint_program_prologue), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
     132            adoptRef(new DirectJITCode(
     133                JITEntryPointsWithRef(vm.getCTIStub(programEntryThunkGenerator),
     134                MacroAssemblerCodePtr(),
     135                MacroAssemblerCodePtr(),
     136                MacroAssemblerCodePtr(),
     137                vm.getCTIStub(programEntryThunkGenerator).code(),
     138                vm.getCTIStub(programEntryThunkGenerator).code()),
     139                JITCode::InterpreterThunk)));
     140        return;
     141    }
     142#endif // ENABLE(JIT)
     143
     144    UNUSED_PARAM(vm);
     145    codeBlock->setJITCode(
     146        adoptRef(new DirectJITCode(
     147            JITEntryPointsWithRef(MacroAssemblerCodeRef::createLLIntCodeRef(llint_program_prologue),
     148                MacroAssemblerCodePtr(),
     149                MacroAssemblerCodePtr(),
     150                MacroAssemblerCodePtr(),
     151                MacroAssemblerCodePtr::createLLIntCodePtr(llint_program_prologue),
     152                MacroAssemblerCodePtr::createLLIntCodePtr(llint_program_prologue)),
     153            JITCode::InterpreterThunk)));
    98154}
    99155
     
    103159    if (vm.canUseJIT()) {
    104160        codeBlock->setJITCode(
    105             adoptRef(new DirectJITCode(vm.getCTIStub(moduleProgramEntryThunkGenerator), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
    106         return;
    107     }
    108 #endif // ENABLE(JIT)
    109 
    110     UNUSED_PARAM(vm);
    111     codeBlock->setJITCode(
    112         adoptRef(new DirectJITCode(MacroAssemblerCodeRef::createLLIntCodeRef(llint_module_program_prologue), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
     161            adoptRef(new DirectJITCode(
     162                JITEntryPointsWithRef(vm.getCTIStub(moduleProgramEntryThunkGenerator),
     163                    MacroAssemblerCodePtr(),
     164                    MacroAssemblerCodePtr(),
     165                    MacroAssemblerCodePtr(),
     166                    vm.getCTIStub(moduleProgramEntryThunkGenerator).code(),
     167                    vm.getCTIStub(moduleProgramEntryThunkGenerator).code()),
     168                JITCode::InterpreterThunk)));
     169        return;
     170    }
     171#endif // ENABLE(JIT)
     172
     173    UNUSED_PARAM(vm);
     174    codeBlock->setJITCode(
     175        adoptRef(new DirectJITCode(
     176            JITEntryPointsWithRef(MacroAssemblerCodeRef::createLLIntCodeRef(llint_module_program_prologue),
     177                MacroAssemblerCodePtr(),
     178                MacroAssemblerCodePtr(),
     179                MacroAssemblerCodePtr(),
     180                MacroAssemblerCodePtr::createLLIntCodePtr(llint_module_program_prologue),
     181                MacroAssemblerCodePtr::createLLIntCodePtr(llint_module_program_prologue)),
     182            JITCode::InterpreterThunk)));
    113183}
    114184
  • trunk/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp

    r209433 r209653  
    374374   
    375375    if (kind == Prologue)
    376         LLINT_RETURN_TWO(codeBlock->jitCode()->executableAddress(), 0);
     376        LLINT_RETURN_TWO(codeBlock->jitCode()->addressForCall(StackArgsArityCheckNotRequired).executableAddress(), 0);
    377377    ASSERT(kind == ArityCheck);
    378     LLINT_RETURN_TWO(codeBlock->jitCode()->addressForCall(MustCheckArity).executableAddress(), 0);
     378    LLINT_RETURN_TWO(codeBlock->jitCode()->addressForCall(StackArgsMustCheckArity).executableAddress(), 0);
    379379}
    380380#else // ENABLE(JIT)
     
    12931293    CodeBlock* codeBlock = 0;
    12941294    if (executable->isHostFunction()) {
    1295         codePtr = executable->entrypointFor(kind, MustCheckArity);
     1295        codePtr = executable->entrypointFor(kind, StackArgsMustCheckArity);
    12961296    } else {
    12971297        FunctionExecutable* functionExecutable = static_cast<FunctionExecutable*>(executable);
     
    13071307        codeBlock = *codeBlockSlot;
    13081308        ASSERT(codeBlock);
    1309         ArityCheckMode arity;
     1309        EntryPointType entryType;
    13101310        if (execCallee->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()))
    1311             arity = MustCheckArity;
     1311            entryType = StackArgsMustCheckArity;
    13121312        else
    1313             arity = ArityCheckNotRequired;
    1314         codePtr = functionExecutable->entrypointFor(kind, arity);
     1313            entryType = StackArgsArityCheckNotRequired;
     1314        codePtr = functionExecutable->entrypointFor(kind, entryType);
    13151315    }
    13161316
  • trunk/Source/JavaScriptCore/llint/LLIntThunks.cpp

    r207693 r209653  
    5252namespace LLInt {
    5353
    54 static MacroAssemblerCodeRef generateThunkWithJumpTo(VM* vm, void (*target)(), const char *thunkKind)
     54enum ShouldCreateRegisterEntry { CreateRegisterEntry, DontCreateRegisterEntry };
     55
     56static MacroAssemblerCodeRef generateThunkWithJumpTo(VM* vm, void (*target)(), const char *thunkKind, ShouldCreateRegisterEntry shouldCreateRegisterEntry = DontCreateRegisterEntry)
    5557{
    5658    JSInterfaceJIT jit(vm);
    57    
     59
     60#if USE(JSVALUE64)
     61    if (shouldCreateRegisterEntry == CreateRegisterEntry)
     62        jit.spillArgumentRegistersToFrameBeforePrologue();
     63#else
     64    UNUSED_PARAM(shouldCreateRegisterEntry);
     65#endif
     66
    5867    // FIXME: there's probably a better way to do it on X86, but I'm not sure I care.
    5968    jit.move(JSInterfaceJIT::TrustedImmPtr(bitwise_cast<void*>(target)), JSInterfaceJIT::regT0);
     
    6473}
    6574
    66 MacroAssemblerCodeRef functionForCallEntryThunkGenerator(VM* vm)
     75MacroAssemblerCodeRef functionForRegisterCallEntryThunkGenerator(VM* vm)
    6776{
    68     return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_call_prologue), "function for call");
     77    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_call_prologue), "function for register args call", CreateRegisterEntry);
    6978}
    7079
    71 MacroAssemblerCodeRef functionForConstructEntryThunkGenerator(VM* vm)
     80MacroAssemblerCodeRef functionForStackCallEntryThunkGenerator(VM* vm)
    7281{
    73     return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_construct_prologue), "function for construct");
     82    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_call_prologue), "function for stack args call");
    7483}
    7584
    76 MacroAssemblerCodeRef functionForCallArityCheckThunkGenerator(VM* vm)
     85MacroAssemblerCodeRef functionForRegisterConstructEntryThunkGenerator(VM* vm)
    7786{
    78     return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_call_arity_check), "function for call with arity check");
     87    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_construct_prologue), "function for register args construct", CreateRegisterEntry);
    7988}
    8089
    81 MacroAssemblerCodeRef functionForConstructArityCheckThunkGenerator(VM* vm)
     90MacroAssemblerCodeRef functionForStackConstructEntryThunkGenerator(VM* vm)
    8291{
    83     return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_construct_arity_check), "function for construct with arity check");
     92    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_construct_prologue), "function for stack args construct");
     93}
     94
     95MacroAssemblerCodeRef functionForRegisterCallArityCheckThunkGenerator(VM* vm)
     96{
     97    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_call_arity_check), "function for register args call with arity check", CreateRegisterEntry);
     98}
     99
     100MacroAssemblerCodeRef functionForStackCallArityCheckThunkGenerator(VM* vm)
     101{
     102    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_call_arity_check), "function for stack args call with arity check");
     103}
     104
     105MacroAssemblerCodeRef functionForRegisterConstructArityCheckThunkGenerator(VM* vm)
     106{
     107    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_construct_arity_check), "function for register args construct with arity check", CreateRegisterEntry);
     108}
     109
     110MacroAssemblerCodeRef functionForStackConstructArityCheckThunkGenerator(VM* vm)
     111{
     112    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_construct_arity_check), "function for stack args construct with arity check");
    84113}
    85114
  • trunk/Source/JavaScriptCore/llint/LLIntThunks.h

    r207693 r209653  
    4343namespace LLInt {
    4444
    45 MacroAssemblerCodeRef functionForCallEntryThunkGenerator(VM*);
    46 MacroAssemblerCodeRef functionForConstructEntryThunkGenerator(VM*);
    47 MacroAssemblerCodeRef functionForCallArityCheckThunkGenerator(VM*);
    48 MacroAssemblerCodeRef functionForConstructArityCheckThunkGenerator(VM*);
     45MacroAssemblerCodeRef functionForRegisterCallEntryThunkGenerator(VM*);
     46MacroAssemblerCodeRef functionForStackCallEntryThunkGenerator(VM*);
     47MacroAssemblerCodeRef functionForRegisterConstructEntryThunkGenerator(VM*);
     48MacroAssemblerCodeRef functionForStackConstructEntryThunkGenerator(VM*);
     49MacroAssemblerCodeRef functionForRegisterCallArityCheckThunkGenerator(VM*);
     50MacroAssemblerCodeRef functionForStackCallArityCheckThunkGenerator(VM*);
     51MacroAssemblerCodeRef functionForRegisterConstructArityCheckThunkGenerator(VM*);
     52MacroAssemblerCodeRef functionForStackConstructArityCheckThunkGenerator(VM*);
    4953MacroAssemblerCodeRef evalEntryThunkGenerator(VM*);
    5054MacroAssemblerCodeRef programEntryThunkGenerator(VM*);
  • trunk/Source/JavaScriptCore/runtime/ArityCheckMode.h

    r206525 r209653  
    2929
    3030enum ArityCheckMode {
     31    RegisterEntry,
    3132    ArityCheckNotRequired,
    3233    MustCheckArity
  • trunk/Source/JavaScriptCore/runtime/ExecutableBase.cpp

    r209433 r209653  
    5555    m_jitCodeForCall = nullptr;
    5656    m_jitCodeForConstruct = nullptr;
    57     m_jitCodeForCallWithArityCheck = MacroAssemblerCodePtr();
    58     m_jitCodeForConstructWithArityCheck = MacroAssemblerCodePtr();
     57    m_jitEntriesForCall.clearEntries();
     58    m_jitEntriesForConstruct.clearEntries();
    5959#endif
    6060    m_numParametersForCall = NUM_PARAMETERS_NOT_COMPILED;
  • trunk/Source/JavaScriptCore/runtime/ExecutableBase.h

    r209433 r209653  
    2626#pragma once
    2727
    28 #include "ArityCheckMode.h"
    2928#include "CallData.h"
    3029#include "CodeBlockHash.h"
     
    3534#include "InferredValue.h"
    3635#include "JITCode.h"
     36#include "JITEntryPoints.h"
    3737#include "JSGlobalObject.h"
    3838#include "SourceCode.h"
     
    146146    }
    147147   
    148     MacroAssemblerCodePtr entrypointFor(CodeSpecializationKind kind, ArityCheckMode arity)
     148    MacroAssemblerCodePtr entrypointFor(CodeSpecializationKind kind, EntryPointType entryType)
    149149    {
    150150        // Check if we have a cached result. We only have it for arity check because we use the
    151151        // no-arity entrypoint in non-virtual calls, which will "cache" this value directly in
    152152        // machine code.
    153         if (arity == MustCheckArity) {
    154             switch (kind) {
    155             case CodeForCall:
    156                 if (MacroAssemblerCodePtr result = m_jitCodeForCallWithArityCheck)
    157                     return result;
    158                 break;
    159             case CodeForConstruct:
    160                 if (MacroAssemblerCodePtr result = m_jitCodeForConstructWithArityCheck)
    161                     return result;
    162                 break;
    163             }
     153        switch (kind) {
     154        case CodeForCall:
     155            if (MacroAssemblerCodePtr result = m_jitEntriesForCall.entryFor(entryType))
     156                return result;
     157            break;
     158        case CodeForConstruct:
     159            if (MacroAssemblerCodePtr result = m_jitEntriesForConstruct.entryFor(entryType))
     160                return result;
     161            break;
    164162        }
    165163        MacroAssemblerCodePtr result =
    166             generatedJITCodeFor(kind)->addressForCall(arity);
    167         if (arity == MustCheckArity) {
    168             // Cache the result; this is necessary for the JIT's virtual call optimizations.
    169             switch (kind) {
    170             case CodeForCall:
    171                 m_jitCodeForCallWithArityCheck = result;
    172                 break;
    173             case CodeForConstruct:
    174                 m_jitCodeForConstructWithArityCheck = result;
    175                 break;
    176             }
     164            generatedJITCodeFor(kind)->addressForCall(entryType);
     165        // Cache the result; this is necessary for the JIT's virtual call optimizations.
     166        switch (kind) {
     167        case CodeForCall:
     168            m_jitEntriesForCall.setEntryFor(entryType, result);
     169            break;
     170        case CodeForConstruct:
     171            m_jitEntriesForConstruct.setEntryFor(entryType, result);
     172            break;
    177173        }
    178174        return result;
    179175    }
    180176
    181     static ptrdiff_t offsetOfJITCodeWithArityCheckFor(
    182         CodeSpecializationKind kind)
     177    static ptrdiff_t offsetOfEntryFor(CodeSpecializationKind kind, EntryPointType entryPointType)
    183178    {
    184179        switch (kind) {
    185180        case CodeForCall:
    186             return OBJECT_OFFSETOF(ExecutableBase, m_jitCodeForCallWithArityCheck);
     181            return OBJECT_OFFSETOF(ExecutableBase, m_jitEntriesForCall) + JITEntryPoints::offsetOfEntryFor(entryPointType);
    187182        case CodeForConstruct:
    188             return OBJECT_OFFSETOF(ExecutableBase, m_jitCodeForConstructWithArityCheck);
     183            return OBJECT_OFFSETOF(ExecutableBase, m_jitEntriesForConstruct) + JITEntryPoints::offsetOfEntryFor(entryPointType);
    189184        }
    190185        RELEASE_ASSERT_NOT_REACHED();
     
    234229    RefPtr<JITCode> m_jitCodeForCall;
    235230    RefPtr<JITCode> m_jitCodeForConstruct;
    236     MacroAssemblerCodePtr m_jitCodeForCallWithArityCheck;
    237     MacroAssemblerCodePtr m_jitCodeForConstructWithArityCheck;
     231    JITEntryPoints m_jitEntriesForCall;
     232    JITEntryPoints m_jitEntriesForConstruct;
    238233};
    239234
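
ExecutableBase now caches one code pointer per entry point type instead of a single arity-check pointer per specialization kind. The JITEntryPoints container is defined in JITEntryPoints.h, which is not shown in this excerpt; the sketch below is an assumed minimal interface that would satisfy the uses above (entryFor, setEntryFor, clearEntries, offsetOfEntryFor), with the flat-array layout being a guess:

    // Sketch under assumptions: one cached pointer per EntryPointType, stored in a
    // flat array so offsetOfEntryFor() yields a fixed byte offset for JIT-emitted loads.
    class JITEntryPoints {
    public:
        MacroAssemblerCodePtr entryFor(EntryPointType type) const { return m_entries[type]; }
        void setEntryFor(EntryPointType type, MacroAssemblerCodePtr entry) { m_entries[type] = entry; }
        void clearEntries()
        {
            for (unsigned i = 0; i < NumberOfEntryPointTypes; ++i) // count name is assumed
                m_entries[i] = MacroAssemblerCodePtr();
        }
        static ptrdiff_t offsetOfEntryFor(EntryPointType type)
        {
            return OBJECT_OFFSETOF(JITEntryPoints, m_entries) + type * sizeof(MacroAssemblerCodePtr);
        }
    private:
        MacroAssemblerCodePtr m_entries[NumberOfEntryPointTypes];
    };

The property ExecutableBase::offsetOfEntryFor() depends on is that the offset is fixed once the entry point type is known at emission time, which is what lets the load through ExecutableBase::offsetOfEntryFor(CodeForCall, StackArgsMustCheckArity) in ThunkGenerators.cpp above read the cached entry directly from machine code.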
  • trunk/Source/JavaScriptCore/runtime/JSBoundFunction.cpp

    r209229 r209653  
    4747    if (executable->hasJITCodeForCall()) {
    4848        // Force the executable to cache its arity entrypoint.
    49         executable->entrypointFor(CodeForCall, MustCheckArity);
     49        executable->entrypointFor(CodeForCall, StackArgsMustCheckArity);
    5050    }
    5151    CallData callData;
  • trunk/Source/JavaScriptCore/runtime/NativeExecutable.cpp

    r208320 r209653  
    6464    m_jitCodeForCall = callThunk;
    6565    m_jitCodeForConstruct = constructThunk;
    66     m_jitCodeForCallWithArityCheck = m_jitCodeForCall->addressForCall(MustCheckArity);
    67     m_jitCodeForConstructWithArityCheck = m_jitCodeForConstruct->addressForCall(MustCheckArity);
     66    m_jitEntriesForCall.setEntryFor(StackArgsMustCheckArity, m_jitCodeForCall->addressForCall(StackArgsMustCheckArity));
     67    m_jitEntriesForConstruct.setEntryFor(StackArgsMustCheckArity, m_jitCodeForConstruct->addressForCall(StackArgsMustCheckArity));
    6868    m_name = name;
    6969}
  • trunk/Source/JavaScriptCore/runtime/ScriptExecutable.cpp

    r209353 r209653  
    140140    case CodeForCall:
    141141        m_jitCodeForCall = genericCodeBlock ? genericCodeBlock->jitCode() : nullptr;
    142         m_jitCodeForCallWithArityCheck = MacroAssemblerCodePtr();
     142        m_jitEntriesForCall.clearEntries();
    143143        m_numParametersForCall = genericCodeBlock ? genericCodeBlock->numParameters() : NUM_PARAMETERS_NOT_COMPILED;
    144144        break;
    145145    case CodeForConstruct:
    146146        m_jitCodeForConstruct = genericCodeBlock ? genericCodeBlock->jitCode() : nullptr;
    147         m_jitCodeForConstructWithArityCheck = MacroAssemblerCodePtr();
     147        m_jitEntriesForConstruct.clearEntries();
    148148        m_numParametersForConstruct = genericCodeBlock ? genericCodeBlock->numParameters() : NUM_PARAMETERS_NOT_COMPILED;
    149149        break;
  • trunk/Source/JavaScriptCore/runtime/VM.cpp

    r209570 r209653  
    211211    setLastStackTop(stack.origin());
    212212
     213#if ENABLE(VM_COUNTERS)
     214    clearCounters();
     215#endif
     216
    213217    // Need to be careful to keep everything consistent here
    214218    JSLockHolder lock(this);
     
    477481
    478482#if ENABLE(JIT)
    479 static ThunkGenerator thunkGeneratorForIntrinsic(Intrinsic intrinsic)
     483static JITEntryGenerator thunkGeneratorForIntrinsic(Intrinsic intrinsic)
    480484{
    481485    switch (intrinsic) {
     
    924928#endif
    925929
     930#if ENABLE(VM_COUNTERS)
     931void VM::clearCounters()
     932{
     933    for (unsigned i = 0; i < NumberVMCounter; i++)
     934        m_counters[i] = 0;
     935}
     936
     937void VM::dumpCounters()
     938{
     939    size_t totalCalls = counterFor(BaselineCaller) + counterFor(DFGCaller) + counterFor(FTLCaller);
     940    dataLog("#### VM Call counters ####\n");
     941    dataLogF("%10zu Total calls\n", totalCalls);
     942    dataLogF("%10zu Baseline calls\n", counterFor(BaselineCaller));
     943    dataLogF("%10zu DFG calls\n", counterFor(DFGCaller));
     944    dataLogF("%10zu FTL calls\n", counterFor(FTLCaller));
     945    dataLogF("%10zu Vararg calls\n", counterFor(CallVarargs));
     946    dataLogF("%10zu Tail calls\n", counterFor(TailCall));
     947    dataLogF("%10zu Eval calls\n", counterFor(CallEval));
     948    dataLogF("%10zu Direct calls\n", counterFor(DirectCall));
     949    dataLogF("%10zu Polymorphic calls\n", counterFor(PolymorphicCall));
     950    dataLogF("%10zu Virtual calls\n", counterFor(VirtualCall));
     951    dataLogF("%10zu Virtual slow calls\n", counterFor(VirtualSlowCall));
     952    dataLogF("%10zu Register args no arity\n", counterFor(RegArgsNoArity));
     953    dataLogF("%10zu Stack args no arity\n", counterFor(StackArgsNoArity));
     954    dataLogF("%10zu Register args extra arity\n", counterFor(RegArgsExtra));
     955    dataLogF("%10zu Register args arity check\n", counterFor(RegArgsArity));
     956    dataLogF("%10zu Stack args arity check\n", counterFor(StackArgsArity));
     957    dataLogF("%10zu Arity fixups required\n", counterFor(ArityFixupRequired));
     958}
     959#endif
     960
    926961} // namespace JSC
  • trunk/Source/JavaScriptCore/runtime/VM.h

    r209630 r209653  
    433433        return jitStubs->ctiStub(this, generator);
    434434    }
     435
     436    JITEntryPointsWithRef getJITEntryStub(JITEntryGenerator generator)
     437    {
     438        return jitStubs->jitEntryStub(this, generator);
     439    }
     440
     441    JITJSCallThunkEntryPointsWithRef getJITCallThunkEntryStub(JITCallThunkEntryGenerator generator)
     442    {
     443        return jitStubs->jitCallThunkEntryStub(this, generator);
     444    }
    435445   
    436446    std::unique_ptr<RegisterAtOffsetList> allCalleeSaveRegisterOffsets;
     
    574584    BumpPointerAllocator m_regExpAllocator;
    575585    ConcurrentJSLock m_regExpAllocatorLock;
     586
     587    enum VMCounterType {
     588        BaselineCaller,
     589        DFGCaller,
     590        FTLCaller,
     591        CallVarargs,
     592        TailCall,
     593        CallEval,
     594        DirectCall,
     595        PolymorphicCall,
     596        VirtualCall,
     597        VirtualSlowCall,
     598        RegArgsNoArity,
     599        StackArgsNoArity,
     600        RegArgsExtra,
     601        RegArgsArity,
     602        StackArgsArity,
     603        ArityFixupRequired,
     604        NumberVMCounter
     605    };
     606
     607#if ENABLE(VM_COUNTERS)
     608    size_t m_counters[NumberVMCounter];
     609
     610    void clearCounters();
     611
     612    size_t* addressOfCounter(VMCounterType counterType)
     613    {
     614        if (counterType >= NumberVMCounter)
     615            return nullptr;
     616
     617        return &m_counters[counterType];
     618    }
     619
     620    size_t counterFor(VMCounterType counterType)
     621    {
     622        if (counterType >= NumberVMCounter)
     623            return 0;
     624       
     625        return m_counters[counterType];
     626    }
     627
     628    JS_EXPORT_PRIVATE void dumpCounters();
     629#endif
    576630
    577631    std::unique_ptr<HasOwnPropertyCache> m_hasOwnPropertyCache;
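
The counters are plain size_t slots precisely so JIT'ed code can bump them with a single add to a known address. The hunks above only add the storage, clearCounters() and dumpCounters(); the helper below is a hypothetical sketch of how emitted code might use addressOfCounter(), assuming the target macroassembler supports a 64-bit add to an absolute address (X86-64 does):

    #if ENABLE(VM_COUNTERS)
    // Hypothetical helper, not part of the patch as shown: increment a VM counter
    // from JIT'ed code on a 64-bit target.
    static void emitIncrementVMCounter(CCallHelpers& jit, VM& vm, VM::VMCounterType type)
    {
        jit.add64(CCallHelpers::TrustedImm32(1),
            CCallHelpers::AbsoluteAddress(vm.addressOfCounter(type)));
    }
    #endif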
  • trunk/Source/JavaScriptCore/wasm/WasmBinding.cpp

    r209597 r209653  
    134134    }
    135135
    136     GPRReg importJSCellGPRReg = GPRInfo::regT0; // Callee needs to be in regT0 for slow path below.
     136    GPRReg importJSCellGPRReg = argumentRegisterForCallee();
    137137    ASSERT(!wasmCC.m_calleeSaveRegisters.get(importJSCellGPRReg));
    138138
     
    149149
    150150    CallLinkInfo* callLinkInfo = callLinkInfos.add();
    151     callLinkInfo->setUpCall(CallLinkInfo::Call, CodeOrigin(), importJSCellGPRReg);
     151    callLinkInfo->setUpCall(CallLinkInfo::Call, StackArgs, CodeOrigin(), importJSCellGPRReg);
    152152    JIT::DataLabelPtr targetToCheck;
    153153    JIT::TrustedImmPtr initialRightValue(0);
     
    156156    JIT::Jump done = jit.jump();
    157157    slowPath.link(&jit);
    158     // Callee needs to be in regT0 here.
    159     jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::regT2); // Link info needs to be in regT2.
     158    jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); // Link info needs to be in nonArgGPR0
    160159    JIT::Call slowCall = jit.nearCall();
    161160    done.link(&jit);
     
    225224
    226225    LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
    227     patchBuffer.link(slowCall, FunctionPtr(vm->getCTIStub(linkCallThunkGenerator).code().executableAddress()));
     226    patchBuffer.link(slowCall, FunctionPtr(vm->getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(StackArgs).executableAddress()));
    228227    CodeLocationLabel callReturnLocation(patchBuffer.locationOfNearCall(slowCall));
    229228    CodeLocationLabel hotPathBegin(patchBuffer.locationOf(targetToCheck));
  • trunk/Source/WTF/ChangeLog

    r209632 r209653  
     12016-12-09  Michael Saboff  <[email protected]>
     2
     3        JSVALUE64: Pass arguments in platform argument registers when making JavaScript calls
     4        https://bugs.webkit.org/show_bug.cgi?id=160355
     5
     6        Reviewed by Filip Pizlo.
     7
     8        Added a new build option ENABLE_VM_COUNTERS to enable JIT'able counters.
      10        The option is off by default.
     10
     11        * wtf/Platform.h:
     12        Added ENABLE_VM_COUNTERS
     13
    1142016-12-09  Geoffrey Garen  <[email protected]>
    215
  • trunk/Source/WTF/wtf/Platform.h

    r209070 r209653  
    696696#endif
    697697
     698/* This enables per VM counters available for use by JIT'ed code. */
     699#define ENABLE_VM_COUNTERS 0
     700
    698701/* The FTL *does not* work on 32-bit platforms. Disable it even if someone asked us to enable it. */
    699702#if USE(JSVALUE32_64)
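
ENABLE_VM_COUNTERS defaults to off, so shipping builds pay nothing for the new counters. To collect the call-type breakdown that jsc.cpp dumps via vm->dumpCounters() at shell exit (see the jsc.cpp hunk above), flip the define in a local build; this is a local tweak, not part of the patch:

    /* Local measurement build only: turn the per-VM call counters on. */
    #define ENABLE_VM_COUNTERS 1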