Changeset 209653 in webkit
- Timestamp: Dec 9, 2016, 11:32:38 PM
- Location: trunk
- Files: 12 added, 105 edited
trunk/JSTests/ChangeLog
r209652 → r209653

2016-12-09  Michael Saboff  <[email protected]>

        JSVALUE64: Pass arguments in platform argument registers when making JavaScript calls
        https://bugs.webkit.org/show_bug.cgi?id=160355

        Reviewed by Filip Pizlo.

        New microbenchmarks to measure call type performance.

        * microbenchmarks/calling-computed-args.js: Added.
        * microbenchmarks/calling-many-callees.js: Added.
        * microbenchmarks/calling-one-callee-fixed.js: Added.
        * microbenchmarks/calling-one-callee.js: Added.
        * microbenchmarks/calling-poly-callees.js: Added.
        * microbenchmarks/calling-poly-extra-arity-callees.js: Added.
        * microbenchmarks/calling-tailcall.js: Added.
        * microbenchmarks/calling-virtual-arity-fixup-callees.js: Added.
        * microbenchmarks/calling-virtual-arity-fixup-stackargs.js: Added.
        * microbenchmarks/calling-virtual-callees.js: Added.
        * microbenchmarks/calling-virtual-extra-arity-callees.js: Added.
trunk/Source/JavaScriptCore/ChangeLog
r209652 → r209653

2016-12-09  Michael Saboff  <[email protected]>

        JSVALUE64: Pass arguments in platform argument registers when making JavaScript calls
        https://bugs.webkit.org/show_bug.cgi?id=160355

        Reviewed by Filip Pizlo.

        This patch implements passing JavaScript function arguments in registers for 64 bit platforms.

        The implemented convention follows the ABI conventions for the associated platform.
        The first two arguments are the callee and the argument count; the remaining argument
        registers contain "this" and the following arguments until all platform argument
        registers are exhausted. Arguments beyond what fits in registers are placed on the
        stack in the same location as before this patch.

        For X86-64 non-Windows platforms, there are 6 argument registers specified in the
        related ABI. ARM64 has 8 argument registers. This allows for 4 or 6 parameter values
        to be placed in registers on these respective platforms. This patch doesn't implement
        passing arguments in registers for 32 bit platforms, since most of them have at most
        4 argument registers specified, and 32 bit platforms use two 32 bit registers/memory
        locations to store one JSValue.

        The call frame on the stack is unchanged in format, and the arguments that are passed
        in registers use the corresponding call frame locations as spill locations. Arguments
        can also be passed on the stack. The LLInt, baseline JIT'ed code, and the initial
        entry from C++ code pass arguments on the stack; DFG and FTL generated code pass
        arguments via registers. All callees can accept arguments either in registers or on
        the stack, and each callee is responsible for moving arguments to its preferred
        location.

        The multiple entry points to JavaScript code are now handled via the JITEntryPoints
        class and related code. That class has entries for StackArgsArityCheckNotRequired,
        StackArgsMustCheckArity and, for platforms that support register arguments,
        RegisterArgsArityCheckNotRequired and RegisterArgsMustCheckArity, as well as an
        additional RegisterArgsPossibleExtraArgs entry point used when extra register
        arguments are passed. This last case is needed to spill those extra arguments to the
        corresponding call frame slots.
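        To make the convention concrete, here is a minimal standalone sketch of the
        argument-index-to-register mapping it implies, using the x86-64 System V register
        names. The helper names mirror those added to jit/GPRInfo.h by this patch, but the
        code below is an illustration under those assumptions, not the actual JSC
        implementation:

            #include <cstdio>

            // Sketch only: the six x86-64 SysV integer argument registers, in ABI order.
            enum GPR { rdi, rsi, rdx, rcx, r8, r9, InvalidGPR };

            static const GPR argumentGPRs[] = { rdi, rsi, rdx, rcx, r8, r9 };
            static const unsigned numberOfArgumentGPRs = 6;

            // Register 0 carries the callee and register 1 the argument count.
            GPR argumentRegisterForCallee() { return argumentGPRs[0]; }
            GPR argumentRegisterForArgumentCount() { return argumentGPRs[1]; }

            // JS argument 0 is "this", so JS argument i lands in register index 2 + i.
            // Arguments that don't fit stay in their usual call frame slots.
            GPR argumentRegisterForFunctionArgument(unsigned index)
            {
                unsigned registerIndex = 2 + index;
                return registerIndex < numberOfArgumentGPRs ? argumentGPRs[registerIndex] : InvalidGPR;
            }

            int main()
            {
                // On x86-64 this yields 4 register-passed parameter values ("this" plus
                // three arguments); with ARM64's 8 argument registers it would be 6.
                for (unsigned i = 0; i < 6; i++)
                    printf("js argument %u -> register index %d\n", i, static_cast<int>(argumentRegisterForFunctionArgument(i)));
                return 0;
            }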
        * JavaScriptCore.xcodeproj/project.pbxproj:
        * b3/B3ArgumentRegValue.h:
        * b3/B3Validate.cpp:
        * bytecode/CallLinkInfo.cpp:
        (JSC::CallLinkInfo::CallLinkInfo):
        * bytecode/CallLinkInfo.h:
        (JSC::CallLinkInfo::setUpCall):
        (JSC::CallLinkInfo::argumentsLocation):
        (JSC::CallLinkInfo::argumentsInRegisters):
        * bytecode/PolymorphicAccess.cpp:
        (JSC::AccessCase::generateImpl):
        * dfg/DFGAbstractInterpreterInlines.h:
        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
        * dfg/DFGByteCodeParser.cpp:
        (JSC::DFG::ByteCodeParser::parseBlock):
        * dfg/DFGCPSRethreadingPhase.cpp:
        (JSC::DFG::CPSRethreadingPhase::canonicalizeLocalsInBlock):
        (JSC::DFG::CPSRethreadingPhase::specialCaseArguments):
        (JSC::DFG::CPSRethreadingPhase::computeIsFlushed):
        * dfg/DFGClobberize.h:
        (JSC::DFG::clobberize):
        * dfg/DFGCommon.h:
        * dfg/DFGDCEPhase.cpp:
        (JSC::DFG::DCEPhase::run):
        * dfg/DFGDoesGC.cpp:
        (JSC::DFG::doesGC):
        * dfg/DFGDriver.cpp:
        (JSC::DFG::compileImpl):
        * dfg/DFGFixupPhase.cpp:
        (JSC::DFG::FixupPhase::fixupNode):
        * dfg/DFGGenerationInfo.h:
        (JSC::DFG::GenerationInfo::initArgumentRegisterValue):
        * dfg/DFGGraph.cpp:
        (JSC::DFG::Graph::dump):
        (JSC::DFG::Graph::methodOfGettingAValueProfileFor):
        * dfg/DFGGraph.h:
        (JSC::DFG::Graph::needsFlushedThis):
        (JSC::DFG::Graph::addImmediateShouldSpeculateInt32):
        * dfg/DFGInPlaceAbstractState.cpp:
        (JSC::DFG::InPlaceAbstractState::initialize):
        * dfg/DFGJITCompiler.cpp:
        (JSC::DFG::JITCompiler::link):
        (JSC::DFG::JITCompiler::compile):
        (JSC::DFG::JITCompiler::compileFunction):
        (JSC::DFG::JITCompiler::compileEntry): Deleted.
        * dfg/DFGJITCompiler.h:
        (JSC::DFG::JITCompiler::addJSDirectCall):
        (JSC::DFG::JITCompiler::JSDirectCallRecord::JSDirectCallRecord):
        (JSC::DFG::JITCompiler::JSDirectCallRecord::hasSlowCall):
        * dfg/DFGJITFinalizer.cpp:
        (JSC::DFG::JITFinalizer::JITFinalizer):
        (JSC::DFG::JITFinalizer::finalize):
        (JSC::DFG::JITFinalizer::finalizeFunction):
        * dfg/DFGJITFinalizer.h:
        * dfg/DFGLiveCatchVariablePreservationPhase.cpp:
        (JSC::DFG::LiveCatchVariablePreservationPhase::handleBlock):
        * dfg/DFGMaximalFlushInsertionPhase.cpp:
        (JSC::DFG::MaximalFlushInsertionPhase::treatRegularBlock):
        (JSC::DFG::MaximalFlushInsertionPhase::treatRootBlock):
        * dfg/DFGMayExit.cpp:
        * dfg/DFGMinifiedNode.cpp:
        (JSC::DFG::MinifiedNode::fromNode):
        * dfg/DFGMinifiedNode.h:
        (JSC::DFG::belongsInMinifiedGraph):
        * dfg/DFGNode.cpp:
        (JSC::DFG::Node::hasVariableAccessData):
        * dfg/DFGNode.h:
        (JSC::DFG::Node::accessesStack):
        (JSC::DFG::Node::setVariableAccessData):
        (JSC::DFG::Node::hasArgumentRegisterIndex):
        (JSC::DFG::Node::argumentRegisterIndex):
        * dfg/DFGNodeType.h:
        * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
        (JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
        * dfg/DFGOSREntrypointCreationPhase.cpp:
        (JSC::DFG::OSREntrypointCreationPhase::run):
        * dfg/DFGPlan.cpp:
        (JSC::DFG::Plan::compileInThreadImpl):
        * dfg/DFGPreciseLocalClobberize.h:
        (JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
        * dfg/DFGPredictionInjectionPhase.cpp:
        (JSC::DFG::PredictionInjectionPhase::run):
        * dfg/DFGPredictionPropagationPhase.cpp:
        * dfg/DFGPutStackSinkingPhase.cpp:
        * dfg/DFGRegisterBank.h:
        (JSC::DFG::RegisterBank::iterator::unlock):
        (JSC::DFG::RegisterBank::unlockAtIndex):
        * dfg/DFGSSAConversionPhase.cpp:
        (JSC::DFG::SSAConversionPhase::run):
        * dfg/DFGSafeToExecute.h:
        (JSC::DFG::safeToExecute):
        * dfg/DFGSpeculativeJIT.cpp:
        (JSC::DFG::SpeculativeJIT::SpeculativeJIT):
        (JSC::DFG::SpeculativeJIT::clearGenerationInfo):
        (JSC::DFG::dumpRegisterInfo):
        (JSC::DFG::SpeculativeJIT::dump):
        (JSC::DFG::SpeculativeJIT::compileCurrentBlock):
        (JSC::DFG::SpeculativeJIT::checkArgumentTypes):
        (JSC::DFG::SpeculativeJIT::setupArgumentRegistersForEntry):
        (JSC::DFG::SpeculativeJIT::compile):
        * dfg/DFGSpeculativeJIT.h:
        (JSC::DFG::SpeculativeJIT::allocate):
        (JSC::DFG::SpeculativeJIT::spill):
        (JSC::DFG::SpeculativeJIT::generationInfoFromVirtualRegister):
        (JSC::DFG::JSValueOperand::JSValueOperand):
        (JSC::DFG::JSValueOperand::gprUseSpecific):
        * dfg/DFGSpeculativeJIT32_64.cpp:
        (JSC::DFG::SpeculativeJIT::emitCall):
        (JSC::DFG::SpeculativeJIT::compile):
        * dfg/DFGSpeculativeJIT64.cpp:
        (JSC::DFG::SpeculativeJIT::fillJSValue):
        (JSC::DFG::SpeculativeJIT::emitCall):
        (JSC::DFG::SpeculativeJIT::compile):
        * dfg/DFGStrengthReductionPhase.cpp:
        (JSC::DFG::StrengthReductionPhase::handleNode):
        * dfg/DFGThunks.cpp:
        (JSC::DFG::osrEntryThunkGenerator):
        * dfg/DFGVariableEventStream.cpp:
        (JSC::DFG::VariableEventStream::reconstruct):
        * dfg/DFGVirtualRegisterAllocationPhase.cpp:
        (JSC::DFG::VirtualRegisterAllocationPhase::allocateRegister):
        (JSC::DFG::VirtualRegisterAllocationPhase::run):
        * ftl/FTLCapabilities.cpp:
        (JSC::FTL::canCompile):
        * ftl/FTLJITCode.cpp:
        (JSC::FTL::JITCode::~JITCode):
        (JSC::FTL::JITCode::initializeEntrypointThunk):
        (JSC::FTL::JITCode::setEntryFor):
        (JSC::FTL::JITCode::addressForCall):
        (JSC::FTL::JITCode::executableAddressAtOffset):
        (JSC::FTL::JITCode::initializeAddressForCall): Deleted.
        (JSC::FTL::JITCode::initializeArityCheckEntrypoint): Deleted.
        * ftl/FTLJITCode.h:
        * ftl/FTLJITFinalizer.cpp:
        (JSC::FTL::JITFinalizer::finalizeFunction):
        * ftl/FTLLink.cpp:
        (JSC::FTL::link):
        * ftl/FTLLowerDFGToB3.cpp:
        (JSC::FTL::DFG::LowerDFGToB3::lower):
        (JSC::FTL::DFG::LowerDFGToB3::compileNode):
        (JSC::FTL::DFG::LowerDFGToB3::compileGetArgumentRegister):
        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstruct):
        (JSC::FTL::DFG::LowerDFGToB3::compileDirectCallOrConstruct):
        (JSC::FTL::DFG::LowerDFGToB3::compileTailCall):
        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
        (JSC::FTL::DFG::LowerDFGToB3::compileCallEval):
        * ftl/FTLOSREntry.cpp:
        (JSC::FTL::prepareOSREntry):
        * ftl/FTLOutput.cpp:
        (JSC::FTL::Output::argumentRegister):
        (JSC::FTL::Output::argumentRegisterInt32):
        * ftl/FTLOutput.h:
        * interpreter/ShadowChicken.cpp:
        (JSC::ShadowChicken::update):
        * jit/AssemblyHelpers.cpp:
        (JSC::AssemblyHelpers::emitDumbVirtualCall):
        * jit/AssemblyHelpers.h:
        (JSC::AssemblyHelpers::spillArgumentRegistersToFrameBeforePrologue):
        (JSC::AssemblyHelpers::spillArgumentRegistersToFrame):
        (JSC::AssemblyHelpers::fillArgumentRegistersFromFrameBeforePrologue):
        (JSC::AssemblyHelpers::emitPutArgumentToCallFrameBeforePrologue):
        (JSC::AssemblyHelpers::emitPutArgumentToCallFrame):
        (JSC::AssemblyHelpers::emitGetFromCallFrameHeaderBeforePrologue):
        (JSC::AssemblyHelpers::emitGetFromCallFrameArgumentBeforePrologue):
        (JSC::AssemblyHelpers::emitGetPayloadFromCallFrameHeaderBeforePrologue):
        (JSC::AssemblyHelpers::incrementCounter):
        * jit/CachedRecovery.cpp:
        (JSC::CachedRecovery::addTargetJSValueRegs):
        * jit/CachedRecovery.h:
        (JSC::CachedRecovery::gprTargets):
        (JSC::CachedRecovery::setWantedFPR):
        (JSC::CachedRecovery::wantedJSValueRegs):
        (JSC::CachedRecovery::setWantedJSValueRegs): Deleted.
        * jit/CallFrameShuffleData.h:
        * jit/CallFrameShuffler.cpp:
        (JSC::CallFrameShuffler::CallFrameShuffler):
        (JSC::CallFrameShuffler::dump):
        (JSC::CallFrameShuffler::tryWrites):
        (JSC::CallFrameShuffler::prepareAny):
        * jit/CallFrameShuffler.h:
        (JSC::CallFrameShuffler::snapshot):
        (JSC::CallFrameShuffler::addNew):
        (JSC::CallFrameShuffler::initDangerFrontier):
        (JSC::CallFrameShuffler::updateDangerFrontier):
        (JSC::CallFrameShuffler::findDangerFrontierFrom):
        * jit/CallFrameShuffler64.cpp:
        (JSC::CallFrameShuffler::emitDisplace):
        * jit/GPRInfo.h:
        (JSC::JSValueRegs::operator==):
        (JSC::JSValueRegs::operator!=):
        (JSC::GPRInfo::toArgumentIndex):
        (JSC::argumentRegisterFor):
        (JSC::argumentRegisterForCallee):
        (JSC::argumentRegisterForArgumentCount):
        (JSC::argumentRegisterIndexForJSFunctionArgument):
        (JSC::jsFunctionArgumentForArgumentRegister):
        (JSC::argumentRegisterForFunctionArgument):
        (JSC::numberOfRegisterArgumentsFor):
        * jit/JIT.cpp:
        (JSC::JIT::compileWithoutLinking):
        (JSC::JIT::link):
        (JSC::JIT::compileCTINativeCall): Deleted.
        * jit/JIT.h:
        (JSC::JIT::compileNativeCallEntryPoints):
        * jit/JITCall.cpp:
        (JSC::JIT::compileSetupVarargsFrame):
        (JSC::JIT::compileCallEval):
        (JSC::JIT::compileCallEvalSlowCase):
        (JSC::JIT::compileOpCall):
        (JSC::JIT::compileOpCallSlowCase):
        * jit/JITCall32_64.cpp:
        (JSC::JIT::compileCallEvalSlowCase):
        (JSC::JIT::compileOpCall):
        (JSC::JIT::compileOpCallSlowCase):
        * jit/JITCode.cpp:
        (JSC::JITCode::execute):
        (JSC::DirectJITCode::DirectJITCode):
        (JSC::DirectJITCode::initializeEntryPoints):
        (JSC::DirectJITCode::addressForCall):
        (JSC::NativeJITCode::addressForCall):
        (JSC::DirectJITCode::initializeCodeRef): Deleted.
        * jit/JITCode.h:
        (JSC::JITCode::executableAddress): Deleted.
        * jit/JITEntryPoints.h: Added.
        (JSC::JITEntryPoints::JITEntryPoints):
        (JSC::JITEntryPoints::entryFor):
        (JSC::JITEntryPoints::setEntryFor):
        (JSC::JITEntryPoints::offsetOfEntryFor):
        (JSC::JITEntryPoints::registerEntryTypeForArgumentCount):
        (JSC::JITEntryPoints::registerEntryTypeForArgumentType):
        (JSC::JITEntryPoints::clearEntries):
        (JSC::JITEntryPoints::operator=):
        (JSC::JITEntryPointsWithRef::JITEntryPointsWithRef):
        (JSC::JITEntryPointsWithRef::codeRef):
        (JSC::argumentsLocationFor):
        (JSC::registerEntryPointTypeFor):
        (JSC::entryPointTypeFor):
        (JSC::thunkEntryPointTypeFor):
        (JSC::JITJSCallThunkEntryPointsWithRef::JITJSCallThunkEntryPointsWithRef):
        (JSC::JITJSCallThunkEntryPointsWithRef::entryFor):
        (JSC::JITJSCallThunkEntryPointsWithRef::setEntryFor):
        (JSC::JITJSCallThunkEntryPointsWithRef::offsetOfEntryFor):
        (JSC::JITJSCallThunkEntryPointsWithRef::clearEntries):
        (JSC::JITJSCallThunkEntryPointsWithRef::codeRef):
        (JSC::JITJSCallThunkEntryPointsWithRef::operator=):
        * jit/JITOpcodes.cpp:
        (JSC::JIT::privateCompileJITEntryNativeCall):
        (JSC::JIT::privateCompileCTINativeCall): Deleted.
        * jit/JITOpcodes32_64.cpp:
        (JSC::JIT::privateCompileJITEntryNativeCall):
        (JSC::JIT::privateCompileCTINativeCall): Deleted.
        * jit/JITOperations.cpp:
        * jit/JITThunks.cpp:
        (JSC::JITThunks::jitEntryNativeCall):
        (JSC::JITThunks::jitEntryNativeConstruct):
        (JSC::JITThunks::jitEntryStub):
        (JSC::JITThunks::jitCallThunkEntryStub):
        (JSC::JITThunks::hostFunctionStub):
        (JSC::JITThunks::ctiNativeCall): Deleted.
        (JSC::JITThunks::ctiNativeConstruct): Deleted.
        * jit/JITThunks.h:
        * jit/JSInterfaceJIT.h:
        (JSC::JSInterfaceJIT::emitJumpIfNotInt32):
        (JSC::JSInterfaceJIT::emitLoadInt32):
        * jit/RegisterSet.cpp:
        (JSC::RegisterSet::argumentRegisters):
        * jit/RegisterSet.h:
        * jit/Repatch.cpp:
        (JSC::linkSlowFor):
        (JSC::revertCall):
        (JSC::unlinkFor):
        (JSC::linkVirtualFor):
        (JSC::linkPolymorphicCall):
        * jit/SpecializedThunkJIT.h:
        (JSC::SpecializedThunkJIT::SpecializedThunkJIT):
        (JSC::SpecializedThunkJIT::checkJSStringArgument):
        (JSC::SpecializedThunkJIT::linkFailureHere):
        (JSC::SpecializedThunkJIT::finalize):
        * jit/ThunkGenerator.h:
        * jit/ThunkGenerators.cpp:
        (JSC::createRegisterArgumentsSpillEntry):
        (JSC::slowPathFor):
        (JSC::linkCallThunkGenerator):
        (JSC::linkDirectCallThunkGenerator):
        (JSC::linkPolymorphicCallThunkGenerator):
        (JSC::virtualThunkFor):
        (JSC::nativeForGenerator):
        (JSC::nativeCallGenerator):
        (JSC::nativeTailCallGenerator):
        (JSC::nativeTailCallWithoutSavedTagsGenerator):
        (JSC::nativeConstructGenerator):
        (JSC::stringCharLoadRegCall):
        (JSC::charCodeAtThunkGenerator):
        (JSC::charAtThunkGenerator):
        (JSC::fromCharCodeThunkGenerator):
        (JSC::clz32ThunkGenerator):
        (JSC::sqrtThunkGenerator):
        (JSC::floorThunkGenerator):
        (JSC::ceilThunkGenerator):
        (JSC::truncThunkGenerator):
        (JSC::roundThunkGenerator):
        (JSC::expThunkGenerator):
        (JSC::logThunkGenerator):
        (JSC::absThunkGenerator):
        (JSC::imulThunkGenerator):
        (JSC::randomThunkGenerator):
        (JSC::boundThisNoArgsFunctionCallGenerator):
        * jit/ThunkGenerators.h:
        * jsc.cpp:
        (jscmain):
        * llint/LLIntEntrypoint.cpp:
        (JSC::LLInt::setFunctionEntrypoint):
        (JSC::LLInt::setEvalEntrypoint):
        (JSC::LLInt::setProgramEntrypoint):
        (JSC::LLInt::setModuleProgramEntrypoint):
        * llint/LLIntSlowPaths.cpp:
        (JSC::LLInt::entryOSR):
        (JSC::LLInt::setUpCall):
        * llint/LLIntThunks.cpp:
        (JSC::LLInt::generateThunkWithJumpTo):
        (JSC::LLInt::functionForRegisterCallEntryThunkGenerator):
        (JSC::LLInt::functionForStackCallEntryThunkGenerator):
        (JSC::LLInt::functionForRegisterConstructEntryThunkGenerator):
        (JSC::LLInt::functionForStackConstructEntryThunkGenerator):
        (JSC::LLInt::functionForRegisterCallArityCheckThunkGenerator):
        (JSC::LLInt::functionForStackCallArityCheckThunkGenerator):
        (JSC::LLInt::functionForRegisterConstructArityCheckThunkGenerator):
        (JSC::LLInt::functionForStackConstructArityCheckThunkGenerator):
        (JSC::LLInt::functionForCallEntryThunkGenerator): Deleted.
        (JSC::LLInt::functionForConstructEntryThunkGenerator): Deleted.
        (JSC::LLInt::functionForCallArityCheckThunkGenerator): Deleted.
        (JSC::LLInt::functionForConstructArityCheckThunkGenerator): Deleted.
        * llint/LLIntThunks.h:
        * runtime/ArityCheckMode.h:
        * runtime/ExecutableBase.cpp:
        (JSC::ExecutableBase::clearCode):
        * runtime/ExecutableBase.h:
        (JSC::ExecutableBase::entrypointFor):
        (JSC::ExecutableBase::offsetOfEntryFor):
        (JSC::ExecutableBase::offsetOfJITCodeWithArityCheckFor): Deleted.
        * runtime/JSBoundFunction.cpp:
        (JSC::boundThisNoArgsFunctionCall):
        * runtime/NativeExecutable.cpp:
        (JSC::NativeExecutable::finishCreation):
        * runtime/ScriptExecutable.cpp:
        (JSC::ScriptExecutable::installCode):
        * runtime/VM.cpp:
        (JSC::VM::VM):
        (JSC::thunkGeneratorForIntrinsic):
        (JSC::VM::clearCounters):
        (JSC::VM::dumpCounters):
        * runtime/VM.h:
        (JSC::VM::getJITEntryStub):
        (JSC::VM::getJITCallThunkEntryStub):
        (JSC::VM::addressOfCounter):
        (JSC::VM::counterFor):
        * wasm/WasmBinding.cpp:
        (JSC::Wasm::importStubGenerator):
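        For orientation, a minimal sketch of the per-entry-type lookup that the new
        jit/JITEntryPoints.h provides, based on the entry-point names above; the types and
        signatures here are simplified assumptions, not the actual JSC declarations:

            #include <array>

            // Entry-point kinds named in the ChangeLog; the real class also keys
            // register entries by argument count (registerEntryTypeForArgumentCount).
            enum EntryPointType {
                StackArgsArityCheckNotRequired,
                StackArgsMustCheckArity,
                RegisterArgsArityCheckNotRequired,
                RegisterArgsPossibleExtraArgs,
                RegisterArgsMustCheckArity,
                NumberOfEntryPointTypes
            };

            using CodePtr = void*; // stand-in for MacroAssemblerCodePtr

            class EntryPointsSketch {
            public:
                CodePtr entryFor(EntryPointType type) const { return m_entries[type]; }
                void setEntryFor(EntryPointType type, CodePtr entry) { m_entries[type] = entry; }

            private:
                std::array<CodePtr, NumberOfEntryPointTypes> m_entries { };
            };

        A caller that keeps arguments on the stack asks for entryFor(StackArgsMustCheckArity);
        DFG and FTL callers that load the argument registers ask for one of the RegisterArgs
        entries instead.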
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
r209630 → r209653

      65C0285C1717966800351E35 /* ARMv7DOpcode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65C0285A1717966800351E35 /* ARMv7DOpcode.cpp */; };
      65C0285D1717966800351E35 /* ARMv7DOpcode.h in Headers */ = {isa = PBXBuildFile; fileRef = 65C0285B1717966800351E35 /* ARMv7DOpcode.h */; };
+     65DBF3021D93392B003AF4B0 /* JITEntryPoints.h in Headers */ = {isa = PBXBuildFile; fileRef = 650300F21C50274600D786D7 /* JITEntryPoints.h */; settings = {ATTRIBUTES = (Private, ); }; };
      65FB5117184EEE7000C12B70 /* ProtoCallFrame.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65FB5116184EE9BC00C12B70 /* ProtoCallFrame.cpp */; };
      65FB63A41C8EA09C0020719B /* YarrCanonicalizeUnicode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65A946141C8E9F6F00A7209A /* YarrCanonicalizeUnicode.cpp */; };
…
      62EC9BB41B7EB07C00303AD1 /* CallFrameShuffleData.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallFrameShuffleData.cpp; sourceTree = "<group>"; };
      62EC9BB51B7EB07C00303AD1 /* CallFrameShuffleData.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallFrameShuffleData.h; sourceTree = "<group>"; };
+     650300F21C50274600D786D7 /* JITEntryPoints.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITEntryPoints.h; sourceTree = "<group>"; };
      6507D2970E871E4A00D7D896 /* JSTypeInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSTypeInfo.h; sourceTree = "<group>"; };
      651122E5140469BA002B101D /* testRegExp.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = testRegExp.cpp; sourceTree = "<group>"; };
…
      FE187A0A1C0229230038BBCA /* JITDivGenerator.cpp */,
      FE187A0B1C0229230038BBCA /* JITDivGenerator.h */,
+     650300F21C50274600D786D7 /* JITEntryPoints.h */,
      0F46807F14BA572700BFE272 /* JITExceptions.cpp */,
      0F46808014BA572700BFE272 /* JITExceptions.h */,
…
      53D444DC1DAF08AB00B92784 /* B3WasmAddressValue.h in Headers */,
      990DA67F1C8E316A00295159 /* generate_objc_protocol_type_conversions_implementation.py in Headers */,
+     65DBF3021D93392B003AF4B0 /* JITEntryPoints.h in Headers */,
      DC17E8191C9C91DB008A6AB3 /* ShadowChickenInlines.h in Headers */,
      DC17E8181C9C91D9008A6AB3 /* ShadowChicken.h in Headers */,
trunk/Source/JavaScriptCore/b3/B3ArgumentRegValue.h
r206595 → r209653

      }

+     ArgumentRegValue(Origin origin, Reg reg, Type type)
+         : Value(CheckedOpcode, ArgumentReg, type, origin)
+         , m_reg(reg)
+     {
+         ASSERT(reg.isSet());
+     }
+
      Reg m_reg;
  };
trunk/Source/JavaScriptCore/b3/B3Validate.cpp
r208848 → r209653

      VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
      VALIDATE(!value->numChildren(), ("At ", *value));
-     VALIDATE(
-         (value->as<ArgumentRegValue>()->argumentReg().isGPR() ? pointerType() : Double)
-         == value->type(), ("At ", *value));
+     // FIXME: https://bugs.webkit.org/show_bug.cgi?id=165717
+     // We need to handle Int32 arguments and Int64 arguments
+     // for the same register distinctly.
+     VALIDATE((value->as<ArgumentRegValue>()->argumentReg().isGPR()
+         ? (value->type() == pointerType() || value->type() == Int32)
+         : value->type() == Double), ("At ", *value));
      break;
  case Add:
trunk/Source/JavaScriptCore/bytecode/CallLinkInfo.cpp
r208309 → r209653

      , m_clearedByGC(false)
      , m_allowStubs(true)
+     , m_argumentsLocation(static_cast<unsigned>(ArgumentsLocation::StackArgs))
      , m_isLinked(false)
      , m_callType(None)
trunk/Source/JavaScriptCore/bytecode/CallLinkInfo.h
r207475 → r209653

  #include "CodeLocation.h"
  #include "CodeSpecializationKind.h"
+ #include "JITEntryPoints.h"
  #include "PolymorphicCallStubRoutine.h"
  #include "WriteBarrier.h"
…
      void unlink(VM&);

-     void setUpCall(CallType callType, CodeOrigin codeOrigin, unsigned calleeGPR)
-     {
+     void setUpCall(CallType callType, ArgumentsLocation argumentsLocation, CodeOrigin codeOrigin, unsigned calleeGPR)
+     {
+         ASSERT(!isVarargsCallType(callType) || (argumentsLocation == StackArgs));
+
          m_callType = callType;
+         m_argumentsLocation = static_cast<unsigned>(argumentsLocation);
          m_codeOrigin = codeOrigin;
          m_calleeGPR = calleeGPR;
…
      {
          return static_cast<CallType>(m_callType);
+     }
+
+     ArgumentsLocation argumentsLocation()
+     {
+         return static_cast<ArgumentsLocation>(m_argumentsLocation);
+     }
+
+     bool argumentsInRegisters()
+     {
+         return m_argumentsLocation != StackArgs;
      }
…
      bool m_clearedByGC : 1;
      bool m_allowStubs : 1;
+     unsigned m_argumentsLocation : 4;
      bool m_isLinked : 1;
      unsigned m_callType : 4; // CallType
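The new m_argumentsLocation field follows the same packing pattern as the existing m_callType: a small enum stored in a 4-bit unsigned bitfield, with accessors that cast back and forth. A runnable sketch of just that technique, with simplified names that are not the actual JSC class:

    #include <cassert>

    // Assumed, simplified enum; the real ArgumentsLocation has more values.
    enum ArgumentsLocation : unsigned { StackArgs = 0, RegisterArgs = 1 };

    class CallInfoSketch {
    public:
        CallInfoSketch()
            : m_argumentsLocation(static_cast<unsigned>(StackArgs))
        {
        }

        void setArgumentsLocation(ArgumentsLocation location)
        {
            m_argumentsLocation = static_cast<unsigned>(location);
        }

        ArgumentsLocation argumentsLocation() const
        {
            return static_cast<ArgumentsLocation>(m_argumentsLocation);
        }

        bool argumentsInRegisters() const { return m_argumentsLocation != StackArgs; }

    private:
        unsigned m_argumentsLocation : 4; // 4 bits, like the adjacent m_callType field
    };

    int main()
    {
        CallInfoSketch info;
        assert(!info.argumentsInRegisters());
        info.setArgumentsLocation(RegisterArgs);
        assert(info.argumentsInRegisters());
        return 0;
    }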
trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp
r209594 → r209653

      m_rareData->callLinkInfo->setUpCall(
-         CallLinkInfo::Call, stubInfo.codeOrigin, loadedValueGPR);
+         CallLinkInfo::Call, StackArgs, stubInfo.codeOrigin, loadedValueGPR);

      CCallHelpers::JumpList done;
…
      jit.move(CCallHelpers::TrustedImm32(JSValue::CellTag), GPRInfo::regT1);
  #endif
-     jit.move(CCallHelpers::TrustedImmPtr(m_rareData->callLinkInfo.get()), GPRInfo::regT2);
+     jit.move(CCallHelpers::TrustedImmPtr(m_rareData->callLinkInfo.get()), GPRInfo::nonArgGPR0);
      slowPathCall = jit.nearCall();
      if (m_type == Getter)
…
      linkBuffer.link(
          slowPathCall,
-         CodeLocationLabel(vm.getCTIStub(linkCallThunkGenerator).code()));
+         CodeLocationLabel(vm.getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(StackArgs)));
  });
  } else {
trunk/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
r209638 → r209653

      ASSERT(!m_state.variables().operand(node->local()).isClear());
      break;

+ case GetArgumentRegister:
+     ASSERT(!m_state.variables().operand(node->local()).isClear());
+     if (node->variableAccessData()->flushFormat() == FlushedJSValue) {
+         forNode(node).makeBytecodeTop();
+         break;
+     }
+
+     forNode(node).setType(m_graph, typeFilterFor(node->variableAccessData()->flushFormat()));
+     break;
+
  case LoadVarargs:
  case ForwardVarargs: {
trunk/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
r209638 → r209653

      // opposed to using a value we set explicitly.
      if (m_currentBlock == m_graph.block(0) && !inlineCallFrame()) {
-         m_graph.m_arguments.resize(m_numArguments);
-         // We will emit SetArgument nodes. They don't exit, but we're at the top of an op_enter so
-         // exitOK = true.
+         m_graph.m_argumentsOnStack.resize(m_numArguments);
+         m_graph.m_argumentsForChecking.resize(m_numArguments);
+         // Create all GetArgumentRegister nodes first and then the corresponding MovHint nodes,
+         // followed by the corresponding SetLocal nodes and finally any SetArgument nodes for
+         // the remaining arguments.
+         // We do this to make the exit processing correct. We start with m_exitOK = true since
+         // GetArgumentRegister nodes can exit, even though they don't. The MovHints technically
+         // could exit but won't. The SetLocals can exit, and therefore we want all the MovHints
+         // before the first SetLocal so that the register state is consistent.
+         // We do all this processing before creating any SetArgument nodes since they are
+         // morally equivalent to the SetLocals for GetArgumentRegister nodes.
          m_exitOK = true;
-         for (unsigned argument = 0; argument < m_numArguments; ++argument) {
+
+         unsigned numRegisterArguments = std::min(m_numArguments, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS);
+
+         Vector<Node*, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS> getArgumentRegisterNodes;
+
+         // First create GetArgumentRegister nodes.
+         for (unsigned argument = 0; argument < numRegisterArguments; ++argument) {
+             getArgumentRegisterNodes.append(
+                 addToGraph(GetArgumentRegister, OpInfo(0),
+                     OpInfo(argumentRegisterIndexForJSFunctionArgument(argument))));
+         }
+
+         // Create all the MovHints for the GetArgumentRegister nodes created above.
+         for (unsigned i = 0; i < getArgumentRegisterNodes.size(); ++i) {
+             Node* getArgumentRegister = getArgumentRegisterNodes[i];
+             addToGraph(MovHint, OpInfo(virtualRegisterForArgument(i).offset()), getArgumentRegister);
+             // We can't exit anymore.
+             m_exitOK = false;
+         }
+
+         // Exit is now okay, but we need to fence with an ExitOK node.
+         m_exitOK = true;
+         addToGraph(ExitOK);
+
+         // Create all the SetLocals for the GetArgumentRegister nodes created above.
+         for (unsigned i = 0; i < getArgumentRegisterNodes.size(); ++i) {
+             Node* getArgumentRegister = getArgumentRegisterNodes[i];
+             VariableAccessData* variableAccessData = newVariableAccessData(virtualRegisterForArgument(i));
+             variableAccessData->mergeStructureCheckHoistingFailed(
+                 m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadCache));
+             variableAccessData->mergeCheckArrayHoistingFailed(
+                 m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadIndexingType));
+             Node* setLocal = addToGraph(SetLocal, OpInfo(variableAccessData), getArgumentRegister);
+             m_currentBlock->variablesAtTail.argument(i) = setLocal;
+             getArgumentRegister->setVariableAccessData(setLocal->variableAccessData());
+             m_graph.m_argumentsOnStack[i] = setLocal;
+             m_graph.m_argumentsForChecking[i] = getArgumentRegister;
+         }
+
+         // Finally create any SetArgument nodes.
+         for (unsigned argument = NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argument < m_numArguments; ++argument) {
              VariableAccessData* variable = newVariableAccessData(
                  virtualRegisterForArgument(argument));
…

              Node* setArgument = addToGraph(SetArgument, OpInfo(variable));
-             m_graph.m_arguments[argument] = setArgument;
+             m_graph.m_argumentsOnStack[argument] = setArgument;
+             m_graph.m_argumentsForChecking[argument] = setArgument;
              m_currentBlock->variablesAtTail.setArgumentFirstTime(argument, setArgument);
          }
…
      // done by the arguments object creation node as that node may not exist.
      noticeArgumentsUse();
+     Terminality terminality = handleVarargsCall(currentInstruction, TailCallForwardVarargs, CallMode::Tail);
+     // We need to insert flush nodes for our arguments after the TailCallForwardVarargs
+     // node so that they will be flushed to the stack and kept alive.
      flushForReturn();
-     Terminality terminality = handleVarargsCall(currentInstruction, TailCallForwardVarargs, CallMode::Tail);
      ASSERT_WITH_MESSAGE(m_currentInstruction == currentInstruction, "handleVarargsCall, which may have inlined the callee, trashed m_currentInstruction");
      // If the call is terminal then we should not parse any further bytecodes as the TailCall will exit the function.
trunk/Source/JavaScriptCore/dfg/DFGCPSRethreadingPhase.cpp
r203808 → r209653

      //
      // Head variable: describes what is live at the head of the basic block.
-     // Head variable links may refer to Flush, PhantomLocal, Phi, or SetArgument.
-     // SetArgument may only appear in the root block.
+     // Head variable links may refer to Flush, PhantomLocal, Phi, GetArgumentRegister
+     // or SetArgument.
+     // GetArgumentRegister and SetArgument may only appear in the root block.
      //
      // Tail variable: the last thing that happened to the variable in the block.
-     // It may be a Flush, PhantomLocal, GetLocal, SetLocal, SetArgument, or Phi.
-     // SetArgument may only appear in the root block. Note that if there ever
-     // was a GetLocal to the variable, and it was followed by PhantomLocals and
-     // Flushes but not SetLocals, then the tail variable will be the GetLocal.
+     // It may be a Flush, PhantomLocal, GetLocal, SetLocal, GetArgumentRegister,
+     // SetArgument, or Phi. GetArgumentRegister and SetArgument may only appear
+     // in the root block. Note that if there ever was a GetLocal to the variable,
+     // and it was followed by PhantomLocals and Flushes but not SetLocals, then
+     // the tail variable will be the GetLocal.
      // This reflects the fact that you only care that the tail variable is a
      // Flush or PhantomLocal if nothing else interesting happened. Likewise, if
…
  void specialCaseArguments()
  {
-     // Normally, a SetArgument denotes the start of a live range for a local's value on the stack.
-     // But those SetArguments used for the actual arguments to the machine CodeBlock get
-     // special-cased. We could have instead used two different node types - one for the arguments
-     // at the prologue case, and another for the other uses. But this seemed like IR overkill.
-     for (unsigned i = m_graph.m_arguments.size(); i--;)
-         m_graph.block(0)->variablesAtHead.setArgumentFirstTime(i, m_graph.m_arguments[i]);
+     // Normally, a SetArgument or SetLocal denotes the start of a live range for
+     // a local's value on the stack. But those SetArguments and SetLocals used
+     // for the actual arguments to the machine CodeBlock get special-cased. We could have
+     // instead used two different node types - one for the arguments at the prologue case,
+     // and another for the other uses. But this seemed like IR overkill.
+     for (unsigned i = m_graph.m_argumentsOnStack.size(); i--;)
+         m_graph.block(0)->variablesAtHead.setArgumentFirstTime(i, m_graph.m_argumentsOnStack[i]);
  }
…
      case SetLocal:
      case SetArgument:
+     case GetArgumentRegister:
          break;
trunk/Source/JavaScriptCore/dfg/DFGClobberize.h
r209638 → r209653

      case PhantomLocal:
      case SetArgument:
+     case GetArgumentRegister:
      case Jump:
      case Branch:
…
      // DFG backend requires that the locals that this reads are flushed. FTL backend can handle those
      // locals being promoted.
-     if (!isFTL(graph.m_plan.mode))
+     if (!isFTL(graph.m_plan.mode) && !node->origin.semantic.inlineCallFrame)
          read(Stack);
…
      case DirectTailCall:
      case TailCallVarargs:
-     case TailCallForwardVarargs:
          read(World);
          write(SideState);
          return;

+     case TailCallForwardVarargs:
+         // We read all arguments after "this".
+         for (unsigned arg = 1; arg < graph.m_argumentsOnStack.size(); arg++)
+             read(AbstractHeap(Stack, virtualRegisterForArgument(arg)));
+         read(World);
+         write(SideState);
+         return;
+
      case GetGetter:
          read(GetterSetter_getter);
trunk/Source/JavaScriptCore/dfg/DFGCommon.h
r206899 → r209653

  enum OptimizationFixpointState { BeforeFixpoint, FixpointNotConverged, FixpointConverged };

+ enum StrengthReduceArgumentFlushes { DontOptimizeArgumentFlushes, OptimizeArgumentFlushes };
+
  // Describes the form you can expect the entire graph to be in.
  enum GraphForm {
trunk/Source/JavaScriptCore/dfg/DFGDCEPhase.cpp
r203808 → r209653

      fixupBlock(block);

-     cleanVariables(m_graph.m_arguments);
+     cleanVariables(m_graph.m_argumentsOnStack);
+     cleanVariables(m_graph.m_argumentsForChecking);

      // Just do a basic Phantom/Check clean-up.
trunk/Source/JavaScriptCore/dfg/DFGDoesGC.cpp
r209638 → r209653

      case GetFromArguments:
      case PutToArguments:
+     case GetArgumentRegister:
      case GetArgument:
      case LogShadowChickenPrologue:
trunk/Source/JavaScriptCore/dfg/DFGDriver.cpp
r208777 → r209653

      vm.getCTIStub(osrExitGenerationThunkGenerator);
      vm.getCTIStub(throwExceptionFromCallSlowPathGenerator);
-     vm.getCTIStub(linkCallThunkGenerator);
-     vm.getCTIStub(linkPolymorphicCallThunkGenerator);
+     vm.getJITCallThunkEntryStub(linkCallThunkGenerator);
+     vm.getJITCallThunkEntryStub(linkDirectCallThunkGenerator);
+     vm.getJITCallThunkEntryStub(linkPolymorphicCallThunkGenerator);

      if (vm.typeProfiler())
trunk/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
r209638 → r209653

      case GetLocal:
      case GetCallee:
+     case GetArgumentRegister:
      case GetArgumentCountIncludingThis:
      case GetRestLength:
trunk/Source/JavaScriptCore/dfg/DFGGenerationInfo.h
r206525 → r209653

          initGPR(node, useCount, gpr, format);
      }
+
+     void initArgumentRegisterValue(Node* node, uint32_t useCount, GPRReg gpr, DataFormat registerFormat = DataFormatJS)
+     {
+         m_node = node;
+         m_useCount = useCount;
+         m_registerFormat = registerFormat;
+         m_spillFormat = DataFormatNone;
+         m_canFill = false;
+         u.gpr = gpr;
+         m_bornForOSR = false;
+         m_isConstant = false;
+         ASSERT(m_useCount);
+     }
  #elif USE(JSVALUE32_64)
      void initJSValue(Node* node, uint32_t useCount, GPRReg tagGPR, GPRReg payloadGPR, DataFormat format = DataFormatJS)
trunk/Source/JavaScriptCore/dfg/DFGGraph.cpp
r208761 → r209653

          out.print(comma, inContext(data.variants[i], context));
      }
-     ASSERT(node->hasVariableAccessData(*this) == node->accessesStack(*this));
      if (node->hasVariableAccessData(*this)) {
          VariableAccessData* variableAccessData = node->tryGetVariableAccessData();
…
          out.print(comma, "default:", data->fallThrough);
      }
+     if (node->hasArgumentRegisterIndex())
+         out.print(comma, node->argumentRegisterIndex(), "(", GPRInfo::toArgumentRegister(node->argumentRegisterIndex()), ")");
      ClobberSet reads;
      ClobberSet writes;
…
      out.print(")");

-     if (node->accessesStack(*this) && node->tryGetVariableAccessData())
+     if ((node->accessesStack(*this) || node->op() == GetArgumentRegister) && node->tryGetVariableAccessData())
          out.print(" predicting ", SpeculationDump(node->tryGetVariableAccessData()->prediction()));
      else if (node->hasHeapPrediction())
…
      if (m_form == SSA)
          out.print("  Argument formats: ", listDump(m_argumentFormats), "\n");
-     else
-         out.print("  Arguments: ", listDump(m_arguments), "\n");
+     else {
+         out.print("  Arguments for checking: ", listDump(m_argumentsForChecking), "\n");
+         out.print("  Arguments on stack: ", listDump(m_argumentsOnStack), "\n");
+     }
      out.print("\n");
…
      CodeBlock* profiledBlock = baselineCodeBlockFor(node->origin.semantic);

-     if (node->accessesStack(*this)) {
+     if (node->accessesStack(*this) || node->op() == GetArgumentRegister) {
          ValueProfile* result = [&] () -> ValueProfile* {
              if (!node->local().isArgument())
                  return nullptr;
              int argument = node->local().toArgument();
-             Node* argumentNode = m_arguments[argument];
-             if (!argumentNode)
+             Node* argumentNode = m_argumentsOnStack[argument];
+             if (!argumentNode || !argumentNode->accessesStack(*this))
                  return nullptr;
              if (node->variableAccessData() != argumentNode->variableAccessData())
trunk/Source/JavaScriptCore/dfg/DFGGraph.h
r208637 → r209653

      bool needsScopeRegister() const { return m_hasDebuggerEnabled || m_codeBlock->usesEval(); }
-     bool needsFlushedThis() const { return m_codeBlock->usesEval(); }
+     bool needsFlushedThis() const { return m_hasDebuggerEnabled || m_codeBlock->usesEval(); }

      VM& m_vm;
…
      Bag<StorageAccessData> m_storageAccessData;

-     // In CPS, this is all of the SetArgument nodes for the arguments in the machine code block
-     // that survived DCE. All of them except maybe "this" will survive DCE, because of the Flush
-     // nodes.
+     // In CPS, this is all of the GetArgumentRegister and SetArgument nodes for the arguments in
+     // the machine code block that survived DCE. All of them except maybe "this" will survive DCE,
+     // because of the Flush nodes.
      //
      // In SSA, this is all of the GetStack nodes for the arguments in the machine code block that
…
      // If we DCE the ArithAdd and we remove the int check on x, then this won't do the side
      // effects.
-     Vector<Node*, 8> m_arguments;
+     Vector<Node*, 8> m_argumentsOnStack;
+     Vector<Node*, 8> m_argumentsForChecking;

      // In CPS, this is meaningless. In SSA, this is the argument speculation that we've locked in.
…
      UnificationState m_unificationState;
      PlanStage m_planStage { PlanStage::Initial };
+     StrengthReduceArgumentFlushes m_strengthReduceArguments = { StrengthReduceArgumentFlushes::DontOptimizeArgumentFlushes };
      RefCountState m_refCountState;
      bool m_hasDebuggerEnabled;
trunk/Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.cpp
r208373 → r209653

          format = m_graph.m_argumentFormats[i];
      else {
-         Node* node = m_graph.m_arguments[i];
+         Node* node = m_graph.m_argumentsOnStack[i];
          if (!node)
              format = FlushedJSValue;
          else {
-             ASSERT(node->op() == SetArgument);
+             ASSERT(node->op() == SetArgument || node->op() == SetLocal);
              format = node->variableAccessData()->flushFormat();
          }
trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
r208560 → r209653

  }

- void JITCompiler::compileEntry()
- {
-     // This code currently matches the old JIT. In the function header we need to
-     // save return address and call frame via the prologue and perform a fast stack check.
-     // FIXME: https://bugs.webkit.org/show_bug.cgi?id=56292
-     // We'll need to convert the remaining cti_ style calls (specifically the stack
-     // check) which will be dependent on stack layout. (We'd need to account for this in
-     // both normal return code and when jumping to an exception handler).
-     emitFunctionPrologue();
-     emitPutToCallFrameHeader(m_codeBlock, CallFrameSlot::codeBlock);
- }
-
  void JITCompiler::compileSetupRegistersForEntry()
  {
…
      JSCallRecord& record = m_jsCalls[i];
      CallLinkInfo& info = *record.info;
-     linkBuffer.link(record.slowCall, FunctionPtr(m_vm->getCTIStub(linkCallThunkGenerator).code().executableAddress()));
+     linkBuffer.link(record.slowCall, FunctionPtr(m_vm->getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(info.argumentsLocation()).executableAddress()));
      info.setCallLocations(
          CodeLocationLabel(linkBuffer.locationOfNearCall(record.slowCall)),
…
      CallLinkInfo& info = *record.info;
      linkBuffer.link(record.call, linkBuffer.locationOf(record.slowPath));
+     if (record.hasSlowCall())
+         linkBuffer.link(record.slowCall, FunctionPtr(m_vm->getJITCallThunkEntryStub(linkDirectCallThunkGenerator).entryFor(info.argumentsLocation()).executableAddress()));
      info.setCallLocations(
          CodeLocationLabel(),
…
  void JITCompiler::compile()
  {
+     Label mainEntry(this);
+
      setStartOfCode();
-     compileEntry();
+     emitFunctionPrologue();
+
+     Label entryPoint(this);
+     emitPutToCallFrameHeader(m_codeBlock, CallFrameSlot::codeBlock);
+
      m_speculative = std::make_unique<SpeculativeJIT>(*this);
…
      m_speculative->callOperationWithCallFrameRollbackOnException(operationThrowStackOverflowError, m_codeBlock);

+ #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+     m_stackArgsArityOKEntry = label();
+     emitFunctionPrologue();
+
+     // Load argument values into argument registers
+     loadPtr(addressFor(CallFrameSlot::callee), argumentRegisterForCallee());
+     load32(payloadFor(CallFrameSlot::argumentCount), argumentRegisterForArgumentCount());
+
+     for (unsigned argIndex = 0; argIndex < static_cast<unsigned>(m_codeBlock->numParameters()) && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++)
+         load64(Address(GPRInfo::callFrameRegister, (CallFrameSlot::thisArgument + argIndex) * static_cast<int>(sizeof(Register))), argumentRegisterForFunctionArgument(argIndex));
+
+     jump(entryPoint);
+ #endif
+
      // Generate slow path code.
      m_speculative->runSlowPathGenerators(m_pcToCodeOriginMapBuilder);
…
      disassemble(*linkBuffer);

+     JITEntryPoints entrypoints;
+ #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+     entrypoints.setEntryFor(RegisterArgsArityCheckNotRequired, linkBuffer->locationOf(mainEntry));
+     entrypoints.setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(m_stackArgsArityOKEntry));
+ #else
+     entrypoints.setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(mainEntry));
+ #endif
+
      m_graph.m_plan.finalizer = std::make_unique<JITFinalizer>(
-         m_graph.m_plan, WTFMove(m_jitCode), WTFMove(linkBuffer));
+         m_graph.m_plan, WTFMove(m_jitCode), WTFMove(linkBuffer), entrypoints);
  }
…
  {
      setStartOfCode();
-     compileEntry();
+
+ #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+     unsigned numParameters = static_cast<unsigned>(m_codeBlock->numParameters());
+     GPRReg argCountReg = argumentRegisterForArgumentCount();
+     JumpList continueRegisterEntry;
+     Label registerArgumentsEntrypoints[NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS + 1];
+
+     if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
+         // Spill any extra register arguments passed to function onto the stack.
+         for (unsigned extraRegisterArgumentIndex = NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS - 1;
+             extraRegisterArgumentIndex >= numParameters; extraRegisterArgumentIndex--) {
+             registerArgumentsEntrypoints[extraRegisterArgumentIndex + 1] = label();
+             emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(extraRegisterArgumentIndex), extraRegisterArgumentIndex);
+         }
+     }
+     incrementCounter(this, VM::RegArgsExtra);
+
+     continueRegisterEntry.append(jump());
+
+     m_registerArgsWithArityCheck = label();
+     incrementCounter(this, VM::RegArgsArity);
+
+     Label registerArgsCheckArity(this);
+
+     Jump registerCheckArity;
+
+     if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+         registerCheckArity = branch32(NotEqual, argCountReg, TrustedImm32(numParameters));
+     else {
+         registerCheckArity = branch32(Below, argCountReg, TrustedImm32(numParameters));
+         m_registerArgsWithPossibleExtraArgs = label();
+     }
+
+     Label registerEntryNoArity(this);
+
+     if (numParameters <= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+         registerArgumentsEntrypoints[numParameters] = registerEntryNoArity;
+
+     incrementCounter(this, VM::RegArgsNoArity);
+
+     continueRegisterEntry.link(this);
+ #endif // NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+
+     Label mainEntry(this);
+
+     emitFunctionPrologue();

      // === Function header code generation ===
…
      // so enter after this.
      Label fromArityCheck(this);
+
+ #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+     storePtr(argumentRegisterForCallee(), addressFor(CallFrameSlot::callee));
+     store32(argCountReg, payloadFor(CallFrameSlot::argumentCount));
+
+     Label fromStackEntry(this);
+ #endif
+
+     emitPutToCallFrameHeader(m_codeBlock, CallFrameSlot::codeBlock);
+
      // Plant a check that sufficient space is available in the JSStack.
-     addPtr(TrustedImm32(virtualRegisterForLocal(m_graph.requiredRegisterCountForExecutionAndExit() - 1).offset() * sizeof(Register)), GPRInfo::callFrameRegister, GPRInfo::regT1);
-     Jump stackOverflow = branchPtr(Above, AbsoluteAddress(m_vm->addressOfSoftStackLimit()), GPRInfo::regT1);
+     addPtr(TrustedImm32(virtualRegisterForLocal(m_graph.requiredRegisterCountForExecutionAndExit() - 1).offset() * sizeof(Register)), GPRInfo::callFrameRegister, GPRInfo::nonArgGPR0);
+     Jump stackOverflow = branchPtr(Above, AbsoluteAddress(m_vm->addressOfSoftStackLimit()), GPRInfo::nonArgGPR0);

      // Move the stack pointer down to accommodate locals
…
      m_speculative->callOperationWithCallFrameRollbackOnException(operationThrowStackOverflowError, m_codeBlock);
-
-     // The fast entry point into a function does not check the correct number of arguments
-     // have been passed to the call (we only use the fast entry point where we can statically
-     // determine the correct number of arguments have been passed, or have already checked).
-     // In cases where an arity check is necessary, we enter here.
-     // FIXME: change this from a cti call to a DFG style operation (normal C calling conventions).
-     m_arityCheck = label();
-     compileEntry();
+
+     JumpList arityOK;
+
+ #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+     jump(registerArgsCheckArity);
+
+     JumpList registerArityNeedsFixup;
+     if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
+         registerCheckArity.link(this);
+         registerArityNeedsFixup.append(branch32(Below, argCountReg, TrustedImm32(m_codeBlock->numParameters())));
+
+         // We have extra register arguments.
+
+         // The fast entry point into a function does not check that the correct number of arguments
+         // have been passed to the call (we only use the fast entry point where we can statically
+         // determine the correct number of arguments have been passed, or have already checked).
+         // In cases where an arity check is necessary, we enter here.
+         m_registerArgsWithPossibleExtraArgs = label();
+
+         incrementCounter(this, VM::RegArgsExtra);
+
+         // Spill extra args passed to function
+         for (unsigned argIndex = static_cast<unsigned>(m_codeBlock->numParameters()); argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) {
+             branch32(MacroAssembler::BelowOrEqual, argCountReg, MacroAssembler::TrustedImm32(argIndex)).linkTo(mainEntry, this);
+             emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex);
+         }
+         jump(mainEntry);
+     }
+
+     // Fall through
+     if (numParameters > 0) {
+         // There should always be a "this" parameter.
+         unsigned registerArgumentFixupCount = std::min(numParameters - 1, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS);
+         Label registerArgumentsNeedArityFixup = label();
+
+         for (unsigned argIndex = 1; argIndex <= registerArgumentFixupCount; argIndex++)
+             registerArgumentsEntrypoints[argIndex] = registerArgumentsNeedArityFixup;
+     }
+
+     incrementCounter(this, VM::RegArgsArity);
+
+     registerArityNeedsFixup.link(this);
+
+     if (numParameters >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+         registerCheckArity.link(this);
+
+     spillArgumentRegistersToFrameBeforePrologue();
+
+ #if ENABLE(VM_COUNTERS)
+     Jump continueToStackArityFixup = jump();
+ #endif
+ #endif // NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+
+     m_stackArgsWithArityCheck = label();
+     incrementCounter(this, VM::StackArgsArity);
+
+ #if ENABLE(VM_COUNTERS)
+     continueToStackArityFixup.link(this);
+ #endif
+
+     emitFunctionPrologue();

      load32(AssemblyHelpers::payloadFor((VirtualRegister)CallFrameSlot::argumentCount), GPRInfo::regT1);
-     branch32(AboveOrEqual, GPRInfo::regT1, TrustedImm32(m_codeBlock->numParameters())).linkTo(fromArityCheck, this);
+     arityOK.append(branch32(AboveOrEqual, GPRInfo::regT1, TrustedImm32(m_codeBlock->numParameters())));
+
+     incrementCounter(this, VM::ArityFixupRequired);
+
      emitStoreCodeOrigin(CodeOrigin(0));
      if (maxFrameExtentForSlowPathCall)
…
      if (maxFrameExtentForSlowPathCall)
          addPtr(TrustedImm32(maxFrameExtentForSlowPathCall), stackPointerRegister);
-     branchTest32(Zero, GPRInfo::returnValueGPR).linkTo(fromArityCheck, this);
+     arityOK.append(branchTest32(Zero, GPRInfo::returnValueGPR));
+
      emitStoreCodeOrigin(CodeOrigin(0));
      move(GPRInfo::returnValueGPR, GPRInfo::argumentGPR0);
      m_callArityFixup = call();
+
+ #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+     Jump toFillRegisters = jump();
+
+     m_stackArgsArityOKEntry = label();
+
+     incrementCounter(this, VM::StackArgsNoArity);
+     emitFunctionPrologue();
+
+     arityOK.link(this);
+     toFillRegisters.link(this);
+
+     // Load argument values into argument registers
+     for (unsigned argIndex = 0; argIndex < static_cast<unsigned>(m_codeBlock->numParameters()) && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++)
+         load64(Address(GPRInfo::callFrameRegister, (CallFrameSlot::thisArgument + argIndex) * static_cast<int>(sizeof(Register))), argumentRegisterForFunctionArgument(argIndex));
+
+     jump(fromStackEntry);
+ #else
+     arityOK.linkTo(fromArityCheck, this);
      jump(fromArityCheck);
+ #endif

      // Generate slow path code.
…
      disassemble(*linkBuffer);

-     MacroAssemblerCodePtr withArityCheck = linkBuffer->locationOf(m_arityCheck);
+     JITEntryPoints entrypoints;
+ #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+ #if ENABLE(VM_COUNTERS)
+     MacroAssemblerCodePtr mainEntryCodePtr = linkBuffer->locationOf(registerEntryNoArity);
+ #else
+     MacroAssemblerCodePtr mainEntryCodePtr = linkBuffer->locationOf(mainEntry);
+ #endif
+     entrypoints.setEntryFor(RegisterArgsArityCheckNotRequired, mainEntryCodePtr);
+     entrypoints.setEntryFor(RegisterArgsPossibleExtraArgs, linkBuffer->locationOf(m_registerArgsWithPossibleExtraArgs));
+     entrypoints.setEntryFor(RegisterArgsMustCheckArity, linkBuffer->locationOf(m_registerArgsWithArityCheck));
+
+     for (unsigned argCount = 1; argCount <= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argCount++) {
+         MacroAssemblerCodePtr entry;
+         if (argCount == numParameters)
+             entry = mainEntryCodePtr;
+         else if (registerArgumentsEntrypoints[argCount].isSet())
+             entry = linkBuffer->locationOf(registerArgumentsEntrypoints[argCount]);
+         else
+             entry = linkBuffer->locationOf(m_registerArgsWithArityCheck);
+         entrypoints.setEntryFor(JITEntryPoints::registerEntryTypeForArgumentCount(argCount), entry);
+     }
+     entrypoints.setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(m_stackArgsArityOKEntry));
+ #else
+     entrypoints.setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(mainEntry));
+ #endif
+     entrypoints.setEntryFor(StackArgsMustCheckArity, linkBuffer->locationOf(m_stackArgsWithArityCheck));

      m_graph.m_plan.finalizer = std::make_unique<JITFinalizer>(
-         m_graph.m_plan, WTFMove(m_jitCode), WTFMove(linkBuffer), withArityCheck);
+         m_graph.m_plan, WTFMove(m_jitCode), WTFMove(linkBuffer), entrypoints);
  }
trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.h
r207475 → r209653

      }

+     void addJSDirectCall(Call call, Call slowCall, Label slowPath, CallLinkInfo* info)
+     {
+         m_jsDirectCalls.append(JSDirectCallRecord(call, slowCall, slowPath, info));
+     }
+
      void addJSDirectTailCall(PatchableJump patchableJump, Call call, Label slowPath, CallLinkInfo* info)
      {
…
      // Internal implementation to compile.
-     void compileEntry();
      void compileSetupRegistersForEntry();
      void compileEntryExecutionFlag();
…
      }

+     JSDirectCallRecord(Call call, Call slowCall, Label slowPath, CallLinkInfo* info)
+         : call(call)
+         , slowCall(slowCall)
+         , slowPath(slowPath)
+         , info(info)
+     {
+     }
+
+     bool hasSlowCall() { return slowCall.m_label.isSet(); }
+
      Call call;
+     Call slowCall;
      Label slowPath;
      CallLinkInfo* info;
…
      Call m_callArityFixup;
-     Label m_arityCheck;
+ #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+     Label m_registerArgsWithPossibleExtraArgs;
+     Label m_registerArgsWithArityCheck;
+     Label m_stackArgsArityOKEntry;
+ #endif
+     Label m_stackArgsWithArityCheck;
      std::unique_ptr<SpeculativeJIT> m_speculative;
      PCToCodeOriginMapBuilder m_pcToCodeOriginMapBuilder;
trunk/Source/JavaScriptCore/dfg/DFGJITFinalizer.cpp
r200933 → r209653

  namespace JSC { namespace DFG {

- JITFinalizer::JITFinalizer(Plan& plan, PassRefPtr<JITCode> jitCode, std::unique_ptr<LinkBuffer> linkBuffer, MacroAssemblerCodePtr withArityCheck)
+ JITFinalizer::JITFinalizer(Plan& plan, PassRefPtr<JITCode> jitCode,
+     std::unique_ptr<LinkBuffer> linkBuffer, JITEntryPoints& entrypoints)
      : Finalizer(plan)
      , m_jitCode(jitCode)
      , m_linkBuffer(WTFMove(linkBuffer))
-     , m_withArityCheck(withArityCheck)
+     , m_entrypoints(entrypoints)
  {
  }
…
  bool JITFinalizer::finalize()
  {
-     m_jitCode->initializeCodeRef(
-         FINALIZE_DFG_CODE(*m_linkBuffer, ("DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data())),
-         MacroAssemblerCodePtr());
+     MacroAssemblerCodeRef codeRef = FINALIZE_DFG_CODE(*m_linkBuffer, ("DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data()));
+     m_jitCode->initializeEntryPoints(JITEntryPointsWithRef(codeRef, m_entrypoints));

      m_plan.codeBlock->setJITCode(m_jitCode);
…
  bool JITFinalizer::finalizeFunction()
  {
-     RELEASE_ASSERT(!m_withArityCheck.isEmptyValue());
-     m_jitCode->initializeCodeRef(
-         FINALIZE_DFG_CODE(*m_linkBuffer, ("DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data())),
-         m_withArityCheck);
+     RELEASE_ASSERT(!m_entrypoints.entryFor(StackArgsMustCheckArity).isEmptyValue());
+     MacroAssemblerCodeRef codeRef = FINALIZE_DFG_CODE(*m_linkBuffer, ("DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data()));
+
+     m_jitCode->initializeEntryPoints(JITEntryPointsWithRef(codeRef, m_entrypoints));
+
      m_plan.codeBlock->setJITCode(m_jitCode);
trunk/Source/JavaScriptCore/dfg/DFGJITFinalizer.h
r206525 → r209653

  class JITFinalizer : public Finalizer {
  public:
-     JITFinalizer(Plan&, PassRefPtr<JITCode>, std::unique_ptr<LinkBuffer>, MacroAssemblerCodePtr withArityCheck = MacroAssemblerCodePtr(MacroAssemblerCodePtr::EmptyValue));
+     JITFinalizer(Plan&, PassRefPtr<JITCode>, std::unique_ptr<LinkBuffer>, JITEntryPoints&);
      virtual ~JITFinalizer();
…
      RefPtr<JITCode> m_jitCode;
      std::unique_ptr<LinkBuffer> m_linkBuffer;
-     MacroAssemblerCodePtr m_withArityCheck;
+     JITEntryPoints m_entrypoints;
  };
trunk/Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp
r205794 → r209653

      for (unsigned i = 0; i < block->size(); i++) {
          Node* node = block->at(i);
-         bool isPrimordialSetArgument = node->op() == SetArgument && node->local().isArgument() && node == m_graph.m_arguments[node->local().toArgument()];
+         bool isPrimordialSetArgument = node->op() == SetArgument && node->local().isArgument() && node == m_graph.m_argumentsOnStack[node->local().toArgument()];
          InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame;
          if (inlineCallFrame)
trunk/Source/JavaScriptCore/dfg/DFGMaximalFlushInsertionPhase.cpp
r203923 r209653 68 68 for (unsigned i = 0; i < block->size(); i++) { 69 69 Node* node = block->at(i); 70 bool isPrimordialSetArgument = node->op() == SetArgument && node->local().isArgument() && node == m_graph.m_arguments[node->local().toArgument()];71 if (node->op() == SetLocal || (node->op() == SetArgument && !isPrimordialSetArgument)) {70 if ((node->op() == SetArgument || node->op() == SetLocal) 71 && (!node->local().isArgument() || node != m_graph.m_argumentsOnStack[node->local().toArgument()])) { 72 72 VirtualRegister operand = node->local(); 73 73 VariableAccessData* flushAccessData = currentBlockAccessData.operand(operand); … … 118 118 continue; 119 119 120 DFG_ASSERT(m_graph, node, node->op() != SetLocal); // We should have inserted a Flush before this!121 120 initialAccessData.operand(operand) = node->variableAccessData(); 122 121 initialAccessNodes.operand(operand) = node; -
trunk/Source/JavaScriptCore/dfg/DFGMayExit.cpp
r209638 r209653 73 73 case GetCallee: 74 74 case GetArgumentCountIncludingThis: 75 case GetArgumentRegister: 75 76 case GetRestLength: 76 77 case GetScope: -
trunk/Source/JavaScriptCore/dfg/DFGMinifiedNode.cpp
r181993 r209653 1 1 /* 2 * Copyright (C) 2012-201 5Apple Inc. All rights reserved.2 * Copyright (C) 2012-2016 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 42 42 if (hasConstant(node->op())) 43 43 result.m_info = JSValue::encode(node->asJSValue()); 44 else if (node->op() == GetArgumentRegister) 45 result.m_info = jsFunctionArgumentForArgumentRegisterIndex(node->argumentRegisterIndex()); 44 46 else { 45 47 ASSERT(node->op() == PhantomDirectArguments || node->op() == PhantomClonedArguments); -
trunk/Source/JavaScriptCore/dfg/DFGMinifiedNode.h
r206525 r209653 1 1 /* 2 * Copyright (C) 2012, 2014 , 2015Apple Inc. All rights reserved.2 * Copyright (C) 2012, 2014-2016 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 44 44 case PhantomDirectArguments: 45 45 case PhantomClonedArguments: 46 case GetArgumentRegister: 46 47 return true; 47 48 default: … … 72 73 return bitwise_cast<InlineCallFrame*>(static_cast<uintptr_t>(m_info)); 73 74 } 75 76 bool hasArgumentIndex() const { return hasArgumentIndex(m_op); } 77 78 unsigned argumentIndex() const { return m_info; } 74 79 75 80 static MinifiedID getID(MinifiedNode* node) { return node->id(); } … … 89 94 return type == PhantomDirectArguments || type == PhantomClonedArguments; 90 95 } 96 97 static bool hasArgumentIndex(NodeType type) 98 { 99 return type == GetArgumentRegister; 100 } 91 101 92 102 MinifiedID m_id; -
trunk/Source/JavaScriptCore/dfg/DFGNode.cpp
r208320 r209653 72 72 case SetLocal: 73 73 case SetArgument: 74 case GetArgumentRegister: 74 75 case Flush: 75 76 case PhantomLocal: -
trunk/Source/JavaScriptCore/dfg/DFGNode.h
r209121 r209653 829 829 bool accessesStack(Graph& graph) 830 830 { 831 if (op() == GetArgumentRegister) 832 return false; 833 831 834 return hasVariableAccessData(graph); 832 835 } … … 845 848 { 846 849 return m_opInfo.as<VariableAccessData*>()->find(); 850 } 851 852 void setVariableAccessData(VariableAccessData* variable) 853 { 854 m_opInfo = variable; 847 855 } 848 856 … … 1213 1221 { 1214 1222 return speculationFromJSType(queriedType()); 1223 } 1224 1225 bool hasArgumentRegisterIndex() 1226 { 1227 return op() == GetArgumentRegister; 1228 } 1229 1230 unsigned argumentRegisterIndex() 1231 { 1232 ASSERT(hasArgumentRegisterIndex()); 1233 return m_opInfo2.as<unsigned>(); 1215 1234 } 1216 1235 -
trunk/Source/JavaScriptCore/dfg/DFGNodeType.h
r209638 r209653 54 54 macro(GetCallee, NodeResultJS) \ 55 55 macro(GetArgumentCountIncludingThis, NodeResultInt32) \ 56 macro(GetArgumentRegister, NodeResultJS /* | NodeMustGenerate */) \ 56 57 \ 57 58 /* Nodes for local variable access. These nodes are linked together using Phi nodes. */\ -
trunk/Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
r209121 r209653 145 145 } 146 146 147 case GetArgumentRegister: { 148 m_availability.m_locals.operand(node->local()).setNode(node); 149 break; 150 } 151 147 152 case MovHint: { 148 153 m_availability.m_locals.operand(node->unlinkedLocal()).setNode(node->child1().node()); -
trunk/Source/JavaScriptCore/dfg/DFGOSREntrypointCreationPhase.cpp
r198364 r209653 113 113 origin = target->at(0)->origin; 114 114 115 for ( int argument = 0; argument < baseline->numParameters(); ++argument) {115 for (unsigned argument = 0; argument < static_cast<unsigned>(baseline->numParameters()); ++argument) { 116 116 Node* oldNode = target->variablesAtHead.argument(argument); 117 117 if (!oldNode) { 118 // Just for sanity, always have a SetArgumenteven if it's not needed.119 oldNode = m_graph.m_arguments [argument];118 // Just for sanity, always have an argument node even if it's not needed. 119 oldNode = m_graph.m_argumentsForChecking[argument]; 120 120 } 121 Node* node = newRoot->appendNode( 122 m_graph, SpecNone, SetArgument, origin, 123 OpInfo(oldNode->variableAccessData())); 124 m_graph.m_arguments[argument] = node; 121 Node* node; 122 Node* stackNode; 123 if (argument < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) { 124 node = newRoot->appendNode( 125 m_graph, SpecNone, GetArgumentRegister, origin, 126 OpInfo(oldNode->variableAccessData()), 127 OpInfo(argumentRegisterIndexForJSFunctionArgument(argument))); 128 stackNode = newRoot->appendNode( 129 m_graph, SpecNone, SetLocal, origin, 130 OpInfo(oldNode->variableAccessData()), 131 Edge(node)); 132 } else { 133 node = newRoot->appendNode( 134 m_graph, SpecNone, SetArgument, origin, 135 OpInfo(oldNode->variableAccessData())); 136 stackNode = node; 137 } 138 139 m_graph.m_argumentsForChecking[argument] = node; 140 m_graph.m_argumentsOnStack[argument] = stackNode; 125 141 } 126 142 -
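The loop above decides, per argument, how the OSR entrypoint models each incoming value. A standalone sketch of that decision (node kinds simplified; N stands in for NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS, whose value is platform dependent):

#include <cstdio>

constexpr unsigned N = 4; // stand-in; the real constant is platform dependent

// Register arguments get a GetArgumentRegister node plus a SetLocal that
// spills the value to its call frame slot; the rest keep a SetArgument,
// since those values already live in the frame.
void planArgumentNodes(unsigned numParameters)
{
    for (unsigned arg = 0; arg < numParameters; ++arg) {
        if (arg < N)
            std::printf("arg %u: GetArgumentRegister + SetLocal\n", arg);
        else
            std::printf("arg %u: SetArgument\n", arg);
    }
}

int main() { planArgumentNodes(6); }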
trunk/Source/JavaScriptCore/dfg/DFGPlan.cpp
r208720 r209653 315 315 performConstantFolding(dfg); 316 316 bool changed = false; 317 dfg.m_strengthReduceArguments = OptimizeArgumentFlushes; 317 318 changed |= performCFGSimplification(dfg); 319 changed |= performStrengthReduction(dfg); 318 320 changed |= performLocalCSE(dfg); 319 321 -
trunk/Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
r209121 r209653 198 198 199 199 default: { 200 // All of the outermost arguments, except this, are definitely read.200 // All of the outermost stack arguments, except this, are definitely read. 201 201 for (unsigned i = m_graph.m_codeBlock->numParameters(); i-- > 1;) 202 202 m_read(virtualRegisterForArgument(i)); -
trunk/Source/JavaScriptCore/dfg/DFGPredictionInjectionPhase.cpp
r208761 r209653 57 57 continue; 58 58 59 m_graph.m_arguments [arg]->variableAccessData()->predict(59 m_graph.m_argumentsForChecking[arg]->variableAccessData()->predict( 60 60 profile->computeUpdatedPrediction(locker)); 61 61 } … … 75 75 if (!node) 76 76 continue; 77 ASSERT(node->accessesStack(m_graph) );77 ASSERT(node->accessesStack(m_graph) || node->op() == GetArgumentRegister); 78 78 node->variableAccessData()->predict( 79 79 speculationFromValue(m_graph.m_plan.mustHandleValues[i])); -
trunk/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
r209638 r209653 169 169 } 170 170 171 case GetArgumentRegister: { 172 VariableAccessData* variable = node->variableAccessData(); 173 SpeculatedType prediction = variable->prediction(); 174 if (!variable->couldRepresentInt52() && (prediction & SpecInt52Only)) 175 prediction = (prediction | SpecAnyIntAsDouble) & ~SpecInt52Only; 176 if (prediction) 177 changed |= mergePrediction(prediction); 178 break; 179 } 180 171 181 case UInt32ToNumber: { 172 182 if (node->canSpeculateInt32(m_pass)) … … 969 979 case GetLocal: 970 980 case SetLocal: 981 case GetArgumentRegister: 971 982 case UInt32ToNumber: 972 983 case ValueAdd: -
trunk/Source/JavaScriptCore/dfg/DFGPutStackSinkingPhase.cpp
r198364 r209653 148 148 } while (changed); 149 149 150 // All of the arguments should be live at head of root. Note that we may find that some150 // All of the stack arguments should be live at head of root. Note that we may find that some 151 151 // locals are live at head of root. This seems wrong but isn't. This will happen for example 152 152 // if the function accesses closure variable #42 for some other function and we either don't … … 158 158 // For our purposes here, the imprecision in the aliasing is harmless. It just means that we 159 159 // may not do as much Phi pruning as we wanted. 160 for (size_t i = liveAtHead.atIndex(0).numberOfArguments(); i--;) 161 DFG_ASSERT(m_graph, nullptr, liveAtHead.atIndex(0).argument(i)); 160 for (size_t i = liveAtHead.atIndex(0).numberOfArguments(); i--;) { 161 if (i >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) { 162 // Stack arguments are live at the head of root. 163 DFG_ASSERT(m_graph, nullptr, liveAtHead.atIndex(0).argument(i)); 164 } 165 } 162 166 163 167 // Next identify where we would want to sink PutStacks to. We say that there is a deferred … … 359 363 switch (node->op()) { 360 364 case PutStack: 361 putStacksToSink.add(node); 365 if (!m_graph.m_argumentsOnStack.contains(node)) 366 putStacksToSink.add(node); 362 367 ssaCalculator.newDef( 363 368 operandToVariable.operand(node->stackAccessData()->local), … … 484 489 } 485 490 491 Node* incoming = mapping.operand(operand); 492 // Since we don't delete argument PutStacks, no need to add one back. 493 if (m_graph.m_argumentsOnStack.contains(incoming)) 494 return; 495 486 496 // Gotta insert a PutStack. 487 497 if (verbose) 488 498 dataLog("Inserting a PutStack for ", operand, " at ", node, "\n"); 489 499 490 Node* incoming = mapping.operand(operand);491 500 DFG_ASSERT(m_graph, node, incoming); 492 501 … … 539 548 if (isConcrete(deferred.operand(operand))) { 540 549 incoming = mapping.operand(operand); 550 if (m_graph.m_argumentsOnStack.contains(incoming)) 551 continue; 541 552 DFG_ASSERT(m_graph, phiNode, incoming); 542 553 } else { -
trunk/Source/JavaScriptCore/dfg/DFGRegisterBank.h
r206525 r209653 237 237 } 238 238 239 void unlock() const 240 { 241 return m_bank->unlockAtIndex(m_index); 242 } 243 239 244 void release() const 240 245 { … … 297 302 ASSERT(index < NUM_REGS); 298 303 return m_data[index].lockCount; 304 } 305 306 void unlockAtIndex(unsigned index) 307 { 308 ASSERT(index < NUM_REGS); 309 ASSERT(m_data[index].lockCount); 310 --m_data[index].lockCount; 299 311 } 300 312 -
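The new unlock()/unlockAtIndex() pair is plain lock counting; a minimal standalone illustration of the invariant it relies on (a hypothetical type, not the real RegisterBank):

#include <cassert>

struct LockCountedSlot {
    unsigned lockCount { 0 };

    void lock() { ++lockCount; }
    void unlock()
    {
        assert(lockCount); // every unlock must pair with a prior lock
        --lockCount;
    }
    bool isLocked() const { return lockCount; }
};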
trunk/Source/JavaScriptCore/dfg/DFGSSAConversionPhase.cpp
r203808 r209653 74 74 75 75 // Find all SetLocals and create Defs for them. We handle SetArgument by creating a 76 // GetLocal, and recording the flush format. 76 // GetStack, and recording the flush format. We handle GetArgumentRegister by directly 77 // adding the node to m_argumentMapping hash map. 77 78 for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) { 78 79 BasicBlock* block = m_graph.block(blockIndex); … … 84 85 for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) { 85 86 Node* node = block->at(nodeIndex); 86 if (node->op() != SetLocal && node->op() != SetArgument )87 if (node->op() != SetLocal && node->op() != SetArgument && node->op() != GetArgumentRegister) 87 88 continue; 88 89 89 90 VariableAccessData* variable = node->variableAccessData(); 90 91 91 Node* childNode ;92 Node* childNode = nullptr; 92 93 if (node->op() == SetLocal) 93 94 childNode = node->child1().node(); 95 else if (node->op() == GetArgumentRegister) 96 m_argumentMapping.add(node, node); 94 97 else { 95 98 ASSERT(node->op() == SetArgument); … … 102 105 m_argumentMapping.add(node, childNode); 103 106 } 104 105 m_calculator.newDef( 106 m_ssaVariableForVariable.get(variable), block, childNode); 107 108 if (childNode) { 109 m_calculator.newDef( 110 m_ssaVariableForVariable.get(variable), block, childNode); 111 } 107 112 } 108 113 … … 295 300 break; 296 301 } 297 302 303 case GetArgumentRegister: { 304 VariableAccessData* variable = node->variableAccessData(); 305 valueForOperand.operand(variable->local()) = node; 306 break; 307 } 308 298 309 case GetStack: { 299 310 ASSERT(m_argumentGetters.contains(node)); … … 383 394 } 384 395 385 m_graph.m_argumentFormats.resize(m_graph.m_arguments .size());386 for (unsigned i = m_graph.m_arguments .size(); i--;) {396 m_graph.m_argumentFormats.resize(m_graph.m_argumentsForChecking.size()); 397 for (unsigned i = m_graph.m_argumentsForChecking.size(); i--;) { 387 398 FlushFormat format = FlushedJSValue; 388 399 389 Node* node = m_argumentMapping.get(m_graph.m_arguments [i]);400 Node* node = m_argumentMapping.get(m_graph.m_argumentsForChecking[i]); 390 401 391 402 RELEASE_ASSERT(node); 392 format = node->stackAccessData()->format; 403 if (node->op() == GetArgumentRegister) { 404 VariableAccessData* variable = node->variableAccessData(); 405 format = variable->flushFormat(); 406 } else 407 format = node->stackAccessData()->format; 393 408 394 409 m_graph.m_argumentFormats[i] = format; 395 m_graph.m_arguments [i] = node; // Record the load that loads the arguments for the benefit of exit profiling.410 m_graph.m_argumentsForChecking[i] = node; // Record the load that loads the arguments for the benefit of exit profiling. 396 411 } 397 412 -
trunk/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
r209638 r209653 148 148 case GetCallee: 149 149 case GetArgumentCountIncludingThis: 150 case GetArgumentRegister: 150 151 case GetRestLength: 151 152 case GetLocal: -
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
r209638 r209653 75 75 , m_indexInBlock(0) 76 76 , m_generationInfo(m_jit.graph().frameRegisterCount()) 77 , m_argumentGenerationInfo(CallFrameSlot::callee + GPRInfo::numberOfArgumentRegisters) 77 78 , m_state(m_jit.graph()) 78 79 , m_interpreter(m_jit.graph(), m_state) … … 408 409 for (unsigned i = 0; i < m_generationInfo.size(); ++i) 409 410 m_generationInfo[i] = GenerationInfo(); 411 for (unsigned i = 0; i < m_argumentGenerationInfo.size(); ++i) 412 m_argumentGenerationInfo[i] = GenerationInfo(); 410 413 m_gprs = RegisterBank<GPRInfo>(); 411 414 m_fprs = RegisterBank<FPRInfo>(); … … 1200 1203 } 1201 1204 1205 static void dumpRegisterInfo(GenerationInfo& info, unsigned index) 1206 { 1207 if (info.alive()) 1208 dataLogF(" % 3d:%s%s", index, dataFormatString(info.registerFormat()), dataFormatString(info.spillFormat())); 1209 else 1210 dataLogF(" % 3d:[__][__]", index); 1211 if (info.registerFormat() == DataFormatDouble) 1212 dataLogF(":fpr%d\n", info.fpr()); 1213 else if (info.registerFormat() != DataFormatNone 1214 #if USE(JSVALUE32_64) 1215 && !(info.registerFormat() & DataFormatJS) 1216 #endif 1217 ) { 1218 ASSERT(info.gpr() != InvalidGPRReg); 1219 dataLogF(":%s\n", GPRInfo::debugName(info.gpr())); 1220 } else 1221 dataLogF("\n"); 1222 } 1223 1202 1224 void SpeculativeJIT::dump(const char* label) 1203 1225 { … … 1209 1231 dataLogF(" fprs:\n"); 1210 1232 m_fprs.dump(); 1211 dataLogF(" VirtualRegisters:\n"); 1212 for (unsigned i = 0; i < m_generationInfo.size(); ++i) { 1213 GenerationInfo& info = m_generationInfo[i]; 1214 if (info.alive()) 1215 dataLogF(" % 3d:%s%s", i, dataFormatString(info.registerFormat()), dataFormatString(info.spillFormat())); 1216 else 1217 dataLogF(" % 3d:[__][__]", i); 1218 if (info.registerFormat() == DataFormatDouble) 1219 dataLogF(":fpr%d\n", info.fpr()); 1220 else if (info.registerFormat() != DataFormatNone 1221 #if USE(JSVALUE32_64) 1222 && !(info.registerFormat() & DataFormatJS) 1223 #endif 1224 ) { 1225 ASSERT(info.gpr() != InvalidGPRReg); 1226 dataLogF(":%s\n", GPRInfo::debugName(info.gpr())); 1227 } else 1228 dataLogF("\n"); 1229 } 1233 1234 dataLogF(" Argument VirtualRegisters:\n"); 1235 for (unsigned i = 0; i < m_argumentGenerationInfo.size(); ++i) 1236 dumpRegisterInfo(m_argumentGenerationInfo[i], i); 1237 1238 dataLogF(" Local VirtualRegisters:\n"); 1239 for (unsigned i = 0; i < m_generationInfo.size(); ++i) 1240 dumpRegisterInfo(m_generationInfo[i], i); 1241 1230 1242 if (label) 1231 1243 dataLogF("</%s>\n", label); … … 1678 1690 m_jit.blockHeads()[m_block->index] = m_jit.label(); 1679 1691 1692 if (!m_block->index) 1693 checkArgumentTypes(); 1694 1680 1695 if (!m_block->intersectionOfCFAHasVisited) { 1681 1696 // Don't generate code for basic blocks that are unreachable according to CFA. … … 1688 1703 m_stream->appendAndLog(VariableEvent::reset()); 1689 1704 1705 if (!m_block->index) 1706 setupArgumentRegistersForEntry(); 1707 1690 1708 m_jit.jitAssertHasValidCallFrame(); 1691 1709 m_jit.jitAssertTagsInPlace(); … … 1697 1715 for (size_t i = m_block->variablesAtHead.size(); i--;) { 1698 1716 int operand = m_block->variablesAtHead.operandForIndex(i); 1717 if (!m_block->index && operandIsArgument(operand)) { 1718 unsigned argument = m_block->variablesAtHead.argumentForIndex(i); 1719 Node* argumentNode = m_jit.graph().m_argumentsForChecking[argument]; 1720 1721 if (argumentNode && argumentNode->op() == GetArgumentRegister) { 1722 if (!argumentNode->refCount()) 1723 continue; // No need to record dead GetArgumentRegisters's. 
1724 m_stream->appendAndLog( 1725 VariableEvent::movHint( 1726 MinifiedID(argumentNode), 1727 argumentNode->local())); 1728 continue; 1729 } 1730 } 1731 1699 1732 Node* node = m_block->variablesAtHead[i]; 1700 1733 if (!node) … … 1783 1816 1784 1817 for (int i = 0; i < m_jit.codeBlock()->numParameters(); ++i) { 1785 Node* node = m_jit.graph().m_arguments [i];1818 Node* node = m_jit.graph().m_argumentsForChecking[i]; 1786 1819 if (!node) { 1787 1820 // The argument is dead. We don't do any checks for such arguments. … … 1789 1822 } 1790 1823 1791 ASSERT(node->op() == SetArgument); 1824 ASSERT(node->op() == SetArgument 1825 || (node->op() == SetLocal && node->child1()->op() == GetArgumentRegister) 1826 || node->op() == GetArgumentRegister); 1792 1827 ASSERT(node->shouldGenerate()); 1793 1828 … … 1800 1835 VirtualRegister virtualRegister = variableAccessData->local(); 1801 1836 1802 JSValueSource valueSource = JSValueSource(JITCompiler::addressFor(virtualRegister)); 1803 1837 JSValueSource valueSource; 1838 1839 #if USE(JSVALUE64) 1840 GPRReg argumentRegister = InvalidGPRReg; 1841 1842 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 1843 if (static_cast<unsigned>(i) < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) { 1844 argumentRegister = argumentRegisterForFunctionArgument(i); 1845 valueSource = JSValueSource(argumentRegister); 1846 } else 1847 #endif 1848 #endif 1849 valueSource = JSValueSource(JITCompiler::addressFor(virtualRegister)); 1850 1804 1851 #if USE(JSVALUE64) 1805 1852 switch (format) { 1806 1853 case FlushedInt32: { 1807 speculationCheck(BadType, valueSource, node, m_jit.branch64(MacroAssembler::Below, JITCompiler::addressFor(virtualRegister), GPRInfo::tagTypeNumberRegister)); 1854 if (argumentRegister != InvalidGPRReg) 1855 speculationCheck(BadType, valueSource, node, m_jit.branch64(MacroAssembler::Below, argumentRegister, GPRInfo::tagTypeNumberRegister)); 1856 else 1857 speculationCheck(BadType, valueSource, node, m_jit.branch64(MacroAssembler::Below, JITCompiler::addressFor(virtualRegister), GPRInfo::tagTypeNumberRegister)); 1808 1858 break; 1809 1859 } 1810 1860 case FlushedBoolean: { 1811 1861 GPRTemporary temp(this); 1812 m_jit.load64(JITCompiler::addressFor(virtualRegister), temp.gpr()); 1862 if (argumentRegister != InvalidGPRReg) 1863 m_jit.move(argumentRegister, temp.gpr()); 1864 else 1865 m_jit.load64(JITCompiler::addressFor(virtualRegister), temp.gpr()); 1813 1866 m_jit.xor64(TrustedImm32(static_cast<int32_t>(ValueFalse)), temp.gpr()); 1814 1867 speculationCheck(BadType, valueSource, node, m_jit.branchTest64(MacroAssembler::NonZero, temp.gpr(), TrustedImm32(static_cast<int32_t>(~1)))); … … 1816 1869 } 1817 1870 case FlushedCell: { 1818 speculationCheck(BadType, valueSource, node, m_jit.branchTest64(MacroAssembler::NonZero, JITCompiler::addressFor(virtualRegister), GPRInfo::tagMaskRegister)); 1871 if (argumentRegister != InvalidGPRReg) 1872 speculationCheck(BadType, valueSource, node, m_jit.branchTest64(MacroAssembler::NonZero, argumentRegister, GPRInfo::tagMaskRegister)); 1873 else 1874 speculationCheck(BadType, valueSource, node, m_jit.branchTest64(MacroAssembler::NonZero, JITCompiler::addressFor(virtualRegister), GPRInfo::tagMaskRegister)); 1819 1875 break; 1820 1876 } … … 1847 1903 } 1848 1904 1905 void SpeculativeJIT::setupArgumentRegistersForEntry() 1906 { 1907 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 1908 BasicBlock* firstBlock = m_jit.graph().block(0); 1909 1910 // FIXME: https://bugs.webkit.org/show_bug.cgi?id=165720 1911 // We should scan 
m_argumentsForChecking instead of looking for GetArgumentRegister 1912 // nodes in the root block. 1913 for (size_t indexInBlock = 0; indexInBlock < firstBlock->size(); ++indexInBlock) { 1914 Node* node = firstBlock->at(indexInBlock); 1915 1916 if (node->op() == GetArgumentRegister) { 1917 VirtualRegister virtualRegister = node->virtualRegister(); 1918 GenerationInfo& info = generationInfoFromVirtualRegister(virtualRegister); 1919 GPRReg argumentReg = GPRInfo::toArgumentRegister(node->argumentRegisterIndex()); 1920 1921 ASSERT(argumentReg != InvalidGPRReg); 1922 1923 ASSERT(!m_gprs.isLocked(argumentReg)); 1924 m_gprs.allocateSpecific(argumentReg); 1925 m_gprs.retain(argumentReg, virtualRegister, SpillOrderJS); 1926 info.initArgumentRegisterValue(node, node->refCount(), argumentReg, DataFormatJS); 1927 info.noticeOSRBirth(*m_stream, node, virtualRegister); 1928 // Don't leave argument registers locked. 1929 m_gprs.unlock(argumentReg); 1930 } 1931 } 1932 #endif 1933 } 1934 1849 1935 bool SpeculativeJIT::compile() 1850 1936 { 1851 checkArgumentTypes();1852 1853 1937 ASSERT(!m_currentNode); 1854 1938 for (BlockIndex blockIndex = 0; blockIndex < m_jit.graph().numBlocks(); ++blockIndex) {
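The checkArgumentTypes() changes above apply the usual JSVALUE64 tag tests directly to incoming argument registers instead of to frame slots. For reference, here are the three tests in plain C++; the constants follow JSC's 64-bit value encoding but should be read as assumptions of this sketch:

#include <cstdint>

static constexpr uint64_t TagTypeNumber = 0xffff000000000000ull; // added to boxed int32s
static constexpr uint64_t TagBitTypeOther = 0x2;
static constexpr uint64_t TagMask = TagTypeNumber | TagBitTypeOther; // non-zero for non-cells
static constexpr uint64_t ValueFalse = 0x6; // ValueTrue is 0x7

// FlushedInt32: branch64(Below, value, tagTypeNumberRegister) takes the OSR exit.
bool isBoxedInt32(uint64_t bits) { return bits >= TagTypeNumber; }

// FlushedCell: branchTest64(NonZero, value, tagMaskRegister) takes the OSR exit.
bool isCell(uint64_t bits) { return !(bits & TagMask); }

// FlushedBoolean: xor64(ValueFalse) then branchTest64(NonZero, ~1) takes the OSR exit.
bool isBoolean(uint64_t bits) { return !((bits ^ ValueFalse) & ~1ull); }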
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
r209638 r209653 129 129 130 130 #if USE(JSVALUE64) 131 GPRReg fillJSValue(Edge );131 GPRReg fillJSValue(Edge, GPRReg gprToUse = InvalidGPRReg); 132 132 #elif USE(JSVALUE32_64) 133 133 bool fillJSValue(Edge, GPRReg&, GPRReg&, FPRReg&); … … 201 201 m_jit.addRegisterAllocationAtOffset(m_jit.debugOffset()); 202 202 #endif 203 if (specific == InvalidGPRReg) 204 return allocate(); 205 203 206 VirtualRegister spillMe = m_gprs.allocateSpecific(specific); 204 207 if (spillMe.isValid()) { … … 315 318 316 319 void checkArgumentTypes(); 320 321 void setupArgumentRegistersForEntry(); 317 322 318 323 void clearGenerationInfo(); … … 486 491 void spill(VirtualRegister spillMe) 487 492 { 493 if (spillMe.isArgument() && m_block->index > 0) 494 return; 495 488 496 GenerationInfo& info = generationInfoFromVirtualRegister(spillMe); 489 497 … … 2874 2882 GenerationInfo& generationInfoFromVirtualRegister(VirtualRegister virtualRegister) 2875 2883 { 2876 return m_generationInfo[virtualRegister.toLocal()]; 2884 if (virtualRegister.isLocal()) 2885 return m_generationInfo[virtualRegister.toLocal()]; 2886 ASSERT(virtualRegister.isArgument()); 2887 return m_argumentGenerationInfo[virtualRegister.offset()]; 2877 2888 } 2878 2889 … … 2897 2908 // Virtual and physical register maps. 2898 2909 Vector<GenerationInfo, 32> m_generationInfo; 2910 Vector<GenerationInfo, 8> m_argumentGenerationInfo; 2899 2911 RegisterBank<GPRInfo> m_gprs; 2900 2912 RegisterBank<FPRInfo> m_fprs; … … 2995 3007 } 2996 3008 3009 #if USE(JSVALUE64) 3010 explicit JSValueOperand(SpeculativeJIT* jit, Edge edge, GPRReg regToUse) 3011 : m_jit(jit) 3012 , m_edge(edge) 3013 , m_gprOrInvalid(InvalidGPRReg) 3014 { 3015 ASSERT(m_jit); 3016 if (!edge) 3017 return; 3018 if (jit->isFilled(node()) || regToUse != InvalidGPRReg) 3019 gprUseSpecific(regToUse); 3020 } 3021 #endif 3022 2997 3023 ~JSValueOperand() 2998 3024 { … … 3031 3057 return m_gprOrInvalid; 3032 3058 } 3059 GPRReg gprUseSpecific(GPRReg regToUse) 3060 { 3061 if (m_gprOrInvalid == InvalidGPRReg) 3062 m_gprOrInvalid = m_jit->fillJSValue(m_edge, regToUse); 3063 return m_gprOrInvalid; 3064 } 3033 3065 JSValueRegs jsValueRegs() 3034 3066 { -
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
r209647 r209653 933 933 934 934 CallLinkInfo* info = m_jit.codeBlock()->addCallLinkInfo(); 935 info->setUpCall(callType, node->origin.semantic, calleePayloadGPR);935 info->setUpCall(callType, StackArgs, node->origin.semantic, calleePayloadGPR); 936 936 937 937 auto setResultAndResetStack = [&] () { … … 1082 1082 } 1083 1083 1084 m_jit.move(MacroAssembler::TrustedImmPtr(info), GPRInfo:: regT2);1084 m_jit.move(MacroAssembler::TrustedImmPtr(info), GPRInfo::nonArgGPR0); 1085 1085 JITCompiler::Call slowCall = m_jit.nearCall(); 1086 1086 … … 5625 5625 case GetStack: 5626 5626 case GetMyArgumentByVal: 5627 case GetArgumentRegister: 5627 5628 case GetMyArgumentByValOutOfBounds: 5628 5629 case PhantomCreateRest: -
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
r209638 r209653 81 81 } 82 82 83 GPRReg SpeculativeJIT::fillJSValue(Edge edge )83 GPRReg SpeculativeJIT::fillJSValue(Edge edge, GPRReg gprToUse) 84 84 { 85 85 VirtualRegister virtualRegister = edge->virtualRegister(); … … 88 88 switch (info.registerFormat()) { 89 89 case DataFormatNone: { 90 GPRReg gpr = allocate( );90 GPRReg gpr = allocate(gprToUse); 91 91 92 92 if (edge->hasConstant()) { … … 121 121 // If not, we'll zero extend in place, so mark on the info that this is now type DataFormatInt32, not DataFormatJSInt32. 122 122 if (m_gprs.isLocked(gpr)) { 123 GPRReg result = allocate(); 123 GPRReg result = allocate(gprToUse); 124 m_jit.or64(GPRInfo::tagTypeNumberRegister, gpr, result); 125 return result; 126 } 127 if (gprToUse != InvalidGPRReg && gpr != gprToUse) { 128 GPRReg result = allocate(gprToUse); 124 129 m_jit.or64(GPRInfo::tagTypeNumberRegister, gpr, result); 125 130 return result; … … 139 144 case DataFormatJSBoolean: { 140 145 GPRReg gpr = info.gpr(); 146 if (gprToUse != InvalidGPRReg && gpr != gprToUse) { 147 GPRReg result = allocate(gprToUse); 148 m_jit.move(gpr, result); 149 return result; 150 } 141 151 m_gprs.lock(gpr); 142 152 return gpr; … … 633 643 { 634 644 CallLinkInfo::CallType callType; 645 ArgumentsLocation argumentsLocation = StackArgs; 635 646 bool isVarargs = false; 636 647 bool isForwardVarargs = false; … … 715 726 GPRReg calleeGPR = InvalidGPRReg; 716 727 CallFrameShuffleData shuffleData; 717 728 std::optional<JSValueOperand> tailCallee; 729 std::optional<GPRTemporary> calleeGPRTemporary; 730 731 incrementCounter(&m_jit, VM::DFGCaller); 732 718 733 ExecutableBase* executable = nullptr; 719 734 FunctionExecutable* functionExecutable = nullptr; … … 734 749 unsigned numUsedStackSlots = m_jit.graph().m_nextMachineLocal; 735 750 751 incrementCounter(&m_jit, VM::CallVarargs); 736 752 if (isForwardVarargs) { 737 753 flushRegisters(); … … 842 858 843 859 if (isTail) { 860 incrementCounter(&m_jit, VM::TailCall); 844 861 Edge calleeEdge = m_jit.graph().child(node, 0); 845 JSValueOperand callee(this, calleeEdge); 846 calleeGPR = callee.gpr(); 862 // We can't get the a specific register for the callee, since that will just move 863 // from any current register. When we silent fill in the slow path we'll fill 864 // the original register and won't have the callee in the right register. 865 // Therefore we allocate a temp register for the callee and move ourselves. 
866 tailCallee.emplace(this, calleeEdge); 867 GPRReg tailCalleeGPR = tailCallee->gpr(); 868 calleeGPR = argumentRegisterForCallee(); 869 if (tailCalleeGPR != calleeGPR) 870 calleeGPRTemporary = GPRTemporary(this, calleeGPR); 847 871 if (!isDirect) 848 callee.use(); 849 872 tailCallee->use(); 873 874 argumentsLocation = argumentsLocationFor(numAllocatedArgs); 875 shuffleData.argumentsInRegisters = argumentsLocation != StackArgs; 850 876 shuffleData.tagTypeNumber = GPRInfo::tagTypeNumberRegister; 851 877 shuffleData.numLocals = m_jit.graph().frameRegisterCount(); 852 shuffleData.callee = ValueRecovery::inGPR( calleeGPR, DataFormatJS);878 shuffleData.callee = ValueRecovery::inGPR(tailCalleeGPR, DataFormatJS); 853 879 shuffleData.args.resize(numAllocatedArgs); 854 880 … … 865 891 866 892 shuffleData.setupCalleeSaveRegisters(m_jit.codeBlock()); 867 } else { 893 } else if (node->op() == CallEval) { 894 // CallEval is handled with the arguments on the stack. 868 895 m_jit.store32(MacroAssembler::TrustedImm32(numPassedArgs), JITCompiler::calleeFramePayloadSlot(CallFrameSlot::argumentCount)); 869 896 … … 879 906 for (unsigned i = numPassedArgs; i < numAllocatedArgs; ++i) 880 907 m_jit.storeTrustedValue(jsUndefined(), JITCompiler::calleeArgumentSlot(i)); 908 909 incrementCounter(&m_jit, VM::CallEval); 910 } else { 911 for (unsigned i = numPassedArgs; i-- > 0;) { 912 GPRReg platformArgGPR = argumentRegisterForFunctionArgument(i); 913 Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i]; 914 JSValueOperand arg(this, argEdge, platformArgGPR); 915 GPRReg argGPR = arg.gpr(); 916 ASSERT(argGPR == platformArgGPR || platformArgGPR == InvalidGPRReg); 917 918 // Only free the non-argument registers at this point. 919 if (platformArgGPR == InvalidGPRReg) { 920 use(argEdge); 921 m_jit.store64(argGPR, JITCompiler::calleeArgumentSlot(i)); 922 } 923 } 924 925 // Use the argument edges for arguments passed in registers.
926 for (unsigned i = numPassedArgs; i-- > 0;) { 927 GPRReg argGPR = argumentRegisterForFunctionArgument(i); 928 if (argGPR != InvalidGPRReg) { 929 Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i]; 930 use(argEdge); 931 } 932 } 933 934 GPRTemporary argCount(this, argumentRegisterForArgumentCount()); 935 GPRReg argCountGPR = argCount.gpr(); 936 m_jit.move(TrustedImm32(numPassedArgs), argCountGPR); 937 argumentsLocation = argumentsLocationFor(numAllocatedArgs); 938 939 for (unsigned i = numPassedArgs; i < numAllocatedArgs; ++i) { 940 GPRReg platformArgGPR = argumentRegisterForFunctionArgument(i); 941 942 if (platformArgGPR == InvalidGPRReg) 943 m_jit.storeTrustedValue(jsUndefined(), JITCompiler::calleeArgumentSlot(i)); 944 else { 945 GPRTemporary argumentTemp(this, platformArgGPR); 946 m_jit.move(TrustedImm64(JSValue::encode(jsUndefined())), argumentTemp.gpr()); 947 } 948 } 881 949 } 882 950 } … … 884 952 if (!isTail || isVarargs || isForwardVarargs) { 885 953 Edge calleeEdge = m_jit.graph().child(node, 0); 886 JSValueOperand callee(this, calleeEdge );954 JSValueOperand callee(this, calleeEdge, argumentRegisterForCallee()); 887 955 calleeGPR = callee.gpr(); 888 956 callee.use(); 889 m_jit.store64(calleeGPR, JITCompiler::calleeFrameSlot(CallFrameSlot::callee)); 957 if (argumentsLocation == StackArgs) 958 m_jit.store64(calleeGPR, JITCompiler::calleeFrameSlot(CallFrameSlot::callee)); 890 959 891 960 flushRegisters(); … … 914 983 915 984 CallLinkInfo* callLinkInfo = m_jit.codeBlock()->addCallLinkInfo(); 916 callLinkInfo->setUpCall(callType, m_currentNode->origin.semantic, calleeGPR);985 callLinkInfo->setUpCall(callType, argumentsLocation, m_currentNode->origin.semantic, calleeGPR); 917 986 918 987 if (node->op() == CallEval) { … … 955 1024 RELEASE_ASSERT(node->op() == DirectTailCall); 956 1025 1026 if (calleeGPRTemporary != std::nullopt) 1027 m_jit.move(tailCallee->gpr(), calleeGPRTemporary->gpr()); 1028 957 1029 JITCompiler::PatchableJump patchableJump = m_jit.patchableJump(); 958 1030 JITCompiler::Label mainPath = m_jit.label(); 1031 1032 incrementCounter(&m_jit, VM::TailCall); 1033 incrementCounter(&m_jit, VM::DirectCall); 959 1034 960 1035 m_jit.emitStoreCallSiteIndex(callSite); … … 972 1047 silentFillAllRegisters(InvalidGPRReg); 973 1048 m_jit.exceptionCheck(); 1049 if (calleeGPRTemporary != std::nullopt) 1050 m_jit.move(tailCallee->gpr(), calleeGPRTemporary->gpr()); 974 1051 m_jit.jump().linkTo(mainPath, &m_jit); 975 1052 … … 982 1059 JITCompiler::Label mainPath = m_jit.label(); 983 1060 1061 incrementCounter(&m_jit, VM::DirectCall); 1062 984 1063 m_jit.emitStoreCallSiteIndex(callSite); 985 1064 … … 989 1068 JITCompiler::Label slowPath = m_jit.label(); 990 1069 if (isX86()) 991 m_jit.pop(JITCompiler::selectScratchGPR(calleeGPR)); 992 993 callOperation(operationLinkDirectCall, callLinkInfo, calleeGPR); 1070 m_jit.pop(GPRInfo::nonArgGPR0); 1071 1072 m_jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); // Link info needs to be in nonArgGPR0 1073 JITCompiler::Call slowCall = m_jit.nearCall(); 1074 994 1075 m_jit.exceptionCheck(); 995 1076 m_jit.jump().linkTo(mainPath, &m_jit); … … 998 1079 999 1080 setResultAndResetStack(); 1000 1001 m_jit.addJSDirectCall(call, slow Path, callLinkInfo);1081 1082 m_jit.addJSDirectCall(call, slowCall, slowPath, callLinkInfo); 1002 1083 return; 1003 1084 } 1004 1085 1086 if (isTail && calleeGPRTemporary != std::nullopt) 1087 m_jit.move(tailCallee->gpr(), calleeGPRTemporary->gpr()); 1088 1005 1089 
m_jit.emitStoreCallSiteIndex(callSite); 1006 1090 … … 1026 1110 if (node->op() == TailCall) { 1027 1111 CallFrameShuffler callFrameShuffler(m_jit, shuffleData); 1028 callFrameShuffler.setCalleeJSValueRegs(JSValueRegs(GPRInfo::regT0)); 1112 if (argumentsLocation == StackArgs) 1113 callFrameShuffler.setCalleeJSValueRegs(JSValueRegs(argumentRegisterForCallee())); 1029 1114 callFrameShuffler.prepareForSlowPath(); 1030 } else { 1031 m_jit.move(calleeGPR, GPRInfo::regT0); // Callee needs to be in regT0 1032 1033 if (isTail) 1034 m_jit.emitRestoreCalleeSaves(); // This needs to happen after we moved calleeGPR to regT0 1035 } 1036 1037 m_jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::regT2); // Link info needs to be in regT2 1115 } else if (isTail) 1116 m_jit.emitRestoreCalleeSaves(); 1117 1118 m_jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); // Link info needs to be in nonArgGPR0 1038 1119 JITCompiler::Call slowCall = m_jit.nearCall(); 1039 1120 1040 1121 done.link(&m_jit); 1041 1122 1042 if (isTail) 1123 if (isTail) { 1124 tailCallee = std::nullopt; 1125 calleeGPRTemporary = std::nullopt; 1043 1126 m_jit.abortWithReason(JITDidReturnFromTailCall); 1044 else1127 } else 1045 1128 setResultAndResetStack(); 1046 1129 … … 4167 4250 } 4168 4251 4252 case GetArgumentRegister: 4253 break; 4254 4169 4255 case GetRestLength: { 4170 4256 compileGetRestLength(node); -
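Pulling the emitCall() pieces together: the callee goes to argumentRegisterForCallee(), the count to argumentRegisterForArgumentCount(), argument i (with |this| as argument 0) to argumentRegisterForFunctionArgument(i), and only the overflow is stored to frame slots. A hedged sketch of the resulting placement, assuming the x86-64 SysV integer argument-register order rdi, rsi, rdx, rcx, r8, r9 (the concrete register assignment is a platform detail, not spelled out in this diff):

#include <cstdio>

// Assumed SysV order; the first two registers carry the callee and the
// argument count, the remainder carry |this| and the leading arguments.
static const char* kArgRegs[] = { "rdi", "rsi", "rdx", "rcx", "r8", "r9" };
constexpr unsigned kNumArgRegs = 6;

void describeCallSetup(unsigned numPassedArgs) // includes |this|
{
    std::printf("%s <- callee\n", kArgRegs[0]);
    std::printf("%s <- argument count (%u)\n", kArgRegs[1], numPassedArgs);
    for (unsigned i = 0; i < numPassedArgs; ++i) {
        if (2 + i < kNumArgRegs)
            std::printf("%s <- argument %u%s\n", kArgRegs[2 + i], i, i ? "" : " (|this|)");
        else
            std::printf("frame slot <- argument %u\n", i); // same layout as before the patch
    }
}

int main() { describeCallSetup(5); }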
trunk/Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
r208985 r209653 277 277 VirtualRegister local = m_node->local(); 278 278 279 if (local.isArgument() && m_graph.m_strengthReduceArguments != OptimizeArgumentFlushes) 280 break; 281 279 282 for (unsigned i = m_nodeIndex; i--;) { 280 283 Node* node = m_block->at(i); -
trunk/Source/JavaScriptCore/dfg/DFGThunks.cpp
r203006 r209653 131 131 jit.branchPtr(MacroAssembler::NotEqual, GPRInfo::regT1, MacroAssembler::TrustedImmPtr(bitwise_cast<void*>(-static_cast<intptr_t>(CallFrame::headerSizeInRegisters)))).linkTo(loop, &jit); 132 132 133 jit.loadPtr(MacroAssembler::Address(GPRInfo::regT0, offsetOfTargetPC), GPRInfo:: regT1);134 MacroAssembler::Jump ok = jit.branchPtr(MacroAssembler::Above, GPRInfo:: regT1, MacroAssembler::TrustedImmPtr(bitwise_cast<void*>(static_cast<intptr_t>(1000))));133 jit.loadPtr(MacroAssembler::Address(GPRInfo::regT0, offsetOfTargetPC), GPRInfo::nonArgGPR0); 134 MacroAssembler::Jump ok = jit.branchPtr(MacroAssembler::Above, GPRInfo::nonArgGPR0, MacroAssembler::TrustedImmPtr(bitwise_cast<void*>(static_cast<intptr_t>(1000)))); 135 135 jit.abortWithReason(DFGUnreasonableOSREntryJumpDestination); 136 136 137 137 ok.link(&jit); 138 139 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 140 // Load argument values into argument registers 141 jit.loadPtr(MacroAssembler::Address(GPRInfo::callFrameRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register))), argumentRegisterForCallee()); 142 GPRReg argCountReg = argumentRegisterForArgumentCount(); 143 jit.load32(AssemblyHelpers::payloadFor(CallFrameSlot::argumentCount), argCountReg); 144 145 MacroAssembler::JumpList doneLoadingArgs; 146 147 for (unsigned argIndex = 0; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) 148 jit.load64(MacroAssembler::Address(GPRInfo::callFrameRegister, (CallFrameSlot::thisArgument + argIndex) * static_cast<int>(sizeof(Register))), argumentRegisterForFunctionArgument(argIndex)); 149 150 doneLoadingArgs.link(&jit); 151 #endif 152 138 153 jit.restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(); 139 154 jit.emitMaterializeTagCheckRegisters(); 140 155 141 jit.jump(GPRInfo:: regT1);156 jit.jump(GPRInfo::nonArgGPR0); 142 157 143 158 LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID); -
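The OSR-entry thunk now refills the argument registers from the call frame before jumping to the target PC: the callee and the count come from their header slots, then |this| and the following arguments fill the argument registers. A standalone mirror of those loads (slot layout simplified; kRegisterArgs is a stand-in for the platform constant):

#include <cstdint>

constexpr unsigned kRegisterArgs = 4; // stand-in for NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS

struct Frame { uint64_t callee; uint64_t argumentCount; uint64_t thisAndArgs[8]; };
struct ArgumentRegisters { uint64_t callee; uint64_t argumentCount; uint64_t args[kRegisterArgs]; };

// Mirrors the thunk: callee and count from the call frame header, then
// |this| and the following arguments into the argument registers.
void fillArgumentRegisters(const Frame& frame, ArgumentRegisters& regs)
{
    regs.callee = frame.callee;
    regs.argumentCount = frame.argumentCount;
    for (unsigned i = 0; i < kRegisterArgs; ++i)
        regs.args[i] = frame.thisAndArgs[i];
}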
trunk/Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp
r198154 r209653 134 134 valueRecoveries = Operands<ValueRecovery>(codeBlock->numParameters(), numVariables); 135 135 for (size_t i = 0; i < valueRecoveries.size(); ++i) { 136 valueRecoveries[i] = ValueRecovery::displacedInJSStack( 137 VirtualRegister(valueRecoveries.operandForIndex(i)), DataFormatJS); 136 if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) { 137 valueRecoveries[i] = ValueRecovery::inGPR( 138 argumentRegisterForFunctionArgument(i), DataFormatJS); 139 } else { 140 valueRecoveries[i] = ValueRecovery::displacedInJSStack( 141 VirtualRegister(valueRecoveries.operandForIndex(i)), DataFormatJS); 142 } 138 143 } 139 144 return; … … 162 167 info.update(event); 163 168 generationInfos.add(event.id(), info); 169 MinifiedNode* node = graph.at(event.id()); 170 if (node && node->hasArgumentIndex()) { 171 unsigned argument = node->argumentIndex(); 172 VirtualRegister argumentReg = virtualRegisterForArgument(argument); 173 operandSources.setOperand(argumentReg, ValueSource(event.id())); 174 } 164 175 break; 165 176 } -
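The recovery rule the event stream applies is now two-way: an argument that lives in a platform register is recovered from that register, anything beyond the register count from its stack slot. In outline (standalone; the real code builds ValueRecovery objects):

#include <cstdio>

constexpr unsigned kRegisterArgs = 4; // stand-in for NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS

void describeArgumentRecovery(unsigned argumentIndex)
{
    if (argumentIndex < kRegisterArgs)
        std::printf("argument %u: ValueRecovery::inGPR(...)\n", argumentIndex);
    else
        std::printf("argument %u: ValueRecovery::displacedInJSStack(...)\n", argumentIndex);
}

int main() { for (unsigned i = 0; i < 6; ++i) describeArgumentRecovery(i); }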
trunk/Source/JavaScriptCore/dfg/DFGVirtualRegisterAllocationPhase.cpp
r198364 r209653 43 43 { 44 44 } 45 45 46 void allocateRegister(ScoreBoard& scoreBoard, Node* node) 47 { 48 // First, call use on all of the current node's children, then 49 // allocate a VirtualRegister for this node. We do so in this 50 // order so that if a child is on its last use, and a 51 // VirtualRegister is freed, then it may be reused for node. 52 if (node->flags() & NodeHasVarArgs) { 53 for (unsigned childIdx = node->firstChild(); childIdx < node->firstChild() + node->numChildren(); childIdx++) 54 scoreBoard.useIfHasResult(m_graph.m_varArgChildren[childIdx]); 55 } else { 56 scoreBoard.useIfHasResult(node->child1()); 57 scoreBoard.useIfHasResult(node->child2()); 58 scoreBoard.useIfHasResult(node->child3()); 59 } 60 61 if (!node->hasResult()) 62 return; 63 64 VirtualRegister virtualRegister = scoreBoard.allocate(); 65 node->setVirtualRegister(virtualRegister); 66 // 'mustGenerate' nodes have their useCount artificially elevated, 67 // call use now to account for this. 68 if (node->mustGenerate()) 69 scoreBoard.use(node); 70 } 71 46 72 bool run() 47 73 { … … 60 86 scoreBoard.sortFree(); 61 87 } 88 89 // Handle GetArgumentRegister Nodes first as the register is alive on entry 90 // to the function and may need to be spilled before any use. 91 if (!blockIndex) { 92 for (size_t indexInBlock = 0; indexInBlock < block->size(); ++indexInBlock) { 93 Node* node = block->at(indexInBlock); 94 if (node->op() == GetArgumentRegister) 95 allocateRegister(scoreBoard, node); 96 } 97 } 98 62 99 for (size_t indexInBlock = 0; indexInBlock < block->size(); ++indexInBlock) { 63 100 Node* node = block->at(indexInBlock); … … 74 111 ASSERT(!node->child1()->hasResult()); 75 112 break; 113 case GetArgumentRegister: 114 ASSERT(!blockIndex); 115 continue; 76 116 default: 77 117 break; 78 118 } 79 80 // First, call use on all of the current node's children, then81 // allocate a VirtualRegister for this node. We do so in this82 // order so that if a child is on its last use, and a83 // VirtualRegister is freed, then it may be reused for node.84 if (node->flags() & NodeHasVarArgs) {85 for (unsigned childIdx = node->firstChild(); childIdx < node->firstChild() + node->numChildren(); childIdx++)86 scoreBoard.useIfHasResult(m_graph.m_varArgChildren[childIdx]);87 } else {88 scoreBoard.useIfHasResult(node->child1());89 scoreBoard.useIfHasResult(node->child2());90 scoreBoard.useIfHasResult(node->child3());91 }92 119 93 if (!node->hasResult()) 94 continue; 95 96 VirtualRegister virtualRegister = scoreBoard.allocate(); 97 node->setVirtualRegister(virtualRegister); 98 // 'mustGenerate' nodes have their useCount artificially elevated, 99 // call use now to account for this. 100 if (node->mustGenerate()) 101 scoreBoard.use(node); 120 allocateRegister(scoreBoard, node); 102 121 } 103 122 scoreBoard.assertClear(); -
trunk/Source/JavaScriptCore/ftl/FTLCapabilities.cpp
r209638 r209653 173 173 case GetScope: 174 174 case GetCallee: 175 case GetArgumentRegister: 175 176 case GetArgumentCountIncludingThis: 176 177 case ToNumber: -
trunk/Source/JavaScriptCore/ftl/FTLJITCode.cpp
r208985 r209653 46 46 CommaPrinter comma; 47 47 dataLog(comma, m_b3Code); 48 dataLog(comma, m_arityCheckEntrypoint); 48 dataLog(comma, m_registerArgsPossibleExtraArgsEntryPoint); 49 dataLog(comma, m_registerArgsCheckArityEntryPoint); 49 50 dataLog("\n"); 50 51 } … … 61 62 } 62 63 63 void JITCode::initialize AddressForCall(CodePtr address)64 void JITCode::initializeEntrypointThunk(CodeRef entrypointThunk) 64 65 { 65 m_ addressForCall = address;66 m_entrypointThunk = entrypointThunk; 66 67 } 67 68 68 void JITCode:: initializeArityCheckEntrypoint(CodeRef entrypoint)69 void JITCode::setEntryFor(EntryPointType type, CodePtr entry) 69 70 { 70 m_ arityCheckEntrypoint = entrypoint;71 m_entrypoints.setEntryFor(type, entry); 71 72 } 72 73 JITCode::CodePtr JITCode::addressForCall( ArityCheckMode arityCheck)73 74 JITCode::CodePtr JITCode::addressForCall(EntryPointType entryType) 74 75 { 75 switch (arityCheck) { 76 case ArityCheckNotRequired: 77 return m_addressForCall; 78 case MustCheckArity: 79 return m_arityCheckEntrypoint.code(); 80 } 81 RELEASE_ASSERT_NOT_REACHED(); 82 return CodePtr(); 76 CodePtr entry = m_entrypoints.entryFor(entryType); 77 RELEASE_ASSERT(entry); 78 return entry; 83 79 } 84 80 85 81 void* JITCode::executableAddressAtOffset(size_t offset) 86 82 { 87 return reinterpret_cast<char*>(m_addressForCall.executableAddress()) + offset; 83 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 84 return reinterpret_cast<char*>(addressForCall(RegisterArgsArityCheckNotRequired).executableAddress()) + offset; 85 #else 86 return reinterpret_cast<char*>(addressForCall(StackArgsArityCheckNotRequired).executableAddress()) + offset; 87 #endif 88 88 } 89 89 -
trunk/Source/JavaScriptCore/ftl/FTLJITCode.h
r208985 r209653 45 45 ~JITCode(); 46 46 47 CodePtr addressForCall( ArityCheckMode) override;47 CodePtr addressForCall(EntryPointType) override; 48 48 void* executableAddressAtOffset(size_t offset) override; 49 49 void* dataAddressAtOffset(size_t offset) override; … … 54 54 void initializeB3Code(CodeRef); 55 55 void initializeB3Byproducts(std::unique_ptr<B3::OpaqueByproducts>); 56 void initialize AddressForCall(CodePtr);57 void initializeArityCheckEntrypoint(CodeRef);58 56 void initializeEntrypointThunk(CodeRef); 57 void setEntryFor(EntryPointType, CodePtr); 58 59 59 void validateReferences(const TrackedReferences&) override; 60 60 … … 78 78 CodeRef m_b3Code; 79 79 std::unique_ptr<B3::OpaqueByproducts> m_b3Byproducts; 80 CodeRef m_arityCheckEntrypoint; 80 CodeRef m_entrypointThunk; 81 JITEntryPoints m_entrypoints; 82 CodePtr m_registerArgsPossibleExtraArgsEntryPoint; 83 CodePtr m_registerArgsCheckArityEntryPoint; 84 CodePtr m_stackArgsArityOKEntryPoint; 85 CodePtr m_stackArgsCheckArityEntrypoint; 81 86 }; 82 87 -
trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp
r205462 r209653 77 77 ("FTL B3 code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::FTLJIT)).data()))); 78 78 79 jitCode->initialize ArityCheckEntrypoint(79 jitCode->initializeEntrypointThunk( 80 80 FINALIZE_CODE_IF( 81 81 dumpDisassembly, *entrypointLinkBuffer, -
trunk/Source/JavaScriptCore/ftl/FTLLink.cpp
r203990 r209653 128 128 switch (graph.m_plan.mode) { 129 129 case FTLMode: { 130 CCallHelpers::JumpList mainPathJumps; 131 132 jit.load32( 133 frame.withOffset(sizeof(Register) * CallFrameSlot::argumentCount), 134 GPRInfo::regT1); 135 mainPathJumps.append(jit.branch32( 136 CCallHelpers::AboveOrEqual, GPRInfo::regT1, 137 CCallHelpers::TrustedImm32(codeBlock->numParameters()))); 130 CCallHelpers::JumpList fillRegistersAndContinueMainPath; 131 CCallHelpers::JumpList toMainPath; 132 133 unsigned numParameters = static_cast<unsigned>(codeBlock->numParameters()); 134 unsigned maxRegisterArgumentCount = std::min(numParameters, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS); 135 136 GPRReg argCountReg = argumentRegisterForArgumentCount(); 137 138 CCallHelpers::Label registerArgumentsEntrypoints[NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS + 1]; 139 140 if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) { 141 // Spill any extra register arguments passed to function onto the stack. 142 for (unsigned argIndex = NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS - 1; argIndex >= numParameters; argIndex--) { 143 registerArgumentsEntrypoints[argIndex + 1] = jit.label(); 144 jit.emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex); 145 } 146 incrementCounter(&jit, VM::RegArgsExtra); 147 toMainPath.append(jit.jump()); 148 } 149 150 CCallHelpers::JumpList continueToArityFixup; 151 152 CCallHelpers::Label stackArgsCheckArityEntry = jit.label(); 153 incrementCounter(&jit, VM::StackArgsArity); 154 jit.load32(frame.withOffset(sizeof(Register) * CallFrameSlot::argumentCount), GPRInfo::regT1); 155 continueToArityFixup.append(jit.branch32( 156 CCallHelpers::Below, GPRInfo::regT1, 157 CCallHelpers::TrustedImm32(numParameters))); 158 159 #if ENABLE(VM_COUNTERS) 160 CCallHelpers::Jump continueToStackArityOk = jit.jump(); 161 #endif 162 163 CCallHelpers::Label stackArgsArityOKEntry = jit.label(); 164 165 incrementCounter(&jit, VM::StackArgsArity); 166 167 #if ENABLE(VM_COUNTERS) 168 continueToStackArityOk.link(&jit); 169 #endif 170 171 // Load argument values into argument registers 172 173 // FIXME: Would like to eliminate these to load, but we currently can't jump into 174 // the B3 compiled code at an arbitrary point from the slow entry where the 175 // registers are stored to the stack. 176 jit.emitGetFromCallFrameHeaderBeforePrologue(CallFrameSlot::callee, argumentRegisterForCallee()); 177 jit.emitGetPayloadFromCallFrameHeaderBeforePrologue(CallFrameSlot::argumentCount, argumentRegisterForArgumentCount()); 178 179 for (unsigned argIndex = 0; argIndex < maxRegisterArgumentCount; argIndex++) 180 jit.emitGetFromCallFrameArgumentBeforePrologue(argIndex, argumentRegisterForFunctionArgument(argIndex)); 181 182 toMainPath.append(jit.jump()); 183 184 CCallHelpers::Label registerArgsCheckArityEntry = jit.label(); 185 incrementCounter(&jit, VM::RegArgsArity); 186 187 CCallHelpers::JumpList continueToRegisterArityFixup; 188 CCallHelpers::Label checkForExtraRegisterArguments; 189 190 if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) { 191 toMainPath.append(jit.branch32( 192 CCallHelpers::Equal, argCountReg, CCallHelpers::TrustedImm32(numParameters))); 193 continueToRegisterArityFixup.append(jit.branch32( 194 CCallHelpers::Below, argCountReg, CCallHelpers::TrustedImm32(numParameters))); 195 // Fall through to the "extra register arity" case. 196 197 checkForExtraRegisterArguments = jit.label(); 198 // Spill any extra register arguments passed to function onto the stack. 
199 for (unsigned argIndex = numParameters; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) { 200 toMainPath.append(jit.branch32(CCallHelpers::BelowOrEqual, argCountReg, CCallHelpers::TrustedImm32(argIndex))); 201 jit.emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex); 202 } 203 204 incrementCounter(&jit, VM::RegArgsExtra); 205 toMainPath.append(jit.jump()); 206 } else 207 toMainPath.append(jit.branch32( 208 CCallHelpers::AboveOrEqual, argCountReg, CCallHelpers::TrustedImm32(numParameters))); 209 210 #if ENABLE(VM_COUNTERS) 211 continueToRegisterArityFixup.append(jit.jump()); 212 #endif 213 214 if (numParameters > 0) { 215 // There should always be a "this" parameter. 216 CCallHelpers::Label registerArgumentsNeedArityFixup = jit.label(); 217 218 for (unsigned argIndex = 1; argIndex < numParameters && argIndex <= maxRegisterArgumentCount; argIndex++) 219 registerArgumentsEntrypoints[argIndex] = registerArgumentsNeedArityFixup; 220 } 221 222 #if ENABLE(VM_COUNTERS) 223 incrementCounter(&jit, VM::RegArgsArity); 224 #endif 225 226 continueToRegisterArityFixup.link(&jit); 227 228 jit.spillArgumentRegistersToFrameBeforePrologue(maxRegisterArgumentCount); 229 230 continueToArityFixup.link(&jit); 231 232 incrementCounter(&jit, VM::ArityFixupRequired); 233 138 234 jit.emitFunctionPrologue(); 139 235 jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0); … … 156 252 jit.move(GPRInfo::returnValueGPR, GPRInfo::argumentGPR0); 157 253 jit.emitFunctionEpilogue(); 158 mainPathJumps.append(jit.branchTest32(CCallHelpers::Zero, GPRInfo::argumentGPR0));254 fillRegistersAndContinueMainPath.append(jit.branchTest32(CCallHelpers::Zero, GPRInfo::argumentGPR0)); 159 255 jit.emitFunctionPrologue(); 160 256 CCallHelpers::Call callArityFixup = jit.call(); 161 257 jit.emitFunctionEpilogue(); 162 mainPathJumps.append(jit.jump()); 258 259 fillRegistersAndContinueMainPath.append(jit.jump()); 260 261 fillRegistersAndContinueMainPath.linkTo(stackArgsArityOKEntry, &jit); 262 263 #if ENABLE(VM_COUNTERS) 264 CCallHelpers::Label registerEntryNoArity = jit.label(); 265 incrementCounter(&jit, VM::RegArgsNoArity); 266 toMainPath.append(jit.jump()); 267 #endif 163 268 164 269 linkBuffer = std::make_unique<LinkBuffer>(vm, jit, codeBlock, JITCompilationCanFail); … … 170 275 linkBuffer->link(callLookupExceptionHandlerFromCallerFrame, lookupExceptionHandlerFromCallerFrame); 171 276 linkBuffer->link(callArityFixup, FunctionPtr((vm.getCTIStub(arityFixupGenerator)).code().executableAddress())); 172 linkBuffer->link(mainPathJumps, CodeLocationLabel(bitwise_cast<void*>(state.generatedFunction))); 173 174 state.jitCode->initializeAddressForCall(MacroAssemblerCodePtr(bitwise_cast<void*>(state.generatedFunction))); 277 linkBuffer->link(toMainPath, CodeLocationLabel(bitwise_cast<void*>(state.generatedFunction))); 278 279 state.jitCode->setEntryFor(StackArgsMustCheckArity, linkBuffer->locationOf(stackArgsCheckArityEntry)); 280 state.jitCode->setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(stackArgsArityOKEntry)); 281 282 #if ENABLE(VM_COUNTERS) 283 MacroAssemblerCodePtr mainEntry = linkBuffer->locationOf(registerEntryNoArity); 284 #else 285 MacroAssemblerCodePtr mainEntry = MacroAssemblerCodePtr(bitwise_cast<void*>(state.generatedFunction)); 286 #endif 287 state.jitCode->setEntryFor(RegisterArgsArityCheckNotRequired, mainEntry); 288 289 if (checkForExtraRegisterArguments.isSet()) 290 state.jitCode->setEntryFor(RegisterArgsPossibleExtraArgs, 
linkBuffer->locationOf(checkForExtraRegisterArguments)); 291 else 292 state.jitCode->setEntryFor(RegisterArgsPossibleExtraArgs, mainEntry); 293 294 state.jitCode->setEntryFor(RegisterArgsMustCheckArity, linkBuffer->locationOf(registerArgsCheckArityEntry)); 295 296 for (unsigned argCount = 1; argCount <= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argCount++) { 297 MacroAssemblerCodePtr entry; 298 if (argCount == numParameters) 299 entry = mainEntry; 300 else if (registerArgumentsEntrypoints[argCount].isSet()) 301 entry = linkBuffer->locationOf(registerArgumentsEntrypoints[argCount]); 302 else 303 entry = linkBuffer->locationOf(registerArgsCheckArityEntry); 304 state.jitCode->setEntryFor(JITEntryPoints::registerEntryTypeForArgumentCount(argCount), entry); 305 } 175 306 break; 176 307 } … … 182 313 // call to the B3-generated code. 183 314 CCallHelpers::Label start = jit.label(); 315 184 316 jit.emitFunctionEpilogue(); 317 318 // Load argument values into argument registers 319 320 // FIXME: Would like to eliminate these to load, but we currently can't jump into 321 // the B3 compiled code at an arbitrary point from the slow entry where the 322 // registers are stored to the stack. 323 jit.emitGetFromCallFrameHeaderBeforePrologue(CallFrameSlot::callee, argumentRegisterForCallee()); 324 jit.emitGetPayloadFromCallFrameHeaderBeforePrologue(CallFrameSlot::argumentCount, argumentRegisterForArgumentCount()); 325 326 for (unsigned argIndex = 0; argIndex < static_cast<unsigned>(codeBlock->numParameters()) && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) 327 jit.emitGetFromCallFrameArgumentBeforePrologue(argIndex, argumentRegisterForFunctionArgument(argIndex)); 328 185 329 CCallHelpers::Jump mainPathJump = jit.jump(); 186 330 … … 192 336 linkBuffer->link(mainPathJump, CodeLocationLabel(bitwise_cast<void*>(state.generatedFunction))); 193 337 194 state.jitCode-> initializeAddressForCall(linkBuffer->locationOf(start));338 state.jitCode->setEntryFor(RegisterArgsArityCheckNotRequired, linkBuffer->locationOf(start)); 195 339 break; 196 340 } -
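FTLLink stitches several entrypoints onto one body of generated code. Which entry a caller should target follows from how it passes arguments and what it knows about arity; a hedged standalone sketch of that decision, using the entry type names from this diff (the actual linking policy lives in CallLinkInfo and the call thunks, so treat this as orientation only):

enum class Entry {
    StackArgsMustCheckArity,
    StackArgsArityCheckNotRequired,
    RegisterArgsMustCheckArity,
    RegisterArgsPossibleExtraArgs,
    RegisterArgsArityCheckNotRequired,
};

Entry selectEntry(bool argsInRegisters, unsigned passed, unsigned expected)
{
    if (!argsInRegisters) {
        return passed >= expected ? Entry::StackArgsArityCheckNotRequired
                                  : Entry::StackArgsMustCheckArity;
    }
    if (passed == expected)
        return Entry::RegisterArgsArityCheckNotRequired;
    if (passed > expected)
        return Entry::RegisterArgsPossibleExtraArgs; // extras get spilled to frame slots
    return Entry::RegisterArgsMustCheckArity;        // arity fixup will run
}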
trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
r209638 r209653 197 197 m_proc.addFastConstant(m_tagMask->key()); 198 198 199 // Store out callee and argument count for possible OSR exit. 200 m_out.store64(m_out.argumentRegister(argumentRegisterForCallee()), addressFor(CallFrameSlot::callee)); 201 m_out.store32(m_out.argumentRegisterInt32(argumentRegisterForArgumentCount()), payloadFor(CallFrameSlot::argumentCount)); 202 199 203 m_out.storePtr(m_out.constIntPtr(codeBlock()), addressFor(CallFrameSlot::codeBlock)); 200 204 … … 248 252 availabilityMap().clear(); 249 253 availabilityMap().m_locals = Operands<Availability>(codeBlock()->numParameters(), 0); 254 255 Vector<Node*, 8> argumentNodes; 256 Vector<LValue, 8> argumentValues; 257 258 argumentNodes.resize(codeBlock()->numParameters()); 259 argumentValues.resize(codeBlock()->numParameters()); 260 261 m_highBlock = m_graph.block(0); 262 250 263 for (unsigned i = codeBlock()->numParameters(); i--;) { 251 availabilityMap().m_locals.argument(i) = 252 Availability(FlushedAt(FlushedJSValue, virtualRegisterForArgument(i))); 253 } 254 m_node = nullptr; 255 m_origin = NodeOrigin(CodeOrigin(0), CodeOrigin(0), true); 256 for (unsigned i = codeBlock()->numParameters(); i--;) { 257 Node* node = m_graph.m_arguments[i]; 264 Node* node = m_graph.m_argumentsForChecking[i]; 258 265 VirtualRegister operand = virtualRegisterForArgument(i); 259 266 260 LValue jsValue = m_out.load64(addressFor(operand));261 267 LValue jsValue = nullptr; 268 262 269 if (node) { 263 DFG_ASSERT(m_graph, node, operand == node->stackAccessData()->machineLocal); 270 if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) { 271 availabilityMap().m_locals.argument(i) = Availability(node); 272 jsValue = m_out.argumentRegister(GPRInfo::toArgumentRegister(node->argumentRegisterIndex())); 273 274 setJSValue(node, jsValue); 275 } else { 276 availabilityMap().m_locals.argument(i) = 277 Availability(FlushedAt(FlushedJSValue, operand)); 278 jsValue = m_out.load64(addressFor(virtualRegisterForArgument(i))); 279 } 280 281 DFG_ASSERT(m_graph, node, node->hasArgumentRegisterIndex() || operand == node->stackAccessData()->machineLocal); 264 282 265 283 // This is a hack, but it's an effective one. It allows us to do CSE on the … … 269 287 m_loadedArgumentValues.add(node, jsValue); 270 288 } 271 289 290 argumentNodes[i] = node; 291 argumentValues[i] = jsValue; 292 } 293 294 m_node = nullptr; 295 m_origin = NodeOrigin(CodeOrigin(0), CodeOrigin(0), true); 296 for (unsigned i = codeBlock()->numParameters(); i--;) { 297 Node* node = argumentNodes[i]; 298 299 if (!node) 300 continue; 301 302 LValue jsValue = argumentValues[i]; 303 272 304 switch (m_graph.m_argumentFormats[i]) { 273 305 case FlushedInt32: … … 813 845 case GetArgumentCountIncludingThis: 814 846 compileGetArgumentCountIncludingThis(); 847 break; 848 case GetArgumentRegister: 849 compileGetArgumentRegister(); 815 850 break; 816 851 case GetScope: … … 5403 5438 } 5404 5439 5440 void compileGetArgumentRegister() 5441 { 5442 // We might have already have a value for this node. 5443 if (LValue value = m_loadedArgumentValues.get(m_node)) { 5444 setJSValue(value); 5445 return; 5446 } 5447 setJSValue(m_out.argumentRegister(GPRInfo::toArgumentRegister(m_node->argumentRegisterIndex()))); 5448 } 5449 5405 5450 void compileGetScope() 5406 5451 { … … 5815 5860 Vector<ConstrainedValue> arguments; 5816 5861 5817 // Make sure that the callee goes into GPR0 because that's where the slow path thunks expect the 5818 // callee to be. 
5819 arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(GPRInfo::regT0))); 5862 // Make sure that the callee goes into argumentRegisterForCallee() because that's where 5863 // the slow path thunks expect the callee to be. 5864 GPRReg calleeReg = argumentRegisterForCallee(); 5865 arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg))); 5820 5866 5821 5867 auto addArgument = [&] (LValue value, VirtualRegister reg, int offset) { … … 5825 5871 }; 5826 5872 5827 addArgument(jsCallee, VirtualRegister(CallFrameSlot::callee), 0); 5828 addArgument(m_out.constInt32(numArgs), VirtualRegister(CallFrameSlot::argumentCount), PayloadOffset); 5829 for (unsigned i = 0; i < numArgs; ++i) 5830 addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0); 5873 ArgumentsLocation argLocation = argumentsLocationFor(numArgs); 5874 arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg))); 5875 arguments.append(ConstrainedValue(m_out.constInt32(numArgs), ValueRep::reg(argumentRegisterForArgumentCount()))); 5876 5877 for (unsigned i = 0; i < numArgs; ++i) { 5878 if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) 5879 arguments.append(ConstrainedValue(lowJSValue(m_graph.varArgChild(node, 1 + i)), ValueRep::reg(argumentRegisterForFunctionArgument(i)))); 5880 else 5881 addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0); 5882 } 5831 5883 5832 5884 PatchpointValue* patchpoint = m_out.patchpoint(Int64); … … 5857 5909 CallLinkInfo* callLinkInfo = jit.codeBlock()->addCallLinkInfo(); 5858 5910 5911 incrementCounter(&jit, VM::FTLCaller); 5912 5859 5913 CCallHelpers::DataLabelPtr targetToCheck; 5860 5914 CCallHelpers::Jump slowPath = jit.branchPtrWithPatch( 5861 CCallHelpers::NotEqual, GPRInfo::regT0, targetToCheck,5915 CCallHelpers::NotEqual, calleeReg, targetToCheck, 5862 5916 CCallHelpers::TrustedImmPtr(0)); 5863 5917 … … 5867 5921 slowPath.link(&jit); 5868 5922 5869 jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo:: regT2);5923 jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); 5870 5924 CCallHelpers::Call slowCall = jit.nearCall(); 5871 5925 done.link(&jit); … … 5873 5927 callLinkInfo->setUpCall( 5874 5928 node->op() == Construct ? CallLinkInfo::Construct : CallLinkInfo::Call, 5875 node->origin.semantic, GPRInfo::regT0);5929 argLocation, node->origin.semantic, argumentRegisterForCallee()); 5876 5930 5877 5931 jit.addPtr( … … 5882 5936 [=] (LinkBuffer& linkBuffer) { 5883 5937 MacroAssemblerCodePtr linkCall = 5884 linkBuffer.vm().get CTIStub(linkCallThunkGenerator).code();5938 linkBuffer.vm().getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation()); 5885 5939 linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress())); 5886 5940 … … 5926 5980 Vector<ConstrainedValue> arguments; 5927 5981 5928 arguments.append(ConstrainedValue(jsCallee, ValueRep::SomeRegister)); 5982 // Make sure that the callee goes into argumentRegisterForCallee() because that's where 5983 // the slow path thunks expect the callee to be. 
5984 GPRReg calleeReg = argumentRegisterForCallee(); 5985 arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg))); 5929 5986 if (!isTail) { 5930 5987 auto addArgument = [&] (LValue value, VirtualRegister reg, int offset) { … … 5933 5990 arguments.append(ConstrainedValue(value, ValueRep::stackArgument(offsetFromSP))); 5934 5991 }; 5935 5992 5993 arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg))); 5994 #if ENABLE(CALLER_SPILLS_CALLEE) 5936 5995 addArgument(jsCallee, VirtualRegister(CallFrameSlot::callee), 0); 5996 #endif 5997 arguments.append(ConstrainedValue(m_out.constInt32(numPassedArgs), ValueRep::reg(argumentRegisterForArgumentCount()))); 5998 #if ENABLE(CALLER_SPILLS_ARGCOUNT) 5937 5999 addArgument(m_out.constInt32(numPassedArgs), VirtualRegister(CallFrameSlot::argumentCount), PayloadOffset); 5938 for (unsigned i = 0; i < numPassedArgs; ++i) 5939 addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0); 5940 for (unsigned i = numPassedArgs; i < numAllocatedArgs; ++i) 5941 addArgument(m_out.constInt64(JSValue::encode(jsUndefined())), virtualRegisterForArgument(i), 0); 6000 #endif 6001 6002 for (unsigned i = 0; i < numPassedArgs; ++i) { 6003 if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) 6004 arguments.append(ConstrainedValue(lowJSValue(m_graph.varArgChild(node, 1 + i)), ValueRep::reg(argumentRegisterForFunctionArgument(i)))); 6005 else 6006 addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0); 6007 } 6008 for (unsigned i = numPassedArgs; i < numAllocatedArgs; ++i) { 6009 if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) 6010 arguments.append(ConstrainedValue(m_out.constInt64(JSValue::encode(jsUndefined())), ValueRep::reg(argumentRegisterForFunctionArgument(i)))); 6011 else 6012 addArgument(m_out.constInt64(JSValue::encode(jsUndefined())), virtualRegisterForArgument(i), 0); 6013 } 5942 6014 } else { 5943 6015 for (unsigned i = 0; i < numPassedArgs; ++i) … … 5981 6053 5982 6054 RegisterSet toSave = params.unavailableRegisters(); 6055 shuffleData.argumentsInRegisters = true; 5983 6056 shuffleData.callee = ValueRecovery::inGPR(calleeGPR, DataFormatCell); 5984 6057 toSave.set(calleeGPR); … … 5999 6072 CCallHelpers::PatchableJump patchableJump = jit.patchableJump(); 6000 6073 CCallHelpers::Label mainPath = jit.label(); 6001 6074 6075 incrementCounter(&jit, VM::FTLCaller); 6076 incrementCounter(&jit, VM::TailCall); 6077 incrementCounter(&jit, VM::DirectCall); 6078 6002 6079 jit.store32( 6003 6080 CCallHelpers::TrustedImm32(callSiteIndex.bits()), … … 6020 6097 6021 6098 callLinkInfo->setUpCall( 6022 CallLinkInfo::DirectTailCall, node->origin.semantic, InvalidGPRReg);6099 CallLinkInfo::DirectTailCall, argumentsLocationFor(numPassedArgs), node->origin.semantic, InvalidGPRReg); 6023 6100 callLinkInfo->setExecutableDuringCompilation(executable); 6024 6101 if (numAllocatedArgs > numPassedArgs) … … 6043 6120 CCallHelpers::Label mainPath = jit.label(); 6044 6121 6122 incrementCounter(&jit, VM::FTLCaller); 6123 incrementCounter(&jit, VM::DirectCall); 6124 6045 6125 jit.store32( 6046 6126 CCallHelpers::TrustedImm32(callSiteIndex.bits()), … … 6054 6134 callLinkInfo->setUpCall( 6055 6135 isConstruct ? 
CallLinkInfo::DirectConstruct : CallLinkInfo::DirectCall, 6056 node->origin.semantic, InvalidGPRReg);6136 argumentsLocationFor(numPassedArgs), node->origin.semantic, InvalidGPRReg); 6057 6137 callLinkInfo->setExecutableDuringCompilation(executable); 6058 6138 if (numAllocatedArgs > numPassedArgs) … … 6065 6145 CCallHelpers::Label slowPath = jit.label(); 6066 6146 if (isX86()) 6067 jit.pop(CCallHelpers::selectScratchGPR(calleeGPR)); 6068 6069 callOperation( 6070 *state, params.unavailableRegisters(), jit, 6071 node->origin.semantic, exceptions.get(), operationLinkDirectCall, 6072 InvalidGPRReg, CCallHelpers::TrustedImmPtr(callLinkInfo), 6073 calleeGPR).call(); 6147 jit.pop(GPRInfo::nonArgGPR0); 6148 6149 jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); // Link info needs to be in nonArgGPR0 6150 CCallHelpers::Call slowCall = jit.nearCall(); 6151 exceptions->append(jit.emitExceptionCheck(AssemblyHelpers::NormalExceptionCheck, AssemblyHelpers::FarJumpWidth)); 6074 6152 jit.jump().linkTo(mainPath, &jit); 6075 6153 … … 6080 6158 6081 6159 linkBuffer.link(call, slowPathLocation); 6160 MacroAssemblerCodePtr linkCall = 6161 linkBuffer.vm().getJITCallThunkEntryStub(linkDirectCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation()); 6162 linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress())); 6082 6163 6083 6164 callLinkInfo->setCallLocations( … … 6111 6192 Vector<ConstrainedValue> arguments; 6112 6193 6113 arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(GPRInfo::regT0))); 6194 GPRReg calleeReg = argumentRegisterForCallee(); 6195 arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg))); 6114 6196 6115 6197 for (unsigned i = 0; i < numArgs; ++i) { … … 6145 6227 CallSiteIndex callSiteIndex = state->jitCode->common.addUniqueCallSiteIndex(codeOrigin); 6146 6228 6229 incrementCounter(&jit, VM::FTLCaller); 6230 incrementCounter(&jit, VM::TailCall); 6231 6147 6232 CallFrameShuffleData shuffleData; 6233 shuffleData.argumentsInRegisters = true; 6148 6234 shuffleData.numLocals = state->jitCode->common.frameRegisterCount; 6149 shuffleData.callee = ValueRecovery::inGPR( GPRInfo::regT0, DataFormatJS);6235 shuffleData.callee = ValueRecovery::inGPR(calleeReg, DataFormatJS); 6150 6236 6151 6237 for (unsigned i = 0; i < numArgs; ++i) … … 6158 6244 CCallHelpers::DataLabelPtr targetToCheck; 6159 6245 CCallHelpers::Jump slowPath = jit.branchPtrWithPatch( 6160 CCallHelpers::NotEqual, GPRInfo::regT0, targetToCheck,6246 CCallHelpers::NotEqual, calleeReg, targetToCheck, 6161 6247 CCallHelpers::TrustedImmPtr(0)); 6162 6248 … … 6176 6262 6177 6263 CallFrameShuffler slowPathShuffler(jit, shuffleData); 6178 slowPathShuffler.setCalleeJSValueRegs(JSValueRegs(GPRInfo::regT0));6179 6264 slowPathShuffler.prepareForSlowPath(); 6180 6265 6181 jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo:: regT2);6266 jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); 6182 6267 CCallHelpers::Call slowCall = jit.nearCall(); 6183 6268 6184 6269 jit.abortWithReason(JITDidReturnFromTailCall); 6185 6270 6186 callLinkInfo->setUpCall(CallLinkInfo::TailCall, codeOrigin, GPRInfo::regT0);6271 callLinkInfo->setUpCall(CallLinkInfo::TailCall, argumentsLocationFor(numArgs), codeOrigin, calleeReg); 6187 6272 6188 6273 jit.addLinkTask( 6189 6274 [=] (LinkBuffer& linkBuffer) { 6190 6275 MacroAssemblerCodePtr linkCall = 6191 linkBuffer.vm().get CTIStub(linkCallThunkGenerator).code();6276 
linkBuffer.vm().getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation()); 6192 6277 linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress())); 6193 6278 … … 6279 6364 6280 6365 CallLinkInfo* callLinkInfo = jit.codeBlock()->addCallLinkInfo(); 6366 ArgumentsLocation argumentsLocation = StackArgs; 6281 6367 6282 6368 RegisterSet usedRegisters = RegisterSet::allRegisters(); … … 6428 6514 jit.emitRestoreCalleeSaves(); 6429 6515 ASSERT(!usedRegisters.get(GPRInfo::regT2)); 6430 jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo:: regT2);6516 jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); 6431 6517 CCallHelpers::Call slowCall = jit.nearCall(); 6432 6518 … … 6436 6522 done.link(&jit); 6437 6523 6438 callLinkInfo->setUpCall(callType, node->origin.semantic, GPRInfo::regT0);6524 callLinkInfo->setUpCall(callType, argumentsLocation, node->origin.semantic, GPRInfo::regT0); 6439 6525 6440 6526 jit.addPtr( … … 6445 6531 [=] (LinkBuffer& linkBuffer) { 6446 6532 MacroAssemblerCodePtr linkCall = 6447 linkBuffer.vm().get CTIStub(linkCallThunkGenerator).code();6533 linkBuffer.vm().getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(StackArgs); 6448 6534 linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress())); 6449 6535 … … 6546 6632 exceptionHandle->scheduleExitCreationForUnwind(params, callSiteIndex); 6547 6633 6634 incrementCounter(&jit, VM::FTLCaller); 6635 incrementCounter(&jit, VM::CallVarargs); 6636 6548 6637 jit.store32( 6549 6638 CCallHelpers::TrustedImm32(callSiteIndex.bits()), … … 6551 6640 6552 6641 CallLinkInfo* callLinkInfo = jit.codeBlock()->addCallLinkInfo(); 6642 ArgumentsLocation argumentsLocation = StackArgs; 6553 6643 CallVarargsData* data = node->callVarargsData(); 6554 6644 … … 6711 6801 if (isTailCall) 6712 6802 jit.emitRestoreCalleeSaves(); 6713 jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo:: regT2);6803 jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); 6714 6804 CCallHelpers::Call slowCall = jit.nearCall(); 6715 6805 … … 6719 6809 done.link(&jit); 6720 6810 6721 callLinkInfo->setUpCall(callType, node->origin.semantic, GPRInfo::regT0);6811 callLinkInfo->setUpCall(callType, argumentsLocation, node->origin.semantic, GPRInfo::regT0); 6722 6812 6723 6813 jit.addPtr( … … 6728 6818 [=] (LinkBuffer& linkBuffer) { 6729 6819 MacroAssemblerCodePtr linkCall = 6730 linkBuffer.vm().get CTIStub(linkCallThunkGenerator).code();6820 linkBuffer.vm().getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(StackArgs); 6731 6821 linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress())); 6732 6822 … … 6797 6887 6798 6888 exceptionHandle->scheduleExitCreationForUnwind(params, callSiteIndex); 6799 6889 6890 incrementCounter(&jit, VM::FTLCaller); 6891 incrementCounter(&jit, VM::CallEval); 6892 6800 6893 jit.store32( 6801 6894 CCallHelpers::TrustedImm32(callSiteIndex.bits()), … … 6803 6896 6804 6897 CallLinkInfo* callLinkInfo = jit.codeBlock()->addCallLinkInfo(); 6805 callLinkInfo->setUpCall(CallLinkInfo::Call, node->origin.semantic, GPRInfo::regT0);6898 callLinkInfo->setUpCall(CallLinkInfo::Call, StackArgs, node->origin.semantic, GPRInfo::regT0); 6806 6899 6807 6900 jit.addPtr(CCallHelpers::TrustedImm32(-static_cast<ptrdiff_t>(sizeof(CallerFrameAndPC))), CCallHelpers::stackPointerRegister, GPRInfo::regT1); -
trunk/Source/JavaScriptCore/ftl/FTLOSREntry.cpp
r203081 r209653 72 72 dataLog(" Values at entry: ", values, "\n"); 73 73 74 for (int argument = values.numberOfArguments(); argument--;) { 74 for (unsigned argument = values.numberOfArguments(); argument--;) { 75 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 76 if (argument < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) 77 break; 78 #endif 75 79 JSValue valueOnStack = exec->r(virtualRegisterForArgument(argument).offset()).asanUnsafeJSValue(); 76 80 JSValue reconstructedValue = values.argument(argument); … … 100 104 101 105 exec->setCodeBlock(entryCodeBlock); 102 103 void* result = entryCode->addressForCall(ArityCheckNotRequired).executableAddress(); 106 107 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 108 void* result = entryCode->addressForCall(RegisterArgsArityCheckNotRequired).executableAddress(); 109 #else 110 void* result = entryCode->addressForCall(StackArgsArityCheckNotRequired).executableAddress(); 111 #endif 104 112 if (Options::verboseOSR()) 105 113 dataLog(" Entry will succeed, going to address", RawPointer(result), "\n"); -
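Note on the loop shape above: because the argument-checking loop in prepareOSREntry counts down from the highest argument, the early break (rather than a continue) is enough to skip validation for every register-passed argument at once; as soon as the index drops below NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS, all remaining arguments arrive in registers and have no stack slot to compare. A minimal standalone model of that loop shape (the register count here is an illustrative value, not the real constant):

    #include <cstdio>

    static const unsigned kJSFunctionArgumentRegisters = 4; // illustrative value

    int main()
    {
        unsigned numberOfArguments = 7;
        for (unsigned argument = numberOfArguments; argument--;) {
            if (argument < kJSFunctionArgumentRegisters)
                break; // arguments 0..3 arrive in registers; no stack slot to validate
            printf("validate stack slot for argument %u\n", argument);
        }
        return 0;
    }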
trunk/Source/JavaScriptCore/ftl/FTLOutput.cpp
r208720 r209653 90 90 } 91 91 92 LValue Output::argumentRegister(Reg reg) 93 { 94 return m_block->appendNew<ArgumentRegValue>(m_proc, origin(), reg); 95 } 96 97 LValue Output::argumentRegisterInt32(Reg reg) 98 { 99 return m_block->appendNew<ArgumentRegValue>(m_proc, origin(), reg, Int32); 100 } 101 92 102 LValue Output::framePointer() 93 103 { -
trunk/Source/JavaScriptCore/ftl/FTLOutput.h
r208720 r209653 99 99 B3::Origin origin() { return B3::Origin(m_origin); } 100 100 101 LValue argumentRegister(Reg reg); 102 LValue argumentRegisterInt32(Reg reg); 101 103 LValue framePointer(); 102 104 -
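These two Output helpers wrap B3's ArgumentRegValue so FTL lowering can name an incoming argument register as an ordinary LValue. The typical call-site pattern, taken from the entry code this patch adds to FTLLowerDFGToB3.cpp, captures the callee and argument count and spills them to their call frame slots:

    // Sketch of the call-site pattern (mirrors the lowering code in this patch):
    LValue callee = m_out.argumentRegister(argumentRegisterForCallee());
    LValue argCount = m_out.argumentRegisterInt32(argumentRegisterForArgumentCount());
    m_out.store64(callee, addressFor(CallFrameSlot::callee));
    m_out.store32(argCount, payloadFor(CallFrameSlot::argumentCount));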
trunk/Source/JavaScriptCore/interpreter/ShadowChicken.cpp
r209229 r209653 285 285 bool isTailDeleted = false; 286 286 JSScope* scope = nullptr; 287 JSValue thisValue = jsUndefined(); 287 288 CodeBlock* codeBlock = callFrame->codeBlock(); 288 if (codeBlock && codeBlock->wasCompiledWithDebuggingOpcodes() && codeBlock->scopeRegister().isValid()) { 289 scope = callFrame->scope(codeBlock->scopeRegister().offset()); 290 RELEASE_ASSERT(scope->inherits(JSScope::info())); 289 if (codeBlock && codeBlock->wasCompiledWithDebuggingOpcodes()) { 290 if (codeBlock->scopeRegister().isValid()) { 291 scope = callFrame->scope(codeBlock->scopeRegister().offset()); 292 RELEASE_ASSERT(scope->inherits(JSScope::info())); 293 } 294 thisValue = callFrame->thisValue(); 291 295 } else if (foundFrame) { 292 scope = m_log[indexInLog].scope; 293 if (scope) 294 RELEASE_ASSERT(scope->inherits(JSScope::info())); 295 } 296 toPush.append(Frame(visitor->callee(), callFrame, isTailDeleted, callFrame->thisValue(), scope, codeBlock, callFrame->callSiteIndex())); 296 if (!scope) { 297 scope = m_log[indexInLog].scope; 298 if (scope) 299 RELEASE_ASSERT(scope->inherits(JSScope::info())); 300 } 301 if (thisValue.isUndefined()) 302 thisValue = m_log[indexInLog].thisValue; 303 } 304 toPush.append(Frame(visitor->callee(), callFrame, isTailDeleted, thisValue, scope, codeBlock, callFrame->callSiteIndex())); 297 305 298 306 if (indexInLog < logCursorIndex -
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
r208720 r209653 617 617 void AssemblyHelpers::emitDumbVirtualCall(CallLinkInfo* info) 618 618 { 619 move(TrustedImmPtr(info), GPRInfo:: regT2);619 move(TrustedImmPtr(info), GPRInfo::nonArgGPR0); 620 620 Call call = nearCall(); 621 621 addLinkTask( 622 622 [=] (LinkBuffer& linkBuffer) { 623 MacroAssemblerCodeRef virtualThunk = virtualThunkFor(&linkBuffer.vm(), *info);624 info->setSlowStub(createJITStubRoutine(virtualThunk , linkBuffer.vm(), nullptr, true));625 linkBuffer.link(call, CodeLocationLabel(virtualThunk. code()));623 JITJSCallThunkEntryPointsWithRef virtualThunk = virtualThunkFor(&linkBuffer.vm(), *info); 624 info->setSlowStub(createJITStubRoutine(virtualThunk.codeRef(), linkBuffer.vm(), nullptr, true)); 625 linkBuffer.link(call, CodeLocationLabel(virtualThunk.entryFor(StackArgs))); 626 626 }); 627 627 } -
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h
r209594 r209653 415 415 } 416 416 417 enum SpillRegisterType { SpillAll, SpillExactly }; 418 419 void spillArgumentRegistersToFrameBeforePrologue(unsigned minimumArgsToSpill = 0, SpillRegisterType spillType = SpillAll) 420 { 421 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 422 JumpList doneStoringArgs; 423 424 emitPutToCallFrameHeaderBeforePrologue(argumentRegisterForCallee(), CallFrameSlot::callee); 425 GPRReg argCountReg = argumentRegisterForArgumentCount(); 426 emitPutToCallFrameHeaderBeforePrologue(argCountReg, CallFrameSlot::argumentCount); 427 428 unsigned argIndex = 0; 429 // Always spill "this" 430 minimumArgsToSpill = std::max(minimumArgsToSpill, 1U); 431 432 for (; argIndex < minimumArgsToSpill && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) 433 emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex); 434 435 if (spillType == SpillAll) { 436 // Spill extra args passed to function 437 for (; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) { 438 doneStoringArgs.append(branch32(MacroAssembler::BelowOrEqual, argCountReg, MacroAssembler::TrustedImm32(argIndex))); 439 emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex); 440 } 441 } 442 443 doneStoringArgs.link(this); 444 #else 445 UNUSED_PARAM(minimumArgsToSpill); 446 UNUSED_PARAM(spillType); 447 #endif 448 } 449 450 void spillArgumentRegistersToFrame(unsigned minimumArgsToSpill = 0, SpillRegisterType spillType = SpillAll) 451 { 452 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 453 JumpList doneStoringArgs; 454 455 emitPutToCallFrameHeader(argumentRegisterForCallee(), CallFrameSlot::callee); 456 GPRReg argCountReg = argumentRegisterForArgumentCount(); 457 emitPutToCallFrameHeader(argCountReg, CallFrameSlot::argumentCount); 458 459 unsigned argIndex = 0; 460 // Always spill "this" 461 minimumArgsToSpill = std::max(minimumArgsToSpill, 1U); 462 463 for (; argIndex < minimumArgsToSpill && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) 464 emitPutArgumentToCallFrame(argumentRegisterForFunctionArgument(argIndex), argIndex); 465 466 if (spillType == SpillAll) { 467 // Spill extra args passed to function 468 for (; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) { 469 doneStoringArgs.append(branch32(MacroAssembler::BelowOrEqual, argCountReg, MacroAssembler::TrustedImm32(argIndex))); 470 emitPutArgumentToCallFrame(argumentRegisterForFunctionArgument(argIndex), argIndex); 471 } 472 } 473 474 doneStoringArgs.link(this); 475 #else 476 UNUSED_PARAM(minimumArgsToSpill); 477 UNUSED_PARAM(spillType); 478 #endif 479 } 480 481 void fillArgumentRegistersFromFrameBeforePrologue() 482 { 483 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 484 JumpList doneLoadingArgs; 485 486 emitGetFromCallFrameHeaderBeforePrologue(CallFrameSlot::callee, argumentRegisterForCallee()); 487 GPRReg argCountReg = argumentRegisterForArgumentCount(); 488 emitGetPayloadFromCallFrameHeaderBeforePrologue(CallFrameSlot::argumentCount, argCountReg); 489 490 for (unsigned argIndex = 0; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) { 491 if (argIndex) // Always load "this" 492 doneLoadingArgs.append(branch32(MacroAssembler::BelowOrEqual, argCountReg, MacroAssembler::TrustedImm32(argIndex))); 493 emitGetFromCallFrameArgumentBeforePrologue(argIndex, argumentRegisterForFunctionArgument(argIndex)); 494 } 495 496 doneLoadingArgs.link(this); 497 #endif 498 } 499 417 500 #if CPU(X86_64) || CPU(X86) 418 501 static size_t 
prologueStackPointerDelta() … … 624 707 { 625 708 storePtr(from, Address(stackPointerRegister, entry * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta())); 709 } 710 711 void emitPutArgumentToCallFrameBeforePrologue(GPRReg from, unsigned argument) 712 { 713 storePtr(from, Address(stackPointerRegister, (CallFrameSlot::thisArgument + argument) * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta())); 714 } 715 716 void emitPutArgumentToCallFrame(GPRReg from, unsigned argument) 717 { 718 emitPutToCallFrameHeader(from, CallFrameSlot::thisArgument + argument); 719 } 720 721 void emitGetFromCallFrameHeaderBeforePrologue(const int entry, GPRReg to) 722 { 723 loadPtr(Address(stackPointerRegister, entry * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta()), to); 724 } 725 726 void emitGetFromCallFrameArgumentBeforePrologue(unsigned argument, GPRReg to) 727 { 728 loadPtr(Address(stackPointerRegister, (CallFrameSlot::thisArgument + argument) * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta()), to); 729 } 730 731 void emitGetPayloadFromCallFrameHeaderBeforePrologue(const int entry, GPRReg to) 732 { 733 load32(Address(stackPointerRegister, entry * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta() + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)), to); 626 734 } 627 735 #else … … 1661 1769 void wangsInt64Hash(GPRReg inputAndResult, GPRReg scratch); 1662 1770 #endif 1663 1771 1772 #if ENABLE(VM_COUNTERS) 1773 void incrementCounter(VM::VMCounterType counterType) 1774 { 1775 addPtr(TrustedImm32(1), AbsoluteAddress(vm()->addressOfCounter(counterType))); 1776 } 1777 #endif 1778 1664 1779 protected: 1665 1780 VM* m_vm; … … 1670 1785 }; 1671 1786 1787 #if ENABLE(VM_COUNTERS) 1788 #define incrementCounter(jit, counterType) (jit)->incrementCounter(counterType) 1789 #else 1790 #define incrementCounter(jit, counterType) ((void)0) 1791 #endif 1792 1672 1793 } // namespace JSC 1673 1794 -
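Both spill helpers above share one control-flow shape: callee and argument count are always written to the frame, "this" is always written, any registers covered by minimumArgsToSpill are written unconditionally, and each remaining argument register is written only behind a branch on the dynamic argument count. A self-contained model of that flow (plain C++; where this model uses an if-and-break, the emitted code uses the branch32(BelowOrEqual, ...) guards shown above):

    #include <algorithm>
    #include <cstdio>

    static const unsigned kJSFunctionArgumentRegisters = 4; // illustrative value

    static void modelSpill(unsigned argCountIncludingThis, unsigned minimumArgsToSpill, bool spillAll)
    {
        printf("store callee; store argument count\n");
        minimumArgsToSpill = std::max(minimumArgsToSpill, 1u); // always spill "this"
        unsigned argIndex = 0;
        for (; argIndex < minimumArgsToSpill && argIndex < kJSFunctionArgumentRegisters; argIndex++)
            printf("store arg%u unconditionally\n", argIndex);
        if (spillAll) {
            for (; argIndex < kJSFunctionArgumentRegisters; argIndex++) {
                if (argCountIncludingThis <= argIndex)
                    break; // models the guarded jump to doneStoringArgs
                printf("store arg%u (guarded by argument count)\n", argIndex);
            }
        }
    }

    int main()
    {
        modelSpill(3, 0, true); // three arguments passed; spill everything live
        return 0;
    }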
trunk/Source/JavaScriptCore/jit/CachedRecovery.cpp
r189999 r209653 30 30 31 31 namespace JSC { 32 33 void CachedRecovery::addTargetJSValueRegs(JSValueRegs jsValueRegs) 34 { 35 ASSERT(m_wantedFPR == InvalidFPRReg); 36 size_t existing = m_gprTargets.find(jsValueRegs); 37 if (existing == WTF::notFound) { 38 #if USE(JSVALUE64) 39 if (m_gprTargets.size() > 0 && m_recovery.isSet() && m_recovery.isInGPR()) { 40 // If we are recovering to the same GPR, make that GPR the first target. 41 GPRReg sourceGPR = m_recovery.gpr(); 42 if (jsValueRegs.gpr() == sourceGPR) { 43 // Append the current first GPR below. 44 jsValueRegs = JSValueRegs(m_gprTargets[0].gpr()); 45 m_gprTargets[0] = JSValueRegs(sourceGPR); 46 } 47 } 48 #endif 49 m_gprTargets.append(jsValueRegs); 50 } 51 } 32 52 33 53 // We prefer loading doubles and undetermined JSValues into FPRs -
trunk/Source/JavaScriptCore/jit/CachedRecovery.h
r206525 r209653 51 51 52 52 const Vector<VirtualRegister, 1>& targets() const { return m_targets; } 53 const Vector<JSValueRegs, 1>& gprTargets() const { return m_gprTargets; } 53 54 54 55 void addTarget(VirtualRegister reg) … … 69 70 } 70 71 71 void setWantedJSValueRegs(JSValueRegs jsValueRegs) 72 { 73 ASSERT(m_wantedFPR == InvalidFPRReg); 74 m_wantedJSValueRegs = jsValueRegs; 75 } 72 void addTargetJSValueRegs(JSValueRegs); 76 73 77 74 void setWantedFPR(FPRReg fpr) 78 75 { 79 ASSERT( !m_wantedJSValueRegs);76 ASSERT(m_gprTargets.isEmpty()); 80 77 m_wantedFPR = fpr; 81 78 } … … 120 117 void setRecovery(ValueRecovery recovery) { m_recovery = recovery; } 121 118 122 JSValueRegs wantedJSValueRegs() const { return m_wantedJSValueRegs; } 119 JSValueRegs wantedJSValueRegs() const 120 { 121 if (m_gprTargets.isEmpty()) 122 return JSValueRegs(); 123 124 return m_gprTargets[0]; 125 } 123 126 124 127 FPRReg wantedFPR() const { return m_wantedFPR; } 125 128 private: 126 129 ValueRecovery m_recovery; 127 JSValueRegs m_wantedJSValueRegs;128 130 FPRReg m_wantedFPR { InvalidFPRReg }; 129 131 Vector<VirtualRegister, 1> m_targets; 132 Vector<JSValueRegs, 1> m_gprTargets; 130 133 }; 131 134 -
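With this change a recovery can be wanted in more than one GPR: the first entry in m_gprTargets doubles as the canonical register (it is what wantedJSValueRegs() reports and what the shuffler materializes into), and the shuffler's final copy pass then moves that register into each extra target. A standalone model of that final pass (the register choices are hypothetical):

    #include <cstdio>
    #include <vector>

    struct ModelRecovery {
        const char* source;                  // where the value currently lives
        std::vector<const char*> gprTargets; // first entry is the load target
    };

    int main()
    {
        // Hypothetical: the callee is wanted both in the callee argument
        // register and in a second register used by a slow path.
        ModelRecovery callee { "stack[callee]", { "rdi", "r10" } };
        printf("load %s -> %s\n", callee.source, callee.gprTargets[0]);
        for (size_t i = 1; i < callee.gprTargets.size(); i++)
            printf("move %s -> %s\n", callee.gprTargets[0], callee.gprTargets[i]);
        return 0;
    }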
trunk/Source/JavaScriptCore/jit/CallFrameShuffleData.h
r206525 r209653 40 40 Vector<ValueRecovery> args; 41 41 #if USE(JSVALUE64) 42 bool argumentsInRegisters { false }; 42 43 RegisterMap<ValueRecovery> registers; 43 44 GPRReg tagTypeNumber { InvalidGPRReg }; -
trunk/Source/JavaScriptCore/jit/CallFrameShuffler.cpp
r203006 r209653 43 43 , m_alignedNewFrameSize(CallFrame::headerSizeInRegisters 44 44 + roundArgumentCountToAlignFrame(data.args.size())) 45 #if USE(JSVALUE64) 46 , m_argumentsInRegisters(data.argumentsInRegisters) 47 #endif 45 48 , m_frameDelta(m_alignedNewFrameSize - m_alignedOldFrameSize) 46 49 , m_lockedRegisters(RegisterSet::allRegisters()) … … 55 58 56 59 ASSERT(!data.callee.isInJSStack() || data.callee.virtualRegister().isLocal()); 57 addNew(VirtualRegister(CallFrameSlot::callee), data.callee); 58 60 #if USE(JSVALUE64) 61 if (data.argumentsInRegisters) 62 addNew(JSValueRegs(argumentRegisterForCallee()), data.callee); 63 else 64 #endif 65 addNew(VirtualRegister(CallFrameSlot::callee), data.callee); 66 59 67 for (size_t i = 0; i < data.args.size(); ++i) { 60 68 ASSERT(!data.args[i].isInJSStack() || data.args[i].virtualRegister().isLocal()); 61 addNew(virtualRegisterForArgument(i), data.args[i]); 69 #if USE(JSVALUE64) 70 if (data.argumentsInRegisters && i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) 71 addNew(JSValueRegs(argumentRegisterForFunctionArgument(i)), data.args[i]); 72 else 73 #endif 74 addNew(virtualRegisterForArgument(i), data.args[i]); 62 75 } 63 76 … … 186 199 } 187 200 #else 188 if (newCachedRecovery) 201 if (newCachedRecovery) { 189 202 out.print(" ", reg, " <- ", newCachedRecovery->recovery()); 203 if (newCachedRecovery->gprTargets().size() > 1) { 204 for (size_t i = 1; i < newCachedRecovery->gprTargets().size(); i++) 205 out.print(", ", newCachedRecovery->gprTargets()[i].gpr(), " <- ", newCachedRecovery->recovery()); 206 } 207 } 190 208 #endif 191 209 out.print("\n"); … … 497 515 || cachedRecovery.recovery().isConstant()); 498 516 499 if (verbose )517 if (verbose && cachedRecovery.targets().size()) 500 518 dataLog(" * Storing ", cachedRecovery.recovery()); 501 519 for (size_t i = 0; i < cachedRecovery.targets().size(); ++i) { … … 506 524 emitStore(cachedRecovery, addressForNew(target)); 507 525 setNew(target, nullptr); 508 }509 if (verbose)510 dataLog("\n");526 if (verbose) 527 dataLog("\n"); 528 } 511 529 cachedRecovery.clearTargets(); 512 530 if (!cachedRecovery.wantedJSValueRegs() && cachedRecovery.wantedFPR() == InvalidFPRReg) … … 607 625 ASSERT(!isUndecided()); 608 626 609 updateDangerFrontier();627 initDangerFrontier(); 610 628 611 629 // First, we try to store any value that goes above the danger … … 703 721 } 704 722 705 #if USE(JSVALUE64)706 if (m_tagTypeNumber != InvalidGPRReg && m_newRegisters[m_tagTypeNumber])707 releaseGPR(m_tagTypeNumber);708 #endif709 710 723 // Handle 2) by loading all registers. We don't have to do any 711 724 // writes, since they have been taken care of above. 725 // Note that we need m_tagTypeNumber to remain locked to box wanted registers. 712 726 if (verbose) 713 727 dataLog(" Loading wanted registers into registers\n"); … … 743 757 // We need to handle 4) first because it implies releasing 744 758 // m_newFrameBase, which could be a wanted register. 759 // Note that we delay setting the argument count register as it needs to be released in step 3. 
745 760 if (verbose) 746 761 dataLog(" * Storing the argument count into ", VirtualRegister { CallFrameSlot::argumentCount }, "\n"); 747 m_jit.store32(MacroAssembler::TrustedImm32(0), 748 addressForNew(VirtualRegister { CallFrameSlot::argumentCount }).withOffset(TagOffset)); 749 m_jit.store32(MacroAssembler::TrustedImm32(argCount()), 750 addressForNew(VirtualRegister { CallFrameSlot::argumentCount }).withOffset(PayloadOffset)); 762 #if USE(JSVALUE64) 763 if (!m_argumentsInRegisters) { 764 #endif 765 m_jit.store32(MacroAssembler::TrustedImm32(0), 766 addressForNew(VirtualRegister { CallFrameSlot::argumentCount }).withOffset(TagOffset)); 767 m_jit.store32(MacroAssembler::TrustedImm32(argCount()), 768 addressForNew(VirtualRegister { CallFrameSlot::argumentCount }).withOffset(PayloadOffset)); 769 #if USE(JSVALUE64) 770 } 771 #endif 751 772 752 773 if (!isSlowPath()) { … … 768 789 emitDisplace(*cachedRecovery); 769 790 } 791 792 #if USE(JSVALUE64) 793 // For recoveries with multiple register targets, copy the contents of the first target to the 794 // remaining targets. 795 for (Reg reg = Reg::first(); reg <= Reg::last(); reg = reg.next()) { 796 CachedRecovery* cachedRecovery { m_newRegisters[reg] }; 797 if (!cachedRecovery || cachedRecovery->gprTargets().size() < 2) 798 continue; 799 800 GPRReg sourceGPR = cachedRecovery->gprTargets()[0].gpr(); 801 for (size_t i = 1; i < cachedRecovery->gprTargets().size(); i++) 802 m_jit.move(sourceGPR, cachedRecovery->gprTargets()[i].gpr()); 803 } 804 805 if (m_argumentsInRegisters) 806 m_jit.move(MacroAssembler::TrustedImm32(argCount()), argumentRegisterForArgumentCount()); 807 #endif 770 808 } 771 809 -
trunk/Source/JavaScriptCore/jit/CallFrameShuffler.h
r206525 r209653 97 97 // arguments/callee/callee-save registers are by taking into 98 98 // account any spilling that acquireGPR() could have done. 99 CallFrameShuffleData snapshot( ) const99 CallFrameShuffleData snapshot(ArgumentsLocation argumentsLocation) const 100 100 { 101 101 ASSERT(isUndecided()); … … 103 103 CallFrameShuffleData data; 104 104 data.numLocals = numLocals(); 105 data.callee = getNew(VirtualRegister { CallFrameSlot::callee })->recovery(); 105 #if USE(JSVALUE64) 106 data.argumentsInRegisters = argumentsLocation != StackArgs; 107 #endif 108 if (argumentsLocation == StackArgs) 109 data.callee = getNew(VirtualRegister { CallFrameSlot::callee })->recovery(); 110 else { 111 Reg reg { argumentRegisterForCallee() }; 112 CachedRecovery* cachedRecovery { m_newRegisters[reg] }; 113 data.callee = cachedRecovery->recovery(); 114 } 106 115 data.args.resize(argCount()); 107 for (size_t i = 0; i < argCount(); ++i) 108 data.args[i] = getNew(virtualRegisterForArgument(i))->recovery(); 116 for (size_t i = 0; i < argCount(); ++i) { 117 if (argumentsLocation == StackArgs || i >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) 118 data.args[i] = getNew(virtualRegisterForArgument(i))->recovery(); 119 else { 120 Reg reg { argumentRegisterForFunctionArgument(i) }; 121 CachedRecovery* cachedRecovery { m_newRegisters[reg] }; 122 data.args[i] = cachedRecovery->recovery(); 123 } 124 } 109 125 for (Reg reg = Reg::first(); reg <= Reg::last(); reg = reg.next()) { 126 if (reg.isGPR() && argumentsLocation != StackArgs 127 && GPRInfo::toArgumentIndex(reg.gpr()) < argumentRegisterIndexForJSFunctionArgument(argCount())) 128 continue; 129 110 130 CachedRecovery* cachedRecovery { m_newRegisters[reg] }; 111 131 if (!cachedRecovery) … … 377 397 int m_alignedOldFrameSize; 378 398 int m_alignedNewFrameSize; 399 #if USE(JSVALUE64) 400 bool m_argumentsInRegisters; 401 #endif 379 402 380 403 // This is the distance, in slots, between the base of the new … … 642 665 CachedRecovery* cachedRecovery = addCachedRecovery(recovery); 643 666 #if USE(JSVALUE64) 644 if (cachedRecovery->wantedJSValueRegs()) 645 m_newRegisters[cachedRecovery->wantedJSValueRegs().gpr()] = nullptr; 646 m_newRegisters[jsValueRegs.gpr()] = cachedRecovery; 667 if (cachedRecovery->wantedJSValueRegs()) { 668 if (recovery.isInGPR() && jsValueRegs.gpr() == recovery.gpr()) { 669 m_newRegisters[cachedRecovery->wantedJSValueRegs().gpr()] = nullptr; 670 m_newRegisters[jsValueRegs.gpr()] = cachedRecovery; 671 } 672 } else 673 m_newRegisters[jsValueRegs.gpr()] = cachedRecovery; 647 674 #else 648 675 if (JSValueRegs oldRegs { cachedRecovery->wantedJSValueRegs() }) { … … 657 684 m_newRegisters[jsValueRegs.tagGPR()] = cachedRecovery; 658 685 #endif 659 ASSERT(!cachedRecovery->wantedJSValueRegs()); 660 cachedRecovery->setWantedJSValueRegs(jsValueRegs); 686 cachedRecovery->addTargetJSValueRegs(jsValueRegs); 661 687 } 662 688 … … 756 782 } 757 783 784 void initDangerFrontier() 785 { 786 findDangerFrontierFrom(lastNew()); 787 } 788 758 789 void updateDangerFrontier() 759 790 { 791 findDangerFrontierFrom(m_dangerFrontier - 1); 792 } 793 794 void findDangerFrontierFrom(VirtualRegister nextReg) 795 { 760 796 ASSERT(!isUndecided()); 761 797 762 798 m_dangerFrontier = firstNew() - 1; 763 for (VirtualRegister reg = lastNew(); reg >= firstNew(); reg -= 1) {764 if (! 
getNew(reg) || !isValidOld(newAsOld(reg)) || !getOld(newAsOld(reg)))799 for (VirtualRegister reg = nextReg; reg >= firstNew(); reg -= 1) { 800 if (!isValidOld(newAsOld(reg)) || !getOld(newAsOld(reg))) 765 801 continue; 766 802 -
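The initDangerFrontier()/updateDangerFrontier() split above replaces what used to be one full rescan: the frontier is the highest new-frame slot that still overlaps a live old-frame value, and since stores only ever retire old values, the frontier can only move down. The update therefore resumes scanning from one slot below the previous frontier rather than from lastNew(). A standalone model of the incremental scan (a simplified sketch; the real predicate also checks old-slot validity):

    #include <cstdio>
    #include <vector>

    // true == the old-frame slot overlapping this new-frame slot is still live
    static int findFrontierFrom(const std::vector<bool>& oldValueLive, int from)
    {
        for (int reg = from; reg >= 0; reg--) {
            if (oldValueLive[reg])
                return reg;
        }
        return -1; // below the first new slot: nothing left to endanger
    }

    int main()
    {
        std::vector<bool> oldValueLive { true, true, false, true, false };
        int frontier = findFrontierFrom(oldValueLive, static_cast<int>(oldValueLive.size()) - 1);
        printf("initial frontier: %d\n", frontier);  // 3
        oldValueLive[frontier] = false;              // the slot at the frontier was stored
        frontier = findFrontierFrom(oldValueLive, frontier - 1);
        printf("updated frontier: %d\n", frontier);  // 1
        return 0;
    }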
trunk/Source/JavaScriptCore/jit/CallFrameShuffler64.cpp
r196756 r209653 324 324 else 325 325 m_jit.move64ToDouble(cachedRecovery.recovery().gpr(), wantedReg.fpr()); 326 RELEASE_ASSERT(cachedRecovery.recovery().dataFormat() == DataFormatJS); 326 DataFormat format = cachedRecovery.recovery().dataFormat(); 327 RELEASE_ASSERT(format == DataFormatJS || format == DataFormatCell); 327 328 updateRecovery(cachedRecovery, 328 329 ValueRecovery::inRegister(wantedReg, DataFormatJS)); -
trunk/Source/JavaScriptCore/jit/GPRInfo.h
r206899 r209653 70 70 explicit operator bool() const { return m_gpr != InvalidGPRReg; } 71 71 72 bool operator==(JSValueRegs other) { return m_gpr == other.m_gpr; }73 bool operator!=(JSValueRegs other) { return !(*this == other); }72 bool operator==(JSValueRegs other) const { return m_gpr == other.m_gpr; } 73 bool operator!=(JSValueRegs other) const { return !(*this == other); } 74 74 75 75 GPRReg gpr() const { return m_gpr; } … … 332 332 #if CPU(X86) 333 333 #define NUMBER_OF_ARGUMENT_REGISTERS 0u 334 #define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u 334 335 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u 335 336 … … 354 355 static const GPRReg argumentGPR3 = X86Registers::ebx; // regT3 355 356 static const GPRReg nonArgGPR0 = X86Registers::esi; // regT4 357 static const GPRReg nonArgGPR1 = X86Registers::edi; // regT5 356 358 static const GPRReg returnValueGPR = X86Registers::eax; // regT0 357 359 static const GPRReg returnValueGPR2 = X86Registers::edx; // regT1 … … 378 380 unsigned result = indexForRegister[reg]; 379 381 return result; 382 } 383 384 static unsigned toArgumentIndex(GPRReg reg) 385 { 386 ASSERT(reg != InvalidGPRReg); 387 ASSERT(static_cast<int>(reg) < 8); 388 static const unsigned indexForArgumentRegister[8] = { 2, 0, 1, 3, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex }; 389 return indexForArgumentRegister[reg]; 380 390 } 381 391 … … 400 410 #define NUMBER_OF_ARGUMENT_REGISTERS 6u 401 411 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 5u 412 #define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS (NUMBER_OF_ARGUMENT_REGISTERS - 2u) 402 413 #else 403 414 #define NUMBER_OF_ARGUMENT_REGISTERS 4u 404 415 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 7u 416 #define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u 405 417 #endif 406 418 … … 465 477 #endif 466 478 static const GPRReg nonArgGPR0 = X86Registers::r10; // regT5 (regT4 on Windows) 479 static const GPRReg nonArgGPR1 = X86Registers::eax; // regT0 467 480 static const GPRReg returnValueGPR = X86Registers::eax; // regT0 468 481 static const GPRReg returnValueGPR2 = X86Registers::edx; // regT1 or regT2 … … 509 522 } 510 523 524 static unsigned toArgumentIndex(GPRReg reg) 525 { 526 ASSERT(reg != InvalidGPRReg); 527 ASSERT(static_cast<int>(reg) < 16); 528 #if !OS(WINDOWS) 529 static const unsigned indexForArgumentRegister[16] = { InvalidIndex, 3, 2, InvalidIndex, InvalidIndex, InvalidIndex, 1, 0, 4, 5, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex }; 530 #else 531 static const unsigned indexForArgumentRegister[16] = { InvalidIndex, 0, 1, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, 2, 3, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex }; 532 #endif 533 return indexForArgumentRegister[reg]; 534 } 535 511 536 static const char* debugName(GPRReg reg) 512 537 { … … 539 564 #if CPU(ARM) 540 565 #define NUMBER_OF_ARGUMENT_REGISTERS 4u 566 #define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u 541 567 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u 542 568 … … 602 628 } 603 629 630 static unsigned toArgumentIndex(GPRReg reg) 631 { 632 ASSERT(reg != InvalidGPRReg); 633 ASSERT(static_cast<int>(reg) < 16); 634 if (reg > argumentGPR3) 635 return InvalidIndex; 636 return (unsigned)reg; 637 } 638 604 639 static const char* debugName(GPRReg reg) 605 640 { … … 622 657 #if CPU(ARM64) 623 658 #define NUMBER_OF_ARGUMENT_REGISTERS 8u 659 #define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS (NUMBER_OF_ARGUMENT_REGISTERS - 2u) 624 660 // Callee Saves includes x19..x28 and FP 
registers q8..q15 625 661 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 18u … … 699 735 COMPILE_ASSERT(ARM64Registers::q14 == 14, q14_is_14); 700 736 COMPILE_ASSERT(ARM64Registers::q15 == 15, q15_is_15); 737 701 738 static GPRReg toRegister(unsigned index) 702 739 { … … 714 751 ASSERT(index < numberOfArgumentRegisters); 715 752 return toRegister(index); 753 } 754 755 static unsigned toArgumentIndex(GPRReg reg) 756 { 757 ASSERT(reg != InvalidGPRReg); 758 if (reg > argumentGPR7) 759 return InvalidIndex; 760 return (unsigned)reg; 716 761 } 717 762 … … 747 792 #if CPU(MIPS) 748 793 #define NUMBER_OF_ARGUMENT_REGISTERS 4u 794 #define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u 749 795 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u 750 796 … … 774 820 static const GPRReg argumentGPR3 = MIPSRegisters::a3; 775 821 static const GPRReg nonArgGPR0 = regT4; 822 static const GPRReg nonArgGPR1 = regT5; 776 823 static const GPRReg returnValueGPR = regT0; 777 824 static const GPRReg returnValueGPR2 = regT1; … … 826 873 #if CPU(SH4) 827 874 #define NUMBER_OF_ARGUMENT_REGISTERS 4u 875 #define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u 828 876 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u 829 877 … … 856 904 static const GPRReg argumentGPR3 = SH4Registers::r7; // regT3 857 905 static const GPRReg nonArgGPR0 = regT4; 906 static const GPRReg nonArgGPR1 = regT5; 858 907 static const GPRReg returnValueGPR = regT0; 859 908 static const GPRReg returnValueGPR2 = regT1; … … 892 941 #endif // CPU(SH4) 893 942 943 inline GPRReg argumentRegisterFor(unsigned argumentIndex) 944 { 945 #if NUMBER_OF_ARGUMENT_REGISTERS 946 if (argumentIndex >= NUMBER_OF_ARGUMENT_REGISTERS) 947 return InvalidGPRReg; 948 return GPRInfo::toArgumentRegister(argumentIndex); 949 #else 950 UNUSED_PARAM(argumentIndex); 951 RELEASE_ASSERT_NOT_REACHED(); 952 return InvalidGPRReg; 953 #endif 954 } 955 956 inline GPRReg argumentRegisterForCallee() 957 { 958 #if NUMBER_OF_ARGUMENT_REGISTERS 959 return argumentRegisterFor(0); 960 #else 961 return GPRInfo::regT0; 962 #endif 963 } 964 965 inline GPRReg argumentRegisterForArgumentCount() 966 { 967 return argumentRegisterFor(1); 968 } 969 970 inline unsigned argumentRegisterIndexForJSFunctionArgument(unsigned argument) 971 { 972 return argument + 2; 973 } 974 975 inline unsigned jsFunctionArgumentForArgumentRegisterIndex(unsigned index) 976 { 977 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS > 0 978 ASSERT(index >= 2); 979 return index - 2; 980 #else 981 UNUSED_PARAM(index); 982 RELEASE_ASSERT_NOT_REACHED(); 983 return 0; 984 #endif 985 } 986 987 inline unsigned jsFunctionArgumentForArgumentRegister(GPRReg gpr) 988 { 989 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS > 0 990 unsigned argumentRegisterIndex = GPRInfo::toArgumentIndex(gpr); 991 ASSERT(argumentRegisterIndex != GPRInfo::InvalidIndex); 992 return jsFunctionArgumentForArgumentRegisterIndex(argumentRegisterIndex); 993 #else 994 UNUSED_PARAM(gpr); 995 RELEASE_ASSERT_NOT_REACHED(); 996 return 0; 997 #endif 998 } 999 1000 inline GPRReg argumentRegisterForFunctionArgument(unsigned argumentIndex) 1001 { 1002 return argumentRegisterFor(argumentRegisterIndexForJSFunctionArgument(argumentIndex)); 1003 } 1004 1005 inline unsigned numberOfRegisterArgumentsFor(unsigned argumentCount) 1006 { 1007 return std::min(argumentCount, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS); 1008 } 1009 894 1010 // The baseline JIT uses "accumulator" style execution with regT0 (for 64-bit) 895 1011 // and regT0 + regT1 (for 32-bit) serving as the accumulator register(s) for -
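On x86-64 (non-Windows) the six ABI argument registers map exactly as the new helpers encode: argument-register index 0 carries the callee, index 1 the argument count, and JS argument i takes index i + 2, leaving NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS = 4 registers for "this" and the next three arguments. A standalone sketch of the mapping, with register names read off the indexForArgumentRegister table above:

    #include <cstdio>

    static const char* kArgumentRegisters[6] = { "rdi", "rsi", "rdx", "rcx", "r8", "r9" };
    static const unsigned kNumArgumentRegisters = 6;

    // Mirrors argumentRegisterForFunctionArgument(): JS argument 0 is "this".
    static const char* registerForJSArgument(unsigned jsArgument)
    {
        unsigned index = jsArgument + 2; // argumentRegisterIndexForJSFunctionArgument
        return index < kNumArgumentRegisters ? kArgumentRegisters[index] : "stack";
    }

    int main()
    {
        printf("callee: %s, argument count: %s\n", kArgumentRegisters[0], kArgumentRegisters[1]);
        for (unsigned i = 0; i < 6; i++)
            printf("JS argument %u (%s): %s\n", i, i ? "argument" : "this", registerForJSArgument(i));
        return 0;
    }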
trunk/Source/JavaScriptCore/jit/JIT.cpp
r208761 r209653 65 65 CodeLocationCall(MacroAssemblerCodePtr(returnAddress)), 66 66 newCalleeFunction); 67 }68 69 JIT::CodeRef JIT::compileCTINativeCall(VM* vm, NativeFunction func)70 {71 if (!vm->canUseJIT())72 return CodeRef::createLLIntCodeRef(llint_native_call_trampoline);73 JIT jit(vm, 0);74 return jit.privateCompileCTINativeCall(vm, func);75 67 } 76 68 … … 580 572 nop(); 581 573 574 #if USE(JSVALUE64) 575 spillArgumentRegistersToFrameBeforePrologue(static_cast<unsigned>(m_codeBlock->numParameters())); 576 incrementCounter(this, VM::RegArgsNoArity); 577 #if ENABLE(VM_COUNTERS) 578 Jump continueStackEntry = jump(); 579 #endif 580 #endif 581 m_stackArgsArityOKEntry = label(); 582 incrementCounter(this, VM::StackArgsNoArity); 583 584 #if USE(JSVALUE64) && ENABLE(VM_COUNTERS) 585 continueStackEntry.link(this); 586 #endif 587 582 588 emitFunctionPrologue(); 583 589 emitPutToCallFrameHeader(m_codeBlock, CallFrameSlot::codeBlock); … … 636 642 637 643 if (m_codeBlock->codeType() == FunctionCode) { 638 m_arityCheck = label(); 644 m_registerArgsWithArityCheck = label(); 645 646 incrementCounter(this, VM::RegArgsArity); 647 648 spillArgumentRegistersToFrameBeforePrologue(); 649 650 #if ENABLE(VM_COUNTERS) 651 Jump continueStackArityEntry = jump(); 652 #endif 653 654 m_stackArgsWithArityCheck = label(); 655 incrementCounter(this, VM::StackArgsArity); 656 #if ENABLE(VM_COUNTERS) 657 continueStackArityEntry.link(this); 658 #endif 639 659 store8(TrustedImm32(0), &m_codeBlock->m_shouldAlwaysBeInlined); 640 660 emitFunctionPrologue(); … … 643 663 load32(payloadFor(CallFrameSlot::argumentCount), regT1); 644 664 branch32(AboveOrEqual, regT1, TrustedImm32(m_codeBlock->m_numParameters)).linkTo(beginLabel, this); 665 666 incrementCounter(this, VM::ArityFixupRequired); 645 667 646 668 m_bytecodeOffset = 0; … … 779 801 m_codeBlock->setJITCodeMap(jitCodeMapEncoder.finish()); 780 802 781 MacroAssemblerCodePtr withArityCheck; 782 if (m_codeBlock->codeType() == FunctionCode) 783 withArityCheck = patchBuffer.locationOf(m_arityCheck); 803 MacroAssemblerCodePtr stackEntryArityOKPtr = patchBuffer.locationOf(m_stackArgsArityOKEntry); 804 805 MacroAssemblerCodePtr registerEntryWithArityCheckPtr; 806 MacroAssemblerCodePtr stackEntryWithArityCheckPtr; 807 if (m_codeBlock->codeType() == FunctionCode) { 808 registerEntryWithArityCheckPtr = patchBuffer.locationOf(m_registerArgsWithArityCheck); 809 stackEntryWithArityCheckPtr = patchBuffer.locationOf(m_stackArgsWithArityCheck); 810 } 784 811 785 812 if (Options::dumpDisassembly()) { … … 805 832 806 833 m_codeBlock->shrinkToFit(CodeBlock::LateShrink); 834 JITEntryPoints entrypoints(result.code(), registerEntryWithArityCheckPtr, registerEntryWithArityCheckPtr, stackEntryArityOKPtr, stackEntryWithArityCheckPtr); 835 836 unsigned numParameters = static_cast<unsigned>(m_codeBlock->numParameters()); 837 for (unsigned argCount = 1; argCount <= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argCount++) { 838 MacroAssemblerCodePtr entry; 839 if (argCount == numParameters) 840 entry = result.code(); 841 else 842 entry = registerEntryWithArityCheckPtr; 843 entrypoints.setEntryFor(JITEntryPoints::registerEntryTypeForArgumentCount(argCount), entry); 844 } 845 807 846 m_codeBlock->setJITCode( 808 adoptRef(new DirectJITCode( result, withArityCheck, JITCode::BaselineJIT)));847 adoptRef(new DirectJITCode(JITEntryPointsWithRef(result, entrypoints), JITCode::BaselineJIT))); 809 848 810 849 #if ENABLE(JIT_VERBOSE) -
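The baseline function now exposes paired entry points that converge on a single body: each register-argument entry spills its registers into the ordinary call frame slots and then falls through to the corresponding stack-argument label, so the compiled body can keep treating the frame as the source of truth. A minimal structural model of that layout (function calls stand in for labels and fall-through):

    #include <cstdio>

    static void spillArgumentRegistersToFrame() { printf("spill argument registers to frame\n"); }
    static void prologueAndBody() { printf("function prologue + compiled body (reads the frame)\n"); }

    // Register-args entry: spill, then join the shared path.
    static void registerArgsEntry() { spillArgumentRegistersToFrame(); prologueAndBody(); }
    // Stack-args entry: arguments are already in the frame.
    static void stackArgsEntry() { prologueAndBody(); }

    int main()
    {
        registerArgsEntry();
        stackArgsEntry();
        return 0;
    }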
trunk/Source/JavaScriptCore/jit/JIT.h
r208637 r209653 44 44 #include "JITMathIC.h" 45 45 #include "JSInterfaceJIT.h" 46 #include "LowLevelInterpreter.h" 46 47 #include "PCToCodeOriginMap.h" 47 48 #include "UnusedPointer.h" … … 247 248 } 248 249 249 static CodeRef compileCTINativeCall(VM*, NativeFunction); 250 static JITEntryPointsWithRef compileNativeCallEntryPoints(VM* vm, NativeFunction func) 251 { 252 if (!vm->canUseJIT()) { 253 CodeRef nativeCallRef = CodeRef::createLLIntCodeRef(llint_native_call_trampoline); 254 return JITEntryPointsWithRef(nativeCallRef, nativeCallRef.code(), nativeCallRef.code()); 255 } 256 JIT jit(vm, 0); 257 return jit.privateCompileJITEntryNativeCall(vm, func); 258 } 250 259 251 260 static unsigned frameRegisterCountFor(CodeBlock*); … … 267 276 void privateCompileHasIndexedProperty(ByValInfo*, ReturnAddressPtr, JITArrayMode); 268 277 269 Label privateCompileCTINativeCall(VM*, bool isConstruct = false); 270 CodeRef privateCompileCTINativeCall(VM*, NativeFunction); 278 JITEntryPointsWithRef privateCompileJITEntryNativeCall(VM*, NativeFunction); 271 279 void privateCompilePatchGetArrayLength(ReturnAddressPtr returnAddress); 272 280 … … 950 958 unsigned m_byValInstructionIndex; 951 959 unsigned m_callLinkInfoIndex; 952 953 Label m_arityCheck; 960 961 Label m_stackArgsArityOKEntry; 962 Label m_stackArgsWithArityCheck; 963 Label m_registerArgsWithArityCheck; 954 964 std::unique_ptr<LinkBuffer> m_linkBuffer; 955 965 -
trunk/Source/JavaScriptCore/jit/JITCall.cpp
r207475 r209653 92 92 93 93 addPtr(TrustedImm32(sizeof(CallerFrameAndPC)), regT1, stackPointerRegister); 94 incrementCounter(this, VM::BaselineCaller); 95 incrementCounter(this, VM::CallVarargs); 94 96 } 95 97 … … 99 101 storePtr(callFrameRegister, Address(regT1, CallFrame::callerFrameOffset())); 100 102 103 incrementCounter(this, VM::BaselineCaller); 104 incrementCounter(this, VM::CallEval); 105 101 106 addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); 102 107 checkStackPointerAlignment(); … … 114 119 { 115 120 CallLinkInfo* info = m_codeBlock->addCallLinkInfo(); 116 info->setUpCall(CallLinkInfo::Call, CodeOrigin(m_bytecodeOffset), regT0);121 info->setUpCall(CallLinkInfo::Call, StackArgs, CodeOrigin(m_bytecodeOffset), regT0); 117 122 118 123 linkSlowCase(iter); … … 155 160 156 161 CallLinkInfo* info = nullptr; 162 ArgumentsLocation argumentsLocation = StackArgs; 163 157 164 if (opcodeID != op_call_eval) 158 165 info = m_codeBlock->addCallLinkInfo(); … … 160 167 compileSetupVarargsFrame(opcodeID, instruction, info); 161 168 else { 162 int argCount = instruction[3].u.operand;169 unsigned argCount = instruction[3].u.unsignedValue; 163 170 int registerOffset = -instruction[4].u.operand; 164 171 … … 172 179 173 180 addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister); 181 if (argumentsLocation != StackArgs) { 182 move(TrustedImm32(argCount), argumentRegisterForArgumentCount()); 183 unsigned registerArgs = std::min(argCount, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS); 184 for (unsigned arg = 0; arg < registerArgs; arg++) 185 load64(Address(stackPointerRegister, (CallFrameSlot::thisArgument + arg) * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC)), argumentRegisterForFunctionArgument(arg)); 186 } 174 187 store32(TrustedImm32(argCount), Address(stackPointerRegister, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC))); 175 188 } // SP holds newCallFrame + sizeof(CallerFrameAndPC), with ArgumentCount initialized. 189 190 incrementCounter(this, VM::BaselineCaller); 176 191 177 192 uint32_t bytecodeOffset = instruction - m_codeBlock->instructions().begin(); … … 179 194 store32(TrustedImm32(locationBits), Address(callFrameRegister, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + TagOffset)); 180 195 181 emitGetVirtualRegister(callee, regT0); // regT0 holds callee. 
182 store64(regT0, Address(stackPointerRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC))); 196 GPRReg calleeRegister = argumentRegisterForCallee(); 197 198 emitGetVirtualRegister(callee, calleeRegister); 199 store64(calleeRegister, Address(stackPointerRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC))); 183 200 184 201 if (opcodeID == op_call_eval) { … … 188 205 189 206 DataLabelPtr addressOfLinkedFunctionCheck; 190 Jump slowCase = branchPtrWithPatch(NotEqual, regT0, addressOfLinkedFunctionCheck, TrustedImmPtr(0));207 Jump slowCase = branchPtrWithPatch(NotEqual, calleeRegister, addressOfLinkedFunctionCheck, TrustedImmPtr(0)); 191 208 addSlowCase(slowCase); 192 209 193 210 ASSERT(m_callCompilationInfo.size() == callLinkInfoIndex); 194 info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), CodeOrigin(m_bytecodeOffset), regT0);211 info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), argumentsLocation, CodeOrigin(m_bytecodeOffset), calleeRegister); 195 212 m_callCompilationInfo.append(CallCompilationInfo()); 196 213 m_callCompilationInfo[callLinkInfoIndex].hotPathBegin = addressOfLinkedFunctionCheck; … … 198 215 199 216 if (opcodeID == op_tail_call) { 217 incrementCounter(this, VM::TailCall); 218 200 219 CallFrameShuffleData shuffleData; 201 220 shuffleData.tagTypeNumber = GPRInfo::tagTypeNumberRegister; … … 210 229 } 211 230 shuffleData.callee = 212 ValueRecovery::inGPR( regT0, DataFormatJS);231 ValueRecovery::inGPR(calleeRegister, DataFormatJS); 213 232 shuffleData.setupCalleeSaveRegisters(m_codeBlock); 214 233 info->setFrameShuffleData(shuffleData); … … 247 266 emitRestoreCalleeSaves(); 248 267 249 move(TrustedImmPtr(m_callCompilationInfo[callLinkInfoIndex].callLinkInfo), regT2); 250 251 m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getCTIStub(linkCallThunkGenerator).code()); 268 CallLinkInfo* callLinkInfo = m_callCompilationInfo[callLinkInfoIndex].callLinkInfo; 269 move(TrustedImmPtr(callLinkInfo), nonArgGPR0); 270 271 m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation())); 252 272 253 273 if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) { -
trunk/Source/JavaScriptCore/jit/JITCall32_64.cpp
r207475 r209653 204 204 { 205 205 CallLinkInfo* info = m_codeBlock->addCallLinkInfo(); 206 info->setUpCall(CallLinkInfo::Call, CodeOrigin(m_bytecodeOffset), regT0);206 info->setUpCall(CallLinkInfo::Call, StackArgs, CodeOrigin(m_bytecodeOffset), regT0); 207 207 208 208 linkSlowCase(iter); … … 212 212 addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister); 213 213 214 move(TrustedImmPtr(info), regT2);214 move(TrustedImmPtr(info), nonArgGPR0); 215 215 216 216 emitLoad(CallFrameSlot::callee, regT1, regT0); 217 MacroAssemblerCodeRef virtualThunk = virtualThunkFor(m_vm, *info);218 info->setSlowStub(createJITStubRoutine(virtualThunk , *m_vm, nullptr, true));219 emitNakedCall(virtualThunk. code());217 JITJSCallThunkEntryPointsWithRef virtualThunk = virtualThunkFor(m_vm, *info); 218 info->setSlowStub(createJITStubRoutine(virtualThunk.codeRef(), *m_vm, nullptr, true)); 219 emitNakedCall(virtualThunk.entryFor(StackArgs)); 220 220 addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister); 221 221 checkStackPointerAlignment(); … … 287 287 288 288 ASSERT(m_callCompilationInfo.size() == callLinkInfoIndex); 289 info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), CodeOrigin(m_bytecodeOffset), regT0);289 info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), StackArgs, CodeOrigin(m_bytecodeOffset), regT0); 290 290 m_callCompilationInfo.append(CallCompilationInfo()); 291 291 m_callCompilationInfo[callLinkInfoIndex].hotPathBegin = addressOfLinkedFunctionCheck; … … 318 318 linkSlowCase(iter); 319 319 320 move(TrustedImmPtr(m_callCompilationInfo[callLinkInfoIndex].callLinkInfo), regT2); 320 CallLinkInfo* callLinkInfo = m_callCompilationInfo[callLinkInfoIndex].callLinkInfo; 321 move(TrustedImmPtr(callLinkInfo), nonArgGPR0); 321 322 322 323 if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) 323 324 emitRestoreCalleeSaves(); 324 325 325 m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->get CTIStub(linkCallThunkGenerator).code());326 m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation())); 326 327 327 328 if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) { -
trunk/Source/JavaScriptCore/jit/JITCode.cpp
r205569 r209653 76 76 if (!function || !protoCallFrame->needArityCheck()) { 77 77 ASSERT(!protoCallFrame->needArityCheck()); 78 entryAddress = executableAddress();78 entryAddress = addressForCall(StackArgsArityCheckNotRequired).executableAddress(); 79 79 } else 80 entryAddress = addressForCall( MustCheckArity).executableAddress();80 entryAddress = addressForCall(StackArgsMustCheckArity).executableAddress(); 81 81 JSValue result = JSValue::decode(vmEntryToJavaScript(entryAddress, vm, protoCallFrame)); 82 82 return scope.exception() ? jsNull() : result; … … 163 163 } 164 164 165 DirectJITCode::DirectJITCode(JIT Code::CodeRef ref, JITCode::CodePtr withArityCheck, JITType jitType)166 : JITCodeWithCodeRef( ref, jitType)167 , m_ withArityCheck(withArityCheck)165 DirectJITCode::DirectJITCode(JITEntryPointsWithRef entries, JITType jitType) 166 : JITCodeWithCodeRef(entries.codeRef(), jitType) 167 , m_entryPoints(entries) 168 168 { 169 169 } … … 173 173 } 174 174 175 void DirectJITCode::initialize CodeRef(JITCode::CodeRef ref, JITCode::CodePtr withArityCheck)175 void DirectJITCode::initializeEntryPoints(JITEntryPointsWithRef entries) 176 176 { 177 177 RELEASE_ASSERT(!m_ref); 178 m_ref = ref; 179 m_withArityCheck = withArityCheck; 180 } 181 182 JITCode::CodePtr DirectJITCode::addressForCall(ArityCheckMode arity) 183 { 184 switch (arity) { 185 case ArityCheckNotRequired: 186 RELEASE_ASSERT(m_ref); 187 return m_ref.code(); 188 case MustCheckArity: 189 RELEASE_ASSERT(m_withArityCheck); 190 return m_withArityCheck; 191 } 192 RELEASE_ASSERT_NOT_REACHED(); 193 return CodePtr(); 178 m_ref = entries.codeRef(); 179 m_entryPoints = entries; 180 } 181 182 JITCode::CodePtr DirectJITCode::addressForCall(EntryPointType type) 183 { 184 return m_entryPoints.entryFor(type); 194 185 } 195 186 … … 214 205 } 215 206 216 JITCode::CodePtr NativeJITCode::addressForCall( ArityCheckMode)207 JITCode::CodePtr NativeJITCode::addressForCall(EntryPointType) 217 208 { 218 209 RELEASE_ASSERT(!!m_ref); -
trunk/Source/JavaScriptCore/jit/JITCode.h
r208985 r209653 26 26 #pragma once 27 27 28 #include "ArityCheckMode.h"29 28 #include "CallFrame.h" 30 29 #include "CodeOrigin.h" 31 30 #include "Disassembler.h" 31 #include "JITEntryPoints.h" 32 32 #include "JSCJSValue.h" 33 33 #include "MacroAssemblerCodeRef.h" … … 174 174 } 175 175 176 virtual CodePtr addressForCall( ArityCheckMode) = 0;176 virtual CodePtr addressForCall(EntryPointType) = 0; 177 177 virtual void* executableAddressAtOffset(size_t offset) = 0; 178 void* executableAddress() { return executableAddressAtOffset(0); }179 178 virtual void* dataAddressAtOffset(size_t offset) = 0; 180 179 virtual unsigned offsetOf(void* pointerIntoCode) = 0; … … 225 224 public: 226 225 DirectJITCode(JITType); 227 DirectJITCode( CodeRef, CodePtr withArityCheck, JITType);226 DirectJITCode(JITEntryPointsWithRef, JITType); 228 227 virtual ~DirectJITCode(); 229 228 230 void initialize CodeRef(CodeRef, CodePtr withArityCheck);231 232 CodePtr addressForCall( ArityCheckMode) override;229 void initializeEntryPoints(JITEntryPointsWithRef); 230 231 CodePtr addressForCall(EntryPointType) override; 233 232 234 233 private: 235 CodePtr m_withArityCheck;234 JITEntryPoints m_entryPoints; 236 235 }; 237 236 … … 244 243 void initializeCodeRef(CodeRef); 245 244 246 CodePtr addressForCall( ArityCheckMode) override;245 CodePtr addressForCall(EntryPointType) override; 247 246 }; 248 247 -
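DirectJITCode's old two-way split (a main pointer plus an arity-checked pointer) becomes a table keyed by EntryPointType, so a caller asks for exactly the combination of argument location and arity guarantee it can offer. A standalone model of the lookup (the enumerators are the ones this patch uses elsewhere; the struct layout itself is illustrative):

    #include <cstdio>

    enum EntryPointType {
        StackArgsArityCheckNotRequired,
        StackArgsMustCheckArity,
        RegisterArgsArityCheckNotRequired,
        RegisterArgsPossibleExtraArgs,
        RegisterArgsMustCheckArity,
        NumberOfEntryPointTypes
    };

    struct ModelEntryPoints {
        void* entries[NumberOfEntryPointTypes] = { };
        void* entryFor(EntryPointType type) const { return entries[type]; }
    };

    int main()
    {
        ModelEntryPoints points;
        int stub = 0; // stands in for generated code
        points.entries[StackArgsMustCheckArity] = &stub;
        printf("StackArgsMustCheckArity entry: %p\n", points.entryFor(StackArgsMustCheckArity));
        return 0;
    }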
trunk/Source/JavaScriptCore/jit/JITOpcodes.cpp
r209570 r209653 50 50 #if USE(JSVALUE64) 51 51 52 JIT ::CodeRef JIT::privateCompileCTINativeCall(VM* vm, NativeFunction)53 { 54 return vm->get CTIStub(nativeCallGenerator);52 JITEntryPointsWithRef JIT::privateCompileJITEntryNativeCall(VM* vm, NativeFunction) 53 { 54 return vm->getJITEntryStub(nativeCallGenerator); 55 55 } 56 56 -
trunk/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
r209647 r209653 47 47 namespace JSC { 48 48 49 JIT ::CodeRef JIT::privateCompileCTINativeCall(VM* vm, NativeFunction func)49 JITEntryPointsWithRef JIT::privateCompileJITEntryNativeCall(VM* vm, NativeFunction func) 50 50 { 51 51 // FIXME: This should be able to log ShadowChicken prologue packets. … … 130 130 131 131 patchBuffer.link(nativeCall, FunctionPtr(func)); 132 return FINALIZE_CODE(patchBuffer, ("JIT CTI native call")); 132 JIT::CodeRef codeRef = FINALIZE_CODE(patchBuffer, ("JIT CTI native call")); 133 134 return JITEntryPointsWithRef(codeRef, codeRef.code(), codeRef.code()); 133 135 } 134 136 -
trunk/Source/JavaScriptCore/jit/JITOperations.cpp
r209570 r209653 891 891 ExecutableBase* executable = callee->executable(); 892 892 893 MacroAssemblerCodePtr codePtr ;893 MacroAssemblerCodePtr codePtr, codePtrForLinking; 894 894 CodeBlock* codeBlock = 0; 895 895 if (executable->isHostFunction()) { 896 codePtr = executable->entrypointFor(kind, MustCheckArity); 896 codePtr = executable->entrypointFor(kind, StackArgsMustCheckArity); 897 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 898 if (callLinkInfo->argumentsInRegisters()) 899 codePtrForLinking = executable->entrypointFor(kind, RegisterArgsMustCheckArity); 900 #endif 897 901 } else { 898 902 FunctionExecutable* functionExecutable = static_cast<FunctionExecutable*>(executable); … … 915 919 } 916 920 codeBlock = *codeBlockSlot; 917 ArityCheckMode arity; 918 if (execCallee->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo->isVarargs()) 919 arity = MustCheckArity; 920 else 921 arity = ArityCheckNotRequired; 922 codePtr = functionExecutable->entrypointFor(kind, arity); 921 EntryPointType entryType; 922 size_t callerArgumentCount = execCallee->argumentCountIncludingThis(); 923 size_t calleeArgumentCount = static_cast<size_t>(codeBlock->numParameters()); 924 if (callerArgumentCount < calleeArgumentCount || callLinkInfo->isVarargs()) { 925 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 926 if (callLinkInfo->argumentsInRegisters()) { 927 codePtrForLinking = functionExecutable->entrypointFor(kind, JITEntryPoints::registerEntryTypeForArgumentCount(callerArgumentCount)); 928 if (!codePtrForLinking) 929 codePtrForLinking = functionExecutable->entrypointFor(kind, RegisterArgsMustCheckArity); 930 } 931 #endif 932 entryType = StackArgsMustCheckArity; 933 (void) functionExecutable->entrypointFor(kind, entryPointTypeFor(callLinkInfo->argumentsLocation())); 934 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 935 } else if (callLinkInfo->argumentsInRegisters()) { 936 if (callerArgumentCount == calleeArgumentCount || calleeArgumentCount >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) 937 codePtrForLinking = functionExecutable->entrypointFor(kind, RegisterArgsArityCheckNotRequired); 938 else { 939 codePtrForLinking = functionExecutable->entrypointFor(kind, JITEntryPoints::registerEntryTypeForArgumentCount(callerArgumentCount)); 940 if (!codePtrForLinking) 941 codePtrForLinking = functionExecutable->entrypointFor(kind, RegisterArgsPossibleExtraArgs); 942 } 943 // Prepopulate the entry points the virtual thunk might use. 944 (void) functionExecutable->entrypointFor(kind, entryPointTypeFor(callLinkInfo->argumentsLocation())); 945 946 entryType = StackArgsArityCheckNotRequired; 947 #endif 948 } else 949 entryType = StackArgsArityCheckNotRequired; 950 codePtr = functionExecutable->entrypointFor(kind, entryType); 923 951 } 924 952 if (!callLinkInfo->seenOnce()) 925 953 callLinkInfo->setSeen(); 926 954 else 927 linkFor(execCallee, *callLinkInfo, codeBlock, callee, codePtr );955 linkFor(execCallee, *callLinkInfo, codeBlock, callee, codePtrForLinking ? codePtrForLinking : codePtr); 928 956 929 957 return encodeResult(codePtr.executableAddress(), reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? ReuseTheFrame : KeepTheFrame)); … … 960 988 CodeBlock* codeBlock = nullptr; 961 989 if (executable->isHostFunction()) 962 codePtr = executable->entrypointFor(kind, MustCheckArity); 990 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 991 codePtr = executable->entrypointFor(kind, callLinkInfo->argumentsInRegisters() ? 
RegisterArgsMustCheckArity : StackArgsMustCheckArity); 992 #else 993 codePtr = executable->entrypointFor(kind, StackArgsMustCheckArity); 994 #endif 963 995 else { 964 996 FunctionExecutable* functionExecutable = static_cast<FunctionExecutable*>(executable); … … 972 1004 return; 973 1005 } 974 ArityCheckMode arity;1006 EntryPointType entryType; 975 1007 unsigned argumentStackSlots = callLinkInfo->maxNumArguments(); 976 if (argumentStackSlots < static_cast<size_t>(codeBlock->numParameters())) 977 arity = MustCheckArity; 1008 size_t codeBlockParameterCount = static_cast<size_t>(codeBlock->numParameters()); 1009 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 1010 if (callLinkInfo->argumentsInRegisters()) { 1011 // This logic could probably be simplified! 1012 if (argumentStackSlots < codeBlockParameterCount) 1013 entryType = entryPointTypeFor(callLinkInfo->argumentsLocation()); 1014 else if (argumentStackSlots > NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) { 1015 if (codeBlockParameterCount < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) 1016 entryType = RegisterArgsPossibleExtraArgs; 1017 else 1018 entryType = RegisterArgsArityCheckNotRequired; 1019 } else 1020 entryType = registerEntryPointTypeFor(argumentStackSlots); 1021 } else if (argumentStackSlots < codeBlockParameterCount) 1022 #else 1023 if (argumentStackSlots < codeBlockParameterCount) 1024 #endif 1025 entryType = StackArgsMustCheckArity; 978 1026 else 979 arity =ArityCheckNotRequired;980 codePtr = functionExecutable->entrypointFor(kind, arity);1027 entryType = StackArgsArityCheckNotRequired; 1028 codePtr = functionExecutable->entrypointFor(kind, entryType); 981 1029 } 982 1030 … … 1021 1069 } 1022 1070 } 1071 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 1072 if (callLinkInfo->argumentsInRegisters()) { 1073 // Pull into the cache the arity check register entry if the caller wants a register entry. 1074 // This will be used by the generic virtual call thunk. 1075 (void) executable->entrypointFor(kind, RegisterArgsMustCheckArity); 1076 (void) executable->entrypointFor(kind, entryPointTypeFor(callLinkInfo->argumentsLocation())); 1077 1078 } 1079 #endif 1023 1080 return encodeResult(executable->entrypointFor( 1024 kind, MustCheckArity).executableAddress(),1081 kind, StackArgsMustCheckArity).executableAddress(), 1025 1082 reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? ReuseTheFrame : KeepTheFrame)); 1026 1083 } -
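Stripped of the feature guards, the entry-point choice in operationLinkCall is a small decision table: arity-short or varargs callers must go through an arity-checking entry, callers whose counts match (or whose callee takes at least as many parameters as there are argument registers) can skip the check, and a caller passing more register arguments than the callee declares needs an entry that spills the extras. A distilled restatement in plain C++, with an assumed register count and without the per-argument-count entry lookup the real code tries first:

    // Model of the choice above; not the real JSC code.
    enum EntryType {
        StackMustCheckArity, StackNoArityCheck,
        RegisterMustCheckArity, RegisterPossibleExtraArgs, RegisterNoArityCheck
    };

    constexpr unsigned kNumArgRegs = 6; // e.g. x86-64 System V in this patch

    EntryType chooseEntry(bool argsInRegisters, bool isVarargs,
                          unsigned callerArgs, unsigned calleeParams)
    {
        if (callerArgs < calleeParams || isVarargs)
            return argsInRegisters ? RegisterMustCheckArity : StackMustCheckArity;
        if (!argsInRegisters)
            return StackNoArityCheck;
        if (callerArgs == calleeParams || calleeParams >= kNumArgRegs)
            return RegisterNoArityCheck;
        // Caller passes more register arguments than the callee declares:
        // the chosen entry must spill the extras into their frame slots.
        return RegisterPossibleExtraArgs;
    }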
trunk/Source/JavaScriptCore/jit/JITThunks.cpp
r208320 r209653 45 45 } 46 46 47 MacroAssemblerCodePtr JITThunks::ctiNativeCall(VM* vm)47 JITEntryPointsWithRef JITThunks::jitEntryNativeCall(VM* vm) 48 48 { 49 if (!vm->canUseJIT()) 50 return MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_call_trampoline); 51 return ctiStub(vm, nativeCallGenerator).code(); 49 if (!vm->canUseJIT()) { 50 MacroAssemblerCodePtr nativeCallStub = MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_call_trampoline); 51 return JITEntryPointsWithRef(MacroAssemblerCodeRef::createSelfManagedCodeRef(nativeCallStub), nativeCallStub, nativeCallStub); 52 } 53 return jitEntryStub(vm, nativeCallGenerator); 52 54 } 53 55 54 MacroAssemblerCodePtr JITThunks::ctiNativeConstruct(VM* vm)56 JITEntryPointsWithRef JITThunks::jitEntryNativeConstruct(VM* vm) 55 57 { 56 if (!vm->canUseJIT()) 57 return MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_construct_trampoline); 58 return ctiStub(vm, nativeConstructGenerator).code(); 58 if (!vm->canUseJIT()) { 59 MacroAssemblerCodePtr nativeConstructStub = MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_construct_trampoline); 60 return JITEntryPointsWithRef(MacroAssemblerCodeRef::createSelfManagedCodeRef(nativeConstructStub), nativeConstructStub, nativeConstructStub); 61 } 62 return jitEntryStub(vm, nativeConstructGenerator); 59 63 } 60 64 … … 83 87 } 84 88 89 JITEntryPointsWithRef JITThunks::jitEntryStub(VM* vm, JITEntryGenerator generator) 90 { 91 LockHolder locker(m_lock); 92 JITEntryStubMap::AddResult entry = m_jitEntryStubMap.add(generator, JITEntryPointsWithRef()); 93 if (entry.isNewEntry) { 94 // Compilation thread can only retrieve existing entries. 95 ASSERT(!isCompilationThread()); 96 entry.iterator->value = generator(vm); 97 } 98 return entry.iterator->value; 99 } 100 101 JITJSCallThunkEntryPointsWithRef JITThunks::jitCallThunkEntryStub(VM* vm, JITCallThunkEntryGenerator generator) 102 { 103 LockHolder locker(m_lock); 104 JITCallThunkEntryStubMap::AddResult entry = m_jitCallThunkEntryStubMap.add(generator, JITJSCallThunkEntryPointsWithRef()); 105 if (entry.isNewEntry) { 106 // Compilation thread can only retrieve existing entries. 
107 ASSERT(!isCompilationThread()); 108 entry.iterator->value = generator(vm); 109 } 110 return entry.iterator->value; 111 } 112 85 113 void JITThunks::finalize(Handle<Unknown> handle, void*) 86 114 { … … 94 122 } 95 123 96 NativeExecutable* JITThunks::hostFunctionStub(VM* vm, NativeFunction function, NativeFunction constructor, ThunkGenerator generator, Intrinsic intrinsic, const DOMJIT::Signature* signature, const String& name)124 NativeExecutable* JITThunks::hostFunctionStub(VM* vm, NativeFunction function, NativeFunction constructor, JITEntryGenerator generator, Intrinsic intrinsic, const DOMJIT::Signature* signature, const String& name) 97 125 { 98 126 ASSERT(!isCompilationThread()); … … 104 132 RefPtr<JITCode> forCall; 105 133 if (generator) { 106 MacroAssemblerCodeRef entry = generator(vm);107 forCall = adoptRef(new DirectJITCode(entry, entry.code(),JITCode::HostCallThunk));134 JITEntryPointsWithRef entry = generator(vm); 135 forCall = adoptRef(new DirectJITCode(entry, JITCode::HostCallThunk)); 108 136 } else 109 forCall = adoptRef(new NativeJITCode(JIT::compileCTINativeCall(vm, function), JITCode::HostCallThunk));137 forCall = adoptRef(new DirectJITCode(JIT::compileNativeCallEntryPoints(vm, function), JITCode::HostCallThunk)); 110 138 111 RefPtr<JITCode> forConstruct = adoptRef(new NativeJITCode(MacroAssemblerCodeRef::createSelfManagedCodeRef(ctiNativeConstruct(vm)), JITCode::HostCallThunk));139 RefPtr<JITCode> forConstruct = adoptRef(new DirectJITCode(jitEntryNativeConstruct(vm), JITCode::HostCallThunk)); 112 140 113 141 NativeExecutable* nativeExecutable = NativeExecutable::create(*vm, forCall, function, forConstruct, constructor, intrinsic, signature, name); … … 116 144 } 117 145 118 NativeExecutable* JITThunks::hostFunctionStub(VM* vm, NativeFunction function, ThunkGenerator generator, Intrinsic intrinsic, const String& name)146 NativeExecutable* JITThunks::hostFunctionStub(VM* vm, NativeFunction function, JITEntryGenerator generator, Intrinsic intrinsic, const String& name) 119 147 { 120 148 return hostFunctionStub(vm, function, callHostFunctionAsConstructor, generator, intrinsic, nullptr, name); -
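The two new stub caches follow the same generate-once discipline as the existing ctiStub() map: take the lock, add-or-find by generator pointer, and run the generator only on first request. The pattern in isolation, modeled with standard containers in place of WTF::HashMap and WTF::Lock:

    #include <map>
    #include <mutex>

    struct Stub { const void* code = nullptr; }; // stands in for the entry-points type
    using Generator = Stub (*)();

    Stub cachedStub(Generator generator)
    {
        static std::mutex lock;
        static std::map<Generator, Stub> stubs;

        std::lock_guard<std::mutex> holder(lock);
        auto result = stubs.emplace(generator, Stub());
        if (result.second) // first request for this generator: compile it
            result.first->second = generator();
        return result.first->second;
    }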
trunk/Source/JavaScriptCore/jit/JITThunks.h
r208320 r209653 30 30 #include "CallData.h" 31 31 #include "Intrinsic.h" 32 #include "JITEntryPoints.h" 32 33 #include "MacroAssemblerCodeRef.h" 33 34 #include "ThunkGenerator.h" … … 53 54 virtual ~JITThunks(); 54 55 55 MacroAssemblerCodePtr ctiNativeCall(VM*);56 MacroAssemblerCodePtr ctiNativeConstruct(VM*);56 JITEntryPointsWithRef jitEntryNativeCall(VM*); 57 JITEntryPointsWithRef jitEntryNativeConstruct(VM*); 57 58 MacroAssemblerCodePtr ctiNativeTailCall(VM*); 58 59 MacroAssemblerCodePtr ctiNativeTailCallWithoutSavedTags(VM*); 59 60 60 61 MacroAssemblerCodeRef ctiStub(VM*, ThunkGenerator); 62 JITEntryPointsWithRef jitEntryStub(VM*, JITEntryGenerator); 63 JITJSCallThunkEntryPointsWithRef jitCallThunkEntryStub(VM*, JITCallThunkEntryGenerator); 61 64 62 65 NativeExecutable* hostFunctionStub(VM*, NativeFunction, NativeFunction constructor, const String& name); 63 NativeExecutable* hostFunctionStub(VM*, NativeFunction, NativeFunction constructor, ThunkGenerator, Intrinsic, const DOMJIT::Signature*, const String& name);64 NativeExecutable* hostFunctionStub(VM*, NativeFunction, ThunkGenerator, Intrinsic, const String& name);66 NativeExecutable* hostFunctionStub(VM*, NativeFunction, NativeFunction constructor, JITEntryGenerator, Intrinsic, const DOMJIT::Signature*, const String& name); 67 NativeExecutable* hostFunctionStub(VM*, NativeFunction, JITEntryGenerator, Intrinsic, const String& name); 65 68 66 69 void clearHostFunctionStubs(); … … 71 74 typedef HashMap<ThunkGenerator, MacroAssemblerCodeRef> CTIStubMap; 72 75 CTIStubMap m_ctiStubMap; 76 typedef HashMap<JITEntryGenerator, JITEntryPointsWithRef> JITEntryStubMap; 77 JITEntryStubMap m_jitEntryStubMap; 78 typedef HashMap<JITCallThunkEntryGenerator, JITJSCallThunkEntryPointsWithRef> JITCallThunkEntryStubMap; 79 JITCallThunkEntryStubMap m_jitCallThunkEntryStubMap; 73 80 74 81 typedef std::tuple<NativeFunction, NativeFunction, String> HostFunctionKey; -
trunk/Source/JavaScriptCore/jit/JSInterfaceJIT.h
r206525 r209653 64 64 Jump emitJumpIfNumber(RegisterID); 65 65 Jump emitJumpIfNotNumber(RegisterID); 66 Jump emitJumpIfNotInt32(RegisterID reg); 66 67 void emitTagInt(RegisterID src, RegisterID dest); 67 68 #endif … … 164 165 } 165 166 167 inline JSInterfaceJIT::Jump JSInterfaceJIT::emitJumpIfNotInt32(RegisterID reg) 168 { 169 Jump result = branch64(Below, reg, tagTypeNumberRegister); 170 zeroExtend32ToPtr(reg, reg); 171 return result; 172 } 173 166 174 inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadInt32(unsigned virtualRegisterIndex, RegisterID dst) 167 175 { 168 176 load64(addressFor(virtualRegisterIndex), dst); 169 Jump result = branch64(Below, dst, tagTypeNumberRegister); 170 zeroExtend32ToPtr(dst, dst); 171 return result; 177 return emitJumpIfNotInt32(dst); 172 178 } 173 179 -
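The hoisted emitJumpIfNotInt32() depends on the JSVALUE64 value encoding: a boxed int32 is its 32-bit payload combined with the TagTypeNumber bits 0xffff000000000000, so a single unsigned 64-bit comparison against that constant classifies the value, which is what branch64(Below, reg, tagTypeNumberRegister) tests. The same check in plain C++:

    #include <cstdint>

    constexpr uint64_t TagTypeNumber = 0xffff000000000000ull;

    // branch64(Below, reg, tagTypeNumberRegister) takes the "not int32"
    // path exactly when this predicate is false.
    bool isBoxedInt32(uint64_t bits)
    {
        return bits >= TagTypeNumber;
    }

    // Mirrors the zeroExtend32ToPtr(reg, reg) that follows a successful check.
    int32_t unboxInt32(uint64_t bits)
    {
        return static_cast<int32_t>(bits);
    }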
trunk/Source/JavaScriptCore/jit/RegisterSet.cpp
r209560 r209653 160 160 } 161 161 162 RegisterSet RegisterSet::argumentRegisters() 163 { 164 RegisterSet result; 165 #if USE(JSVALUE64) 166 for (unsigned argumentIndex = 0; argumentIndex < NUMBER_OF_ARGUMENT_REGISTERS; argumentIndex++) { 167 GPRReg argumentReg = argumentRegisterFor(argumentIndex); 168 169 if (argumentReg != InvalidGPRReg) 170 result.set(argumentReg); 171 } 172 #endif 173 return result; 174 } 175 162 176 RegisterSet RegisterSet::vmCalleeSaveRegisters() 163 177 { -
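Conceptually, argumentRegisters() just builds a set over whatever argumentRegisterFor() yields for each index. A self-contained model using a bitmask and made-up register numbers (the real values come from GPRInfo and differ per platform):

    #include <cstdint>

    constexpr int kInvalidReg = -1;
    // Illustrative numbering in x86-64 System V order (rdi, rsi, rdx, rcx,
    // r8, r9); not the actual GPRInfo encoding.
    constexpr int kArgumentRegs[] = { 7, 6, 2, 1, 8, 9 };

    uint32_t argumentRegisterMask()
    {
        uint32_t result = 0;
        for (int reg : kArgumentRegs) {
            if (reg != kInvalidReg)
                result |= uint32_t(1) << reg;
        }
        return result;
    }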
trunk/Source/JavaScriptCore/jit/RegisterSet.h
r207434 r209653 50 50 static RegisterSet specialRegisters(); // The union of stack, reserved hardware, and runtime registers. 51 51 JS_EXPORT_PRIVATE static RegisterSet calleeSaveRegisters(); 52 static RegisterSet argumentRegisters(); // Registers used to pass arguments when making JS calls. 52 53 static RegisterSet vmCalleeSaveRegisters(); // Callee save registers that might be saved and used by any tier. 53 54 static RegisterSet llintBaselineCalleeSaveRegisters(); // Registers saved and used by the LLInt. -
trunk/Source/JavaScriptCore/jit/Repatch.cpp
r209597 r209653 541 541 } 542 542 543 static void linkSlowFor(VM*, CallLinkInfo& callLinkInfo, MacroAssemblerCodeRef codeRef)544 { 545 MacroAssembler::repatchNearCall(callLinkInfo.callReturnLocation(), CodeLocationLabel( codeRef.code()));546 } 547 548 static void linkSlowFor(VM* vm, CallLinkInfo& callLinkInfo, ThunkGenerator generator)549 { 550 linkSlowFor(vm, callLinkInfo, vm->get CTIStub(generator));543 static void linkSlowFor(VM*, CallLinkInfo& callLinkInfo, JITJSCallThunkEntryPointsWithRef thunkEntryPoints) 544 { 545 MacroAssembler::repatchNearCall(callLinkInfo.callReturnLocation(), CodeLocationLabel(thunkEntryPoints.entryFor(callLinkInfo.argumentsLocation()))); 546 } 547 548 static void linkSlowFor(VM* vm, CallLinkInfo& callLinkInfo, JITCallThunkEntryGenerator generator) 549 { 550 linkSlowFor(vm, callLinkInfo, vm->getJITCallThunkEntryStub(generator)); 551 551 } 552 552 553 553 static void linkSlowFor(VM* vm, CallLinkInfo& callLinkInfo) 554 554 { 555 MacroAssemblerCodeRef virtualThunk = virtualThunkFor(vm, callLinkInfo);555 JITJSCallThunkEntryPointsWithRef virtualThunk = virtualThunkFor(vm, callLinkInfo); 556 556 linkSlowFor(vm, callLinkInfo, virtualThunk); 557 callLinkInfo.setSlowStub(createJITStubRoutine(virtualThunk , *vm, nullptr, true));557 callLinkInfo.setSlowStub(createJITStubRoutine(virtualThunk.codeRef(), *vm, nullptr, true)); 558 558 } 559 559 … … 645 645 } 646 646 647 static void revertCall(VM* vm, CallLinkInfo& callLinkInfo, MacroAssemblerCodeRef codeRef)647 static void revertCall(VM* vm, CallLinkInfo& callLinkInfo, JITJSCallThunkEntryPointsWithRef codeRef) 648 648 { 649 649 if (callLinkInfo.isDirect()) { … … 672 672 dataLog("Unlinking call at ", callLinkInfo.hotPathOther(), "\n"); 673 673 674 revertCall(&vm, callLinkInfo, vm.get CTIStub(linkCallThunkGenerator));674 revertCall(&vm, callLinkInfo, vm.getJITCallThunkEntryStub(linkCallThunkGenerator)); 675 675 } 676 676 … … 684 684 dataLog("Linking virtual call at ", *callerCodeBlock, " ", callerFrame->codeOrigin(), "\n"); 685 685 686 MacroAssemblerCodeRef virtualThunk = virtualThunkFor(&vm, callLinkInfo);686 JITJSCallThunkEntryPointsWithRef virtualThunk = virtualThunkFor(&vm, callLinkInfo); 687 687 revertCall(&vm, callLinkInfo, virtualThunk); 688 callLinkInfo.setSlowStub(createJITStubRoutine(virtualThunk , vm, nullptr, true));688 callLinkInfo.setSlowStub(createJITStubRoutine(virtualThunk.codeRef(), vm, nullptr, true)); 689 689 } 690 690 … … 741 741 742 742 Vector<PolymorphicCallCase> callCases; 743 size_t callerArgumentCount = exec->argumentCountIncludingThis(); 743 744 744 745 // Figure out what our cases are. … … 752 753 // If we cannot handle a callee, either because we don't have a CodeBlock or because arity mismatch, 753 754 // assume that it's better for this whole thing to be a virtual call. 
754 if (!codeBlock || exec->argumentCountIncludingThis()< static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo.isVarargs()) {755 if (!codeBlock || callerArgumentCount < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo.isVarargs()) { 755 756 linkVirtualFor(exec, callLinkInfo); 756 757 return; … … 776 777 777 778 GPRReg calleeGPR = static_cast<GPRReg>(callLinkInfo.calleeGPR()); 778 779 780 if (callLinkInfo.argumentsInRegisters()) 781 ASSERT(calleeGPR == argumentRegisterForCallee()); 782 779 783 CCallHelpers stubJit(&vm, callerCodeBlock); 780 784 … … 798 802 if (frameShuffler) 799 803 scratchGPR = frameShuffler->acquireGPR(); 804 else if (callLinkInfo.argumentsInRegisters()) 805 scratchGPR = GPRInfo::nonArgGPR0; 800 806 else 801 807 scratchGPR = AssemblyHelpers::selectScratchGPR(calleeGPR); … … 863 869 if (frameShuffler) 864 870 fastCountsBaseGPR = frameShuffler->acquireGPR(); 871 else if (callLinkInfo.argumentsInRegisters()) 872 #if CPU(ARM64) 873 fastCountsBaseGPR = GPRInfo::nonArgGPR1; 874 #else 875 fastCountsBaseGPR = GPRInfo::regT0; 876 #endif 865 877 else { 866 878 fastCountsBaseGPR = 867 879 AssemblyHelpers::selectScratchGPR(calleeGPR, comparisonValueGPR, GPRInfo::regT3); 868 880 } 869 stubJit.move(CCallHelpers::TrustedImmPtr(fastCounts.get()), fastCountsBaseGPR); 881 if (fastCounts) 882 stubJit.move(CCallHelpers::TrustedImmPtr(fastCounts.get()), fastCountsBaseGPR); 870 883 if (!frameShuffler && callLinkInfo.isTailCall()) 871 884 stubJit.emitRestoreCalleeSaves(); 885 886 incrementCounter(&stubJit, VM::PolymorphicCall); 887 872 888 BinarySwitch binarySwitch(comparisonValueGPR, caseValues, BinarySwitch::IntPtr); 873 889 CCallHelpers::JumpList done; … … 878 894 879 895 ASSERT(variant.executable()->hasJITCodeForCall()); 896 897 EntryPointType entryType = StackArgsArityCheckNotRequired; 898 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 899 if (callLinkInfo.argumentsInRegisters()) { 900 CodeBlock* codeBlock = callCases[caseIndex].codeBlock(); 901 if (codeBlock) { 902 size_t calleeArgumentCount = static_cast<size_t>(codeBlock->numParameters()); 903 if (calleeArgumentCount == callerArgumentCount || calleeArgumentCount >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) 904 entryType = RegisterArgsArityCheckNotRequired; 905 else { 906 EntryPointType entryForArgCount = JITEntryPoints::registerEntryTypeForArgumentCount(callerArgumentCount); 907 MacroAssemblerCodePtr codePtr = 908 variant.executable()->generatedJITCodeForCall()->addressForCall(entryForArgCount); 909 if (codePtr) 910 entryType = entryForArgCount; 911 else 912 entryType = RegisterArgsPossibleExtraArgs; 913 } 914 } else 915 entryType = RegisterArgsPossibleExtraArgs; 916 } 917 #endif 918 880 919 MacroAssemblerCodePtr codePtr = 881 variant.executable()->generatedJITCodeForCall()->addressForCall(ArityCheckNotRequired); 920 variant.executable()->generatedJITCodeForCall()->addressForCall(entryType); 921 ASSERT(codePtr); 882 922 883 923 if (fastCounts) { … … 887 927 } 888 928 if (frameShuffler) { 889 CallFrameShuffler(stubJit, frameShuffler->snapshot( )).prepareForTailCall();929 CallFrameShuffler(stubJit, frameShuffler->snapshot(callLinkInfo.argumentsLocation())).prepareForTailCall(); 890 930 calls[caseIndex].call = stubJit.nearTailCall(); 891 931 } else if (callLinkInfo.isTailCall()) { … … 908 948 frameShuffler->setCalleeJSValueRegs(JSValueRegs(GPRInfo::regT1, GPRInfo::regT0)); 909 949 #else 910 frameShuffler->setCalleeJSValueRegs(JSValueRegs(GPRInfo::regT0)); 950 if (callLinkInfo.argumentsLocation() == StackArgs) 951 
frameShuffler->setCalleeJSValueRegs(JSValueRegs(argumentRegisterForCallee())); 911 952 #endif 912 953 frameShuffler->prepareForSlowPath(); 913 954 } else { 914 stubJit.move(calleeGPR, GPRInfo::regT0);915 955 #if USE(JSVALUE32_64) 916 956 stubJit.move(CCallHelpers::TrustedImm32(JSValue::CellTag), GPRInfo::regT1); 917 957 #endif 918 958 } 919 stubJit.move(CCallHelpers::TrustedImmPtr( &callLinkInfo), GPRInfo::regT2);920 stubJit. move(CCallHelpers::TrustedImmPtr(callLinkInfo.callReturnLocation().executableAddress()), GPRInfo::regT4);921 922 stubJit. restoreReturnAddressBeforeReturn(GPRInfo::regT4);959 stubJit.move(CCallHelpers::TrustedImmPtr(callLinkInfo.callReturnLocation().executableAddress()), GPRInfo::nonArgGPR1); 960 stubJit.restoreReturnAddressBeforeReturn(GPRInfo::nonArgGPR1); 961 962 stubJit.move(CCallHelpers::TrustedImmPtr(&callLinkInfo), GPRInfo::nonArgGPR0); 923 963 AssemblyHelpers::Jump slow = stubJit.jump(); 924 964 … … 941 981 else 942 982 patchBuffer.link(done, callLinkInfo.hotPathOther().labelAtOffset(0)); 943 patchBuffer.link(slow, CodeLocationLabel(vm.get CTIStub(linkPolymorphicCallThunkGenerator).code()));983 patchBuffer.link(slow, CodeLocationLabel(vm.getJITCallThunkEntryStub(linkPolymorphicCallThunkGenerator).entryFor(callLinkInfo.argumentsLocation()))); 944 984 945 985 auto stubRoutine = adoptRef(*new PolymorphicCallStubRoutine( -
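The stub emitted by linkPolymorphicCall is, at its core, a dispatch over the callee pointers seen so far: each hit jumps to the entry point selected for that variant, and every miss falls through to the relinking slow path. The control flow restated as ordinary C++ (the generated code compares with a BinarySwitch and jumps rather than calls):

    struct CallCase {
        const void* callee;   // callee identity compared by the stub
        void (*target)();     // that variant's chosen entry point
    };

    void polymorphicDispatch(const void* callee,
                             const CallCase* cases, int caseCount,
                             void (*slowPath)())
    {
        for (int i = 0; i < caseCount; i++) { // BinarySwitch: O(log n), not linear
            if (cases[i].callee == callee) {
                cases[i].target();
                return;
            }
        }
        slowPath(); // unknown callee: operationLinkPolymorphicCall re-links
    }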
trunk/Source/JavaScriptCore/jit/SpecializedThunkJIT.h
r208063 r209653 29 29 30 30 #include "JIT.h" 31 #include "JITEntryPoints.h" 31 32 #include "JITInlines.h" 32 33 #include "JSInterfaceJIT.h" … … 38 39 public: 39 40 static const int ThisArgument = -1; 40 SpecializedThunkJIT(VM* vm, int expectedArgCount) 41 enum ArgLocation { OnStack, InRegisters }; 42 43 SpecializedThunkJIT(VM* vm, int expectedArgCount, AssemblyHelpers::SpillRegisterType spillType = AssemblyHelpers::SpillExactly, ArgLocation argLocation = OnStack) 41 44 : JSInterfaceJIT(vm) 42 45 { 43 emitFunctionPrologue(); 44 emitSaveThenMaterializeTagRegisters(); 45 // Check that we have the expected number of arguments 46 m_failures.append(branch32(NotEqual, payloadFor(CallFrameSlot::argumentCount), TrustedImm32(expectedArgCount + 1))); 46 #if !NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 47 UNUSED_PARAM(spillType); 48 UNUSED_PARAM(argLocation); 49 #else 50 if (argLocation == InRegisters) { 51 m_stackArgumentsEntry = label(); 52 fillArgumentRegistersFromFrameBeforePrologue(); 53 m_registerArgumentsEntry = label(); 54 emitFunctionPrologue(); 55 emitSaveThenMaterializeTagRegisters(); 56 // Check that we have the expected number of arguments 57 m_failures.append(branch32(NotEqual, argumentRegisterForArgumentCount(), TrustedImm32(expectedArgCount + 1))); 58 } else { 59 spillArgumentRegistersToFrameBeforePrologue(expectedArgCount + 1, spillType); 60 m_stackArgumentsEntry = label(); 61 #endif 62 emitFunctionPrologue(); 63 emitSaveThenMaterializeTagRegisters(); 64 // Check that we have the expected number of arguments 65 m_failures.append(branch32(NotEqual, payloadFor(CallFrameSlot::argumentCount), TrustedImm32(expectedArgCount + 1))); 66 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 67 } 68 #endif 47 69 } 48 70 … … 50 72 : JSInterfaceJIT(vm) 51 73 { 74 #if USE(JSVALUE64) 75 spillArgumentRegistersToFrameBeforePrologue(); 76 m_stackArgumentsEntry = Label(); 77 #endif 52 78 emitFunctionPrologue(); 53 79 emitSaveThenMaterializeTagRegisters(); … … 95 121 m_failures.append(conversionFailed); 96 122 } 123 124 void checkJSStringArgument(VM& vm, RegisterID argument) 125 { 126 m_failures.append(emitJumpIfNotJSCell(argument)); 127 m_failures.append(branchStructure(NotEqual, 128 Address(argument, JSCell::structureIDOffset()), 129 vm.stringStructure.get())); 130 } 97 131 98 132 void appendFailure(const Jump& failure) … … 100 134 m_failures.append(failure); 101 135 } 136 137 void linkFailureHere() 138 { 139 m_failures.link(this); 140 m_failures.clear(); 141 } 142 102 143 #if USE(JSVALUE64) 103 144 void returnJSValue(RegisterID src) … … 165 206 } 166 207 167 MacroAssemblerCodeRef finalize(MacroAssemblerCodePtr fallback, const char* thunkKind)208 JITEntryPointsWithRef finalize(MacroAssemblerCodePtr fallback, const char* thunkKind) 168 209 { 169 210 LinkBuffer patchBuffer(*m_vm, *this, GLOBAL_THUNK_ID); … … 171 212 for (unsigned i = 0; i < m_calls.size(); i++) 172 213 patchBuffer.link(m_calls[i].first, m_calls[i].second); 173 return FINALIZE_CODE(patchBuffer, ("Specialized thunk for %s", thunkKind)); 214 215 MacroAssemblerCodePtr stackEntry; 216 if (m_stackArgumentsEntry.isSet()) 217 stackEntry = patchBuffer.locationOf(m_stackArgumentsEntry); 218 MacroAssemblerCodePtr registerEntry; 219 if (m_registerArgumentsEntry.isSet()) 220 registerEntry = patchBuffer.locationOf(m_registerArgumentsEntry); 221 222 MacroAssemblerCodeRef entry = FINALIZE_CODE(patchBuffer, ("Specialized thunk for %s", thunkKind)); 223 224 if (m_stackArgumentsEntry.isSet()) { 225 if (m_registerArgumentsEntry.isSet()) 226 return 
JITEntryPointsWithRef(entry, registerEntry, registerEntry, registerEntry, stackEntry, stackEntry); 227 return JITEntryPointsWithRef(entry, entry.code(), entry.code(), entry.code(), stackEntry, stackEntry); 228 } 229 230 return JITEntryPointsWithRef(entry, entry.code(), entry.code()); 174 231 } 175 232 … … 208 265 209 266 MacroAssembler::JumpList m_failures; 267 MacroAssembler::Label m_registerArgumentsEntry; 268 MacroAssembler::Label m_stackArgumentsEntry; 210 269 Vector<std::pair<Call, FunctionPtr>> m_calls; 211 270 }; -
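finalize() maps at most two recorded labels onto the full set of entry slots. Spelled out with assumed slot names and the parameter order implied by the six-argument construction above (this is a reading of the diff, not a documented API):

    #include <array>

    enum Slot {
        RegNoArityCheck, RegPossibleExtraArgs, RegMustCheckArity,
        StackNoArityCheck, StackMustCheckArity, SlotCount
    };

    std::array<const void*, SlotCount> entriesFor(const void* blobStart,
                                                  const void* registerEntry,
                                                  const void* stackEntry)
    {
        if (registerEntry) // InRegisters thunk: both labels were bound
            return { registerEntry, registerEntry, registerEntry, stackEntry, stackEntry };
        if (stackEntry)    // stack-args thunk: the blob starts with a register spill
            return { blobStart, blobStart, blobStart, stackEntry, stackEntry };
        return { blobStart, blobStart, blobStart, blobStart, blobStart };
    }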
trunk/Source/JavaScriptCore/jit/ThunkGenerator.h
r206525 r209653 31 31 class VM; 32 32 class MacroAssemblerCodeRef; 33 class JITEntryPointsWithRef; 34 class JITJSCallThunkEntryPointsWithRef; 33 35 34 36 typedef MacroAssemblerCodeRef (*ThunkGenerator)(VM*); 37 typedef JITEntryPointsWithRef (*JITEntryGenerator)(VM*); 38 typedef JITJSCallThunkEntryPointsWithRef (*JITCallThunkEntryGenerator)(VM*); 35 39 36 40 } // namespace JSC -
trunk/Source/JavaScriptCore/jit/ThunkGenerators.cpp
r203081 r209653 78 78 } 79 79 80 static void createRegisterArgumentsSpillEntry(CCallHelpers& jit, MacroAssembler::Label entryPoints[ThunkEntryPointTypeCount]) 81 { 82 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 83 for (unsigned argIndex = NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex-- > 0;) { 84 entryPoints[thunkEntryPointTypeFor(argIndex + 1)] = jit.label(); 85 jit.emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex); 86 } 87 88 jit.emitPutToCallFrameHeaderBeforePrologue(argumentRegisterForCallee(), CallFrameSlot::callee); 89 jit.emitPutToCallFrameHeaderBeforePrologue(argumentRegisterForArgumentCount(), CallFrameSlot::argumentCount); 90 #else 91 UNUSED_PARAM(jit); 92 UNUSED_PARAM(entryPoints); 93 #endif 94 entryPoints[StackArgs] = jit.label(); 95 } 96 80 97 static void slowPathFor( 81 98 CCallHelpers& jit, VM* vm, Sprt_JITOperation_ECli slowPathFunction) … … 89 106 // and space for the 16 byte return area. 90 107 jit.addPtr(CCallHelpers::TrustedImm32(-maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister); 91 jit.move(GPRInfo::regT2, GPRInfo::argumentGPR2);108 jit.move(GPRInfo::nonArgGPR0, GPRInfo::argumentGPR2); 92 109 jit.addPtr(CCallHelpers::TrustedImm32(32), CCallHelpers::stackPointerRegister, GPRInfo::argumentGPR0); 93 110 jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1); … … 101 118 if (maxFrameExtentForSlowPathCall) 102 119 jit.addPtr(CCallHelpers::TrustedImm32(-maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister); 103 jit.setupArgumentsWithExecState(GPRInfo::regT2);120 jit.setupArgumentsWithExecState(GPRInfo::nonArgGPR0); 104 121 jit.move(CCallHelpers::TrustedImmPtr(bitwise_cast<void*>(slowPathFunction)), GPRInfo::nonArgGPR0); 105 122 emitPointerValidation(jit, GPRInfo::nonArgGPR0); … … 128 145 } 129 146 130 MacroAssemblerCodeRef linkCallThunkGenerator(VM* vm)147 JITJSCallThunkEntryPointsWithRef linkCallThunkGenerator(VM* vm) 131 148 { 132 149 // The return address is on the stack or in the link register. We will hence … … 136 153 // been adjusted, and all other registers to be available for use. 137 154 CCallHelpers jit(vm); 138 155 156 MacroAssembler::Label entryPoints[ThunkEntryPointTypeCount]; 157 158 createRegisterArgumentsSpillEntry(jit, entryPoints); 139 159 slowPathFor(jit, vm, operationLinkCall); 140 160 141 161 LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID); 142 return FINALIZE_CODE(patchBuffer, ("Link call slow path thunk")); 162 MacroAssemblerCodeRef codeRef = FINALIZE_CODE(patchBuffer, ("Link call slow path thunk")); 163 JITJSCallThunkEntryPointsWithRef callEntryPoints = JITJSCallThunkEntryPointsWithRef(codeRef); 164 165 for (unsigned entryIndex = StackArgs; entryIndex < ThunkEntryPointTypeCount; entryIndex++) { 166 callEntryPoints.setEntryFor(static_cast<ThunkEntryPointType>(entryIndex), 167 patchBuffer.locationOf(entryPoints[entryIndex])); 168 } 169 170 return callEntryPoints; 171 } 172 173 JITJSCallThunkEntryPointsWithRef linkDirectCallThunkGenerator(VM* vm) 174 { 175 // The return address is on the stack or in the link register. We will hence 176 // save the return address to the call frame while we make a C++ function call 177 // to perform linking and lazy compilation if necessary. We expect the CallLinkInfo 178 // to be in GPRInfo::nonArgGPR0, the callee to be in argumentRegisterForCallee(), 179 // the CallFrame to have already been adjusted, and arguments in argument registers 180 // and/or on the stack as appropriate. 
181 CCallHelpers jit(vm); 182 183 MacroAssembler::Label entryPoints[ThunkEntryPointTypeCount]; 184 185 createRegisterArgumentsSpillEntry(jit, entryPoints); 186 187 jit.move(GPRInfo::callFrameRegister, GPRInfo::nonArgGPR1); // Save callee's frame pointer 188 jit.emitFunctionPrologue(); 189 jit.storePtr(GPRInfo::callFrameRegister, &vm->topCallFrame); 190 191 if (maxFrameExtentForSlowPathCall) 192 jit.addPtr(CCallHelpers::TrustedImm32(-maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister); 193 jit.setupArguments(GPRInfo::nonArgGPR1, GPRInfo::nonArgGPR0, argumentRegisterForCallee()); 194 jit.move(CCallHelpers::TrustedImmPtr(bitwise_cast<void*>(operationLinkDirectCall)), GPRInfo::nonArgGPR0); 195 emitPointerValidation(jit, GPRInfo::nonArgGPR0); 196 jit.call(GPRInfo::nonArgGPR0); 197 if (maxFrameExtentForSlowPathCall) 198 jit.addPtr(CCallHelpers::TrustedImm32(maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister); 199 200 jit.emitFunctionEpilogue(); 201 202 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 203 jit.emitGetFromCallFrameHeaderBeforePrologue(CallFrameSlot::callee, argumentRegisterForCallee()); 204 GPRReg argCountReg = argumentRegisterForArgumentCount(); 205 jit.emitGetPayloadFromCallFrameHeaderBeforePrologue(CallFrameSlot::argumentCount, argCountReg); 206 207 // load "this" 208 jit.emitGetFromCallFrameArgumentBeforePrologue(0, argumentRegisterForFunctionArgument(0)); 209 210 CCallHelpers::Jump fillUndefined[NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS]; 211 212 for (unsigned argIndex = 1; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) { 213 fillUndefined[argIndex] = jit.branch32(MacroAssembler::BelowOrEqual, argCountReg, MacroAssembler::TrustedImm32(argIndex)); 214 jit.emitGetFromCallFrameArgumentBeforePrologue(argIndex, argumentRegisterForFunctionArgument(argIndex)); 215 } 216 217 CCallHelpers::Jump doneFilling = jit.jump(); 218 219 for (unsigned argIndex = 1; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) { 220 fillUndefined[argIndex].link(&jit); 221 jit.move(CCallHelpers::TrustedImm64(JSValue::encode(jsUndefined())), argumentRegisterForFunctionArgument(argIndex)); 222 } 223 224 doneFilling.link(&jit); 225 #endif 226 227 228 jit.ret(); 229 230 LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID); 231 MacroAssemblerCodeRef codeRef = FINALIZE_CODE(patchBuffer, ("Link direct call thunk")); 232 JITJSCallThunkEntryPointsWithRef callEntryPoints = JITJSCallThunkEntryPointsWithRef(codeRef); 233 234 for (unsigned entryIndex = StackArgs; entryIndex < ThunkEntryPointTypeCount; entryIndex++) { 235 callEntryPoints.setEntryFor(static_cast<ThunkEntryPointType>(entryIndex), 236 patchBuffer.locationOf(entryPoints[entryIndex])); 237 } 238 239 return callEntryPoints; 143 240 } 144 241 145 242 // For closure optimizations, we only include calls, since if you're using closures for 146 243 // object construction then you're going to lose big time anyway. 
147 MacroAssemblerCodeRef linkPolymorphicCallThunkGenerator(VM* vm)244 JITJSCallThunkEntryPointsWithRef linkPolymorphicCallThunkGenerator(VM* vm) 148 245 { 149 246 CCallHelpers jit(vm); 150 247 248 MacroAssembler::Label entryPoints[ThunkEntryPointTypeCount]; 249 250 createRegisterArgumentsSpillEntry(jit, entryPoints); 251 151 252 slowPathFor(jit, vm, operationLinkPolymorphicCall); 152 253 153 254 LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID); 154 return FINALIZE_CODE(patchBuffer, ("Link polymorphic call slow path thunk")); 255 MacroAssemblerCodeRef codeRef FINALIZE_CODE(patchBuffer, ("Link polymorphic call slow path thunk")); 256 JITJSCallThunkEntryPointsWithRef callEntryPoints = JITJSCallThunkEntryPointsWithRef(codeRef); 257 258 for (unsigned entryIndex = StackArgs; entryIndex < ThunkEntryPointTypeCount; entryIndex++) { 259 callEntryPoints.setEntryFor(static_cast<ThunkEntryPointType>(entryIndex), 260 patchBuffer.locationOf(entryPoints[entryIndex])); 261 } 262 263 return callEntryPoints; 155 264 } 156 265 … … 159 268 // virtual calls by using the shuffler. 160 269 // https://bugs.webkit.org/show_bug.cgi?id=148831 161 MacroAssemblerCodeRef virtualThunkFor(VM* vm, CallLinkInfo& callLinkInfo) 162 { 163 // The callee is in regT0 (for JSVALUE32_64, the tag is in regT1). 164 // The return address is on the stack, or in the link register. We will hence 165 // jump to the callee, or save the return address to the call frame while we 166 // make a C++ function call to the appropriate JIT operation. 270 JITJSCallThunkEntryPointsWithRef virtualThunkFor(VM* vm, CallLinkInfo& callLinkInfo) 271 { 272 // The callee is in argumentRegisterForCallee() (for JSVALUE32_64, it is in regT1:regT0). 273 // The CallLinkInfo is in GPRInfo::nonArgGPR0. 274 // The return address is on the stack, or in the link register. 275 /// We will hence jump to the callee, or save the return address to the call 276 // frame while we make a C++ function call to the appropriate JIT operation. 167 277 168 278 CCallHelpers jit(vm); 169 279 170 280 CCallHelpers::JumpList slowCase; 171 172 // This is a slow path execution, and regT2 contains the CallLinkInfo. Count the 173 // slow path execution for the profiler. 281 282 GPRReg calleeReg = argumentRegisterForCallee(); 283 #if USE(JSVALUE32_64) 284 GPRReg calleeTagReg = GPRInfo::regT1; 285 #endif 286 GPRReg targetReg = GPRInfo::nonArgGPR1; 287 // This is the CallLinkInfo* on entry and used later as a temp. 288 GPRReg callLinkInfoAndTempReg = GPRInfo::nonArgGPR0; 289 290 jit.fillArgumentRegistersFromFrameBeforePrologue(); 291 292 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 293 MacroAssembler::Label registerEntry = jit.label(); 294 #endif 295 296 incrementCounter(&jit, VM::VirtualCall); 297 298 // This is a slow path execution. Count the slow path execution for the profiler. 174 299 jit.add32( 175 300 CCallHelpers::TrustedImm32(1), 176 CCallHelpers::Address( GPRInfo::regT2, CallLinkInfo::offsetOfSlowPathCount()));301 CCallHelpers::Address(callLinkInfoAndTempReg, CallLinkInfo::offsetOfSlowPathCount())); 177 302 178 303 // FIXME: we should have a story for eliminating these checks. 
In many cases, … … 182 307 slowCase.append( 183 308 jit.branchTest64( 184 CCallHelpers::NonZero, GPRInfo::regT0, GPRInfo::tagMaskRegister));309 CCallHelpers::NonZero, calleeReg, GPRInfo::tagMaskRegister)); 185 310 #else 186 311 slowCase.append( 187 312 jit.branch32( 188 CCallHelpers::NotEqual, GPRInfo::regT1,313 CCallHelpers::NotEqual, calleeTagReg, 189 314 CCallHelpers::TrustedImm32(JSValue::CellTag))); 190 315 #endif 191 slowCase.append(jit.branchIfNotType( GPRInfo::regT0, JSFunctionType));316 slowCase.append(jit.branchIfNotType(calleeReg, JSFunctionType)); 192 317 193 318 // Now we know we have a JSFunction. 194 319 195 320 jit.loadPtr( 196 CCallHelpers::Address( GPRInfo::regT0, JSFunction::offsetOfExecutable()),197 GPRInfo::regT4);321 CCallHelpers::Address(calleeReg, JSFunction::offsetOfExecutable()), 322 targetReg); 198 323 jit.loadPtr( 199 324 CCallHelpers::Address( 200 GPRInfo::regT4, ExecutableBase::offsetOfJITCodeWithArityCheckFor( 201 callLinkInfo.specializationKind())), 202 GPRInfo::regT4); 203 slowCase.append(jit.branchTestPtr(CCallHelpers::Zero, GPRInfo::regT4)); 325 targetReg, ExecutableBase::offsetOfEntryFor( 326 callLinkInfo.specializationKind(), 327 entryPointTypeFor(callLinkInfo.argumentsLocation()))), 328 targetReg); 329 slowCase.append(jit.branchTestPtr(CCallHelpers::Zero, targetReg)); 204 330 205 331 // Now we know that we have a CodeBlock, and we're committed to making a fast … … 207 333 208 334 // Make a tail call. This will return back to JIT code. 209 emitPointerValidation(jit, GPRInfo::regT4);335 emitPointerValidation(jit, targetReg); 210 336 if (callLinkInfo.isTailCall()) { 211 jit.preserveReturnAddressAfterCall(GPRInfo::regT0); 212 jit.prepareForTailCallSlow(GPRInfo::regT4); 337 jit.spillArgumentRegistersToFrameBeforePrologue(); 338 jit.preserveReturnAddressAfterCall(callLinkInfoAndTempReg); 339 jit.prepareForTailCallSlow(targetReg); 213 340 } 214 jit.jump(GPRInfo::regT4); 215 341 jit.jump(targetReg); 216 342 slowCase.link(&jit); 217 343 344 incrementCounter(&jit, VM::VirtualSlowCall); 345 218 346 // Here we don't know anything, so revert to the full slow path. 347 jit.spillArgumentRegistersToFrameBeforePrologue(); 219 348 220 349 slowPathFor(jit, vm, operationVirtualCall); 221 350 222 351 LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID); 223 return FINALIZE_CODE( 224 patchBuffer, 352 MacroAssemblerCodeRef codeRef FINALIZE_CODE(patchBuffer, 225 353 ("Virtual %s slow path thunk", 226 354 callLinkInfo.callMode() == CallMode::Regular ? "call" : callLinkInfo.callMode() == CallMode::Tail ? 
"tail call" : "construct")); 355 JITJSCallThunkEntryPointsWithRef callEntryPoints = JITJSCallThunkEntryPointsWithRef(codeRef); 356 357 callEntryPoints.setEntryFor(StackArgsEntry, codeRef.code()); 358 359 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 360 MacroAssemblerCodePtr registerEntryPtr = patchBuffer.locationOf(registerEntry); 361 362 for (unsigned entryIndex = Register1ArgEntry; entryIndex < ThunkEntryPointTypeCount; entryIndex++) 363 callEntryPoints.setEntryFor(static_cast<ThunkEntryPointType>(entryIndex), registerEntryPtr); 364 #endif 365 366 return callEntryPoints; 227 367 } 228 368 229 369 enum ThunkEntryType { EnterViaCall, EnterViaJumpWithSavedTags, EnterViaJumpWithoutSavedTags }; 230 370 231 static MacroAssemblerCodeRef nativeForGenerator(VM* vm, CodeSpecializationKind kind, ThunkEntryType entryType = EnterViaCall)371 static JITEntryPointsWithRef nativeForGenerator(VM* vm, CodeSpecializationKind kind, ThunkEntryType entryType = EnterViaCall) 232 372 { 233 373 // FIXME: This should be able to log ShadowChicken prologue packets. … … 238 378 JSInterfaceJIT jit(vm); 239 379 380 MacroAssembler::Label stackArgsEntry; 381 240 382 switch (entryType) { 241 383 case EnterViaCall: 384 jit.spillArgumentRegistersToFrameBeforePrologue(); 385 386 stackArgsEntry = jit.label(); 387 242 388 jit.emitFunctionPrologue(); 243 389 break; … … 380 526 381 527 LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID); 382 return FINALIZE_CODE(patchBuffer, ("native %s%s trampoline", entryType == EnterViaJumpWithSavedTags ? "Tail With Saved Tags " : entryType == EnterViaJumpWithoutSavedTags ? "Tail Without Saved Tags " : "", toCString(kind).data())); 383 } 384 385 MacroAssemblerCodeRef nativeCallGenerator(VM* vm) 528 MacroAssemblerCodeRef codeRef FINALIZE_CODE(patchBuffer, ("native %s%s trampoline", entryType == EnterViaJumpWithSavedTags ? "Tail With Saved Tags " : entryType == EnterViaJumpWithoutSavedTags ? 
"Tail Without Saved Tags " : "", toCString(kind).data())); 529 if (entryType == EnterViaCall) { 530 MacroAssemblerCodePtr stackEntryPtr = patchBuffer.locationOf(stackArgsEntry); 531 532 return JITEntryPointsWithRef(codeRef, codeRef.code(), codeRef.code(), codeRef.code(), stackEntryPtr, stackEntryPtr); 533 } 534 535 return JITEntryPointsWithRef(codeRef, codeRef.code(), codeRef.code()); 536 537 } 538 539 JITEntryPointsWithRef nativeCallGenerator(VM* vm) 386 540 { 387 541 return nativeForGenerator(vm, CodeForCall); … … 390 544 MacroAssemblerCodeRef nativeTailCallGenerator(VM* vm) 391 545 { 392 return nativeForGenerator(vm, CodeForCall, EnterViaJumpWithSavedTags) ;546 return nativeForGenerator(vm, CodeForCall, EnterViaJumpWithSavedTags).codeRef(); 393 547 } 394 548 395 549 MacroAssemblerCodeRef nativeTailCallWithoutSavedTagsGenerator(VM* vm) 396 550 { 397 return nativeForGenerator(vm, CodeForCall, EnterViaJumpWithoutSavedTags) ;398 } 399 400 MacroAssemblerCodeRef nativeConstructGenerator(VM* vm)551 return nativeForGenerator(vm, CodeForCall, EnterViaJumpWithoutSavedTags).codeRef(); 552 } 553 554 JITEntryPointsWithRef nativeConstructGenerator(VM* vm) 401 555 { 402 556 return nativeForGenerator(vm, CodeForConstruct); … … 537 691 } 538 692 693 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 694 static void stringCharLoadRegCall(SpecializedThunkJIT& jit, VM* vm) 695 { 696 // load string 697 GPRReg thisReg = argumentRegisterForFunctionArgument(0); 698 GPRReg indexReg = argumentRegisterForFunctionArgument(2); 699 GPRReg lengthReg = argumentRegisterForFunctionArgument(3); 700 GPRReg tempReg = SpecializedThunkJIT::nonArgGPR0; 701 702 jit.checkJSStringArgument(*vm, thisReg); 703 704 // Load string length to regT2, and start the process of loading the data pointer into regT0 705 jit.load32(MacroAssembler::Address(thisReg, ThunkHelpers::jsStringLengthOffset()), lengthReg); 706 jit.loadPtr(MacroAssembler::Address(thisReg, ThunkHelpers::jsStringValueOffset()), tempReg); 707 jit.appendFailure(jit.branchTest32(MacroAssembler::Zero, tempReg)); 708 709 // load index 710 jit.move(argumentRegisterForFunctionArgument(1), indexReg); 711 jit.appendFailure(jit.emitJumpIfNotInt32(indexReg)); 712 713 // Do an unsigned compare to simultaneously filter negative indices as well as indices that are too large 714 jit.appendFailure(jit.branch32(MacroAssembler::AboveOrEqual, indexReg, lengthReg)); 715 716 // Load the character 717 SpecializedThunkJIT::JumpList is16Bit; 718 SpecializedThunkJIT::JumpList cont8Bit; 719 // Load the string flags 720 jit.loadPtr(MacroAssembler::Address(tempReg, StringImpl::flagsOffset()), lengthReg); 721 jit.loadPtr(MacroAssembler::Address(tempReg, StringImpl::dataOffset()), tempReg); 722 is16Bit.append(jit.branchTest32(MacroAssembler::Zero, lengthReg, MacroAssembler::TrustedImm32(StringImpl::flagIs8Bit()))); 723 jit.load8(MacroAssembler::BaseIndex(tempReg, indexReg, MacroAssembler::TimesOne, 0), tempReg); 724 cont8Bit.append(jit.jump()); 725 is16Bit.link(&jit); 726 jit.load16(MacroAssembler::BaseIndex(tempReg, indexReg, MacroAssembler::TimesTwo, 0), tempReg); 727 cont8Bit.link(&jit); 728 } 729 #else 539 730 static void stringCharLoad(SpecializedThunkJIT& jit, VM* vm) 540 731 { … … 566 757 cont8Bit.link(&jit); 567 758 } 759 #endif 568 760 569 761 static void charToString(SpecializedThunkJIT& jit, VM* vm, MacroAssembler::RegisterID src, MacroAssembler::RegisterID dst, MacroAssembler::RegisterID scratch) … … 575 767 } 576 768 577 MacroAssemblerCodeRef charCodeAtThunkGenerator(VM* vm) 578 { 769 
JITEntryPointsWithRef charCodeAtThunkGenerator(VM* vm) 770 { 771 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 772 SpecializedThunkJIT jit(vm, 1, AssemblyHelpers::SpillExactly, SpecializedThunkJIT::InRegisters); 773 stringCharLoadRegCall(jit, vm); 774 jit.returnInt32(SpecializedThunkJIT::nonArgGPR0); 775 jit.linkFailureHere(); 776 jit.spillArgumentRegistersToFrame(2, AssemblyHelpers::SpillExactly); 777 jit.appendFailure(jit.jump()); 778 return jit.finalize(vm->jitStubs->ctiNativeTailCall(vm), "charCodeAt"); 779 #else 579 780 SpecializedThunkJIT jit(vm, 1); 580 781 stringCharLoad(jit, vm); 581 782 jit.returnInt32(SpecializedThunkJIT::regT0); 582 783 return jit.finalize(vm->jitStubs->ctiNativeTailCall(vm), "charCodeAt"); 583 } 584 585 MacroAssemblerCodeRef charAtThunkGenerator(VM* vm) 586 { 784 #endif 785 } 786 787 JITEntryPointsWithRef charAtThunkGenerator(VM* vm) 788 { 789 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 790 SpecializedThunkJIT jit(vm, 1, AssemblyHelpers::SpillExactly, SpecializedThunkJIT::InRegisters); 791 stringCharLoadRegCall(jit, vm); 792 charToString(jit, vm, SpecializedThunkJIT::nonArgGPR0, SpecializedThunkJIT::returnValueGPR, argumentRegisterForFunctionArgument(3)); 793 jit.returnJSCell(SpecializedThunkJIT::returnValueGPR); 794 jit.linkFailureHere(); 795 jit.spillArgumentRegistersToFrame(2, AssemblyHelpers::SpillExactly); 796 jit.appendFailure(jit.jump()); 797 return jit.finalize(vm->jitStubs->ctiNativeTailCall(vm), "charAt"); 798 #else 587 799 SpecializedThunkJIT jit(vm, 1); 588 800 stringCharLoad(jit, vm); … … 590 802 jit.returnJSCell(SpecializedThunkJIT::regT0); 591 803 return jit.finalize(vm->jitStubs->ctiNativeTailCall(vm), "charAt"); 592 } 593 594 MacroAssemblerCodeRef fromCharCodeThunkGenerator(VM* vm) 595 { 596 SpecializedThunkJIT jit(vm, 1); 804 #endif 805 } 806 807 JITEntryPointsWithRef fromCharCodeThunkGenerator(VM* vm) 808 { 809 #if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 810 SpecializedThunkJIT jit(vm, 1, AssemblyHelpers::SpillExactly, SpecializedThunkJIT::InRegisters); 811 // load char code 812 jit.move(argumentRegisterForFunctionArgument(1), SpecializedThunkJIT::nonArgGPR0); 813 jit.appendFailure(jit.emitJumpIfNotInt32(SpecializedThunkJIT::nonArgGPR0)); 814 815 charToString(jit, vm, SpecializedThunkJIT::nonArgGPR0, SpecializedThunkJIT::returnValueGPR, argumentRegisterForFunctionArgument(3)); 816 jit.returnJSCell(SpecializedThunkJIT::returnValueGPR); 817 jit.linkFailureHere(); 818 jit.spillArgumentRegistersToFrame(2, AssemblyHelpers::SpillAll); 819 jit.appendFailure(jit.jump()); 820 return jit.finalize(vm->jitStubs->ctiNativeTailCall(vm), "fromCharCode"); 821 #else 822 SpecializedThunkJIT jit(vm, 1, AssemblyHelpers::SpillAll); 597 823 // load char code 598 824 jit.loadInt32Argument(0, SpecializedThunkJIT::regT0); … … 600 826 jit.returnJSCell(SpecializedThunkJIT::regT0); 601 827 return jit.finalize(vm->jitStubs->ctiNativeTailCall(vm), "fromCharCode"); 602 } 603 604 MacroAssemblerCodeRef clz32ThunkGenerator(VM* vm) 828 #endif 829 } 830 831 JITEntryPointsWithRef clz32ThunkGenerator(VM* vm) 605 832 { 606 833 SpecializedThunkJIT jit(vm, 1); … … 623 850 } 624 851 625 MacroAssemblerCodeRef sqrtThunkGenerator(VM* vm)852 JITEntryPointsWithRef sqrtThunkGenerator(VM* vm) 626 853 { 627 854 SpecializedThunkJIT jit(vm, 1); 628 855 if (!jit.supportsFloatingPointSqrt()) 629 return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));856 return vm->jitStubs->jitEntryNativeCall(vm); 630 857 631 858 jit.loadDoubleArgument(0, 
SpecializedThunkJIT::fpRegT0, SpecializedThunkJIT::regT0); … … 783 1010 static const double halfConstant = 0.5; 784 1011 785 MacroAssemblerCodeRef floorThunkGenerator(VM* vm)1012 JITEntryPointsWithRef floorThunkGenerator(VM* vm) 786 1013 { 787 1014 SpecializedThunkJIT jit(vm, 1); 788 1015 MacroAssembler::Jump nonIntJump; 789 1016 if (!UnaryDoubleOpWrapper(floor) || !jit.supportsFloatingPoint()) 790 return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));1017 return vm->jitStubs->jitEntryNativeCall(vm); 791 1018 jit.loadInt32Argument(0, SpecializedThunkJIT::regT0, nonIntJump); 792 1019 jit.returnInt32(SpecializedThunkJIT::regT0); … … 826 1053 } 827 1054 828 MacroAssemblerCodeRef ceilThunkGenerator(VM* vm)1055 JITEntryPointsWithRef ceilThunkGenerator(VM* vm) 829 1056 { 830 1057 SpecializedThunkJIT jit(vm, 1); 831 1058 if (!UnaryDoubleOpWrapper(ceil) || !jit.supportsFloatingPoint()) 832 return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));1059 return vm->jitStubs->jitEntryNativeCall(vm); 833 1060 MacroAssembler::Jump nonIntJump; 834 1061 jit.loadInt32Argument(0, SpecializedThunkJIT::regT0, nonIntJump); … … 849 1076 } 850 1077 851 MacroAssemblerCodeRef truncThunkGenerator(VM* vm)1078 JITEntryPointsWithRef truncThunkGenerator(VM* vm) 852 1079 { 853 1080 SpecializedThunkJIT jit(vm, 1); 854 1081 if (!UnaryDoubleOpWrapper(trunc) || !jit.supportsFloatingPoint()) 855 return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));1082 return vm->jitStubs->jitEntryNativeCall(vm); 856 1083 MacroAssembler::Jump nonIntJump; 857 1084 jit.loadInt32Argument(0, SpecializedThunkJIT::regT0, nonIntJump); … … 872 1099 } 873 1100 874 MacroAssemblerCodeRef roundThunkGenerator(VM* vm)1101 JITEntryPointsWithRef roundThunkGenerator(VM* vm) 875 1102 { 876 1103 SpecializedThunkJIT jit(vm, 1); 877 1104 if (!UnaryDoubleOpWrapper(jsRound) || !jit.supportsFloatingPoint()) 878 return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));1105 return vm->jitStubs->jitEntryNativeCall(vm); 879 1106 MacroAssembler::Jump nonIntJump; 880 1107 jit.loadInt32Argument(0, SpecializedThunkJIT::regT0, nonIntJump); … … 906 1133 } 907 1134 908 MacroAssemblerCodeRef expThunkGenerator(VM* vm)1135 JITEntryPointsWithRef expThunkGenerator(VM* vm) 909 1136 { 910 1137 if (!UnaryDoubleOpWrapper(exp)) 911 return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));1138 return vm->jitStubs->jitEntryNativeCall(vm); 912 1139 SpecializedThunkJIT jit(vm, 1); 913 1140 if (!jit.supportsFloatingPoint()) 914 return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));1141 return vm->jitStubs->jitEntryNativeCall(vm); 915 1142 jit.loadDoubleArgument(0, SpecializedThunkJIT::fpRegT0, SpecializedThunkJIT::regT0); 916 1143 jit.callDoubleToDoublePreservingReturn(UnaryDoubleOpWrapper(exp)); … … 919 1146 } 920 1147 921 MacroAssemblerCodeRef logThunkGenerator(VM* vm)1148 JITEntryPointsWithRef logThunkGenerator(VM* vm) 922 1149 { 923 1150 if (!UnaryDoubleOpWrapper(log)) 924 return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));1151 return vm->jitStubs->jitEntryNativeCall(vm); 925 1152 SpecializedThunkJIT jit(vm, 1); 926 1153 if (!jit.supportsFloatingPoint()) 927 return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));1154 return vm->jitStubs->jitEntryNativeCall(vm); 928 1155 jit.loadDoubleArgument(0, SpecializedThunkJIT::fpRegT0, 
SpecializedThunkJIT::regT0); 929 1156 jit.callDoubleToDoublePreservingReturn(UnaryDoubleOpWrapper(log)); … … 932 1159 } 933 1160 934 MacroAssemblerCodeRef absThunkGenerator(VM* vm)1161 JITEntryPointsWithRef absThunkGenerator(VM* vm) 935 1162 { 936 1163 SpecializedThunkJIT jit(vm, 1); 937 1164 if (!jit.supportsFloatingPointAbs()) 938 return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));1165 return vm->jitStubs->jitEntryNativeCall(vm); 939 1166 940 1167 #if USE(JSVALUE64) … … 989 1216 } 990 1217 991 MacroAssemblerCodeRef imulThunkGenerator(VM* vm)1218 JITEntryPointsWithRef imulThunkGenerator(VM* vm) 992 1219 { 993 1220 SpecializedThunkJIT jit(vm, 2); … … 1020 1247 } 1021 1248 1022 MacroAssemblerCodeRef randomThunkGenerator(VM* vm)1249 JITEntryPointsWithRef randomThunkGenerator(VM* vm) 1023 1250 { 1024 1251 SpecializedThunkJIT jit(vm, 0); 1025 1252 if (!jit.supportsFloatingPoint()) 1026 return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm));1253 return vm->jitStubs->jitEntryNativeCall(vm); 1027 1254 1028 1255 #if USE(JSVALUE64) … … 1032 1259 return jit.finalize(vm->jitStubs->ctiNativeTailCall(vm), "random"); 1033 1260 #else 1034 return MacroAssemblerCodeRef::createSelfManagedCodeRef(vm->jitStubs->ctiNativeCall(vm)); 1035 #endif 1036 } 1037 1038 MacroAssemblerCodeRef boundThisNoArgsFunctionCallGenerator(VM* vm) 1039 { 1040 CCallHelpers jit(vm); 1261 return vm->jitStubs->jitEntryNativeCall(vm); 1262 #endif 1263 } 1264 1265 JITEntryPointsWithRef boundThisNoArgsFunctionCallGenerator(VM* vm) 1266 { 1267 JSInterfaceJIT jit(vm); 1268 1269 MacroAssembler::JumpList failures; 1270 1271 jit.spillArgumentRegistersToFrameBeforePrologue(); 1272 1273 SpecializedThunkJIT::Label stackArgsEntry(&jit); 1041 1274 1042 1275 jit.emitFunctionPrologue(); 1043 1276 1044 1277 // Set up our call frame. 1045 1278 jit.storePtr(CCallHelpers::TrustedImmPtr(nullptr), CCallHelpers::addressFor(CallFrameSlot::codeBlock)); … … 1111 1344 jit.loadPtr( 1112 1345 CCallHelpers::Address( 1113 GPRInfo::regT0, ExecutableBase::offsetOfJITCodeWithArityCheckFor(CodeForCall)),1346 GPRInfo::regT0, ExecutableBase::offsetOfEntryFor(CodeForCall, StackArgsMustCheckArity)), 1114 1347 GPRInfo::regT0); 1115 CCallHelpers::Jump noCode = jit.branchTestPtr(CCallHelpers::Zero, GPRInfo::regT0);1348 failures.append(jit.branchTestPtr(CCallHelpers::Zero, GPRInfo::regT0)); 1116 1349 1117 1350 emitPointerValidation(jit, GPRInfo::regT0); … … 1120 1353 jit.emitFunctionEpilogue(); 1121 1354 jit.ret(); 1122 1123 LinkBuffer linkBuffer(*vm, jit, GLOBAL_THUNK_ID); 1124 linkBuffer.link(noCode, CodeLocationLabel(vm->jitStubs->ctiNativeTailCallWithoutSavedTags(vm))); 1125 return FINALIZE_CODE( 1126 linkBuffer, ("Specialized thunk for bound function calls with no arguments")); 1355 1356 LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID); 1357 patchBuffer.link(failures, CodeLocationLabel(vm->jitStubs->ctiNativeTailCallWithoutSavedTags(vm))); 1358 1359 MacroAssemblerCodeRef codeRef = FINALIZE_CODE(patchBuffer, ("Specialized thunk for bound function calls with no arguments")); 1360 MacroAssemblerCodePtr stackEntryPtr = patchBuffer.locationOf(stackArgsEntry); 1361 1362 return JITEntryPointsWithRef(codeRef, codeRef.code(), codeRef.code(), codeRef.code(), stackEntryPtr, stackEntryPtr); 1127 1363 } 1128 1364 -
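The refill sequence at the end of the direct-call thunk, and the arity handling in the specialized thunks, follow one pattern: reload "this" and each argument the caller actually passed from the frame, then pad the remaining argument registers with undefined. As straight-line C++, with a simplified frame layout and JSVALUE64's small tagged constant standing in for the undefined encoding:

    #include <cstdint>
    #include <vector>

    // Stand-in for JSValue::encode(jsUndefined()) under JSVALUE64.
    constexpr uint64_t kEncodedUndefined = 0xa;

    std::vector<uint64_t> refillArgumentRegisters(const uint64_t* frameArgs,
                                                  uint32_t argCountIncludingThis,
                                                  uint32_t numArgRegs)
    {
        std::vector<uint64_t> regs(numArgRegs);
        regs[0] = frameArgs[0]; // "this" is always present
        for (uint32_t i = 1; i < numArgRegs; i++) {
            // Matches branch32(BelowOrEqual, argCountReg, Imm32(i)) above:
            // fill undefined when the caller passed i or fewer values.
            regs[i] = argCountIncludingThis > i ? frameArgs[i] : kEncodedUndefined;
        }
        return regs;
    }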
trunk/Source/JavaScriptCore/jit/ThunkGenerators.h
r206525 r209653 27 27 28 28 #include "CodeSpecializationKind.h" 29 #include "JITEntryPoints.h" 29 30 #include "ThunkGenerator.h" 30 31 … … 37 38 38 39 MacroAssemblerCodeRef linkCallThunk(VM*, CallLinkInfo&, CodeSpecializationKind); 39 MacroAssemblerCodeRef linkCallThunkGenerator(VM*); 40 MacroAssemblerCodeRef linkPolymorphicCallThunkGenerator(VM*); 40 JITJSCallThunkEntryPointsWithRef linkCallThunkGenerator(VM*); 41 JITJSCallThunkEntryPointsWithRef linkDirectCallThunkGenerator(VM*); 42 JITJSCallThunkEntryPointsWithRef linkPolymorphicCallThunkGenerator(VM*); 41 43 42 MacroAssemblerCodeRef virtualThunkFor(VM*, CallLinkInfo&);44 JITJSCallThunkEntryPointsWithRef virtualThunkFor(VM*, CallLinkInfo&); 43 45 44 MacroAssemblerCodeRef nativeCallGenerator(VM*);45 MacroAssemblerCodeRef nativeConstructGenerator(VM*);46 JITEntryPointsWithRef nativeCallGenerator(VM*); 47 JITEntryPointsWithRef nativeConstructGenerator(VM*); 46 48 MacroAssemblerCodeRef nativeTailCallGenerator(VM*); 47 49 MacroAssemblerCodeRef nativeTailCallWithoutSavedTagsGenerator(VM*); … … 49 51 MacroAssemblerCodeRef unreachableGenerator(VM*); 50 52 51 MacroAssemblerCodeRef charCodeAtThunkGenerator(VM*);52 MacroAssemblerCodeRef charAtThunkGenerator(VM*);53 MacroAssemblerCodeRef clz32ThunkGenerator(VM*);54 MacroAssemblerCodeRef fromCharCodeThunkGenerator(VM*);55 MacroAssemblerCodeRef absThunkGenerator(VM*);56 MacroAssemblerCodeRef ceilThunkGenerator(VM*);57 MacroAssemblerCodeRef expThunkGenerator(VM*);58 MacroAssemblerCodeRef floorThunkGenerator(VM*);59 MacroAssemblerCodeRef logThunkGenerator(VM*);60 MacroAssemblerCodeRef roundThunkGenerator(VM*);61 MacroAssemblerCodeRef sqrtThunkGenerator(VM*);62 MacroAssemblerCodeRef imulThunkGenerator(VM*);63 MacroAssemblerCodeRef randomThunkGenerator(VM*);64 MacroAssemblerCodeRef truncThunkGenerator(VM*);53 JITEntryPointsWithRef charCodeAtThunkGenerator(VM*); 54 JITEntryPointsWithRef charAtThunkGenerator(VM*); 55 JITEntryPointsWithRef clz32ThunkGenerator(VM*); 56 JITEntryPointsWithRef fromCharCodeThunkGenerator(VM*); 57 JITEntryPointsWithRef absThunkGenerator(VM*); 58 JITEntryPointsWithRef ceilThunkGenerator(VM*); 59 JITEntryPointsWithRef expThunkGenerator(VM*); 60 JITEntryPointsWithRef floorThunkGenerator(VM*); 61 JITEntryPointsWithRef logThunkGenerator(VM*); 62 JITEntryPointsWithRef roundThunkGenerator(VM*); 63 JITEntryPointsWithRef sqrtThunkGenerator(VM*); 64 JITEntryPointsWithRef imulThunkGenerator(VM*); 65 JITEntryPointsWithRef randomThunkGenerator(VM*); 66 JITEntryPointsWithRef truncThunkGenerator(VM*); 65 67 66 MacroAssemblerCodeRef boundThisNoArgsFunctionCallGenerator(VM* vm);68 JITEntryPointsWithRef boundThisNoArgsFunctionCallGenerator(VM*); 67 69 68 70 } -
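With the generators retyped, a call site obtains a whole bundle of entries instead of one pointer. A usage sketch against the caching APIs this changeset adds to JITThunks and VM (a fragment, not a complete function; entryFor() is taken from its use in Repatch.cpp above):

    // Specialized thunk: all of its entry points, generated once and cached.
    JITEntryPointsWithRef entries = vm->jitStubs->jitEntryStub(vm, charCodeAtThunkGenerator);

    // Call-link thunk: pick the entry matching how this caller passes arguments.
    JITJSCallThunkEntryPointsWithRef linkThunk =
        vm->getJITCallThunkEntryStub(linkCallThunkGenerator);
    MacroAssemblerCodePtr slowEntry = linkThunk.entryFor(callLinkInfo.argumentsLocation());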
trunk/Source/JavaScriptCore/jsc.cpp
r209630 r209653 3252 3252 result = runJSC(vm, options); 3253 3253 3254 #if ENABLE(VM_COUNTERS) 3255 vm->dumpCounters(); 3256 #endif 3254 3257 if (Options::gcAtEnd()) { 3255 3258 // We need to hold the API lock to do a GC. -
trunk/Source/JavaScriptCore/llint/LLIntEntrypoint.cpp
r192937 → r209653

     if (kind == CodeForCall) {
         codeBlock->setJITCode(
-            adoptRef(new DirectJITCode(vm.getCTIStub(functionForCallEntryThunkGenerator), vm.getCTIStub(functionForCallArityCheckThunkGenerator).code(), JITCode::InterpreterThunk)));
+            adoptRef(new DirectJITCode(
+                JITEntryPointsWithRef(vm.getCTIStub(functionForRegisterCallEntryThunkGenerator),
+                    vm.getCTIStub(functionForRegisterCallEntryThunkGenerator).code(),
+                    vm.getCTIStub(functionForRegisterCallEntryThunkGenerator).code(),
+                    vm.getCTIStub(functionForRegisterCallArityCheckThunkGenerator).code(),
+                    vm.getCTIStub(functionForStackCallEntryThunkGenerator).code(),
+                    vm.getCTIStub(functionForStackCallArityCheckThunkGenerator).code()),
+                JITCode::InterpreterThunk)));
         return;
     }
     ASSERT(kind == CodeForConstruct);
     codeBlock->setJITCode(
-        adoptRef(new DirectJITCode(vm.getCTIStub(functionForConstructEntryThunkGenerator), vm.getCTIStub(functionForConstructArityCheckThunkGenerator).code(), JITCode::InterpreterThunk)));
+        adoptRef(new DirectJITCode(
+            JITEntryPointsWithRef(vm.getCTIStub(functionForRegisterCallEntryThunkGenerator),
+                vm.getCTIStub(functionForRegisterConstructEntryThunkGenerator).code(),
+                vm.getCTIStub(functionForRegisterConstructEntryThunkGenerator).code(),
+                vm.getCTIStub(functionForRegisterConstructArityCheckThunkGenerator).code(),
+                vm.getCTIStub(functionForStackConstructEntryThunkGenerator).code(),
+                vm.getCTIStub(functionForStackConstructArityCheckThunkGenerator).code()),
+            JITCode::InterpreterThunk)));
     return;
 }
…
     if (kind == CodeForCall) {
         codeBlock->setJITCode(
-            adoptRef(new DirectJITCode(MacroAssemblerCodeRef::createLLIntCodeRef(llint_function_for_call_prologue), MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_call_arity_check), JITCode::InterpreterThunk)));
+            adoptRef(new DirectJITCode(
+                JITEntryPointsWithRef(MacroAssemblerCodeRef::createLLIntCodeRef(llint_function_for_call_prologue),
+                    MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_call_prologue),
+                    MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_call_prologue),
+                    MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_call_prologue),
+                    MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_call_arity_check),
+                    MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_call_arity_check)),
+                JITCode::InterpreterThunk)));
         return;
     }
     ASSERT(kind == CodeForConstruct);
     codeBlock->setJITCode(
-        adoptRef(new DirectJITCode(MacroAssemblerCodeRef::createLLIntCodeRef(llint_function_for_construct_prologue), MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_construct_arity_check), JITCode::InterpreterThunk)));
+        adoptRef(new DirectJITCode(
+            JITEntryPointsWithRef(MacroAssemblerCodeRef::createLLIntCodeRef(llint_function_for_construct_prologue),
+                MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_construct_prologue),
+                MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_construct_prologue),
+                MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_construct_prologue),
+                MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_construct_arity_check),
+                MacroAssemblerCodePtr::createLLIntCodePtr(llint_function_for_construct_arity_check)),
+            JITCode::InterpreterThunk)));
 }
…
     if (vm.canUseJIT()) {
         codeBlock->setJITCode(
-            adoptRef(new DirectJITCode(vm.getCTIStub(evalEntryThunkGenerator), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
-        return;
-    }
-#endif // ENABLE(JIT)
-
-    UNUSED_PARAM(vm);
-    codeBlock->setJITCode(
-        adoptRef(new DirectJITCode(MacroAssemblerCodeRef::createLLIntCodeRef(llint_eval_prologue), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
+            adoptRef(new DirectJITCode(
+                JITEntryPointsWithRef(vm.getCTIStub(evalEntryThunkGenerator),
+                    MacroAssemblerCodePtr(),
+                    MacroAssemblerCodePtr(),
+                    MacroAssemblerCodePtr(),
+                    vm.getCTIStub(evalEntryThunkGenerator).code(),
+                    vm.getCTIStub(evalEntryThunkGenerator).code()),
+                JITCode::InterpreterThunk)));
+        return;
+    }
+#endif // ENABLE(JIT)
+
+    UNUSED_PARAM(vm);
+    codeBlock->setJITCode(
+        adoptRef(new DirectJITCode(
+            JITEntryPointsWithRef(MacroAssemblerCodeRef::createLLIntCodeRef(llint_eval_prologue),
+                MacroAssemblerCodePtr(),
+                MacroAssemblerCodePtr(),
+                MacroAssemblerCodePtr(),
+                MacroAssemblerCodeRef::createLLIntCodeRef(llint_eval_prologue).code(),
+                MacroAssemblerCodeRef::createLLIntCodeRef(llint_eval_prologue).code()),
+            JITCode::InterpreterThunk)));
 }
…
     if (vm.canUseJIT()) {
         codeBlock->setJITCode(
-            adoptRef(new DirectJITCode(vm.getCTIStub(programEntryThunkGenerator), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
-        return;
-    }
-#endif // ENABLE(JIT)
-
-    UNUSED_PARAM(vm);
-    codeBlock->setJITCode(
-        adoptRef(new DirectJITCode(MacroAssemblerCodeRef::createLLIntCodeRef(llint_program_prologue), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
+            adoptRef(new DirectJITCode(
+                JITEntryPointsWithRef(vm.getCTIStub(programEntryThunkGenerator),
+                    MacroAssemblerCodePtr(),
+                    MacroAssemblerCodePtr(),
+                    MacroAssemblerCodePtr(),
+                    vm.getCTIStub(programEntryThunkGenerator).code(),
+                    vm.getCTIStub(programEntryThunkGenerator).code()),
+                JITCode::InterpreterThunk)));
+        return;
+    }
+#endif // ENABLE(JIT)
+
+    UNUSED_PARAM(vm);
+    codeBlock->setJITCode(
+        adoptRef(new DirectJITCode(
+            JITEntryPointsWithRef(MacroAssemblerCodeRef::createLLIntCodeRef(llint_program_prologue),
+                MacroAssemblerCodePtr(),
+                MacroAssemblerCodePtr(),
+                MacroAssemblerCodePtr(),
+                MacroAssemblerCodePtr::createLLIntCodePtr(llint_program_prologue),
+                MacroAssemblerCodePtr::createLLIntCodePtr(llint_program_prologue)),
+            JITCode::InterpreterThunk)));
 }
…
     if (vm.canUseJIT()) {
         codeBlock->setJITCode(
-            adoptRef(new DirectJITCode(vm.getCTIStub(moduleProgramEntryThunkGenerator), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
-        return;
-    }
-#endif // ENABLE(JIT)
-
-    UNUSED_PARAM(vm);
-    codeBlock->setJITCode(
-        adoptRef(new DirectJITCode(MacroAssemblerCodeRef::createLLIntCodeRef(llint_module_program_prologue), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
+            adoptRef(new DirectJITCode(
+                JITEntryPointsWithRef(vm.getCTIStub(moduleProgramEntryThunkGenerator),
+                    MacroAssemblerCodePtr(),
+                    MacroAssemblerCodePtr(),
+                    MacroAssemblerCodePtr(),
+                    vm.getCTIStub(moduleProgramEntryThunkGenerator).code(),
+                    vm.getCTIStub(moduleProgramEntryThunkGenerator).code()),
+                JITCode::InterpreterThunk)));
+        return;
+    }
+#endif // ENABLE(JIT)
+
+    UNUSED_PARAM(vm);
+    codeBlock->setJITCode(
+        adoptRef(new DirectJITCode(
+            JITEntryPointsWithRef(MacroAssemblerCodeRef::createLLIntCodeRef(llint_module_program_prologue),
+                MacroAssemblerCodePtr(),
+                MacroAssemblerCodePtr(),
+                MacroAssemblerCodePtr(),
+                MacroAssemblerCodePtr::createLLIntCodePtr(llint_module_program_prologue),
+                MacroAssemblerCodePtr::createLLIntCodePtr(llint_module_program_prologue)),
+            JITCode::InterpreterThunk)));
 }
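The six-argument JITEntryPointsWithRef bundle used above is declared in the newly added JITEntryPoints.h, which this excerpt does not include. Reading the arguments off the call sites, they appear to be a code reference followed by one pointer per entry type. A minimal standalone model of that shape, with the entry-type ordering inferred from the call sites (an assumption, not the actual header):

    #include <array>
    #include <cstddef>

    // Stand-in for MacroAssemblerCodePtr; the real type wraps an executable address.
    struct CodePtr {
        void* value = nullptr;
        explicit operator bool() const { return value; }
    };

    // Ordering inferred from the constructor calls above: three register-args
    // slots, then two stack-args slots. This is a guess at the real enum.
    enum EntryPointType {
        RegisterArgsArityCheckNotRequired,
        RegisterArgsPossibleExtraArgs,
        RegisterArgsMustCheckArity,
        StackArgsArityCheckNotRequired,
        StackArgsMustCheckArity,
        NumberOfEntryPointTypes
    };

    class JITEntryPoints {
    public:
        CodePtr entryFor(EntryPointType type) const { return m_entries[type]; }
        void setEntryFor(EntryPointType type, CodePtr entry) { m_entries[type] = entry; }
        void clearEntries() { m_entries.fill(CodePtr()); }

        // Mirrors the offsetOfEntryFor() arithmetic used in ExecutableBase below.
        static ptrdiff_t offsetOfEntryFor(EntryPointType type)
        {
            return offsetof(JITEntryPoints, m_entries) + type * sizeof(CodePtr);
        }

    private:
        std::array<CodePtr, NumberOfEntryPointTypes> m_entries;
    };

Note that the pure-LLInt paths install the same interpreter prologue in every register-args slot, presumably because with the JIT disabled nothing ever dispatches through a register-args entry.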
trunk/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
r209433 → r209653

     if (kind == Prologue)
-        LLINT_RETURN_TWO(codeBlock->jitCode()->executableAddress(), 0);
+        LLINT_RETURN_TWO(codeBlock->jitCode()->addressForCall(StackArgsArityCheckNotRequired).executableAddress(), 0);
     ASSERT(kind == ArityCheck);
-    LLINT_RETURN_TWO(codeBlock->jitCode()->addressForCall(MustCheckArity).executableAddress(), 0);
+    LLINT_RETURN_TWO(codeBlock->jitCode()->addressForCall(StackArgsMustCheckArity).executableAddress(), 0);
 }
 #else // ENABLE(JIT)
…
     CodeBlock* codeBlock = 0;
     if (executable->isHostFunction()) {
-        codePtr = executable->entrypointFor(kind, MustCheckArity);
+        codePtr = executable->entrypointFor(kind, StackArgsMustCheckArity);
     } else {
         FunctionExecutable* functionExecutable = static_cast<FunctionExecutable*>(executable);
…
         codeBlock = *codeBlockSlot;
         ASSERT(codeBlock);
-        ArityCheckMode arity;
+        EntryPointType entryType;
         if (execCallee->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()))
-            arity = MustCheckArity;
+            entryType = StackArgsMustCheckArity;
         else
-            arity = ArityCheckNotRequired;
-        codePtr = functionExecutable->entrypointFor(kind, arity);
+            entryType = StackArgsArityCheckNotRequired;
+        codePtr = functionExecutable->entrypointFor(kind, entryType);
     }
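Both hunks make the same substitution: the old two-value ArityCheckMode gives way to stack/register-aware entry-type names, and the LLInt always selects a stack-args entry because it passes arguments on the stack. The arity decision itself is unchanged; restated as a standalone predicate (illustrative, not part of the patch):

    #include <cstddef>

    enum EntryPointType { StackArgsArityCheckNotRequired, StackArgsMustCheckArity };

    // A callee compiled for N parameters only needs the arity-checking entry
    // when the caller supplies fewer than N arguments (including |this|).
    EntryPointType stackEntryTypeFor(size_t argumentCountIncludingThis, size_t numParameters)
    {
        if (argumentCountIncludingThis < numParameters)
            return StackArgsMustCheckArity;
        return StackArgsArityCheckNotRequired;
    }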
trunk/Source/JavaScriptCore/llint/LLIntThunks.cpp
r207693 → r209653

 namespace LLInt {

+enum ShouldCreateRegisterEntry { CreateRegisterEntry, DontCreateRegisterEntry };
+
-static MacroAssemblerCodeRef generateThunkWithJumpTo(VM* vm, void (*target)(), const char *thunkKind)
+static MacroAssemblerCodeRef generateThunkWithJumpTo(VM* vm, void (*target)(), const char *thunkKind, ShouldCreateRegisterEntry shouldCreateRegisterEntry = DontCreateRegisterEntry)
 {
     JSInterfaceJIT jit(vm);

+#if USE(JSVALUE64)
+    if (shouldCreateRegisterEntry == CreateRegisterEntry)
+        jit.spillArgumentRegistersToFrameBeforePrologue();
+#else
+    UNUSED_PARAM(shouldCreateRegisterEntry);
+#endif
+
     // FIXME: there's probably a better way to do it on X86, but I'm not sure I care.
     jit.move(JSInterfaceJIT::TrustedImmPtr(bitwise_cast<void*>(target)), JSInterfaceJIT::regT0);
…
 }

-MacroAssemblerCodeRef functionForCallEntryThunkGenerator(VM* vm)
+MacroAssemblerCodeRef functionForRegisterCallEntryThunkGenerator(VM* vm)
 {
-    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_call_prologue), "function for call");
+    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_call_prologue), "function for register args call", CreateRegisterEntry);
 }

-MacroAssemblerCodeRef functionForConstructEntryThunkGenerator(VM* vm)
+MacroAssemblerCodeRef functionForStackCallEntryThunkGenerator(VM* vm)
 {
-    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_construct_prologue), "function for construct");
+    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_call_prologue), "function for stack args call");
 }

-MacroAssemblerCodeRef functionForCallArityCheckThunkGenerator(VM* vm)
+MacroAssemblerCodeRef functionForRegisterConstructEntryThunkGenerator(VM* vm)
 {
-    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_call_arity_check), "function for call with arity check");
+    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_construct_prologue), "function for register args construct", CreateRegisterEntry);
 }

-MacroAssemblerCodeRef functionForConstructArityCheckThunkGenerator(VM* vm)
+MacroAssemblerCodeRef functionForStackConstructEntryThunkGenerator(VM* vm)
 {
-    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_construct_arity_check), "function for construct with arity check");
+    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_construct_prologue), "function for stack args construct");
+}
+
+MacroAssemblerCodeRef functionForRegisterCallArityCheckThunkGenerator(VM* vm)
+{
+    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_call_arity_check), "function for register args call with arity check", CreateRegisterEntry);
+}
+
+MacroAssemblerCodeRef functionForStackCallArityCheckThunkGenerator(VM* vm)
+{
+    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_call_arity_check), "function for stack args call with arity check");
+}
+
+MacroAssemblerCodeRef functionForRegisterConstructArityCheckThunkGenerator(VM* vm)
+{
+    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_construct_arity_check), "function for register args construct with arity check", CreateRegisterEntry);
+}
+
+MacroAssemblerCodeRef functionForStackConstructArityCheckThunkGenerator(VM* vm)
+{
+    return generateThunkWithJumpTo(vm, LLInt::getCodeFunctionPtr(llint_function_for_construct_arity_check), "function for stack args construct with arity check");
 }
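All eight generators now share one body; a register-args thunk differs from its stack-args twin only in the spill emitted ahead of the jump, after which the LLInt sees the same fully materialized stack frame either way. A rough model of what that spill accomplishes (the real helper emits machine code and the slot layout is the existing call-frame convention; this function is purely illustrative):

    #include <cstddef>
    #include <cstdint>

    // Model: copy each argument register's JSValue bits into the call-frame
    // slot it would have occupied under the old all-on-stack convention, so
    // downstream code can stay register-oblivious.
    void spillArgumentRegistersToFrame(uint64_t* frameSlots, const uint64_t* argumentRegs, size_t registerCount)
    {
        for (size_t i = 0; i < registerCount; ++i)
            frameSlots[i] = argumentRegs[i];
    }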
trunk/Source/JavaScriptCore/llint/LLIntThunks.h
r207693 → r209653

 namespace LLInt {

-MacroAssemblerCodeRef functionForCallEntryThunkGenerator(VM*);
-MacroAssemblerCodeRef functionForConstructEntryThunkGenerator(VM*);
-MacroAssemblerCodeRef functionForCallArityCheckThunkGenerator(VM*);
-MacroAssemblerCodeRef functionForConstructArityCheckThunkGenerator(VM*);
+MacroAssemblerCodeRef functionForRegisterCallEntryThunkGenerator(VM*);
+MacroAssemblerCodeRef functionForStackCallEntryThunkGenerator(VM*);
+MacroAssemblerCodeRef functionForRegisterConstructEntryThunkGenerator(VM*);
+MacroAssemblerCodeRef functionForStackConstructEntryThunkGenerator(VM*);
+MacroAssemblerCodeRef functionForRegisterCallArityCheckThunkGenerator(VM*);
+MacroAssemblerCodeRef functionForStackCallArityCheckThunkGenerator(VM*);
+MacroAssemblerCodeRef functionForRegisterConstructArityCheckThunkGenerator(VM*);
+MacroAssemblerCodeRef functionForStackConstructArityCheckThunkGenerator(VM*);
 MacroAssemblerCodeRef evalEntryThunkGenerator(VM*);
 MacroAssemblerCodeRef programEntryThunkGenerator(VM*);
trunk/Source/JavaScriptCore/runtime/ArityCheckMode.h
r206525 → r209653

 enum ArityCheckMode {
+    RegisterEntry,
     ArityCheckNotRequired,
     MustCheckArity
trunk/Source/JavaScriptCore/runtime/ExecutableBase.cpp
r209433 → r209653

     m_jitCodeForCall = nullptr;
     m_jitCodeForConstruct = nullptr;
-    m_jitCodeForCallWithArityCheck = MacroAssemblerCodePtr();
-    m_jitCodeForConstructWithArityCheck = MacroAssemblerCodePtr();
+    m_jitEntriesForCall.clearEntries();
+    m_jitEntriesForConstruct.clearEntries();
 #endif
     m_numParametersForCall = NUM_PARAMETERS_NOT_COMPILED;
trunk/Source/JavaScriptCore/runtime/ExecutableBase.h
r209433 → r209653

 #pragma once

-#include "ArityCheckMode.h"
 #include "CallData.h"
 #include "CodeBlockHash.h"
…
 #include "InferredValue.h"
 #include "JITCode.h"
+#include "JITEntryPoints.h"
 #include "JSGlobalObject.h"
 #include "SourceCode.h"
…
     }

-    MacroAssemblerCodePtr entrypointFor(CodeSpecializationKind kind, ArityCheckMode arity)
+    MacroAssemblerCodePtr entrypointFor(CodeSpecializationKind kind, EntryPointType entryType)
     {
         // Check if we have a cached result. We only have it for arity check because we use the
         // no-arity entrypoint in non-virtual calls, which will "cache" this value directly in
         // machine code.
-        if (arity == MustCheckArity) {
-            switch (kind) {
-            case CodeForCall:
-                if (MacroAssemblerCodePtr result = m_jitCodeForCallWithArityCheck)
-                    return result;
-                break;
-            case CodeForConstruct:
-                if (MacroAssemblerCodePtr result = m_jitCodeForConstructWithArityCheck)
-                    return result;
-                break;
-            }
+        switch (kind) {
+        case CodeForCall:
+            if (MacroAssemblerCodePtr result = m_jitEntriesForCall.entryFor(entryType))
+                return result;
+            break;
+        case CodeForConstruct:
+            if (MacroAssemblerCodePtr result = m_jitEntriesForConstruct.entryFor(entryType))
+                return result;
+            break;
         }
         MacroAssemblerCodePtr result =
-            generatedJITCodeFor(kind)->addressForCall(arity);
-        if (arity == MustCheckArity) {
-            // Cache the result; this is necessary for the JIT's virtual call optimizations.
-            switch (kind) {
-            case CodeForCall:
-                m_jitCodeForCallWithArityCheck = result;
-                break;
-            case CodeForConstruct:
-                m_jitCodeForConstructWithArityCheck = result;
-                break;
-            }
+            generatedJITCodeFor(kind)->addressForCall(entryType);
+        // Cache the result; this is necessary for the JIT's virtual call optimizations.
+        switch (kind) {
+        case CodeForCall:
+            m_jitEntriesForCall.setEntryFor(entryType, result);
+            break;
+        case CodeForConstruct:
+            m_jitEntriesForConstruct.setEntryFor(entryType, result);
+            break;
         }
         return result;
     }

-    static ptrdiff_t offsetOfJITCodeWithArityCheckFor(
-        CodeSpecializationKind kind)
+    static ptrdiff_t offsetOfEntryFor(CodeSpecializationKind kind, EntryPointType entryPointType)
     {
         switch (kind) {
         case CodeForCall:
-            return OBJECT_OFFSETOF(ExecutableBase, m_jitCodeForCallWithArityCheck);
+            return OBJECT_OFFSETOF(ExecutableBase, m_jitEntriesForCall) + JITEntryPoints::offsetOfEntryFor(entryPointType);
         case CodeForConstruct:
-            return OBJECT_OFFSETOF(ExecutableBase, m_jitCodeForConstructWithArityCheck);
+            return OBJECT_OFFSETOF(ExecutableBase, m_jitEntriesForConstruct) + JITEntryPoints::offsetOfEntryFor(entryPointType);
         }
         RELEASE_ASSERT_NOT_REACHED();
…
     RefPtr<JITCode> m_jitCodeForCall;
     RefPtr<JITCode> m_jitCodeForConstruct;
-    MacroAssemblerCodePtr m_jitCodeForCallWithArityCheck;
-    MacroAssemblerCodePtr m_jitCodeForConstructWithArityCheck;
+    JITEntryPoints m_jitEntriesForCall;
+    JITEntryPoints m_jitEntriesForConstruct;
 };
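entrypointFor() previously cached only the MustCheckArity pointer; with a full JITEntryPoints table it can memoize every entry type, and offsetOfEntryFor() lets generated code index straight into that table. Reusing the model types from the sketch after the LLIntEntrypoint.cpp hunks, the lookup-generate-memoize flow reduces to (illustrative only; the generate callback stands in for generatedJITCodeFor()->addressForCall()):

    // Consult the per-kind entry table first, fall back to the generated code,
    // then memoize the result for the JIT's virtual-call machinery.
    CodePtr cachedEntrypointFor(JITEntryPoints& entries, CodePtr (*generate)(EntryPointType), EntryPointType type)
    {
        if (CodePtr cached = entries.entryFor(type))
            return cached;
        CodePtr result = generate(type);
        entries.setEntryFor(type, result);
        return result;
    }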
trunk/Source/JavaScriptCore/runtime/JSBoundFunction.cpp
r209229 → r209653

     if (executable->hasJITCodeForCall()) {
         // Force the executable to cache its arity entrypoint.
-        executable->entrypointFor(CodeForCall, MustCheckArity);
+        executable->entrypointFor(CodeForCall, StackArgsMustCheckArity);
     }
     CallData callData;
trunk/Source/JavaScriptCore/runtime/NativeExecutable.cpp
r208320 → r209653

     m_jitCodeForCall = callThunk;
     m_jitCodeForConstruct = constructThunk;
-    m_jitCodeForCallWithArityCheck = m_jitCodeForCall->addressForCall(MustCheckArity);
-    m_jitCodeForConstructWithArityCheck = m_jitCodeForConstruct->addressForCall(MustCheckArity);
+    m_jitEntriesForCall.setEntryFor(StackArgsMustCheckArity, m_jitCodeForCall->addressForCall(StackArgsMustCheckArity));
+    m_jitEntriesForConstruct.setEntryFor(StackArgsMustCheckArity, m_jitCodeForConstruct->addressForCall(StackArgsMustCheckArity));
     m_name = name;
trunk/Source/JavaScriptCore/runtime/ScriptExecutable.cpp
r209353 → r209653

     case CodeForCall:
         m_jitCodeForCall = genericCodeBlock ? genericCodeBlock->jitCode() : nullptr;
-        m_jitCodeForCallWithArityCheck = MacroAssemblerCodePtr();
+        m_jitEntriesForCall.clearEntries();
         m_numParametersForCall = genericCodeBlock ? genericCodeBlock->numParameters() : NUM_PARAMETERS_NOT_COMPILED;
         break;
     case CodeForConstruct:
         m_jitCodeForConstruct = genericCodeBlock ? genericCodeBlock->jitCode() : nullptr;
-        m_jitCodeForConstructWithArityCheck = MacroAssemblerCodePtr();
+        m_jitEntriesForConstruct.clearEntries();
         m_numParametersForConstruct = genericCodeBlock ? genericCodeBlock->numParameters() : NUM_PARAMETERS_NOT_COMPILED;
         break;
trunk/Source/JavaScriptCore/runtime/VM.cpp
r209570 → r209653

     setLastStackTop(stack.origin());

+#if ENABLE(VM_COUNTERS)
+    clearCounters();
+#endif
+
     // Need to be careful to keep everything consistent here
     JSLockHolder lock(this);
…
 #if ENABLE(JIT)
-static ThunkGenerator thunkGeneratorForIntrinsic(Intrinsic intrinsic)
+static JITEntryGenerator thunkGeneratorForIntrinsic(Intrinsic intrinsic)
 {
     switch (intrinsic) {
…
 #endif

+#if ENABLE(VM_COUNTERS)
+void VM::clearCounters()
+{
+    for (unsigned i = 0; i < NumberVMCounter; i++)
+        m_counters[i] = 0;
+}
+
+void VM::dumpCounters()
+{
+    size_t totalCalls = counterFor(BaselineCaller) + counterFor(DFGCaller) + counterFor(FTLCaller);
+    dataLog("#### VM Call counters ####\n");
+    dataLogF("%10zu Total calls\n", totalCalls);
+    dataLogF("%10zu Baseline calls\n", counterFor(BaselineCaller));
+    dataLogF("%10zu DFG calls\n", counterFor(DFGCaller));
+    dataLogF("%10zu FTL calls\n", counterFor(FTLCaller));
+    dataLogF("%10zu Vararg calls\n", counterFor(CallVarargs));
+    dataLogF("%10zu Tail calls\n", counterFor(TailCall));
+    dataLogF("%10zu Eval calls\n", counterFor(CallEval));
+    dataLogF("%10zu Direct calls\n", counterFor(DirectCall));
+    dataLogF("%10zu Polymorphic calls\n", counterFor(PolymorphicCall));
+    dataLogF("%10zu Virtual calls\n", counterFor(VirtualCall));
+    dataLogF("%10zu Virtual slow calls\n", counterFor(VirtualSlowCall));
+    dataLogF("%10zu Register args no arity\n", counterFor(RegArgsNoArity));
+    dataLogF("%10zu Stack args no arity\n", counterFor(StackArgsNoArity));
+    dataLogF("%10zu Register args extra arity\n", counterFor(RegArgsExtra));
+    dataLogF("%10zu Register args arity check\n", counterFor(RegArgsArity));
+    dataLogF("%10zu Stack args arity check\n", counterFor(StackArgsArity));
+    dataLogF("%10zu Arity fixups required\n", counterFor(ArityFixupRequired));
+}
+#endif
+
 } // namespace JSC
trunk/Source/JavaScriptCore/runtime/VM.h
r209630 → r209653

         return jitStubs->ctiStub(this, generator);
     }
+
+    JITEntryPointsWithRef getJITEntryStub(JITEntryGenerator generator)
+    {
+        return jitStubs->jitEntryStub(this, generator);
+    }
+
+    JITJSCallThunkEntryPointsWithRef getJITCallThunkEntryStub(JITCallThunkEntryGenerator generator)
+    {
+        return jitStubs->jitCallThunkEntryStub(this, generator);
+    }

     std::unique_ptr<RegisterAtOffsetList> allCalleeSaveRegisterOffsets;
…
     BumpPointerAllocator m_regExpAllocator;
     ConcurrentJSLock m_regExpAllocatorLock;
+
+    enum VMCounterType {
+        BaselineCaller,
+        DFGCaller,
+        FTLCaller,
+        CallVarargs,
+        TailCall,
+        CallEval,
+        DirectCall,
+        PolymorphicCall,
+        VirtualCall,
+        VirtualSlowCall,
+        RegArgsNoArity,
+        StackArgsNoArity,
+        RegArgsExtra,
+        RegArgsArity,
+        StackArgsArity,
+        ArityFixupRequired,
+        NumberVMCounter
+    };
+
+#if ENABLE(VM_COUNTERS)
+    size_t m_counters[NumberVMCounter];
+
+    void clearCounters();
+
+    size_t* addressOfCounter(VMCounterType counterType)
+    {
+        if (counterType >= NumberVMCounter)
+            return nullptr;
+
+        return &m_counters[counterType];
+    }
+
+    size_t counterFor(VMCounterType counterType)
+    {
+        if (counterType >= NumberVMCounter)
+            return 0;
+
+        return m_counters[counterType];
+    }
+
+    JS_EXPORT_PRIVATE void dumpCounters();
+#endif

     std::unique_ptr<HasOwnPropertyCache> m_hasOwnPropertyCache;
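addressOfCounter() returns raw storage so that generated code can bake the counter's address in at compile time and bump it with a single memory increment at run time, while C++ code shares the same slots. A standalone model of that flow (the three-counter table and all names here are invented for illustration; the real counters live on VM and are guarded by ENABLE(VM_COUNTERS)):

    #include <cstddef>
    #include <cstdio>

    struct CountersModel {
        size_t counters[3] = { 0, 0, 0 };
        size_t* addressOf(unsigned type) { return type < 3 ? &counters[type] : nullptr; }
    };

    int main()
    {
        CountersModel vmCounters;
        // The JIT would capture this address once, at code-generation time...
        size_t* baselineCalls = vmCounters.addressOf(0);
        // ...and the emitted code performs the equivalent of this increment per call.
        for (int i = 0; i < 5; ++i)
            ++*baselineCalls;
        std::printf("%10zu Baseline calls\n", vmCounters.counters[0]);
        return 0;
    }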
trunk/Source/JavaScriptCore/wasm/WasmBinding.cpp
r209597 → r209653

     }

-    GPRReg importJSCellGPRReg = GPRInfo::regT0; // Callee needs to be in regT0 for slow path below.
+    GPRReg importJSCellGPRReg = argumentRegisterForCallee();
     ASSERT(!wasmCC.m_calleeSaveRegisters.get(importJSCellGPRReg));
…

     CallLinkInfo* callLinkInfo = callLinkInfos.add();
-    callLinkInfo->setUpCall(CallLinkInfo::Call, CodeOrigin(), importJSCellGPRReg);
+    callLinkInfo->setUpCall(CallLinkInfo::Call, StackArgs, CodeOrigin(), importJSCellGPRReg);
     JIT::DataLabelPtr targetToCheck;
     JIT::TrustedImmPtr initialRightValue(0);
…
     JIT::Jump done = jit.jump();
     slowPath.link(&jit);
-    // Callee needs to be in regT0 here.
-    jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::regT2); // Link info needs to be in regT2.
+    jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); // Link info needs to be in nonArgGPR0
     JIT::Call slowCall = jit.nearCall();
     done.link(&jit);
…

     LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
-    patchBuffer.link(slowCall, FunctionPtr(vm->getCTIStub(linkCallThunkGenerator).code().executableAddress()));
+    patchBuffer.link(slowCall, FunctionPtr(vm->getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(StackArgs).executableAddress()));
     CodeLocationLabel callReturnLocation(patchBuffer.locationOfNearCall(slowCall));
     CodeLocationLabel hotPathBegin(patchBuffer.locationOf(targetToCheck));
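Replacing the hard-coded regT0 with argumentRegisterForCallee() matters because a scratch register like regT0 can now double as an argument register, and the link-info scratch likewise moves to nonArgGPR0 to stay out of the argument sequence. A hypothetical model of the accessor, assuming the callee cell rides in the first platform argument register (the real definition lives in register-info headers outside this excerpt):

    // Hypothetical: argument registers form an ordered sequence and the callee
    // occupies slot 0; real code returns a GPRInfo register for the platform ABI.
    enum GPRReg { argGPR0, argGPR1, argGPR2, argGPR3, argGPR4, argGPR5 };

    inline GPRReg argumentRegisterFor(unsigned argumentIndex)
    {
        return static_cast<GPRReg>(argGPR0 + argumentIndex);
    }

    inline GPRReg argumentRegisterForCallee() { return argumentRegisterFor(0); }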
trunk/Source/WTF/ChangeLog
r209632 → r209653

+2016-12-09  Michael Saboff  <[email protected]>
+
+        JSVALUE64: Pass arguments in platform argument registers when making JavaScript calls
+        https://bugs.webkit.org/show_bug.cgi?id=160355
+
+        Reviewed by Filip Pizlo.
+
+        Added a new build option ENABLE_VM_COUNTERS to enable JIT'able counters.
+        The default is for the option to be off.
+
+        * wtf/Platform.h:
+        Added ENABLE_VM_COUNTERS
+
 2016-12-09  Geoffrey Garen  <[email protected]>
trunk/Source/WTF/wtf/Platform.h
r209070 → r209653

 #endif

+/* This enables per VM counters available for use by JIT'ed code. */
+#define ENABLE_VM_COUNTERS 0
+
 /* The FTL *does not* work on 32-bit platforms. Disable it even if someone asked us to enable it. */
 #if USE(JSVALUE32_64)
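Since the option defaults to off, counter updates need to compile away completely in ordinary builds. A hypothetical convenience wrapper showing the expected guard pattern (not part of the patch; VM_COUNT and its use of addressOfCounter are illustrative):

    #if ENABLE(VM_COUNTERS)
    #define VM_COUNT(vm, counterType) (++*(vm).addressOfCounter(counterType))
    #else
    #define VM_COUNT(vm, counterType) ((void)0)
    #endif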