Changeset 250750 in webkit
- Timestamp: Oct 4, 2019, 3:20:57 PM (6 years ago)
- Location: trunk
- Files: 2 added, 2 deleted, 20 edited
trunk/JSTests/ChangeLog
(r250720 → r250750)

2019-10-04  Saam Barati  <[email protected]>

        Allow OSR exit to the LLInt
        https://bugs.webkit.org/show_bug.cgi?id=197993

        Reviewed by Tadeu Zagallo.

        * stress/exit-from-getter-by-val.js: Added.
        * stress/exit-from-setter-by-val.js: Added.

2019-10-04  Paulo Matos  <[email protected]>
-
trunk/Source/JavaScriptCore/ChangeLog
(r250725 → r250750)

2019-10-04  Saam Barati  <[email protected]>

        Allow OSR exit to the LLInt
        https://bugs.webkit.org/show_bug.cgi?id=197993

        Reviewed by Tadeu Zagallo.

        This patch makes it so we can OSR exit to the LLInt.
        Here are the interesting implementation details:

        1. We no longer baseline compile everything in the inline stack.

        2. When the top frame is an LLInt frame, we exit to the corresponding
        LLInt bytecode. However, we need to materialize the LLInt registers
        for PC, PB, and metadata.

        3. When dealing with inline call frames where the caller is LLInt, we
        need to return to the appropriate place. Consider an exit at a call
        A->B (A calls B), where A is LLInt. If A is a normal call, we place
        the return PC in the frame we materialize for B so that it is right
        after the LLInt's inline cache for calls. If A is a varargs call, we
        place it at the return location for varargs calls. The interesting
        scenario is where A is a getter/setter call: A might be get_by_id,
        get_by_val, put_by_id, or put_by_val. Since the LLInt does not have
        any form of IC for getters/setters, we make this work by creating new
        LLInt "return location" stubs for these opcodes.

        4. We need to update which callee saves we store in the callee if the
        caller frame is an LLInt frame. Consider an inline stack A->B->C,
        where A is an LLInt frame. When we materialize the stack frame for B,
        we need to ensure that the LLInt callee saves that A uses are stored
        into B's preserved callee saves. Specifically, this is just the
        PB/metadata registers.

        This patch also fixes offlineasm's macro expansion to allow us to
        use computed label names for global labels.

        In a future bug, I'm going to investigate some kind of control system
        for throwing away baseline code when we tier up:
        https://bugs.webkit.org/show_bug.cgi?id=202503

        * JavaScriptCore.xcodeproj/project.pbxproj:
        * Sources.txt:
        * bytecode/CodeBlock.h:
        (JSC::CodeBlock::metadataTable):
        (JSC::CodeBlock::instructionsRawPointer):
        * dfg/DFGOSRExit.cpp:
        (JSC::DFG::OSRExit::executeOSRExit):
        (JSC::DFG::reifyInlinedCallFrames):
        (JSC::DFG::adjustAndJumpToTarget):
        (JSC::DFG::OSRExit::compileOSRExit):
        * dfg/DFGOSRExit.h:
        (JSC::DFG::OSRExitState::OSRExitState):
        * dfg/DFGOSRExitCompilerCommon.cpp:
        (JSC::DFG::callerReturnPC):
        (JSC::DFG::calleeSaveSlot):
        (JSC::DFG::reifyInlinedCallFrames):
        (JSC::DFG::adjustAndJumpToTarget):
        * dfg/DFGOSRExitCompilerCommon.h:
        * dfg/DFGOSRExitPreparation.cpp:
        (JSC::DFG::prepareCodeOriginForOSRExit): Deleted.
        * dfg/DFGOSRExitPreparation.h:
        * ftl/FTLOSRExitCompiler.cpp:
        (JSC::FTL::compileFTLOSRExit):
        * llint/LLIntData.h:
        (JSC::LLInt::getCodePtr):
        * llint/LowLevelInterpreter.asm:
        * llint/LowLevelInterpreter32_64.asm:
        * llint/LowLevelInterpreter64.asm:
        * offlineasm/asm.rb:
        * offlineasm/transform.rb:
        * runtime/OptionsList.h:

2019-10-04  Truitt Savell  <[email protected]>
-
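The tier decision described above (exit into the LLInt when the target code block was never baseline-compiled, or when exits to the LLInt are forced) can be modeled with a minimal standalone sketch. All types and names below are simplified stand-ins, not JSC's actual classes: the real decision lives in `OSRExit::executeOSRExit`/`adjustAndJumpToTarget` and consults `Options::forceOSRExitToLLInt()` and the code block's `jitType()`.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical, simplified model of the OSR-exit target choice.
enum class Tier { InterpreterThunk, BaselineJIT };

struct ExitTarget {
    bool toLLInt;    // true: re-enter the LLInt, materializing PC/PB/metadata
    uintptr_t pc;    // bytecode index (LLInt) or machine-code address (baseline JIT)
};

ExitTarget chooseExitTarget(Tier targetTier, bool forceLLInt,
                            uintptr_t bytecodeIndex, uintptr_t jitAddress)
{
    // Mirrors: exitToLLInt = Options::forceOSRExitToLLInt()
    //                        || codeBlockForExit->jitType() == JITType::InterpreterThunk
    bool exitToLLInt = forceLLInt || targetTier == Tier::InterpreterThunk;
    if (exitToLLInt)
        return { true, bytecodeIndex };  // jump to the LLInt handler for this bytecode
    return { false, jitAddress };        // jump target looked up in the JITCodeMap
}
```

The key point of the patch is the first branch: before r250750, every code block in the inline stack had to be baseline-compiled (`prepareCodeOriginForOSRExit`) before an exit could land in it.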
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
(r250630 → r250750)

PBXBuildFile section:
-    0F235BEE17178E7300690C7F /* DFGOSRExitPreparation.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F235BEA17178E7300690C7F /* DFGOSRExitPreparation.h */; };

PBXFileReference section:
-    0F235BE917178E7300690C7F /* DFGOSRExitPreparation.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGOSRExitPreparation.cpp; path = dfg/DFGOSRExitPreparation.cpp; sourceTree = "<group>"; };
-    0F235BEA17178E7300690C7F /* DFGOSRExitPreparation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGOSRExitPreparation.h; path = dfg/DFGOSRExitPreparation.h; sourceTree = "<group>"; };

dfg group:
-    0F235BE917178E7300690C7F /* DFGOSRExitPreparation.cpp */,
-    0F235BEA17178E7300690C7F /* DFGOSRExitPreparation.h */,

Headers build phase:
-    0F235BEE17178E7300690C7F /* DFGOSRExitPreparation.h in Headers */,
-
trunk/Source/JavaScriptCore/Sources.txt
(r250630 → r250750)

     dfg/DFGOSRExitFuzz.cpp
     dfg/DFGOSRExitJumpPlaceholder.cpp
-    dfg/DFGOSRExitPreparation.cpp
     dfg/DFGObjectAllocationSinkingPhase.cpp
     dfg/DFGObjectMaterializationData.cpp
-
trunk/Source/JavaScriptCore/bytecode/CodeBlock.h
(r250655 → r250750)

     }

+    MetadataTable* metadataTable() { return m_metadata.get(); }
+    const void* instructionsRawPointer() { return m_instructionsRawPointer; }
+
 protected:
     void finalizeLLIntInlineCaches();
-
trunk/Source/JavaScriptCore/bytecode/InlineCallFrame.h
(r245239 → r250750)

 inline CodeBlock* baselineCodeBlockForOriginAndBaselineCodeBlock(const CodeOrigin& codeOrigin, CodeBlock* baselineCodeBlock)
 {
-    ASSERT(baselineCodeBlock->jitType() == JITType::BaselineJIT);
+    ASSERT(JITCode::isBaselineCode(baselineCodeBlock->jitType()));
     auto* inlineCallFrame = codeOrigin.inlineCallFrame();
     if (inlineCallFrame)
-
trunk/Source/JavaScriptCore/dfg/DFGOSRExit.cpp
(r250486 → r250750)

 #include "DFGMayExit.h"
 #include "DFGOSRExitCompilerCommon.h"
-#include "DFGOSRExitPreparation.h"
 #include "DFGOperations.h"
 #include "DFGSpeculativeJIT.h"
...
     // exit ramp code.

-    // Ensure we have baseline codeBlocks to OSR exit to.
-    prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
-
     CodeBlock* baselineCodeBlock = codeBlock->baselineAlternative();
-    ASSERT(baselineCodeBlock->jitType() == JITType::BaselineJIT);
+    ASSERT(JITCode::isBaselineCode(baselineCodeBlock->jitType()));

     SpeculationRecovery* recovery = nullptr;
...
     CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock);
-    const JITCodeMap& codeMap = codeBlockForExit->jitCodeMap();
-    CodeLocationLabel<JSEntryPtrTag> codeLocation = codeMap.find(exit.m_codeOrigin.bytecodeIndex());
-    ASSERT(codeLocation);
-
-    void* jumpTarget = codeLocation.executableAddress();
+    bool exitToLLInt = Options::forceOSRExitToLLInt() || codeBlockForExit->jitType() == JITType::InterpreterThunk;
+    void* jumpTarget;
+    if (exitToLLInt) {
+        unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();
+        const Instruction& currentInstruction = *codeBlockForExit->instructions().at(bytecodeOffset).ptr();
+        MacroAssemblerCodePtr<JSEntryPtrTag> destination = LLInt::getCodePtr<JSEntryPtrTag>(currentInstruction);
+        jumpTarget = destination.executableAddress();
+    } else {
+        const JITCodeMap& codeMap = codeBlockForExit->jitCodeMap();
+        CodeLocationLabel<JSEntryPtrTag> codeLocation = codeMap.find(exit.m_codeOrigin.bytecodeIndex());
+        ASSERT(codeLocation);
+        jumpTarget = codeLocation.executableAddress();
+    }

     // Compute the value recoveries.
...
     ptrdiff_t stackPointerOffset = -static_cast<ptrdiff_t>(codeBlock->jitCode()->dfgCommon()->requiredRegisterCountForExit) * sizeof(Register);

-    exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock, operands, WTFMove(undefinedOperandSpans), recovery, stackPointerOffset, activeThreshold, adjustedThreshold, jumpTarget, arrayProfile));
+    exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock, operands, WTFMove(undefinedOperandSpans), recovery, stackPointerOffset, activeThreshold, adjustedThreshold, jumpTarget, arrayProfile, exitToLLInt));

     if (UNLIKELY(vm.m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) {
...
     OSRExitState& exitState = *exit.exitState.get();
     CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock;
-    ASSERT(baselineCodeBlock->jitType() == JITType::BaselineJIT);
+    ASSERT(JITCode::isBaselineCode(baselineCodeBlock->jitType()));

     Operands<ValueRecovery>& operands = exitState.operands;
...
     // in presence of inlined tail calls.
     // https://bugs.webkit.org/show_bug.cgi?id=147511
-    ASSERT(outermostBaselineCodeBlock->jitType() == JITType::BaselineJIT);
+    ASSERT(JITCode::isBaselineCode(outermostBaselineCodeBlock->jitType()));
     frame.setOperand<CodeBlock*>(CallFrameSlot::codeBlock, outermostBaselineCodeBlock);
...
     void* callerFrame = cpu.fp();

+    bool callerIsLLInt = false;
+
     if (!trueCaller) {
         ASSERT(inlineCallFrame->isTail());
...
     CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock);
     unsigned callBytecodeIndex = trueCaller->bytecodeIndex();
-    MacroAssemblerCodePtr<JSInternalPtrTag> jumpTarget;
-
-    switch (trueCallerCallKind) {
-    case InlineCallFrame::Call:
-    case InlineCallFrame::Construct:
-    case InlineCallFrame::CallVarargs:
-    case InlineCallFrame::ConstructVarargs:
-    case InlineCallFrame::TailCall:
-    case InlineCallFrame::TailCallVarargs: {
-        CallLinkInfo* callLinkInfo =
-            baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
-        RELEASE_ASSERT(callLinkInfo);
-
-        jumpTarget = callLinkInfo->callReturnLocation();
-        break;
-    }
-
-    case InlineCallFrame::GetterCall:
-    case InlineCallFrame::SetterCall: {
-        StructureStubInfo* stubInfo =
-            baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
-        RELEASE_ASSERT(stubInfo);
-
-        jumpTarget = stubInfo->doneLocation();
-        break;
-    }
-
-    default:
-        RELEASE_ASSERT_NOT_REACHED();
-    }
+    void* jumpTarget = callerReturnPC(baselineCodeBlockForCaller, callBytecodeIndex, trueCallerCallKind, callerIsLLInt);

     if (trueCaller->inlineCallFrame())
         callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame()->stackOffset * sizeof(EncodedJSValue);

-    void* targetAddress = jumpTarget.executableAddress();
 #if CPU(ARM64E)
     void* newEntrySP = cpu.fp<uint8_t*>() + inlineCallFrame->returnPCOffset() + sizeof(void*);
-    targetAddress = retagCodePtr(targetAddress, JSInternalPtrTag, bitwise_cast<PtrTag>(newEntrySP));
-#endif
-    frame.set<void*>(inlineCallFrame->returnPCOffset(), targetAddress);
+    jumpTarget = tagCodePtr(jumpTarget, bitwise_cast<PtrTag>(newEntrySP));
+#endif
+    frame.set<void*>(inlineCallFrame->returnPCOffset(), jumpTarget);
 }
...
     // copy the prior contents of the tag registers already saved for the outer frame to this frame.
     saveOrCopyCalleeSavesFor(context, baselineCodeBlock, VirtualRegister(inlineCallFrame->stackOffset), !trueCaller);
+
+    if (callerIsLLInt) {
+        CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock);
+        frame.set<const void*>(calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::metadataTableGPR).offset, baselineCodeBlockForCaller->metadataTable());
+#if USE(JSVALUE64)
+        frame.set<const void*>(calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::pbGPR).offset, baselineCodeBlockForCaller->instructionsRawPointer());
+#endif
+    }

     if (!inlineCallFrame->isVarargs())
...
     vm.topCallFrame = context.fp<ExecState*>();
+
+    if (exitState->isJumpToLLInt) {
+        CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock);
+        unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();
+        const Instruction& currentInstruction = *codeBlockForExit->instructions().at(bytecodeOffset).ptr();
+
+        context.gpr(LLInt::Registers::metadataTableGPR) = bitwise_cast<uintptr_t>(codeBlockForExit->metadataTable());
+#if USE(JSVALUE64)
+        context.gpr(LLInt::Registers::pbGPR) = bitwise_cast<uintptr_t>(codeBlockForExit->instructionsRawPointer());
+        context.gpr(LLInt::Registers::pcGPR) = static_cast<uintptr_t>(exit.m_codeOrigin.bytecodeIndex());
+#else
+        context.gpr(LLInt::Registers::pcGPR) = bitwise_cast<uintptr_t>(&currentInstruction);
+#endif
+
+        if (exit.isExceptionHandler())
+            vm.targetInterpreterPCForThrow = &currentInstruction;
+    }
+
     context.pc() = untagCodePtr<JSEntryPtrTag>(jumpTarget);
 }
...
     EXCEPTION_ASSERT_UNUSED(scope, !!scope.exception() || !exit.isExceptionHandler());

-    prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
-
     // Compute the value recoveries.
     Operands<ValueRecovery> operands;
-
trunk/Source/JavaScriptCore/dfg/DFGOSRExit.h
(r248546 → r250750)

 struct OSRExitState : RefCounted<OSRExitState> {
-    OSRExitState(OSRExitBase& exit, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, Operands<ValueRecovery>& operands, Vector<UndefinedOperandSpan>&& undefinedOperandSpans, SpeculationRecovery* recovery, ptrdiff_t stackPointerOffset, int32_t activeThreshold, double memoryUsageAdjustedThreshold, void* jumpTarget, ArrayProfile* arrayProfile)
+    OSRExitState(OSRExitBase& exit, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, Operands<ValueRecovery>& operands, Vector<UndefinedOperandSpan>&& undefinedOperandSpans, SpeculationRecovery* recovery, ptrdiff_t stackPointerOffset, int32_t activeThreshold, double memoryUsageAdjustedThreshold, void* jumpTarget, ArrayProfile* arrayProfile, bool isJumpToLLInt)
         : exit(exit)
         , codeBlock(codeBlock)
...
         , jumpTarget(jumpTarget)
         , arrayProfile(arrayProfile)
+        , isJumpToLLInt(isJumpToLLInt)
     { }
...
     void* jumpTarget;
     ArrayProfile* arrayProfile;
+    bool isJumpToLLInt;

     ExtraInitializationLevel extraInitializationLevel;
-
trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
(r249880 → r250750)

 #include "JSCJSValueInlines.h"
 #include "JSCInlines.h"
+#include "LLIntData.h"
 #include "StructureStubInfo.h"

 namespace JSC { namespace DFG {
+
+// These are the LLInt OSR exit return points.
+extern "C" void op_call_return_location_narrow();
+extern "C" void op_call_return_location_wide_16();
+extern "C" void op_call_return_location_wide_32();
+
+extern "C" void op_construct_return_location_narrow();
+extern "C" void op_construct_return_location_wide_16();
+extern "C" void op_construct_return_location_wide_32();
+
+extern "C" void op_call_varargs_slow_return_location_narrow();
+extern "C" void op_call_varargs_slow_return_location_wide_16();
+extern "C" void op_call_varargs_slow_return_location_wide_32();
+
+extern "C" void op_construct_varargs_slow_return_location_narrow();
+extern "C" void op_construct_varargs_slow_return_location_wide_16();
+extern "C" void op_construct_varargs_slow_return_location_wide_32();
+
+extern "C" void op_get_by_id_return_location_narrow();
+extern "C" void op_get_by_id_return_location_wide_16();
+extern "C" void op_get_by_id_return_location_wide_32();
+
+extern "C" void op_get_by_val_return_location_narrow();
+extern "C" void op_get_by_val_return_location_wide_16();
+extern "C" void op_get_by_val_return_location_wide_32();
+
+extern "C" void op_put_by_id_return_location_narrow();
+extern "C" void op_put_by_id_return_location_wide_16();
+extern "C" void op_put_by_id_return_location_wide_32();
+
+extern "C" void op_put_by_val_return_location_narrow();
+extern "C" void op_put_by_val_return_location_wide_16();
+extern "C" void op_put_by_val_return_location_wide_32();

 void handleExitCounts(CCallHelpers& jit, const OSRExitBase& exit)
...
 }

+void* callerReturnPC(CodeBlock* baselineCodeBlockForCaller, unsigned callBytecodeIndex, InlineCallFrame::Kind trueCallerCallKind, bool& callerIsLLInt)
+{
+    callerIsLLInt = Options::forceOSRExitToLLInt() || baselineCodeBlockForCaller->jitType() == JITType::InterpreterThunk;
+
+    void* jumpTarget;
+
+    if (callerIsLLInt) {
+        const Instruction& callInstruction = *baselineCodeBlockForCaller->instructions().at(callBytecodeIndex).ptr();
+
+#define LLINT_RETURN_LOCATION(name) FunctionPtr<NoPtrTag>(callInstruction.isWide16() ? name##_return_location_wide_16 : (callInstruction.isWide32() ? name##_return_location_wide_32 : name##_return_location_narrow)).executableAddress()
+
+        switch (trueCallerCallKind) {
+        case InlineCallFrame::Call:
+            jumpTarget = LLINT_RETURN_LOCATION(op_call);
+            break;
+        case InlineCallFrame::Construct:
+            jumpTarget = LLINT_RETURN_LOCATION(op_construct);
+            break;
+        case InlineCallFrame::CallVarargs:
+            jumpTarget = LLINT_RETURN_LOCATION(op_call_varargs_slow);
+            break;
+        case InlineCallFrame::ConstructVarargs:
+            jumpTarget = LLINT_RETURN_LOCATION(op_construct_varargs_slow);
+            break;
+        case InlineCallFrame::GetterCall: {
+            if (callInstruction.opcodeID() == op_get_by_id)
+                jumpTarget = LLINT_RETURN_LOCATION(op_get_by_id);
+            else if (callInstruction.opcodeID() == op_get_by_val)
+                jumpTarget = LLINT_RETURN_LOCATION(op_get_by_val);
+            else
+                RELEASE_ASSERT_NOT_REACHED();
+            break;
+        }
+        case InlineCallFrame::SetterCall: {
+            if (callInstruction.opcodeID() == op_put_by_id)
+                jumpTarget = LLINT_RETURN_LOCATION(op_put_by_id);
+            else if (callInstruction.opcodeID() == op_put_by_val)
+                jumpTarget = LLINT_RETURN_LOCATION(op_put_by_val);
+            else
+                RELEASE_ASSERT_NOT_REACHED();
+            break;
+        }
+        default:
+            RELEASE_ASSERT_NOT_REACHED();
+        }
+
+#undef LLINT_RETURN_LOCATION
+
+    } else {
+        switch (trueCallerCallKind) {
+        case InlineCallFrame::Call:
+        case InlineCallFrame::Construct:
+        case InlineCallFrame::CallVarargs:
+        case InlineCallFrame::ConstructVarargs: {
+            CallLinkInfo* callLinkInfo =
+                baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
+            RELEASE_ASSERT(callLinkInfo);
+
+            jumpTarget = callLinkInfo->callReturnLocation().untaggedExecutableAddress();
+            break;
+        }
+
+        case InlineCallFrame::GetterCall:
+        case InlineCallFrame::SetterCall: {
+            StructureStubInfo* stubInfo =
+                baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
+            RELEASE_ASSERT(stubInfo);
+
+            jumpTarget = stubInfo->doneLocation().untaggedExecutableAddress();
+            break;
+        }
+
+        default:
+            RELEASE_ASSERT_NOT_REACHED();
+        }
+    }
+
+    return jumpTarget;
+}
+
+CCallHelpers::Address calleeSaveSlot(InlineCallFrame* inlineCallFrame, CodeBlock* baselineCodeBlock, GPRReg calleeSave)
+{
+    const RegisterAtOffsetList* calleeSaves = baselineCodeBlock->calleeSaveRegisters();
+    for (unsigned i = 0; i < calleeSaves->size(); i++) {
+        RegisterAtOffset entry = calleeSaves->at(i);
+        if (entry.reg() != calleeSave)
+            continue;
+        return CCallHelpers::Address(CCallHelpers::framePointerRegister, static_cast<VirtualRegister>(inlineCallFrame->stackOffset).offsetInBytes() + entry.offset());
+    }
+
+    RELEASE_ASSERT_NOT_REACHED();
+    return CCallHelpers::Address(CCallHelpers::framePointerRegister);
+}
+
 void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
 {
...
     // in presence of inlined tail calls.
     // https://bugs.webkit.org/show_bug.cgi?id=147511
-    ASSERT(jit.baselineCodeBlock()->jitType() == JITType::BaselineJIT);
+    ASSERT(JITCode::isBaselineCode(jit.baselineCodeBlock()->jitType()));
     jit.storePtr(AssemblyHelpers::TrustedImmPtr(jit.baselineCodeBlock()), AssemblyHelpers::addressFor((VirtualRegister)CallFrameSlot::codeBlock));
...
     CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind);
     GPRReg callerFrameGPR = GPRInfo::callFrameRegister;
+
+    bool callerIsLLInt = false;

     if (!trueCaller) {
...
     CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(*trueCaller);
     unsigned callBytecodeIndex = trueCaller->bytecodeIndex();
-    void* jumpTarget = nullptr;
-
-    switch (trueCallerCallKind) {
-    case InlineCallFrame::Call:
-    case InlineCallFrame::Construct:
-    case InlineCallFrame::CallVarargs:
-    case InlineCallFrame::ConstructVarargs:
-    case InlineCallFrame::TailCall:
-    case InlineCallFrame::TailCallVarargs: {
-        CallLinkInfo* callLinkInfo =
-            baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
-        RELEASE_ASSERT(callLinkInfo);
-
-        jumpTarget = callLinkInfo->callReturnLocation().untaggedExecutableAddress();
-        break;
-    }
-
-    case InlineCallFrame::GetterCall:
-    case InlineCallFrame::SetterCall: {
-        StructureStubInfo* stubInfo =
-            baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
-        RELEASE_ASSERT(stubInfo);
-
-        jumpTarget = stubInfo->doneLocation().untaggedExecutableAddress();
-        break;
-    }
-
-    default:
-        RELEASE_ASSERT_NOT_REACHED();
-    }
+    void* jumpTarget = callerReturnPC(baselineCodeBlockForCaller, callBytecodeIndex, trueCallerCallKind, callerIsLLInt);

     if (trueCaller->inlineCallFrame()) {
...
         trueCaller ? AssemblyHelpers::UseExistingTagRegisterContents : AssemblyHelpers::CopyBaselineCalleeSavedRegistersFromBaseFrame,
         GPRInfo::regT2);
+
+    if (callerIsLLInt) {
+        CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(*trueCaller);
+        jit.storePtr(CCallHelpers::TrustedImmPtr(baselineCodeBlockForCaller->metadataTable()), calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::metadataTableGPR));
+#if USE(JSVALUE64)
+        jit.storePtr(CCallHelpers::TrustedImmPtr(baselineCodeBlockForCaller->instructionsRawPointer()), calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::pbGPR));
+#endif
+    }

     if (!inlineCallFrame->isVarargs())
...
     CodeBlock* codeBlockForExit = jit.baselineCodeBlockFor(exit.m_codeOrigin);
     ASSERT(codeBlockForExit == codeBlockForExit->baselineVersion());
-    ASSERT(codeBlockForExit->jitType() == JITType::BaselineJIT);
-    CodeLocationLabel<JSEntryPtrTag> codeLocation = codeBlockForExit->jitCodeMap().find(exit.m_codeOrigin.bytecodeIndex());
-    ASSERT(codeLocation);
-
-    void* jumpTarget = codeLocation.retagged<OSRExitPtrTag>().executableAddress();
+    ASSERT(JITCode::isBaselineCode(codeBlockForExit->jitType()));
+
+    void* jumpTarget;
+    bool exitToLLInt = Options::forceOSRExitToLLInt() || codeBlockForExit->jitType() == JITType::InterpreterThunk;
+    if (exitToLLInt) {
+        unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex();
+        const Instruction& currentInstruction = *codeBlockForExit->instructions().at(bytecodeOffset).ptr();
+        MacroAssemblerCodePtr<JSEntryPtrTag> destination = LLInt::getCodePtr<JSEntryPtrTag>(currentInstruction);
+
+        if (exit.isExceptionHandler()) {
+            jit.move(CCallHelpers::TrustedImmPtr(&currentInstruction), GPRInfo::regT2);
+            jit.storePtr(GPRInfo::regT2, &vm.targetInterpreterPCForThrow);
+        }
+
+        jit.move(CCallHelpers::TrustedImmPtr(codeBlockForExit->metadataTable()), LLInt::Registers::metadataTableGPR);
+#if USE(JSVALUE64)
+        jit.move(CCallHelpers::TrustedImmPtr(codeBlockForExit->instructionsRawPointer()), LLInt::Registers::pbGPR);
+        jit.move(CCallHelpers::TrustedImm32(bytecodeOffset), LLInt::Registers::pcGPR);
+#else
+        jit.move(CCallHelpers::TrustedImmPtr(&currentInstruction), LLInt::Registers::pcGPR);
+#endif
+        jumpTarget = destination.retagged<OSRExitPtrTag>().executableAddress();
+    } else {
+        CodeLocationLabel<JSEntryPtrTag> codeLocation = codeBlockForExit->jitCodeMap().find(exit.m_codeOrigin.bytecodeIndex());
+        ASSERT(codeLocation);
+
+        jumpTarget = codeLocation.retagged<OSRExitPtrTag>().executableAddress();
+    }
+
     jit.addPtr(AssemblyHelpers::TrustedImm32(JIT::stackPointerOffsetFor(codeBlockForExit) * sizeof(Register)), GPRInfo::callFrameRegister, AssemblyHelpers::stackPointerRegister);
     if (exit.isExceptionHandler()) {
-
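The `calleeSaveSlot()` helper added above does a linear search over the baseline code block's callee-save list and offsets the match by the inline frame's stack offset. A minimal standalone sketch of that lookup, with simplified stand-in types (the real code returns a `CCallHelpers::Address` relative to the frame pointer and `RELEASE_ASSERT`s that the register is found):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for JSC's RegisterAtOffset: a saved register and its
// offset (in bytes) within the frame's callee-save area.
struct RegisterAtOffset {
    int reg;
    std::ptrdiff_t offset;
};

// Find the byte offset (relative to the frame pointer) where `calleeSave` is
// preserved for an inline frame whose slots start at `frameStackOffsetInBytes`.
std::ptrdiff_t calleeSaveSlotOffset(const std::vector<RegisterAtOffset>& calleeSaves,
                                    std::ptrdiff_t frameStackOffsetInBytes,
                                    int calleeSave)
{
    for (const RegisterAtOffset& entry : calleeSaves) {
        if (entry.reg != calleeSave)
            continue;
        // Slot = inline frame's base offset + this register's saved offset.
        return frameStackOffsetInBytes + entry.offset;
    }
    return -1; // the real helper asserts unreachable instead of returning a sentinel
}
```

This is the slot the OSR exit writes the caller's metadata-table and PB pointers into (point 4 of the ChangeLog), so the LLInt caller finds its registers restored when the callee returns.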
trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.h
(r249175 → r250750)

 void reifyInlinedCallFrames(CCallHelpers&, const OSRExitBase&);
 void adjustAndJumpToTarget(VM&, CCallHelpers&, const OSRExitBase&);
+void* callerReturnPC(CodeBlock* baselineCodeBlockForCaller, unsigned callBytecodeOffset, InlineCallFrame::Kind callerKind, bool& callerIsLLInt);
+CCallHelpers::Address calleeSaveSlot(InlineCallFrame*, CodeBlock* baselineCodeBlock, GPRReg calleeSave);

 template <typename JITCodeType>
-
trunk/Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
(r250629 → r250750)

 #include "BytecodeStructs.h"
 #include "DFGOSRExitCompilerCommon.h"
-#include "DFGOSRExitPreparation.h"
 #include "FTLExitArgumentForOperand.h"
 #include "FTLJITCode.h"
...
     }

-    prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
-
     compileStub(exitID, jitCode, exit, &vm, codeBlock);
-
trunk/Source/JavaScriptCore/llint/LLIntData.h
(r245906 → r250750)

 #pragma once

+#include "GPRInfo.h"
+#include "Instruction.h"
 #include "JSCJSValue.h"
 #include "MacroAssemblerCodeRef.h"
...
 class VM;
-struct Instruction;

 #if ENABLE(C_LOOP)
...
 template<PtrTag tag>
+ALWAYS_INLINE MacroAssemblerCodePtr<tag> getCodePtr(const Instruction& instruction)
+{
+    if (instruction.isWide16())
+        return getWide16CodePtr<tag>(instruction.opcodeID());
+    if (instruction.isWide32())
+        return getWide32CodePtr<tag>(instruction.opcodeID());
+    return getCodePtr<tag>(instruction.opcodeID());
+}
+
+template<PtrTag tag>
 ALWAYS_INLINE MacroAssemblerCodeRef<tag> getCodeRef(OpcodeID opcodeID)
 {
...
 }

+#if ENABLE(JIT)
+struct Registers {
+    static const GPRReg pcGPR = GPRInfo::regT4;
+
+#if CPU(X86_64) && !OS(WINDOWS)
+    static const GPRReg metadataTableGPR = GPRInfo::regCS1;
+    static const GPRReg pbGPR = GPRInfo::regCS2;
+#elif CPU(X86_64) && OS(WINDOWS)
+    static const GPRReg metadataTableGPR = GPRInfo::regCS3;
+    static const GPRReg pbGPR = GPRInfo::regCS4;
+#elif CPU(ARM64)
+    static const GPRReg metadataTableGPR = GPRInfo::regCS6;
+    static const GPRReg pbGPR = GPRInfo::regCS7;
+#elif CPU(MIPS) || CPU(ARM_THUMB2)
+    static const GPRReg metadataTableGPR = GPRInfo::regCS0;
+#endif
+};
+#endif
+
 } } // namespace JSC::LLInt
-
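The new `LLInt::getCodePtr(const Instruction&)` overload dispatches on the instruction's encoding width: each opcode has narrow, wide16, and wide32 LLInt entry points, and OSR exit must jump to the one matching how the bytecode was actually encoded. A self-contained sketch of that selection, with illustrative types rather than JSC's handler tables:

```cpp
#include <cassert>

// Hypothetical model: every opcode has three LLInt handler entry points, one
// per bytecode encoding width.
enum class Width { Narrow, Wide16, Wide32 };

struct HandlerSet {
    const void* narrow;
    const void* wide16;
    const void* wide32;
};

// Mirrors the shape of LLInt::getCodePtr(const Instruction&): check wide16,
// then wide32, and fall back to the narrow entry point.
const void* handlerFor(const HandlerSet& handlers, Width width)
{
    if (width == Width::Wide16)
        return handlers.wide16;
    if (width == Width::Wide32)
        return handlers.wide32;
    return handlers.narrow;
}
```

The same width check reappears in `callerReturnPC`'s `LLINT_RETURN_LOCATION` macro, which picks among the `_narrow`/`_wide_16`/`_wide_32` return-location labels defined in the offlineasm changes below.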
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
r250630 r250750
930 930	    end
931 931	
932	macro callTargetFunction(size, opcodeStruct, dispatch, callee, callPtrTag)
	932	macro defineOSRExitReturnLabel(opcodeName, size)
	933	    macro defineNarrow()
	934	        global _%opcodeName%_return_location_narrow
	935	        _%opcodeName%_return_location_narrow:
	936	    end
	937	
	938	    macro defineWide16()
	939	        global _%opcodeName%_return_location_wide_16
	940	        _%opcodeName%_return_location_wide_16:
	941	    end
	942	
	943	    macro defineWide32()
	944	        global _%opcodeName%_return_location_wide_32
	945	        _%opcodeName%_return_location_wide_32:
	946	    end
	947	
	948	    size(defineNarrow, defineWide16, defineWide32, macro (f) f() end)
	949	end
	950	
	951	macro callTargetFunction(opcodeName, size, opcodeStruct, dispatch, callee, callPtrTag)
933 952	    if C_LOOP or C_LOOP_WIN
934 953	        cloopCallJSFunction callee
… …
936 955	        call callee, callPtrTag
937 956	    end
	957	
	958	    defineOSRExitReturnLabel(opcodeName, size)
938 959	    restoreStackPointerAfterCall()
939 960	    dispatchAfterCall(size, opcodeStruct, dispatch)
… …
1005 1026	end
1006 1027	
1007	macro slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall)
	1028	macro slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall)
1008 1029	    callCallSlowPath(
1009 1030	        slowPath,
… …
1014 1035	            prepareCall(callee, t2, t3, t4, SlowPathPtrTag)
1015 1036	        .dontUpdateSP:
1016	            callTargetFunction(size, opcodeStruct, dispatch, callee, SlowPathPtrTag)
	1037	            callTargetFunction(%opcodeName%_slow, size, opcodeStruct, dispatch, callee, SlowPathPtrTag)
1017 1038	        end)
	1039	end
	1040	
	1041	macro getterSetterOSRExitReturnPoint(opName, size)
	1042	    crash() # We don't reach this in straight line code. We only reach it via returning to the code below when reconstructing stack frames during OSR exit.
	1043	
	1044	    defineOSRExitReturnLabel(opName, size)
	1045	
	1046	    restoreStackPointerAfterCall()
	1047	    loadi ArgumentCount + TagOffset[cfr], PC
1018 1048	end
… …
1742 1772	
1743 1773	
1744	macro doCallVarargs(size, opcodeStruct, dispatch, frameSlowPath, slowPath, prepareCall)
	1774	macro doCallVarargs(opcodeName, size, opcodeStruct, dispatch, frameSlowPath, slowPath, prepareCall)
1745 1775	    callSlowPath(frameSlowPath)
1746 1776	    branchIfException(_llint_throw_from_slow_path_trampoline)
… …
1757 1787	        end
1758 1788	    end
1759	    slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall)
	1789	    slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall)
1760 1790	end
1761 1791	
1762 1792	
1763 1793	llintOp(op_call_varargs, OpCallVarargs, macro (size, get, dispatch)
1764	    doCallVarargs(size, OpCallVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_call_varargs, prepareForRegularCall)
	1794	    doCallVarargs(op_call_varargs, size, OpCallVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_call_varargs, prepareForRegularCall)
1765 1795	end)
1766 1796	
… …
1769 1799	    # We lie and perform the tail call instead of preparing it since we can't
1770 1800	    # prepare the frame for a call opcode
1771	    doCallVarargs(size, OpTailCallVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_tail_call_varargs, prepareForTailCall)
	1801	    doCallVarargs(op_tail_call_varargs, size, OpTailCallVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_tail_call_varargs, prepareForTailCall)
1772 1802	end)
1773 1803	
… …
1777 1807	    # We lie and perform the tail call instead of preparing it since we can't
1778 1808	    # prepare the frame for a call opcode
1779	    doCallVarargs(size, OpTailCallForwardArguments, dispatch, _llint_slow_path_size_frame_for_forward_arguments, _llint_slow_path_tail_call_forward_arguments, prepareForTailCall)
	1809	    doCallVarargs(op_tail_call_forward_arguments, size, OpTailCallForwardArguments, dispatch, _llint_slow_path_size_frame_for_forward_arguments, _llint_slow_path_tail_call_forward_arguments, prepareForTailCall)
1780 1810	end)
1781 1811	
1782 1812	
1783 1813	llintOp(op_construct_varargs, OpConstructVarargs, macro (size, get, dispatch)
1784	    doCallVarargs(size, OpConstructVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_construct_varargs, prepareForRegularCall)
	1814	    doCallVarargs(op_construct_varargs, size, OpConstructVarargs, dispatch, _llint_slow_path_size_frame_for_varargs, _llint_slow_path_construct_varargs, prepareForRegularCall)
1785 1815	end)
1786 1816	
… …
1821 1851	_llint_op_call_eval:
1822 1852	    slowPathForCall(
	1853	        op_call_eval_narrow,
1823 1854	        narrow,
1824 1855	        OpCallEval,
… …
1829 1860	_llint_op_call_eval_wide16:
1830 1861	    slowPathForCall(
	1862	        op_call_eval_wide16,
1831 1863	        wide16,
1832 1864	        OpCallEval,
… …
1837 1869	_llint_op_call_eval_wide32:
1838 1870	    slowPathForCall(
	1871	        op_call_eval_wide32,
1839 1872	        wide32,
1840 1873	        OpCallEval,
-
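The return labels emitted above follow a fixed naming scheme: the opcode name plus the instruction width (`narrow`, `wide_16`, `wide_32`). A small Ruby sketch (the helper name is invented for illustration) of how the `_%opcodeName%_return_location_*` names are derived:

```ruby
# Hypothetical helper mirroring the label names produced by
# defineOSRExitReturnLabel above: one OSR-exit return label per opcode
# and per instruction width (narrow / wide_16 / wide_32).
def osr_exit_return_label(opcode_name, size)
  suffix = { narrow: "narrow", wide16: "wide_16", wide32: "wide_32" }.fetch(size)
  "_#{opcode_name}_return_location_#{suffix}"
end

puts osr_exit_return_label("op_call", :wide16)  # → _op_call_return_location_wide_16
```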
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
r249547 r250750
1399 1399	    callSlowPath(_llint_slow_path_get_by_id)
1400 1400	    dispatch()
	1401	
	1402	    # osr return point
	1403	    getterSetterOSRExitReturnPoint(op_get_by_id, size)
	1404	    metadata(t2, t3)
	1405	    valueProfile(OpGetById, t2, r1, r0)
	1406	    return(r1, r0)
	1407	
1401 1408	end)
… …
1461 1468	    callSlowPath(_llint_slow_path_put_by_id)
1462 1469	    dispatch()
	1470	
	1471	    # osr return point
	1472	    getterSetterOSRExitReturnPoint(op_put_by_id, size)
	1473	    dispatch()
	1474	
1463 1475	end)
… …
1512 1524	    callSlowPath(_llint_slow_path_get_by_val)
1513 1525	    dispatch()
1514	end)
1515	
1516	
1517	macro putByValOp(opcodeName, opcodeStruct)
	1526	
	1527	    # osr return point
	1528	    getterSetterOSRExitReturnPoint(op_get_by_val, size)
	1529	    metadata(t2, t3)
	1530	    valueProfile(OpGetByVal, t2, r1, r0)
	1531	    return(r1, r0)
	1532	
	1533	end)
	1534	
	1535	
	1536	macro putByValOp(opcodeName, opcodeStruct, osrExitPoint)
1518 1537	    llintOpWithMetadata(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, metadata, return)
1519 1538	        macro contiguousPutByVal(storeCallback)
… …
1603 1622	        callSlowPath(_llint_slow_path_%opcodeName%)
1604 1623	        dispatch()
	1624	
	1625	    .osrExitPoint:
	1626	        osrExitPoint(size, dispatch)
1605 1627	    end)
1606 1628	end
1607 1629	
1608 1630	
1609	putByValOp(put_by_val, OpPutByVal)
1610	
1611	putByValOp(put_by_val_direct, OpPutByValDirect)
	1631	putByValOp(put_by_val, OpPutByVal, macro (size, dispatch)
	1632	    # osr return point
	1633	    getterSetterOSRExitReturnPoint(op_put_by_val, size)
	1634	    dispatch()
	1635	end)
	1636	
	1637	putByValOp(put_by_val_direct, OpPutByValDirect, macro (a, b) end)
1612 1638	
… …
1876 1902	    move t3, sp
1877 1903	    prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag)
1878	    callTargetFunction(size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag)
	1904	    callTargetFunction(opcodeName, size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag)
1879 1905	
1880 1906	.opCallSlow:
1881	    slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall)
	1907	    slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall)
1882 1908	end)
1883 1909	end
-
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
r250313 r250750
1326 1326	end)
1327 1327	
1328	
1329 1328	llintOpWithMetadata(op_get_by_id, OpGetById, macro (size, get, dispatch, metadata, return)
1330 1329	    metadata(t2, t1)
… …
1377 1376	    callSlowPath(_llint_slow_path_get_by_id)
1378 1377	    dispatch()
	1378	
	1379	    # osr return point
	1380	    getterSetterOSRExitReturnPoint(op_get_by_id, size)
	1381	    metadata(t2, t3)
	1382	    valueProfile(OpGetById, t2, r0)
	1383	    return(r0)
	1384	
1379 1385	end)
… …
1449 1455	    callSlowPath(_llint_slow_path_put_by_id)
1450 1456	    dispatch()
	1457	
	1458	    # osr return point
	1459	    getterSetterOSRExitReturnPoint(op_put_by_id, size)
	1460	    dispatch()
	1461	
1451 1462	end)
… …
1620 1631	    callSlowPath(_llint_slow_path_get_by_val)
1621 1632	    dispatch()
1622	end)
1623	
1624	
1625	macro putByValOp(opcodeName, opcodeStruct)
	1633	
	1634	    # osr return point
	1635	    getterSetterOSRExitReturnPoint(op_get_by_val, size)
	1636	    metadata(t5, t2)
	1637	    valueProfile(OpGetByVal, t5, r0)
	1638	    return(r0)
	1639	
	1640	end)
	1641	
	1642	
	1643	macro putByValOp(opcodeName, opcodeStruct, osrExitPoint)
1626 1644	    llintOpWithMetadata(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, metadata, return)
1627 1645	        macro contiguousPutByVal(storeCallback)
… …
1711 1729	        callSlowPath(_llint_slow_path_%opcodeName%)
1712 1730	        dispatch()
	1731	
	1732	        osrExitPoint(size, dispatch)
	1733	
1713 1734	    end)
1714 1735	end
1715 1736	
1716	putByValOp(put_by_val, OpPutByVal)
1717	
1718	putByValOp(put_by_val_direct, OpPutByValDirect)
	1737	putByValOp(put_by_val, OpPutByVal, macro (size, dispatch)
	1738	    # osr return point
	1739	    getterSetterOSRExitReturnPoint(op_put_by_val, size)
	1740	    dispatch()
	1741	end)
	1742	
	1743	putByValOp(put_by_val_direct, OpPutByValDirect, macro (a, b) end)
1719 1744	
… …
2005 2030	    move t3, sp
2006 2031	    prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag)
2007	    callTargetFunction(size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag)
	2032	    callTargetFunction(opcodeName, size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag)
2008 2033	
2009 2034	.opCallSlow:
2010	    slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall)
	2035	    slowPathForCall(opcodeName, size, opcodeStruct, dispatch, slowPath, prepareCall)
2011 2036	end)
2012 2037	end
-
trunk/Source/JavaScriptCore/offlineasm/asm.rb
r245906 r250750
215 215	    def putsLabel(labelName, isGlobal)
216 216	        raise unless @state == :asm
217	        @deferredNextLabelActions.each {
218	            | action |
219	            action.call()
220	        }
	217	        unless isGlobal
	218	            @deferredNextLabelActions.each {
	219	                | action |
	220	                action.call()
	221	            }
	222	        end
221 223	        @deferredNextLabelActions = []
222 224	        @numGlobalLabels += 1
… …
402 404	    lowLevelAST.validate
403 405	    emitCodeInConfiguration(concreteSettings, lowLevelAST, backend) {
404 406	        $currentSettings = concreteSettings
405 407	        $asm.inAsm {
406 408	            lowLevelAST.lower(backend)
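The `putsLabel` change above defers label actions past global labels: actions queued for the "next label" fire only when that label is local. A standalone Ruby sketch of the behavior (class and method names invented for illustration; this is not the real offlineasm assembler):

```ruby
# Hypothetical miniature of the patched putsLabel bookkeeping: deferred
# actions run only when the next emitted label is local; the queue is
# cleared either way, mirroring the diff above.
class MiniAssembler
  attr_reader :emitted

  def initialize
    @deferred_next_label_actions = []
    @emitted = []
  end

  def record(line)
    @emitted << line
  end

  def defer_next_label_action(&action)
    @deferred_next_label_actions << action
  end

  def puts_label(name, is_global)
    @deferred_next_label_actions.each(&:call) unless is_global
    @deferred_next_label_actions = []
    record(is_global ? "global #{name}" : "#{name}:")
  end
end

asm = MiniAssembler.new
asm.defer_next_label_action { asm.record("# action A") }
asm.puts_label("_op_foo_return_location_narrow", true)   # global: A is not run
asm.defer_next_label_action { asm.record("# action B") }
asm.puts_label(".localLabel", false)                     # local: B runs first
```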
trunk/Source/JavaScriptCore/offlineasm/transform.rb
r237547 r250750
260 260	            end
261 261	        }
262	        Label.forName(codeOrigin, name, @definedInFile)
	262	        result = Label.forName(codeOrigin, name, @definedInFile)
	263	        result.setGlobal() if @global
	264	        result
263 265	    else
264 266	        self
… …
273 275	            mapping[var].name
274 276	        }
275	        Label.forName(codeOrigin, name, @definedInFile)
	277	        result = Label.forName(codeOrigin, name, @definedInFile)
	278	        result.setGlobal() if @global
	279	        result
276 280	    else
277 281	        self
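The transform.rb fix above matters because a label whose name is computed via macro substitution previously lost its global marking. A self-contained Ruby sketch of the idea (the `Label` class and helper here are simplified stand-ins, not the actual offlineasm AST):

```ruby
# Simplified model of offlineasm label substitution: when a name template
# such as "_%opcodeName%_return_location_narrow" is instantiated, the
# resulting interned Label must inherit the template's global flag.
class Label
  @@labels = {}
  attr_reader :name

  def self.for_name(name)
    @@labels[name] ||= new(name)
  end

  def initialize(name)
    @name = name
    @global = false
  end

  def set_global
    @global = true
  end

  def global?
    @global
  end
end

def substitute_label(template, mapping, is_global)
  name = template.gsub(/%(\w+)%/) { mapping[Regexp.last_match(1)] }
  result = Label.for_name(name)
  result.set_global if is_global  # the propagation the patch adds, in spirit
  result
end

label = substitute_label("_%opcodeName%_return_location_narrow",
                         { "opcodeName" => "op_get_by_id" }, true)
```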
trunk/Source/JavaScriptCore/runtime/OptionsList.h
r250559 r250750
465 465	    v(Double, dumpJITMemoryFlushInterval, 10, Restricted, "Maximum time in between flushes of the JIT memory dump in seconds.") \
466 466	    v(Bool, useUnlinkedCodeBlockJettisoning, false, Normal, "If true, UnlinkedCodeBlock can be jettisoned.") \
	467	    v(Bool, forceOSRExitToLLInt, false, Normal, "If true, we always exit to the LLInt. If false, we exit to whatever is most convenient.") \
467 468	
468 469	enum OptionEquivalence {
trunk/Tools/ChangeLog
r250746 r250750
	1	2019-10-04  Saam Barati  <[email protected]>
	2	
	3	        Allow OSR exit to the LLInt
	4	        https://bugs.webkit.org/show_bug.cgi?id=197993
	5	
	6	        Reviewed by Tadeu Zagallo.
	7	
	8	        * Scripts/run-jsc-stress-tests:
	9	
1 10	2019-10-04  Matt Lewis  <[email protected]>
2 11	
trunk/Tools/Scripts/run-jsc-stress-tests
r250559 r250750
496 496	FTL_OPTIONS = ["--useFTLJIT=true"]
497 497	PROBE_OSR_EXIT_OPTION = ["--useProbeOSRExit=true"]
	498	FORCE_LLINT_EXIT_OPTIONS = ["--forceOSRExitToLLInt=true"]
498 499	
499 500	require_relative "webkitruby/jsc-stress-test-writer-#{$testWriter}"
… …
709 710	
710 711	def runFTLNoCJITB3O0(*optionalTestSpecificOptions)
711	    run("ftl-no-cjit-b3o0", "--useArrayAllocationProfiling=false", "--forcePolyProto=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + B3O0_OPTIONS + optionalTestSpecificOptions))
	712	    run("ftl-no-cjit-b3o0", "--useArrayAllocationProfiling=false", "--forcePolyProto=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + B3O0_OPTIONS + FORCE_LLINT_EXIT_OPTIONS + optionalTestSpecificOptions))
712 713	end
… …
729 730	
730 731	def runDFGEager(*optionalTestSpecificOptions)
731	    run("dfg-eager", *(EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + PROBE_OSR_EXIT_OPTION + optionalTestSpecificOptions))
	732	    run("dfg-eager", *(EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + PROBE_OSR_EXIT_OPTION + FORCE_LLINT_EXIT_OPTIONS + optionalTestSpecificOptions))
732 733	end
… …
746 747	
747 748	def runFTLEagerNoCJITValidate(*optionalTestSpecificOptions)
748	    run("ftl-eager-no-cjit", "--validateGraph=true", "--airForceIRCAllocator=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + optionalTestSpecificOptions))
	749	    run("ftl-eager-no-cjit", "--validateGraph=true", "--airForceIRCAllocator=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + EAGER_OPTIONS + COLLECT_CONTINUOUSLY_OPTIONS + FORCE_LLINT_EXIT_OPTIONS + optionalTestSpecificOptions))
749 750	end
750 751	
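For context, run-jsc-stress-tests composes jsc command lines by concatenating Ruby arrays of option strings, which is how `--forceOSRExitToLLInt=true` reaches a test run. A minimal sketch (the `EAGER_OPTIONS` contents and helper name below are placeholders, not the script's real values; the other constants are taken from the diff):

```ruby
# Option groups as plain arrays, mirroring the style of run-jsc-stress-tests.
# EAGER_OPTIONS here is a placeholder value for illustration only.
FORCE_LLINT_EXIT_OPTIONS = ["--forceOSRExitToLLInt=true"]
PROBE_OSR_EXIT_OPTION = ["--useProbeOSRExit=true"]
EAGER_OPTIONS = ["--placeholderEagerOption=true"]

# Hypothetical analogue of runDFGEager's option assembly: base groups
# first, then any test-specific options appended at the end.
def dfg_eager_options(*extra)
  EAGER_OPTIONS + PROBE_OSR_EXIT_OPTION + FORCE_LLINT_EXIT_OPTIONS + extra
end

options = dfg_eager_options("--someTestSpecificOption=true")
```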