source: webkit/trunk/Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp

Last change on this file was 295614, checked in by [email protected], 3 years ago

[JSC] Always create StructureStubInfo for op_get_by_val
https://bugs.webkit.org/show_bug.cgi?id=241669
rdar://75146284

Reviewed by Saam Barati and Mark Lam.

DFG OSR exit requires a StructureStubInfo for getter / setter calls. However, a very generic baseline JIT
op_get_by_val does not create a StructureStubInfo, so OSR exit can crash because of this missing
StructureStubInfo. Consider the following edge case.

  1. Baseline detects that an op_get_by_val is very generic, so we do not create a StructureStubInfo for it.
  2. The function is inlined in the DFG, and the DFG emits an IC for this GetByVal.
  3. The DFG function from (2) collects information in its DFG-level IC, and in this inlined call path the access happens not to be so generic.
  4. Then, due to a different OSR exit or some other reason, we recompile the DFG code for this function with the same inlining as (2).
  5. The DFG sees that the DFG-level IC has more specialized information, so it can inline the getter call in this op_get_by_val.
  6. Inside this getter, we perform an OSR exit.
  7. Looking into Baseline, we find that there is no StructureStubInfo!

Now we always create a StructureStubInfo. In the very generic op_get_by_val case, we create it with tookSlowPath = true
and emit an empty inline path to record doneLocation, so OSR exit can jump to that location.
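
As a rough illustration of the scenario above, here is a minimal sketch of the kind of program that can reach
this path. The names and shapes are hypothetical (loosely modeled on the added stress test, not its actual
contents):

  function genericGet(o, key) {
      // This op_get_by_val sees many different shapes and keys, so the baseline
      // JIT treats it as very generic and (before this change) created no
      // StructureStubInfo for it.
      return o[key];
  }

  function monomorphicCaller() {
      let o = { get x() { return 42; } };
      // If the DFG inlines genericGet here, the DFG-level IC only ever sees this
      // one getter shape, so a later DFG compile can inline the getter call.
      // An OSR exit from inside that getter then has to find the baseline
      // op_get_by_val's StructureStubInfo.
      return genericGet(o, "x");
  }

  for (let i = 0; i < 1e5; ++i) {
      genericGet({ a: i }, "a");
      genericGet([i], 0);
      monomorphicCaller();
  }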

We also clean up StructureStubInfo code.

  1. "start" is renamed to startLocation. And we do not record it in DataIC case since it is not necessary.
  2. Rename inlineSize to inlineCodeSize.
  3. Add some assertions to ensure that this path is not used for DataIC case.
  4. We also record opcode value in the crashing RELEASE_ASSERT to get more information if this does not fix the issue.
  • Source/JavaScriptCore/bytecode/InlineAccess.cpp:

(JSC::linkCodeInline):
(JSC::InlineAccess::generateArrayLength):
(JSC::InlineAccess::generateStringLength):
(JSC::InlineAccess::rewireStubAsJumpInAccessNotUsingInlineAccess):
(JSC::InlineAccess::rewireStubAsJumpInAccess):
(JSC::InlineAccess::resetStubAsJumpInAccess):

  • Source/JavaScriptCore/bytecode/StructureStubInfo.cpp:

(JSC::StructureStubInfo::initializeFromUnlinkedStructureStubInfo):
(JSC::StructureStubInfo::initializeFromDFGUnlinkedStructureStubInfo):

  • Source/JavaScriptCore/bytecode/StructureStubInfo.h:

(JSC::StructureStubInfo::inlineCodeSize const):
(JSC::StructureStubInfo::inlineSize const): Deleted.

  • Source/JavaScriptCore/dfg/DFGInlineCacheWrapperInlines.h:

(JSC::DFG::InlineCacheWrapper<GeneratorType>::finalize):

  • Source/JavaScriptCore/dfg/DFGJITCode.h:
  • Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp:

(JSC::DFG::callerReturnPC):

  • Source/JavaScriptCore/jit/JIT.cpp:

(JSC::JIT::link):

  • Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp:

(JSC::JITInlineCacheGenerator::finalize):
(JSC::JITGetByValGenerator::generateEmptyPath):

  • Source/JavaScriptCore/jit/JITInlineCacheGenerator.h:
  • Source/JavaScriptCore/jit/JITPropertyAccess.cpp:

(JSC::JIT::emit_op_get_by_val):

  • JSTests/stress/get-by-val-generic-structurestubinfo.js: Added.

(let.program):
(runMono.let.o.get x):
(runMono):
(runPoly):

Canonical link: https://commits.webkit.org/251619@main

/*
 * Copyright (C) 2013-2020 Apple Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
 * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
 * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
 * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
 * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include "config.h"
#include "DFGOSRExitCompilerCommon.h"

#if ENABLE(DFG_JIT)

#include "CodeBlockInlines.h"
#include "DFGJITCode.h"
#include "DFGOperations.h"
#include "JIT.h"
#include "JSCJSValueInlines.h"
#include "LLIntData.h"
#include "LLIntThunks.h"
#include "StructureStubInfo.h"

namespace JSC { namespace DFG {

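// Emits the exit-count bookkeeping for an OSR exit: bumps the per-exit and per-CodeBlock
// counters, and either calls operationTriggerReoptimizationNow or backs off the baseline
// CodeBlock's execution counter so we only try to optimize again after a while.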
void handleExitCounts(VM& vm, CCallHelpers& jit, const OSRExitBase& exit)
{
    if (!exitKindMayJettison(exit.m_kind)) {
        // FIXME: We may want to notice that we're frequently exiting
        // at an op_catch that we didn't compile an entrypoint for, and
        // then trigger a reoptimization of this CodeBlock:
        // https://bugs.webkit.org/show_bug.cgi?id=175842
        return;
    }

    jit.add32(AssemblyHelpers::TrustedImm32(1), AssemblyHelpers::AbsoluteAddress(&exit.m_count));

    jit.move(AssemblyHelpers::TrustedImmPtr(jit.codeBlock()), GPRInfo::regT3);

    AssemblyHelpers::Jump tooFewFails;

    jit.load32(AssemblyHelpers::Address(GPRInfo::regT3, CodeBlock::offsetOfOSRExitCounter()), GPRInfo::regT2);
    jit.add32(AssemblyHelpers::TrustedImm32(1), GPRInfo::regT2);
    jit.store32(GPRInfo::regT2, AssemblyHelpers::Address(GPRInfo::regT3, CodeBlock::offsetOfOSRExitCounter()));

    jit.move(AssemblyHelpers::TrustedImmPtr(jit.baselineCodeBlock()), GPRInfo::regT0);
    AssemblyHelpers::Jump reoptimizeNow = jit.branch32(
        AssemblyHelpers::GreaterThanOrEqual,
        AssemblyHelpers::Address(GPRInfo::regT0, CodeBlock::offsetOfJITExecuteCounter()),
        AssemblyHelpers::TrustedImm32(0));

    // We want to figure out if there's a possibility that we're in a loop. For the outermost
    // code block in the inline stack, we handle this appropriately by having the loop OSR trigger
    // check the exit count of the replacement of the CodeBlock from which we are OSRing. The
    // problem is the inlined functions, which might also have loops, but whose baseline versions
    // don't know where to look for the exit count. Figure out if those loops are severe enough
    // that we had tried to OSR enter. If so, then we should use the loop reoptimization trigger.
    // Otherwise, we should use the normal reoptimization trigger.

    AssemblyHelpers::JumpList loopThreshold;

    for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
        loopThreshold.append(
            jit.branchTest8(
                AssemblyHelpers::NonZero,
                AssemblyHelpers::AbsoluteAddress(
                    inlineCallFrame->baselineCodeBlock->ownerExecutable()->addressOfDidTryToEnterInLoop())));
    }

    jit.move(
        AssemblyHelpers::TrustedImm32(jit.codeBlock()->exitCountThresholdForReoptimization()),
        GPRInfo::regT1);

    if (!loopThreshold.empty()) {
        AssemblyHelpers::Jump done = jit.jump();

        loopThreshold.link(&jit);
        jit.move(
            AssemblyHelpers::TrustedImm32(
                jit.codeBlock()->exitCountThresholdForReoptimizationFromLoop()),
            GPRInfo::regT1);

        done.link(&jit);
    }

    tooFewFails = jit.branch32(AssemblyHelpers::BelowOrEqual, GPRInfo::regT2, GPRInfo::regT1);

    reoptimizeNow.link(&jit);

    jit.setupArguments<decltype(operationTriggerReoptimizationNow)>(GPRInfo::regT0, GPRInfo::regT3, AssemblyHelpers::TrustedImmPtr(&exit));
    jit.prepareCallOperation(vm);
    jit.move(AssemblyHelpers::TrustedImmPtr(tagCFunction<OperationPtrTag>(operationTriggerReoptimizationNow)), GPRInfo::nonArgGPR0);
    jit.call(GPRInfo::nonArgGPR0, OperationPtrTag);
    AssemblyHelpers::Jump doneAdjusting = jit.jump();

    tooFewFails.link(&jit);

    // Adjust the execution counter such that the target is to only optimize after a while.
    int32_t activeThreshold =
        jit.baselineCodeBlock()->adjustedCounterValue(
            Options::thresholdForOptimizeAfterLongWarmUp());
    int32_t targetValue = applyMemoryUsageHeuristicsAndConvertToInt(
        activeThreshold, jit.baselineCodeBlock());
    int32_t clippedValue;
    switch (jit.codeBlock()->jitType()) {
    case JITType::DFGJIT:
        clippedValue = BaselineExecutionCounter::clippedThreshold(targetValue);
        break;
    case JITType::FTLJIT:
        clippedValue = UpperTierExecutionCounter::clippedThreshold(targetValue);
        break;
    default:
        RELEASE_ASSERT_NOT_REACHED();
#if COMPILER_QUIRK(CONSIDERS_UNREACHABLE_CODE)
        clippedValue = 0; // Make some compilers, and mhahnenberg, happy.
#endif
        break;
    }
    jit.store32(AssemblyHelpers::TrustedImm32(-clippedValue), AssemblyHelpers::Address(GPRInfo::regT0, CodeBlock::offsetOfJITExecuteCounter()));
    jit.store32(AssemblyHelpers::TrustedImm32(activeThreshold), AssemblyHelpers::Address(GPRInfo::regT0, CodeBlock::offsetOfJITExecutionActiveThreshold()));
    jit.store32(AssemblyHelpers::TrustedImm32(formattedTotalExecutionCount(clippedValue)), AssemblyHelpers::Address(GPRInfo::regT0, CodeBlock::offsetOfJITExecutionTotalCount()));

    doneAdjusting.link(&jit);
}

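// Computes the return PC in the baseline caller that a reified inline frame should return to:
// an LLInt return-location thunk when the caller will run in the LLInt, otherwise the
// doneLocation recorded by the caller's CallLinkInfo (calls) or StructureStubInfo (getter/setter calls).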
static MacroAssemblerCodePtr<JSEntryPtrTag> callerReturnPC(CodeBlock* baselineCodeBlockForCaller, BytecodeIndex callBytecodeIndex, InlineCallFrame::Kind trueCallerCallKind, bool& callerIsLLInt)
{
    callerIsLLInt = Options::forceOSRExitToLLInt() || baselineCodeBlockForCaller->jitType() == JITType::InterpreterThunk;

    if (callBytecodeIndex.checkpoint())
        return LLInt::checkpointOSRExitFromInlinedCallTrampolineThunk().code();

    MacroAssemblerCodePtr<JSEntryPtrTag> jumpTarget;

    const auto& callInstruction = *baselineCodeBlockForCaller->instructions().at(callBytecodeIndex).ptr();
    if (callerIsLLInt) {
#define LLINT_RETURN_LOCATION(name) LLInt::returnLocationThunk(name##_return_location, callInstruction.width()).code()

        switch (trueCallerCallKind) {
        case InlineCallFrame::Call: {
            if (callInstruction.opcodeID() == op_call)
                jumpTarget = LLINT_RETURN_LOCATION(op_call);
            else if (callInstruction.opcodeID() == op_iterator_open)
                jumpTarget = LLINT_RETURN_LOCATION(op_iterator_open);
            else if (callInstruction.opcodeID() == op_iterator_next)
                jumpTarget = LLINT_RETURN_LOCATION(op_iterator_next);
            break;
        }
        case InlineCallFrame::Construct:
            jumpTarget = LLINT_RETURN_LOCATION(op_construct);
            break;
        case InlineCallFrame::CallVarargs:
            jumpTarget = LLINT_RETURN_LOCATION(op_call_varargs);
            break;
        case InlineCallFrame::ConstructVarargs:
            jumpTarget = LLINT_RETURN_LOCATION(op_construct_varargs);
            break;
        case InlineCallFrame::GetterCall: {
            if (callInstruction.opcodeID() == op_get_by_id)
                jumpTarget = LLINT_RETURN_LOCATION(op_get_by_id);
            else if (callInstruction.opcodeID() == op_get_by_val)
                jumpTarget = LLINT_RETURN_LOCATION(op_get_by_val);
            else
                RELEASE_ASSERT_NOT_REACHED();
            break;
        }
        case InlineCallFrame::SetterCall: {
            if (callInstruction.opcodeID() == op_put_by_id)
                jumpTarget = LLINT_RETURN_LOCATION(op_put_by_id);
            else if (callInstruction.opcodeID() == op_put_by_val)
                jumpTarget = LLINT_RETURN_LOCATION(op_put_by_val);
            else
                RELEASE_ASSERT_NOT_REACHED();
            break;
        }
        default:
            RELEASE_ASSERT_NOT_REACHED();
        }

#undef LLINT_RETURN_LOCATION

    } else {
        switch (trueCallerCallKind) {
        case InlineCallFrame::Call:
        case InlineCallFrame::Construct:
        case InlineCallFrame::CallVarargs:
        case InlineCallFrame::ConstructVarargs: {
            CallLinkInfo* callLinkInfo = nullptr;
            {
                ConcurrentJSLocker locker(baselineCodeBlockForCaller->m_lock);
                callLinkInfo = baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(locker, callBytecodeIndex);
            }
            RELEASE_ASSERT(callLinkInfo);
            jumpTarget = callLinkInfo->doneLocation().retagged<JSEntryPtrTag>();
            break;
        }

        case InlineCallFrame::GetterCall:
        case InlineCallFrame::SetterCall: {
            StructureStubInfo* stubInfo = baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
            RELEASE_ASSERT(stubInfo, callInstruction.opcodeID());
            jumpTarget = stubInfo->doneLocation.retagged<JSEntryPtrTag>();
            break;
        }

        default:
            RELEASE_ASSERT_NOT_REACHED();
        }
    }

    ASSERT(jumpTarget);
    return jumpTarget;
}

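// Returns the address of the given callee-save register's spill slot within the reified
// inline call frame, relative to the machine frame pointer.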
CCallHelpers::Address calleeSaveSlot(InlineCallFrame* inlineCallFrame, CodeBlock* baselineCodeBlock, GPRReg calleeSave)
{
    const RegisterAtOffsetList* calleeSaves = baselineCodeBlock->jitCode()->calleeSaveRegisters();
    for (unsigned i = 0; i < calleeSaves->registerCount(); i++) {
        RegisterAtOffset entry = calleeSaves->at(i);
        if (entry.reg() != calleeSave)
            continue;
        return CCallHelpers::Address(CCallHelpers::framePointerRegister, static_cast<VirtualRegister>(inlineCallFrame->stackOffset).offsetInBytes() + entry.offset());
    }

    RELEASE_ASSERT_NOT_REACHED();
    return CCallHelpers::Address(CCallHelpers::framePointerRegister);
}

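// Rebuilds baseline-style call frames for every inlined frame on the exit's inline stack:
// return PCs, CodeBlock and callee slots, argument counts, call site indices, and the
// callee-save registers the baseline caller (LLInt or baseline JIT) expects to find.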
void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
{
    // FIXME: We shouldn't leave holes on the stack when performing an OSR exit
    // in presence of inlined tail calls.
    // https://bugs.webkit.org/show_bug.cgi?id=147511
    ASSERT(JITCode::isBaselineCode(jit.baselineCodeBlock()->jitType()));
    jit.storePtr(AssemblyHelpers::TrustedImmPtr(jit.baselineCodeBlock()), AssemblyHelpers::addressFor(CallFrameSlot::codeBlock));

    const CodeOrigin* codeOrigin;
    for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame(); codeOrigin = codeOrigin->inlineCallFrame()->getCallerSkippingTailCalls()) {
        InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame();
        CodeBlock* baselineCodeBlock = jit.baselineCodeBlockFor(*codeOrigin);
        InlineCallFrame::Kind trueCallerCallKind;
        CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind);
        GPRReg callerFrameGPR = GPRInfo::callFrameRegister;

        bool callerIsLLInt = false;

        if (!trueCaller) {
            ASSERT(inlineCallFrame->isTail());
            jit.loadPtr(AssemblyHelpers::Address(GPRInfo::callFrameRegister, CallFrame::returnPCOffset()), GPRInfo::regT3);
#if CPU(ARM64E)
            jit.addPtr(AssemblyHelpers::TrustedImm32(sizeof(CallerFrameAndPC)), GPRInfo::callFrameRegister, GPRInfo::regT2);
            jit.untagPtr(GPRInfo::regT2, GPRInfo::regT3);
            jit.addPtr(AssemblyHelpers::TrustedImm32(inlineCallFrame->returnPCOffset() + sizeof(void*)), GPRInfo::callFrameRegister, GPRInfo::regT2);
            jit.validateUntaggedPtr(GPRInfo::regT3, GPRInfo::regT4);
            jit.tagPtr(GPRInfo::regT2, GPRInfo::regT3);
#endif
            jit.storePtr(GPRInfo::regT3, AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
            jit.loadPtr(AssemblyHelpers::Address(GPRInfo::callFrameRegister, CallFrame::callerFrameOffset()), GPRInfo::regT3);
            callerFrameGPR = GPRInfo::regT3;
        } else {
            CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(*trueCaller);
            auto callBytecodeIndex = trueCaller->bytecodeIndex();
            MacroAssemblerCodePtr<JSEntryPtrTag> jumpTarget = callerReturnPC(baselineCodeBlockForCaller, callBytecodeIndex, trueCallerCallKind, callerIsLLInt);

            if (trueCaller->inlineCallFrame()) {
                jit.addPtr(
                    AssemblyHelpers::TrustedImm32(trueCaller->inlineCallFrame()->stackOffset * sizeof(EncodedJSValue)),
                    GPRInfo::callFrameRegister,
                    GPRInfo::regT3);
                callerFrameGPR = GPRInfo::regT3;
            }

#if CPU(ARM64E)
            jit.addPtr(AssemblyHelpers::TrustedImm32(inlineCallFrame->returnPCOffset() + sizeof(void*)), GPRInfo::callFrameRegister, GPRInfo::regT2);
            jit.move(AssemblyHelpers::TrustedImmPtr(jumpTarget.untaggedExecutableAddress()), GPRInfo::regT4);
            jit.tagPtr(GPRInfo::regT2, GPRInfo::regT4);
            jit.storePtr(GPRInfo::regT4, AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
#else
            jit.storePtr(AssemblyHelpers::TrustedImmPtr(jumpTarget.untaggedExecutableAddress()), AssemblyHelpers::addressForByteOffset(inlineCallFrame->returnPCOffset()));
#endif
        }

        jit.storePtr(AssemblyHelpers::TrustedImmPtr(baselineCodeBlock), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::codeBlock)));

        // Restore the inline call frame's callee save registers.
        // If this inlined frame is a tail call that will return back to the original caller, we need to
        // copy the prior contents of the tag registers already saved for the outer frame to this frame.
        jit.emitSaveOrCopyLLIntBaselineCalleeSavesFor(
            baselineCodeBlock,
            static_cast<VirtualRegister>(inlineCallFrame->stackOffset),
            trueCaller ? AssemblyHelpers::UseExistingTagRegisterContents : AssemblyHelpers::CopyBaselineCalleeSavedRegistersFromBaseFrame,
            GPRInfo::regT2, GPRInfo::regT1, GPRInfo::regT4);

        if (callerIsLLInt) {
            CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(*trueCaller);
            jit.storePtr(CCallHelpers::TrustedImmPtr(baselineCodeBlockForCaller->metadataTable()), calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::metadataTableGPR));
            jit.storePtr(CCallHelpers::TrustedImmPtr(baselineCodeBlockForCaller->instructionsRawPointer()), calleeSaveSlot(inlineCallFrame, baselineCodeBlock, LLInt::Registers::pbGPR));
        } else if (trueCaller) {
            CodeBlock* baselineCodeBlockForCaller = jit.baselineCodeBlockFor(*trueCaller);
            jit.storePtr(CCallHelpers::TrustedImmPtr(baselineCodeBlockForCaller->metadataTable()), calleeSaveSlot(inlineCallFrame, baselineCodeBlock, JIT::s_metadataGPR));
            jit.storePtr(CCallHelpers::TrustedImmPtr(baselineCodeBlockForCaller->baselineJITData()), calleeSaveSlot(inlineCallFrame, baselineCodeBlock, JIT::s_constantsGPR));
        }

        if (!inlineCallFrame->isVarargs())
            jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->argumentCountIncludingThis), AssemblyHelpers::payloadFor(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis)));
        jit.storePtr(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
        uint32_t locationBits = CallSiteIndex(baselineCodeBlock->bytecodeIndexForExit(codeOrigin->bytecodeIndex())).bits();
        jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis)));
        if (!inlineCallFrame->isClosureCall)
            jit.storeCell(AssemblyHelpers::TrustedImmPtr(inlineCallFrame->calleeConstant()), AssemblyHelpers::addressFor(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::callee)));
    }

    // Don't need to set the toplevel code origin if we only did inline tail calls
    if (codeOrigin) {
        uint32_t locationBits = CallSiteIndex(BytecodeIndex(codeOrigin->bytecodeIndex().offset())).bits();
        jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor(CallFrameSlot::argumentCountIncludingThis));
    }
}

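// Emits a write barrier on the owner cell, skipping the slow call when the owner is
// already remembered or in eden.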
static void osrWriteBarrier(VM& vm, CCallHelpers& jit, GPRReg owner, GPRReg scratch)
{
    AssemblyHelpers::Jump ownerIsRememberedOrInEden = jit.barrierBranchWithoutFence(owner);

    jit.setupArguments<decltype(operationOSRWriteBarrier)>(CCallHelpers::TrustedImmPtr(&vm), owner);
    jit.prepareCallOperation(vm);
    jit.move(MacroAssembler::TrustedImmPtr(tagCFunction<OperationPtrTag>(operationOSRWriteBarrier)), scratch);
    jit.call(scratch, OperationPtrTag);

    ownerIsRememberedOrInEden.link(&jit);
}

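// Barriers the baseline CodeBlocks we may write value profiles into, restores the frame and
// stack pointers for the exit origin, and jumps to the baseline target (an LLInt OSR exit
// trampoline or the JIT code map entry for the exit's bytecode index).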
void adjustAndJumpToTarget(VM& vm, CCallHelpers& jit, const OSRExitBase& exit)
{
    jit.memoryFence();

    jit.move(
        AssemblyHelpers::TrustedImmPtr(
            jit.codeBlock()->baselineAlternative()), GPRInfo::argumentGPR1);
    osrWriteBarrier(vm, jit, GPRInfo::argumentGPR1, GPRInfo::nonArgGPR0);

    // We barrier all inlined frames -- and not just the current inline stack --
    // because we don't know which inlined function owns the value profile that
    // we'll update when we exit. In the case of "f() { a(); b(); }", if both
    // a and b are inlined, we might exit inside b due to a bad value loaded
    // from a.
    // FIXME: MethodOfGettingAValueProfile should remember which CodeBlock owns
    // the value profile.
    InlineCallFrameSet* inlineCallFrames = jit.codeBlock()->jitCode()->dfgCommon()->inlineCallFrames.get();
    if (inlineCallFrames) {
        for (InlineCallFrame* inlineCallFrame : *inlineCallFrames) {
            jit.move(
                AssemblyHelpers::TrustedImmPtr(
                    inlineCallFrame->baselineCodeBlock.get()), GPRInfo::argumentGPR1);
            osrWriteBarrier(vm, jit, GPRInfo::argumentGPR1, GPRInfo::nonArgGPR0);
        }
    }

    auto* exitInlineCallFrame = exit.m_codeOrigin.inlineCallFrame();
    if (exitInlineCallFrame)
        jit.addPtr(AssemblyHelpers::TrustedImm32(exitInlineCallFrame->stackOffset * sizeof(EncodedJSValue)), GPRInfo::callFrameRegister);

    CodeBlock* codeBlockForExit = jit.baselineCodeBlockFor(exit.m_codeOrigin);
    ASSERT(codeBlockForExit == codeBlockForExit->baselineVersion());
    ASSERT(JITCode::isBaselineCode(codeBlockForExit->jitType()));

    void* jumpTarget;
    bool exitToLLInt = Options::forceOSRExitToLLInt() || codeBlockForExit->jitType() == JITType::InterpreterThunk;
    if (exitToLLInt) {
        auto bytecodeIndex = exit.m_codeOrigin.bytecodeIndex();
        const auto& currentInstruction = *codeBlockForExit->instructions().at(bytecodeIndex).ptr();
        MacroAssemblerCodePtr<JSEntryPtrTag> destination;
        if (bytecodeIndex.checkpoint())
            destination = LLInt::checkpointOSRExitTrampolineThunk().code();
        else
            destination = LLInt::normalOSRExitTrampolineThunk().code();

        if (exit.isExceptionHandler()) {
            jit.move(CCallHelpers::TrustedImmPtr(&currentInstruction), GPRInfo::regT2);
            jit.storePtr(GPRInfo::regT2, &std::get<const JSInstruction*>(vm.targetInterpreterPCForThrow));
        }

        jit.move(CCallHelpers::TrustedImmPtr(codeBlockForExit->metadataTable()), LLInt::Registers::metadataTableGPR);
        jit.move(CCallHelpers::TrustedImmPtr(codeBlockForExit->instructionsRawPointer()), LLInt::Registers::pbGPR);
        jit.move(CCallHelpers::TrustedImm32(bytecodeIndex.offset()), LLInt::Registers::pcGPR);
        jumpTarget = destination.retagged<OSRExitPtrTag>().executableAddress();
    } else {
        jit.move(CCallHelpers::TrustedImmPtr(codeBlockForExit->metadataTable()), JIT::s_metadataGPR);
        jit.move(CCallHelpers::TrustedImmPtr(codeBlockForExit->baselineJITData()), JIT::s_constantsGPR);

        BytecodeIndex exitIndex = exit.m_codeOrigin.bytecodeIndex();
        MacroAssemblerCodePtr<JSEntryPtrTag> destination;
        if (exitIndex.checkpoint())
            destination = LLInt::checkpointOSRExitTrampolineThunk().code();
        else {
            ASSERT(codeBlockForExit->bytecodeIndexForExit(exitIndex) == exitIndex);
            destination = codeBlockForExit->jitCodeMap().find(exitIndex);
        }

        ASSERT(destination);

        jumpTarget = destination.retagged<OSRExitPtrTag>().executableAddress();
    }

    if (exit.isExceptionHandler()) {
        ASSERT(!RegisterSet::vmCalleeSaveRegisters().contains(LLInt::Registers::pcGPR));
        jit.copyCalleeSavesToEntryFrameCalleeSavesBuffer(vm.topEntryFrame, AssemblyHelpers::selectScratchGPR(LLInt::Registers::pcGPR));

        // Since we're jumping to op_catch, we need to set callFrameForCatch.
        jit.storePtr(GPRInfo::callFrameRegister, vm.addressOfCallFrameForCatch());
    }

    jit.addPtr(AssemblyHelpers::TrustedImm32(JIT::stackPointerOffsetFor(codeBlockForExit) * sizeof(Register)), GPRInfo::callFrameRegister, AssemblyHelpers::stackPointerRegister);

    jit.move(AssemblyHelpers::TrustedImmPtr(jumpTarget), GPRInfo::regT2);
    jit.farJump(GPRInfo::regT2, OSRExitPtrTag);
}

} } // namespace JSC::DFG

#endif // ENABLE(DFG_JIT)