source: webkit/trunk/JavaScriptCore/VM/CTI.cpp@ 37891

Last change on this file since 37891 was 37891, checked in by [email protected], 17 years ago

2008-10-25 Geoffrey Garen <[email protected]>

Reviewed by Sam Weinig, with Gavin Barraclough's help.


Fixed Sampling Tool:

  • Made CodeBlock sampling work with CTI
  • Improved accuracy by unifying most sampling data into a single 32-bit word, which can be written / read atomically (see the sketch following this entry).
  • Split out three different #ifdefs for modularity: OPCODE_SAMPLING; CODEBLOCK_SAMPLING; OPCODE_STATS.
  • Improved reporting clarity
  • Refactored for code clarity
  • VM/CTI.cpp: (JSC::CTI::emitCTICall): (JSC::CTI::compileOpCall): (JSC::CTI::emitSlowScriptCheck): (JSC::CTI::compileBinaryArithOpSlowCase): (JSC::CTI::privateCompileMainPass): (JSC::CTI::privateCompileSlowCases): (JSC::CTI::privateCompile):
  • VM/CTI.h: Updated CTI codegen to use the unified SamplingTool interface for encoding samples. (This required passing the current vPC to a lot more functions, since the unified interface samples the current vPC.) Added hooks for writing the current CodeBlock* on function entry and after a function call, for the sake of the CodeBlock sampler. Removed obsolete hook for clearing the current sample inside op_end. Also removed the custom enum used to differentiate flavors of op_call, since the OpcodeID enum works just as well. (This was important in an earlier version of the patch, but now it's just cleanup.)
  • VM/CodeBlock.cpp: (JSC::CodeBlock::lineNumberForVPC):
  • VM/CodeBlock.h: Updated for refactored #ifdefs. Changed lineNumberForVPC to be robust against vPCs not recorded for exception handling, since the Sampler may ask for an arbitrary vPC.
  • VM/Machine.cpp: (JSC::Machine::execute): (JSC::Machine::privateExecute): (JSC::Machine::cti_op_call_NotJSFunction): (JSC::Machine::cti_op_construct_NotJSConstruct):
  • VM/Machine.h: (JSC::Machine::setSampler): (JSC::Machine::sampler): (JSC::Machine::jitCodeBuffer): Updated for refactored #ifdefs. Changed Machine to use SamplingTool helper objects to record movement in and out of host code. This makes samples a bit more precise.


  • VM/Opcode.cpp: (JSC::OpcodeStats::~OpcodeStats):
  • VM/Opcode.h: Updated for refactored #ifdefs. Added a little more padding, to accommodate our more verbose opcode names.
  • VM/SamplingTool.cpp: (JSC::ScopeSampleRecord::sample): Only count a sample toward our total if we actually record it. This solves cases where a CodeBlock will claim to have been sampled many times, with reported samples that don't match.

(JSC::SamplingTool::run): Read the current sample into a Sample helper
object, to ensure that the data doesn't change while we're analyzing it,
and to help decode the data. Only access the CodeBlock sampling hash
table if CodeBlock sampling has been enabled, so non-CodeBlock sampling
runs can operate with even less overhead.

(JSC::SamplingTool::dump): I reorganized this code a lot to print the
most important info at the top, print as a table, annotate and document
the stuff I didn't understand when I started, etc.

  • VM/SamplingTool.h: New helper classes, described above.
  • kjs/Parser.h:
  • kjs/Shell.cpp: (runWithScripts):
  • kjs/nodes.cpp: (JSC::ScopeNode::ScopeNode): Updated for new sampling APIs.
  • wtf/Platform.h: Moved sampling #defines here, since our custom is to put ENABLE #defines into Platform.h. Made explicit the fact that CODEBLOCK_SAMPLING depends on OPCODE_SAMPLING.
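
A minimal sketch, under the encoding described above, of how a sample can be packed into a single 32-bit word: the current vPC pointer is stored with its low alignment bit borrowed as an "inside a CTI helper call" flag, and a small helper decodes a word that was read once, atomically. The names encodeSampleSketch and SampleSketch are invented for illustration; the actual interface lives in VM/SamplingTool.h.

    // Illustrative sketch only; not the real SamplingTool API.
    static inline unsigned encodeSampleSketch(Instruction* vPC, bool inCTICall)
    {
        // Instruction* is at least 4-byte aligned, so bit 0 is free to use as a flag.
        // The whole sample fits in one aligned 32-bit word, so a single movl reads or writes it atomically.
        return reinterpret_cast<unsigned>(vPC) | (inCTICall ? 1u : 0u);
    }

    struct SampleSketch {
        explicit SampleSketch(unsigned word) : m_word(word) { } // copy once, then analyze a value that cannot change underneath us
        Instruction* vPC() const { return reinterpret_cast<Instruction*>(m_word & ~1u); }
        bool inCTICall() const { return m_word & 1u; }
        unsigned m_word;
    };
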
1/*
2 * Copyright (C) 2008 Apple Inc. All rights reserved.
3 *
4 * Redistribution and use in source and binary forms, with or without
5 * modification, are permitted provided that the following conditions
6 * are met:
7 * 1. Redistributions of source code must retain the above copyright
8 * notice, this list of conditions and the following disclaimer.
9 * 2. Redistributions in binary form must reproduce the above copyright
10 * notice, this list of conditions and the following disclaimer in the
11 * documentation and/or other materials provided with the distribution.
12 *
13 * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
14 * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
15 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
16 * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
17 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
18 * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
19 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
20 * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
21 * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
22 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
23 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
24 */
25
26#include "config.h"
27#include "CTI.h"
28
29#if ENABLE(CTI)
30
31#include "CodeBlock.h"
32#include "JSArray.h"
33#include "JSFunction.h"
34#include "Machine.h"
35#include "wrec/WREC.h"
36#include "ResultType.h"
37#include "SamplingTool.h"
38
39#ifndef NDEBUG
40#include <stdio.h>
41#endif
42
43using namespace std;
44
45namespace JSC {
46
47#if PLATFORM(MAC)
48
49static inline bool isSSE2Present()
50{
51 return true; // All X86 Macs are guaranteed to support at least SSE2
52}
53
54#else
55
56static bool isSSE2Present()
57{
58 static const int SSE2FeatureBit = 1 << 26;
59 struct SSE2Check {
60 SSE2Check()
61 {
62 int flags;
63#if COMPILER(MSVC)
64 _asm {
65 mov eax, 1 // cpuid function 1 gives us the standard feature set
66 cpuid;
67 mov flags, edx;
68 }
69#else
70 flags = 0;
71 // FIXME: Add GCC code to do above asm
72#endif
73 present = (flags & SSE2FeatureBit) != 0;
74 }
75 bool present;
76 };
77 static SSE2Check check;
78 return check.present;
79}
80
81#endif
82
83COMPILE_ASSERT(CTI_ARGS_code == 0xC, CTI_ARGS_code_is_C);
84COMPILE_ASSERT(CTI_ARGS_callFrame == 0xE, CTI_ARGS_callFrame_is_E);
85
86#if COMPILER(GCC) && PLATFORM(X86)
87
88#if PLATFORM(DARWIN)
89#define SYMBOL_STRING(name) "_" #name
90#else
91#define SYMBOL_STRING(name) #name
92#endif
93
94asm(
95".globl " SYMBOL_STRING(ctiTrampoline) "\n"
96SYMBOL_STRING(ctiTrampoline) ":" "\n"
97 "pushl %esi" "\n"
98 "pushl %edi" "\n"
99 "pushl %ebx" "\n"
100 "subl $0x20, %esp" "\n"
101 "movl $512, %esi" "\n"
102 "movl 0x38(%esp), %edi" "\n" // 0x38 = 0x0E * 4, 0x0E = CTI_ARGS_callFrame (see assertion above)
103 "call *0x30(%esp)" "\n" // 0x30 = 0x0C * 4, 0x0C = CTI_ARGS_code (see assertion above)
104 "addl $0x20, %esp" "\n"
105 "popl %ebx" "\n"
106 "popl %edi" "\n"
107 "popl %esi" "\n"
108 "ret" "\n"
109);
110
111asm(
112".globl " SYMBOL_STRING(ctiVMThrowTrampoline) "\n"
113SYMBOL_STRING(ctiVMThrowTrampoline) ":" "\n"
114#if USE(CTI_ARGUMENT)
115#if USE(FAST_CALL_CTI_ARGUMENT)
116 "movl %esp, %ecx" "\n"
117#else
118 "movl %esp, 0(%esp)" "\n"
119#endif
120 "call " SYMBOL_STRING(_ZN3JSC7Machine12cti_vm_throwEPPv) "\n"
121#else
122 "call " SYMBOL_STRING(_ZN3JSC7Machine12cti_vm_throwEPvz) "\n"
123#endif
124 "addl $0x20, %esp" "\n"
125 "popl %ebx" "\n"
126 "popl %edi" "\n"
127 "popl %esi" "\n"
128 "ret" "\n"
129);
130
131#elif COMPILER(MSVC)
132
133extern "C" {
134
135 __declspec(naked) JSValue* ctiTrampoline(void* code, RegisterFile*, CallFrame*, JSValue** exception, Profiler**, JSGlobalData*)
136 {
137 __asm {
138 push esi;
139 push edi;
140 push ebx;
141 sub esp, 0x20;
142 mov esi, 512;
143 mov ecx, esp;
144 mov edi, [esp + 0x38];
145 call [esp + 0x30]; // 0x30 = 0x0C * 4, 0x0C = CTI_ARGS_code (see assertion above)
146 add esp, 0x20;
147 pop ebx;
148 pop edi;
149 pop esi;
150 ret;
151 }
152 }
153
154 __declspec(naked) void ctiVMThrowTrampoline()
155 {
156 __asm {
157 mov ecx, esp;
158 call JSC::Machine::cti_vm_throw;
159 add esp, 0x20;
160 pop ebx;
161 pop edi;
162 pop esi;
163 ret;
164 }
165 }
166
167}
168
169#endif
170
171ALWAYS_INLINE bool CTI::isConstant(int src)
172{
173 return src >= m_codeBlock->numVars && src < m_codeBlock->numVars + m_codeBlock->numConstants;
174}
175
176ALWAYS_INLINE JSValue* CTI::getConstant(CallFrame* callFrame, int src)
177{
178 return m_codeBlock->constantRegisters[src - m_codeBlock->numVars].jsValue(callFrame);
179}
180
181inline uintptr_t CTI::asInteger(JSValue* value)
182{
183 return reinterpret_cast<uintptr_t>(value);
184}
185
186// get arg puts an arg from the SF register array into a h/w register
187ALWAYS_INLINE void CTI::emitGetArg(int src, X86Assembler::RegisterID dst)
188{
189 // TODO: we want to reuse values that are already in registers if we can - add a register allocator!
190 if (isConstant(src)) {
191 JSValue* js = getConstant(m_callFrame, src);
192 m_jit.movl_i32r(asInteger(js), dst);
193 } else
194 m_jit.movl_mr(src * sizeof(Register), X86::edi, dst);
195}
196
197// get arg puts an arg from the SF register array onto the stack, as an arg to a context threaded function.
198ALWAYS_INLINE void CTI::emitGetPutArg(unsigned src, unsigned offset, X86Assembler::RegisterID scratch)
199{
200 if (isConstant(src)) {
201 JSValue* js = getConstant(m_callFrame, src);
202 m_jit.movl_i32m(asInteger(js), offset + sizeof(void*), X86::esp);
203 } else {
204 m_jit.movl_mr(src * sizeof(Register), X86::edi, scratch);
205 m_jit.movl_rm(scratch, offset + sizeof(void*), X86::esp);
206 }
207}
208
209// puts an arg onto the stack, as an arg to a context threaded function.
210ALWAYS_INLINE void CTI::emitPutArg(X86Assembler::RegisterID src, unsigned offset)
211{
212 m_jit.movl_rm(src, offset + sizeof(void*), X86::esp);
213}
214
215ALWAYS_INLINE void CTI::emitPutArgConstant(unsigned value, unsigned offset)
216{
217 m_jit.movl_i32m(value, offset + sizeof(void*), X86::esp);
218}
219
220ALWAYS_INLINE JSValue* CTI::getConstantImmediateNumericArg(unsigned src)
221{
222 if (isConstant(src)) {
223 JSValue* js = getConstant(m_callFrame, src);
224 return JSImmediate::isNumber(js) ? js : noValue();
225 }
226 return noValue();
227}
228
229ALWAYS_INLINE void CTI::emitPutCTIParam(void* value, unsigned name)
230{
231 m_jit.movl_i32m(reinterpret_cast<intptr_t>(value), name * sizeof(void*), X86::esp);
232}
233
234ALWAYS_INLINE void CTI::emitPutCTIParam(X86Assembler::RegisterID from, unsigned name)
235{
236 m_jit.movl_rm(from, name * sizeof(void*), X86::esp);
237}
238
239ALWAYS_INLINE void CTI::emitGetCTIParam(unsigned name, X86Assembler::RegisterID to)
240{
241 m_jit.movl_mr(name * sizeof(void*), X86::esp, to);
242}
243
244ALWAYS_INLINE void CTI::emitPutToCallFrameHeader(X86Assembler::RegisterID from, RegisterFile::CallFrameHeaderEntry entry)
245{
246 m_jit.movl_rm(from, entry * sizeof(Register), X86::edi);
247}
248
249ALWAYS_INLINE void CTI::emitGetFromCallFrameHeader(RegisterFile::CallFrameHeaderEntry entry, X86Assembler::RegisterID to)
250{
251 m_jit.movl_mr(entry * sizeof(Register), X86::edi, to);
252}
253
254ALWAYS_INLINE void CTI::emitPutResult(unsigned dst, X86Assembler::RegisterID from)
255{
256 m_jit.movl_rm(from, dst * sizeof(Register), X86::edi);
257 // FIXME: #ifndef NDEBUG, Write the correct m_type to the register.
258}
259
260ALWAYS_INLINE void CTI::emitInitRegister(unsigned dst)
261{
262 m_jit.movl_i32m(asInteger(jsUndefined()), dst * sizeof(Register), X86::edi);
263 // FIXME: #ifndef NDEBUG, Write the correct m_type to the register.
264}
265
266void ctiSetReturnAddress(void** where, void* what)
267{
268 *where = what;
269}
270
271void ctiRepatchCallByReturnAddress(void* where, void* what)
272{
273 (static_cast<void**>(where))[-1] = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(what) - reinterpret_cast<uintptr_t>(where));
274}
275
276#ifndef NDEBUG
277
278void CTI::printOpcodeOperandTypes(unsigned src1, unsigned src2)
279{
280 char which1 = '*';
281 if (isConstant(src1)) {
282 JSValue* js = getConstant(m_callFrame, src1);
283 which1 =
284 JSImmediate::isImmediate(js) ?
285 (JSImmediate::isNumber(js) ? 'i' :
286 JSImmediate::isBoolean(js) ? 'b' :
287 js->isUndefined() ? 'u' :
288 js->isNull() ? 'n' : '?')
289 :
290 (js->isString() ? 's' :
291 js->isObject() ? 'o' :
292 'k');
293 }
294 char which2 = '*';
295 if (isConstant(src2)) {
296 JSValue* js = getConstant(m_callFrame, src2);
297 which2 =
298 JSImmediate::isImmediate(js) ?
299 (JSImmediate::isNumber(js) ? 'i' :
300 JSImmediate::isBoolean(js) ? 'b' :
301 js->isUndefined() ? 'u' :
302 js->isNull() ? 'n' : '?')
303 :
304 (js->isString() ? 's' :
305 js->isObject() ? 'o' :
306 'k');
307 }
308 if ((which1 != '*') | (which2 != '*'))
309 fprintf(stderr, "Types %c %c\n", which1, which2);
310}
311
312#endif
313
314ALWAYS_INLINE X86Assembler::JmpSrc CTI::emitNakedCall(unsigned opcodeIndex, X86::RegisterID r)
315{
316 X86Assembler::JmpSrc call = m_jit.emitCall(r);
317 m_calls.append(CallRecord(call, opcodeIndex));
318
319 return call;
320}
321
322ALWAYS_INLINE X86Assembler::JmpSrc CTI::emitNakedCall(unsigned opcodeIndex, void(*function)())
323{
324 X86Assembler::JmpSrc call = m_jit.emitCall();
325 m_calls.append(CallRecord(call, reinterpret_cast<CTIHelper_v>(function), opcodeIndex));
326 return call;
327}
328
329ALWAYS_INLINE X86Assembler::JmpSrc CTI::emitCTICall(Instruction* vPC, unsigned opcodeIndex, CTIHelper_j helper)
330{
331#if ENABLE(OPCODE_SAMPLING)
332 m_jit.movl_i32m(m_machine->sampler()->encodeSample(vPC, true), m_machine->sampler()->sampleSlot());
333#else
334 UNUSED_PARAM(vPC);
335#endif
336 m_jit.emitRestoreArgumentReference();
337 emitPutCTIParam(X86::edi, CTI_ARGS_callFrame);
338 X86Assembler::JmpSrc call = m_jit.emitCall();
339 m_calls.append(CallRecord(call, helper, opcodeIndex));
340#if ENABLE(OPCODE_SAMPLING)
341 m_jit.movl_i32m(m_machine->sampler()->encodeSample(vPC, false), m_machine->sampler()->sampleSlot());
342#endif
343
344 return call;
345}
346
347ALWAYS_INLINE X86Assembler::JmpSrc CTI::emitCTICall(Instruction* vPC, unsigned opcodeIndex, CTIHelper_o helper)
348{
349#if ENABLE(OPCODE_SAMPLING)
350 m_jit.movl_i32m(m_machine->sampler()->encodeSample(vPC, true), m_machine->sampler()->sampleSlot());
351#else
352 UNUSED_PARAM(vPC);
353#endif
354 m_jit.emitRestoreArgumentReference();
355 emitPutCTIParam(X86::edi, CTI_ARGS_callFrame);
356 X86Assembler::JmpSrc call = m_jit.emitCall();
357 m_calls.append(CallRecord(call, helper, opcodeIndex));
358#if ENABLE(OPCODE_SAMPLING)
359 m_jit.movl_i32m(m_machine->sampler()->encodeSample(vPC, false), m_machine->sampler()->sampleSlot());
360#endif
361
362 return call;
363}
364
365ALWAYS_INLINE X86Assembler::JmpSrc CTI::emitCTICall(Instruction* vPC, unsigned opcodeIndex, CTIHelper_p helper)
366{
367#if ENABLE(OPCODE_SAMPLING)
368 m_jit.movl_i32m(m_machine->sampler()->encodeSample(vPC, true), m_machine->sampler()->sampleSlot());
369#else
370 UNUSED_PARAM(vPC);
371#endif
372 m_jit.emitRestoreArgumentReference();
373 emitPutCTIParam(X86::edi, CTI_ARGS_callFrame);
374 X86Assembler::JmpSrc call = m_jit.emitCall();
375 m_calls.append(CallRecord(call, helper, opcodeIndex));
376#if ENABLE(OPCODE_SAMPLING)
377 m_jit.movl_i32m(m_machine->sampler()->encodeSample(vPC, false), m_machine->sampler()->sampleSlot());
378#endif
379
380 return call;
381}
382
383ALWAYS_INLINE X86Assembler::JmpSrc CTI::emitCTICall(Instruction* vPC, unsigned opcodeIndex, CTIHelper_b helper)
384{
385#if ENABLE(OPCODE_SAMPLING)
386 m_jit.movl_i32m(m_machine->sampler()->encodeSample(vPC, true), m_machine->sampler()->sampleSlot());
387#else
388 UNUSED_PARAM(vPC);
389#endif
390 m_jit.emitRestoreArgumentReference();
391 emitPutCTIParam(X86::edi, CTI_ARGS_callFrame);
392 X86Assembler::JmpSrc call = m_jit.emitCall();
393 m_calls.append(CallRecord(call, helper, opcodeIndex));
394#if ENABLE(OPCODE_SAMPLING)
395 m_jit.movl_i32m(m_machine->sampler()->encodeSample(vPC, false), m_machine->sampler()->sampleSlot());
396#endif
397
398 return call;
399}
400
401ALWAYS_INLINE X86Assembler::JmpSrc CTI::emitCTICall(Instruction* vPC, unsigned opcodeIndex, CTIHelper_v helper)
402{
403#if ENABLE(OPCODE_SAMPLING)
404 m_jit.movl_i32m(m_machine->sampler()->encodeSample(vPC, true), m_machine->sampler()->sampleSlot());
405#else
406 UNUSED_PARAM(vPC);
407#endif
408 m_jit.emitRestoreArgumentReference();
409 emitPutCTIParam(X86::edi, CTI_ARGS_callFrame);
410 X86Assembler::JmpSrc call = m_jit.emitCall();
411 m_calls.append(CallRecord(call, helper, opcodeIndex));
412#if ENABLE(OPCODE_SAMPLING)
413 m_jit.movl_i32m(m_machine->sampler()->encodeSample(vPC, false), m_machine->sampler()->sampleSlot());
414#endif
415
416 return call;
417}
418
419ALWAYS_INLINE X86Assembler::JmpSrc CTI::emitCTICall(Instruction* vPC, unsigned opcodeIndex, CTIHelper_s helper)
420{
421#if ENABLE(OPCODE_SAMPLING)
422 m_jit.movl_i32m(m_machine->sampler()->encodeSample(vPC, true), m_machine->sampler()->sampleSlot());
423#else
424 UNUSED_PARAM(vPC);
425#endif
426 m_jit.emitRestoreArgumentReference();
427 emitPutCTIParam(X86::edi, CTI_ARGS_callFrame);
428 X86Assembler::JmpSrc call = m_jit.emitCall();
429 m_calls.append(CallRecord(call, helper, opcodeIndex));
430#if ENABLE(OPCODE_SAMPLING)
431 m_jit.movl_i32m(m_machine->sampler()->encodeSample(vPC, false), m_machine->sampler()->sampleSlot());
432#endif
433
434 return call;
435}
436
437ALWAYS_INLINE X86Assembler::JmpSrc CTI::emitCTICall(Instruction* vPC, unsigned opcodeIndex, CTIHelper_2 helper)
438{
439#if ENABLE(OPCODE_SAMPLING)
440 m_jit.movl_i32m(m_machine->sampler()->encodeSample(vPC, true), m_machine->sampler()->sampleSlot());
441#else
442 UNUSED_PARAM(vPC);
443#endif
444 m_jit.emitRestoreArgumentReference();
445 emitPutCTIParam(X86::edi, CTI_ARGS_callFrame);
446 X86Assembler::JmpSrc call = m_jit.emitCall();
447 m_calls.append(CallRecord(call, helper, opcodeIndex));
448#if ENABLE(OPCODE_SAMPLING)
449 m_jit.movl_i32m(m_machine->sampler()->encodeSample(vPC, false), m_machine->sampler()->sampleSlot());
450#endif
451
452 return call;
453}
454
455ALWAYS_INLINE void CTI::emitJumpSlowCaseIfNotJSCell(X86Assembler::RegisterID reg, unsigned opcodeIndex)
456{
457 m_jit.testl_i32r(JSImmediate::TagMask, reg);
458 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), opcodeIndex));
459}
460
461ALWAYS_INLINE void CTI::emitJumpSlowCaseIfNotImmNum(X86Assembler::RegisterID reg, unsigned opcodeIndex)
462{
463 m_jit.testl_i32r(JSImmediate::TagBitTypeInteger, reg);
464 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJe(), opcodeIndex));
465}
466
467ALWAYS_INLINE void CTI::emitJumpSlowCaseIfNotImmNums(X86Assembler::RegisterID reg1, X86Assembler::RegisterID reg2, unsigned opcodeIndex)
468{
469 m_jit.movl_rr(reg1, X86::ecx);
470 m_jit.andl_rr(reg2, X86::ecx);
471 emitJumpSlowCaseIfNotImmNum(X86::ecx, opcodeIndex);
472}
473
474ALWAYS_INLINE unsigned CTI::getDeTaggedConstantImmediate(JSValue* imm)
475{
476 ASSERT(JSImmediate::isNumber(imm));
477 return asInteger(imm) & ~JSImmediate::TagBitTypeInteger;
478}
479
480ALWAYS_INLINE void CTI::emitFastArithDeTagImmediate(X86Assembler::RegisterID reg)
481{
482 m_jit.subl_i8r(JSImmediate::TagBitTypeInteger, reg);
483}
484
485ALWAYS_INLINE X86Assembler::JmpSrc CTI::emitFastArithDeTagImmediateJumpIfZero(X86Assembler::RegisterID reg)
486{
487 m_jit.subl_i8r(JSImmediate::TagBitTypeInteger, reg);
488 return m_jit.emitUnlinkedJe();
489}
490
491ALWAYS_INLINE void CTI::emitFastArithReTagImmediate(X86Assembler::RegisterID reg)
492{
493 m_jit.addl_i8r(JSImmediate::TagBitTypeInteger, reg);
494}
495
496ALWAYS_INLINE void CTI::emitFastArithPotentiallyReTagImmediate(X86Assembler::RegisterID reg)
497{
498 m_jit.orl_i32r(JSImmediate::TagBitTypeInteger, reg);
499}
500
501ALWAYS_INLINE void CTI::emitFastArithImmToInt(X86Assembler::RegisterID reg)
502{
503 m_jit.sarl_i8r(1, reg);
504}
505
506ALWAYS_INLINE void CTI::emitFastArithIntToImmOrSlowCase(X86Assembler::RegisterID reg, unsigned opcodeIndex)
507{
508 m_jit.addl_rr(reg, reg);
509 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJo(), opcodeIndex));
510 emitFastArithReTagImmediate(reg);
511}
512
513ALWAYS_INLINE void CTI::emitFastArithIntToImmNoCheck(X86Assembler::RegisterID reg)
514{
515 m_jit.addl_rr(reg, reg);
516 emitFastArithReTagImmediate(reg);
517}
518
519ALWAYS_INLINE void CTI::emitTagAsBoolImmediate(X86Assembler::RegisterID reg)
520{
521 m_jit.shl_i8r(JSImmediate::ExtendedPayloadShift, reg);
522 m_jit.orl_i32r(JSImmediate::FullTagTypeBool, reg);
523}
524
525CTI::CTI(Machine* machine, CallFrame* callFrame, CodeBlock* codeBlock)
526 : m_jit(machine->jitCodeBuffer())
527 , m_machine(machine)
528 , m_callFrame(callFrame)
529 , m_codeBlock(codeBlock)
530 , m_labels(codeBlock ? codeBlock->instructions.size() : 0)
531 , m_propertyAccessCompilationInfo(codeBlock ? codeBlock->propertyAccessInstructions.size() : 0)
532 , m_callStructureStubCompilationInfo(codeBlock ? codeBlock->callLinkInfos.size() : 0)
533{
534}
535
536#define CTI_COMPILE_BINARY_OP(name) \
537 case name: { \
538 emitGetPutArg(instruction[i + 2].u.operand, 0, X86::ecx); \
539 emitGetPutArg(instruction[i + 3].u.operand, 4, X86::ecx); \
540 emitCTICall(instruction + i, i, Machine::cti_##name); \
541 emitPutResult(instruction[i + 1].u.operand); \
542 i += 4; \
543 break; \
544 }
545
546#define CTI_COMPILE_UNARY_OP(name) \
547 case name: { \
548 emitGetPutArg(instruction[i + 2].u.operand, 0, X86::ecx); \
549 emitCTICall(instruction + i, i, Machine::cti_##name); \
550 emitPutResult(instruction[i + 1].u.operand); \
551 i += 3; \
552 break; \
553 }
554
555static void unreachable()
556{
557 ASSERT_NOT_REACHED();
558 exit(1);
559}
560
561void CTI::compileOpCallInitializeCallFrame(unsigned callee, unsigned argCount)
562{
563 emitGetArg(callee, X86::ecx); // Load callee JSFunction into ecx
564 m_jit.movl_rm(X86::eax, RegisterFile::CodeBlock * static_cast<int>(sizeof(Register)), X86::edx); // callee CodeBlock was returned in eax
565 m_jit.movl_i32m(asInteger(noValue()), RegisterFile::OptionalCalleeArguments * static_cast<int>(sizeof(Register)), X86::edx);
566 m_jit.movl_rm(X86::ecx, RegisterFile::Callee * static_cast<int>(sizeof(Register)), X86::edx);
567
568 m_jit.movl_mr(OBJECT_OFFSET(JSFunction, m_scopeChain) + OBJECT_OFFSET(ScopeChain, m_node), X86::ecx, X86::ebx); // newScopeChain
569 m_jit.movl_i32m(argCount, RegisterFile::ArgumentCount * static_cast<int>(sizeof(Register)), X86::edx);
570 m_jit.movl_rm(X86::edi, RegisterFile::CallerFrame * static_cast<int>(sizeof(Register)), X86::edx);
571 m_jit.movl_rm(X86::ebx, RegisterFile::ScopeChain * static_cast<int>(sizeof(Register)), X86::edx);
572}
573
574void CTI::compileOpCallSetupArgs(Instruction* instruction, bool isConstruct, bool isEval)
575{
576 int firstArg = instruction[4].u.operand;
577 int argCount = instruction[5].u.operand;
578 int registerOffset = instruction[6].u.operand;
579
580 emitPutArg(X86::ecx, 0);
581 emitPutArgConstant(registerOffset, 4);
582 emitPutArgConstant(argCount, 8);
583 emitPutArgConstant(reinterpret_cast<unsigned>(instruction), 12);
584 if (isConstruct) {
585 emitGetPutArg(instruction[3].u.operand, 16, X86::eax);
586 emitPutArgConstant(firstArg, 20);
587 } else if (isEval)
588 emitGetPutArg(instruction[3].u.operand, 16, X86::eax);
589}
590
591void CTI::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned i, unsigned callLinkInfoIndex)
592{
593 int dst = instruction[1].u.operand;
594 int callee = instruction[2].u.operand;
595 int firstArg = instruction[4].u.operand;
596 int argCount = instruction[5].u.operand;
597 int registerOffset = instruction[6].u.operand;
598
599 // Setup this value as the first argument (does not apply to constructors)
600 if (opcodeID != op_construct) {
601 int thisVal = instruction[3].u.operand;
602 if (thisVal == missingThisObjectMarker()) {
603 // FIXME: should this be loaded dynamically off m_callFrame?
604 m_jit.movl_i32m(asInteger(m_callFrame->globalThisValue()), firstArg * sizeof(Register), X86::edi);
605 } else {
606 emitGetArg(thisVal, X86::eax);
607 emitPutResult(firstArg);
608 }
609 }
610
611 // Handle eval
612 X86Assembler::JmpSrc wasEval;
613 if (opcodeID == op_call_eval) {
614 emitGetArg(callee, X86::ecx);
615 compileOpCallSetupArgs(instruction, false, true);
616
617 emitCTICall(instruction, i, Machine::cti_op_call_eval);
618 m_jit.cmpl_i32r(asInteger(JSImmediate::impossibleValue()), X86::eax);
619 wasEval = m_jit.emitUnlinkedJne();
620 }
621
622 // This plants a check for a cached JSFunction value, so we can plant a fast link to the callee.
623 // This deliberately leaves the callee in ecx, used when setting up the stack frame below
624 emitGetArg(callee, X86::ecx);
625 m_jit.cmpl_i32r(asInteger(JSImmediate::impossibleValue()), X86::ecx);
626 X86Assembler::JmpDst addressOfLinkedFunctionCheck = m_jit.label();
627 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
628 ASSERT(X86Assembler::getDifferenceBetweenLabels(addressOfLinkedFunctionCheck, m_jit.label()) == repatchOffsetOpCallCall);
629 m_callStructureStubCompilationInfo[callLinkInfoIndex].hotPathBegin = addressOfLinkedFunctionCheck;
630
631 // The following is the fast case, only used when a callee can be linked.
632
633 // In the case of op_construct, call out to a cti_ function to create the new object.
634 if (opcodeID == op_construct) {
635 emitPutArg(X86::ecx, 0);
636 emitGetPutArg(instruction[3].u.operand, 4, X86::eax);
637 emitCTICall(instruction, i, Machine::cti_op_construct_JSConstructFast);
638 emitPutResult(instruction[4].u.operand);
639 emitGetArg(callee, X86::ecx);
640 }
641
642 // Fast version of stack frame initialization, directly relative to edi.
643 // Note that this omits to set up RegisterFile::CodeBlock, which is set in the callee
644 m_jit.movl_i32m(asInteger(noValue()), (registerOffset + RegisterFile::OptionalCalleeArguments) * static_cast<int>(sizeof(Register)), X86::edi);
645 m_jit.movl_rm(X86::ecx, (registerOffset + RegisterFile::Callee) * static_cast<int>(sizeof(Register)), X86::edi);
646 m_jit.movl_mr(OBJECT_OFFSET(JSFunction, m_scopeChain) + OBJECT_OFFSET(ScopeChain, m_node), X86::ecx, X86::edx); // newScopeChain
647 m_jit.movl_i32m(argCount, (registerOffset + RegisterFile::ArgumentCount) * static_cast<int>(sizeof(Register)), X86::edi);
648 m_jit.movl_rm(X86::edi, (registerOffset + RegisterFile::CallerFrame) * static_cast<int>(sizeof(Register)), X86::edi);
649 m_jit.movl_rm(X86::edx, (registerOffset + RegisterFile::ScopeChain) * static_cast<int>(sizeof(Register)), X86::edi);
650 m_jit.addl_i32r(registerOffset * sizeof(Register), X86::edi);
651
652 // Call to the callee
653 m_callStructureStubCompilationInfo[callLinkInfoIndex].hotPathOther = emitNakedCall(i, unreachable);
654
655 if (opcodeID == op_call_eval)
656 m_jit.link(wasEval, m_jit.label());
657
658 // Put the return value in dst. In the interpreter, op_ret does this.
659 emitPutResult(dst);
660
661#if ENABLE(CODEBLOCK_SAMPLING)
662 m_jit.movl_i32m(reinterpret_cast<unsigned>(m_codeBlock), m_machine->sampler()->codeBlockSlot());
663#endif
664}
665
666void CTI::compileOpStrictEq(Instruction* instruction, unsigned i, CompileOpStrictEqType type)
667{
668 bool negated = (type == OpNStrictEq);
669
670 unsigned dst = instruction[i + 1].u.operand;
671 unsigned src1 = instruction[i + 2].u.operand;
672 unsigned src2 = instruction[i + 3].u.operand;
673
674 emitGetArg(src1, X86::eax);
675 emitGetArg(src2, X86::edx);
676
677 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
678 X86Assembler::JmpSrc firstNotImmediate = m_jit.emitUnlinkedJe();
679 m_jit.testl_i32r(JSImmediate::TagMask, X86::edx);
680 X86Assembler::JmpSrc secondNotImmediate = m_jit.emitUnlinkedJe();
681
682 m_jit.cmpl_rr(X86::edx, X86::eax);
683 if (negated)
684 m_jit.setne_r(X86::eax);
685 else
686 m_jit.sete_r(X86::eax);
687 m_jit.movzbl_rr(X86::eax, X86::eax);
688 emitTagAsBoolImmediate(X86::eax);
689
690 X86Assembler::JmpSrc bothWereImmediates = m_jit.emitUnlinkedJmp();
691
692 m_jit.link(firstNotImmediate, m_jit.label());
693
694 // check that edx is immediate but not the zero immediate
695 m_jit.testl_i32r(JSImmediate::TagMask, X86::edx);
696 m_jit.setz_r(X86::ecx);
697 m_jit.movzbl_rr(X86::ecx, X86::ecx); // ecx is now 1 if edx was nonimmediate
698 m_jit.cmpl_i32r(asInteger(JSImmediate::zeroImmediate()), X86::edx);
699 m_jit.sete_r(X86::edx);
700 m_jit.movzbl_rr(X86::edx, X86::edx); // edx is now 1 if edx was the 0 immediate
701 m_jit.orl_rr(X86::ecx, X86::edx);
702
703 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJnz(), i));
704
705 m_jit.movl_i32r(asInteger(jsBoolean(negated)), X86::eax);
706
707 X86Assembler::JmpSrc firstWasNotImmediate = m_jit.emitUnlinkedJmp();
708
709 m_jit.link(secondNotImmediate, m_jit.label());
710 // check that eax is not the zero immediate (we know it must be immediate)
711 m_jit.cmpl_i32r(asInteger(JSImmediate::zeroImmediate()), X86::eax);
712 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJe(), i));
713
714 m_jit.movl_i32r(asInteger(jsBoolean(negated)), X86::eax);
715
716 m_jit.link(bothWereImmediates, m_jit.label());
717 m_jit.link(firstWasNotImmediate, m_jit.label());
718
719 emitPutResult(dst);
720}
721
722void CTI::emitSlowScriptCheck(Instruction* vPC, unsigned opcodeIndex)
723{
724 m_jit.subl_i8r(1, X86::esi);
725 X86Assembler::JmpSrc skipTimeout = m_jit.emitUnlinkedJne();
726 emitCTICall(vPC, opcodeIndex, Machine::cti_timeout_check);
727
728 emitGetCTIParam(CTI_ARGS_globalData, X86::ecx);
729 m_jit.movl_mr(OBJECT_OFFSET(JSGlobalData, machine), X86::ecx, X86::ecx);
730 m_jit.movl_mr(OBJECT_OFFSET(Machine, m_ticksUntilNextTimeoutCheck), X86::ecx, X86::esi);
731 m_jit.link(skipTimeout, m_jit.label());
732}
733
734/*
735 This is required since number representation is canonical - values representable as a JSImmediate should not be stored in a JSNumberCell.
736
737 In the common case, the double value from 'xmmSource' is written to the reusable JSNumberCell pointed to by 'jsNumberCell', then 'jsNumberCell'
738 is written to the output SF Register 'dst', and then a jump is planted (stored into *wroteJSNumberCell).
739
740 However if the value from xmmSource is representable as a JSImmediate, then the JSImmediate value will be written to the output, and flow
741 control will fall through from the code planted.
742*/
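// Worked example (illustrative note, not part of the original file): 3.0 truncates to the integer 3,
// which survives the JSImmediate round trip and converts back to 3.0 unchanged, so it is written as an
// immediate; 3.5 truncates to 3 and comes back as 3.0 != 3.5, so it is stored in the JSNumberCell.
// -0.0 also survives the round trip (ucomis treats -0.0 == 0.0) and NaN makes the comparison unordered
// (which also sets ZF), so both get the explicit checks planted below.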
743void CTI::putDoubleResultToJSNumberCellOrJSImmediate(X86::XMMRegisterID xmmSource, X86::RegisterID jsNumberCell, unsigned dst, X86Assembler::JmpSrc* wroteJSNumberCell, X86::XMMRegisterID tempXmm, X86::RegisterID tempReg1, X86::RegisterID tempReg2)
744{
745 // convert (double -> JSImmediate -> double), and check if the value is unchanged - in which case the value is representable as a JSImmediate.
746 m_jit.cvttsd2si_rr(xmmSource, tempReg1);
747 m_jit.addl_rr(tempReg1, tempReg1);
748 m_jit.sarl_i8r(1, tempReg1);
749 m_jit.cvtsi2sd_rr(tempReg1, tempXmm);
750 // Compare & branch if immediate.
751 m_jit.ucomis_rr(tempXmm, xmmSource);
752 X86Assembler::JmpSrc resultIsImm = m_jit.emitUnlinkedJe();
753 X86Assembler::JmpDst resultLookedLikeImmButActuallyIsnt = m_jit.label();
754
755 // Store the result to the JSNumberCell and jump.
756 m_jit.movsd_rm(xmmSource, OBJECT_OFFSET(JSNumberCell, m_value), jsNumberCell);
757 emitPutResult(dst, jsNumberCell);
758 *wroteJSNumberCell = m_jit.emitUnlinkedJmp();
759
760 m_jit.link(resultIsImm, m_jit.label());
761 // value == (double)(JSImmediate)value... or at least, it looks that way...
762 // ucomi will report that (0 == -0), and will report true if either input is NaN (result is unordered).
763 m_jit.link(m_jit.emitUnlinkedJp(), resultLookedLikeImmButActuallyIsnt); // Actually was a NaN
764 m_jit.pextrw_irr(3, xmmSource, tempReg2);
765 m_jit.cmpl_i32r(0x8000, tempReg2);
766 m_jit.link(m_jit.emitUnlinkedJe(), resultLookedLikeImmButActuallyIsnt); // Actually was -0
767 // Yes it really really really is representable as a JSImmediate.
768 emitFastArithIntToImmNoCheck(tempReg1);
769 emitPutResult(dst, X86::ecx);
770}
771
772void CTI::compileBinaryArithOp(OpcodeID opcodeID, unsigned dst, unsigned src1, unsigned src2, OperandTypes types, unsigned i)
773{
774 StructureID* numberStructureID = m_callFrame->globalData().numberStructureID.get();
775 X86Assembler::JmpSrc wasJSNumberCell1, wasJSNumberCell1b, wasJSNumberCell2, wasJSNumberCell2b;
776
777 emitGetArg(src1, X86::eax);
778 emitGetArg(src2, X86::edx);
779
780 if (types.second().isReusable() && isSSE2Present()) {
781 ASSERT(types.second().mightBeNumber());
782
783 // Check op2 is a number
784 m_jit.testl_i32r(JSImmediate::TagBitTypeInteger, X86::edx);
785 X86Assembler::JmpSrc op2imm = m_jit.emitUnlinkedJne();
786 if (!types.second().definitelyIsNumber()) {
787 emitJumpSlowCaseIfNotJSCell(X86::edx, i);
788 m_jit.cmpl_i32m(reinterpret_cast<unsigned>(numberStructureID), OBJECT_OFFSET(JSCell, m_structureID), X86::edx);
789 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
790 }
791
792 // (1) In this case src2 is a reusable number cell.
793 // Slow case if src1 is not a number type.
794 m_jit.testl_i32r(JSImmediate::TagBitTypeInteger, X86::eax);
795 X86Assembler::JmpSrc op1imm = m_jit.emitUnlinkedJne();
796 if (!types.first().definitelyIsNumber()) {
797 emitJumpSlowCaseIfNotJSCell(X86::eax, i);
798 m_jit.cmpl_i32m(reinterpret_cast<unsigned>(numberStructureID), OBJECT_OFFSET(JSCell, m_structureID), X86::eax);
799 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
800 }
801
802 // (1a) if we get here, src1 is also a number cell
803 m_jit.movsd_mr(OBJECT_OFFSET(JSNumberCell, m_value), X86::eax, X86::xmm0);
804 X86Assembler::JmpSrc loadedDouble = m_jit.emitUnlinkedJmp();
805 // (1b) if we get here, src1 is an immediate
806 m_jit.link(op1imm, m_jit.label());
807 emitFastArithImmToInt(X86::eax);
808 m_jit.cvtsi2sd_rr(X86::eax, X86::xmm0);
809 // (1c)
810 m_jit.link(loadedDouble, m_jit.label());
811 if (opcodeID == op_add)
812 m_jit.addsd_mr(OBJECT_OFFSET(JSNumberCell, m_value), X86::edx, X86::xmm0);
813 else if (opcodeID == op_sub)
814 m_jit.subsd_mr(OBJECT_OFFSET(JSNumberCell, m_value), X86::edx, X86::xmm0);
815 else {
816 ASSERT(opcodeID == op_mul);
817 m_jit.mulsd_mr(OBJECT_OFFSET(JSNumberCell, m_value), X86::edx, X86::xmm0);
818 }
819
820 putDoubleResultToJSNumberCellOrJSImmediate(X86::xmm0, X86::edx, dst, &wasJSNumberCell2, X86::xmm1, X86::ecx, X86::eax);
821 wasJSNumberCell2b = m_jit.emitUnlinkedJmp();
822
823 // (2) This handles cases where src2 is an immediate number.
824 // Two slow cases - either src1 isn't an immediate, or the subtract overflows.
825 m_jit.link(op2imm, m_jit.label());
826 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
827 } else if (types.first().isReusable() && isSSE2Present()) {
828 ASSERT(types.first().mightBeNumber());
829
830 // Check op1 is a number
831 m_jit.testl_i32r(JSImmediate::TagBitTypeInteger, X86::eax);
832 X86Assembler::JmpSrc op1imm = m_jit.emitUnlinkedJne();
833 if (!types.first().definitelyIsNumber()) {
834 emitJumpSlowCaseIfNotJSCell(X86::eax, i);
835 m_jit.cmpl_i32m(reinterpret_cast<unsigned>(numberStructureID), OBJECT_OFFSET(JSCell, m_structureID), X86::eax);
836 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
837 }
838
839 // (1) In this case src1 is a reusable number cell.
840 // Slow case if src2 is not a number type.
841 m_jit.testl_i32r(JSImmediate::TagBitTypeInteger, X86::edx);
842 X86Assembler::JmpSrc op2imm = m_jit.emitUnlinkedJne();
843 if (!types.second().definitelyIsNumber()) {
844 emitJumpSlowCaseIfNotJSCell(X86::edx, i);
845 m_jit.cmpl_i32m(reinterpret_cast<unsigned>(numberStructureID), OBJECT_OFFSET(JSCell, m_structureID), X86::edx);
846 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
847 }
848
849 // (1a) if we get here, src2 is also a number cell
850 m_jit.movsd_mr(OBJECT_OFFSET(JSNumberCell, m_value), X86::edx, X86::xmm1);
851 X86Assembler::JmpSrc loadedDouble = m_jit.emitUnlinkedJmp();
852 // (1b) if we get here, src2 is an immediate
853 m_jit.link(op2imm, m_jit.label());
854 emitFastArithImmToInt(X86::edx);
855 m_jit.cvtsi2sd_rr(X86::edx, X86::xmm1);
856 // (1c)
857 m_jit.link(loadedDouble, m_jit.label());
858 m_jit.movsd_mr(OBJECT_OFFSET(JSNumberCell, m_value), X86::eax, X86::xmm0);
859 if (opcodeID == op_add)
860 m_jit.addsd_rr(X86::xmm1, X86::xmm0);
861 else if (opcodeID == op_sub)
862 m_jit.subsd_rr(X86::xmm1, X86::xmm0);
863 else {
864 ASSERT(opcodeID == op_mul);
865 m_jit.mulsd_rr(X86::xmm1, X86::xmm0);
866 }
867 m_jit.movsd_rm(X86::xmm0, OBJECT_OFFSET(JSNumberCell, m_value), X86::eax);
868 emitPutResult(dst);
869
870 putDoubleResultToJSNumberCellOrJSImmediate(X86::xmm0, X86::eax, dst, &wasJSNumberCell1, X86::xmm1, X86::ecx, X86::edx);
871 wasJSNumberCell1b = m_jit.emitUnlinkedJmp();
872
873 // (2) This handles cases where src1 is an immediate number.
874 // Two slow cases - either src2 isn't an immediate, or the subtract overflows.
875 m_jit.link(op1imm, m_jit.label());
876 emitJumpSlowCaseIfNotImmNum(X86::edx, i);
877 } else
878 emitJumpSlowCaseIfNotImmNums(X86::eax, X86::edx, i);
879
880 if (opcodeID == op_add) {
881 emitFastArithDeTagImmediate(X86::eax);
882 m_jit.addl_rr(X86::edx, X86::eax);
883 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJo(), i));
884 } else if (opcodeID == op_sub) {
885 m_jit.subl_rr(X86::edx, X86::eax);
886 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJo(), i));
887 emitFastArithReTagImmediate(X86::eax);
888 } else {
889 ASSERT(opcodeID == op_mul);
890 // convert eax & edx from JSImmediates to ints, and check if either are zero
891 emitFastArithImmToInt(X86::edx);
892 X86Assembler::JmpSrc op1Zero = emitFastArithDeTagImmediateJumpIfZero(X86::eax);
893 m_jit.testl_rr(X86::edx, X86::edx);
894 X86Assembler::JmpSrc op2NonZero = m_jit.emitUnlinkedJne();
895 m_jit.link(op1Zero, m_jit.label());
896 // if either input is zero, add the two together, and check if the result is < 0.
897 // If it is, we have a problem: for N < 0, (N * 0) == -0, which is not representable as a JSImmediate.
898 m_jit.movl_rr(X86::eax, X86::ecx);
899 m_jit.addl_rr(X86::edx, X86::ecx);
900 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJs(), i));
901 // Skip the above check if neither input is zero
902 m_jit.link(op2NonZero, m_jit.label());
903 m_jit.imull_rr(X86::edx, X86::eax);
904 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJo(), i));
905 emitFastArithReTagImmediate(X86::eax);
906 }
907 emitPutResult(dst);
908
909 if (types.second().isReusable() && isSSE2Present()) {
910 m_jit.link(wasJSNumberCell2, m_jit.label());
911 m_jit.link(wasJSNumberCell2b, m_jit.label());
912 }
913 else if (types.first().isReusable() && isSSE2Present()) {
914 m_jit.link(wasJSNumberCell1, m_jit.label());
915 m_jit.link(wasJSNumberCell1b, m_jit.label());
916 }
917}
918
919void CTI::compileBinaryArithOpSlowCase(Instruction* vPC, OpcodeID opcodeID, Vector<SlowCaseEntry>::iterator& iter, unsigned dst, unsigned src1, unsigned src2, OperandTypes types, unsigned i)
920{
921 X86Assembler::JmpDst here = m_jit.label();
922 m_jit.link(iter->from, here);
923 if (types.second().isReusable() && isSSE2Present()) {
924 if (!types.first().definitelyIsNumber()) {
925 m_jit.link((++iter)->from, here);
926 m_jit.link((++iter)->from, here);
927 }
928 if (!types.second().definitelyIsNumber()) {
929 m_jit.link((++iter)->from, here);
930 m_jit.link((++iter)->from, here);
931 }
932 m_jit.link((++iter)->from, here);
933 } else if (types.first().isReusable() && isSSE2Present()) {
934 if (!types.first().definitelyIsNumber()) {
935 m_jit.link((++iter)->from, here);
936 m_jit.link((++iter)->from, here);
937 }
938 if (!types.second().definitelyIsNumber()) {
939 m_jit.link((++iter)->from, here);
940 m_jit.link((++iter)->from, here);
941 }
942 m_jit.link((++iter)->from, here);
943 } else
944 m_jit.link((++iter)->from, here);
945
946 // additional entry point to handle -0 cases.
947 if (opcodeID == op_mul)
948 m_jit.link((++iter)->from, here);
949
950 emitGetPutArg(src1, 0, X86::ecx);
951 emitGetPutArg(src2, 4, X86::ecx);
952 if (opcodeID == op_add)
953 emitCTICall(vPC, i, Machine::cti_op_add);
954 else if (opcodeID == op_sub)
955 emitCTICall(vPC, i, Machine::cti_op_sub);
956 else {
957 ASSERT(opcodeID == op_mul);
958 emitCTICall(vPC, i, Machine::cti_op_mul);
959 }
960 emitPutResult(dst);
961}
962
963void CTI::privateCompileMainPass()
964{
965 Instruction* instruction = m_codeBlock->instructions.begin();
966 unsigned instructionCount = m_codeBlock->instructions.size();
967
968 unsigned propertyAccessInstructionIndex = 0;
969 unsigned callLinkInfoIndex = 0;
970
971 for (unsigned i = 0; i < instructionCount; ) {
972 ASSERT_WITH_MESSAGE(m_machine->isOpcode(instruction[i].u.opcode), "privateCompileMainPass gone bad @ %d", i);
973
974#if ENABLE(OPCODE_SAMPLING)
975 m_jit.movl_i32m(m_machine->sampler()->encodeSample(instruction + i), m_machine->sampler()->sampleSlot());
976#endif
977
978 m_labels[i] = m_jit.label();
979 OpcodeID opcodeID = m_machine->getOpcodeID(instruction[i].u.opcode);
980 switch (opcodeID) {
981 case op_mov: {
982 unsigned src = instruction[i + 2].u.operand;
983 if (isConstant(src))
984 m_jit.movl_i32r(asInteger(getConstant(m_callFrame, src)), X86::edx);
985 else
986 emitGetArg(src, X86::edx);
987 emitPutResult(instruction[i + 1].u.operand, X86::edx);
988 i += 3;
989 break;
990 }
991 case op_add: {
992 unsigned dst = instruction[i + 1].u.operand;
993 unsigned src1 = instruction[i + 2].u.operand;
994 unsigned src2 = instruction[i + 3].u.operand;
995
996 if (JSValue* value = getConstantImmediateNumericArg(src1)) {
997 emitGetArg(src2, X86::edx);
998 emitJumpSlowCaseIfNotImmNum(X86::edx, i);
999 m_jit.addl_i32r(getDeTaggedConstantImmediate(value), X86::edx);
1000 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJo(), i));
1001 emitPutResult(dst, X86::edx);
1002 } else if (JSValue* value = getConstantImmediateNumericArg(src2)) {
1003 emitGetArg(src1, X86::eax);
1004 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1005 m_jit.addl_i32r(getDeTaggedConstantImmediate(value), X86::eax);
1006 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJo(), i));
1007 emitPutResult(dst);
1008 } else {
1009 OperandTypes types = OperandTypes::fromInt(instruction[i + 4].u.operand);
1010 if (types.first().mightBeNumber() && types.second().mightBeNumber())
1011 compileBinaryArithOp(op_add, instruction[i + 1].u.operand, instruction[i + 2].u.operand, instruction[i + 3].u.operand, OperandTypes::fromInt(instruction[i + 4].u.operand), i);
1012 else {
1013 emitGetPutArg(instruction[i + 2].u.operand, 0, X86::ecx);
1014 emitGetPutArg(instruction[i + 3].u.operand, 4, X86::ecx);
1015 emitCTICall(instruction + i, i, Machine::cti_op_add);
1016 emitPutResult(instruction[i + 1].u.operand);
1017 }
1018 }
1019
1020 i += 5;
1021 break;
1022 }
1023 case op_end: {
1024 if (m_codeBlock->needsFullScopeChain)
1025 emitCTICall(instruction + i, i, Machine::cti_op_end);
1026 emitGetArg(instruction[i + 1].u.operand, X86::eax);
1027 m_jit.pushl_m(RegisterFile::ReturnPC * static_cast<int>(sizeof(Register)), X86::edi);
1028 m_jit.ret();
1029 i += 2;
1030 break;
1031 }
1032 case op_jmp: {
1033 unsigned target = instruction[i + 1].u.operand;
1034 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJmp(), i + 1 + target));
1035 i += 2;
1036 break;
1037 }
1038 case op_pre_inc: {
1039 int srcDst = instruction[i + 1].u.operand;
1040 emitGetArg(srcDst, X86::eax);
1041 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1042 m_jit.addl_i8r(getDeTaggedConstantImmediate(JSImmediate::oneImmediate()), X86::eax);
1043 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJo(), i));
1044 emitPutResult(srcDst, X86::eax);
1045 i += 2;
1046 break;
1047 }
1048 case op_loop: {
1049 emitSlowScriptCheck(instruction, i);
1050
1051 unsigned target = instruction[i + 1].u.operand;
1052 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJmp(), i + 1 + target));
1053 i += 2;
1054 break;
1055 }
1056 case op_loop_if_less: {
1057 emitSlowScriptCheck(instruction, i);
1058
1059 unsigned target = instruction[i + 3].u.operand;
1060 JSValue* src2imm = getConstantImmediateNumericArg(instruction[i + 2].u.operand);
1061 if (src2imm) {
1062 emitGetArg(instruction[i + 1].u.operand, X86::edx);
1063 emitJumpSlowCaseIfNotImmNum(X86::edx, i);
1064 m_jit.cmpl_i32r(asInteger(src2imm), X86::edx);
1065 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJl(), i + 3 + target));
1066 } else {
1067 emitGetArg(instruction[i + 1].u.operand, X86::eax);
1068 emitGetArg(instruction[i + 2].u.operand, X86::edx);
1069 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1070 emitJumpSlowCaseIfNotImmNum(X86::edx, i);
1071 m_jit.cmpl_rr(X86::edx, X86::eax);
1072 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJl(), i + 3 + target));
1073 }
1074 i += 4;
1075 break;
1076 }
1077 case op_loop_if_lesseq: {
1078 emitSlowScriptCheck(instruction, i);
1079
1080 unsigned target = instruction[i + 3].u.operand;
1081 JSValue* src2imm = getConstantImmediateNumericArg(instruction[i + 2].u.operand);
1082 if (src2imm) {
1083 emitGetArg(instruction[i + 1].u.operand, X86::edx);
1084 emitJumpSlowCaseIfNotImmNum(X86::edx, i);
1085 m_jit.cmpl_i32r(asInteger(src2imm), X86::edx);
1086 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJle(), i + 3 + target));
1087 } else {
1088 emitGetArg(instruction[i + 1].u.operand, X86::eax);
1089 emitGetArg(instruction[i + 2].u.operand, X86::edx);
1090 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1091 emitJumpSlowCaseIfNotImmNum(X86::edx, i);
1092 m_jit.cmpl_rr(X86::edx, X86::eax);
1093 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJle(), i + 3 + target));
1094 }
1095 i += 4;
1096 break;
1097 }
1098 case op_new_object: {
1099 emitCTICall(instruction + i, i, Machine::cti_op_new_object);
1100 emitPutResult(instruction[i + 1].u.operand);
1101 i += 2;
1102 break;
1103 }
1104 case op_put_by_id: {
1105 // In order to be able to repatch both the StructureID and the object offset, we store one pointer,
1106 // 'hotPathBegin', to just after the point where the arguments have been loaded into registers, and we
1107 // generate code such that the StructureID and offset are always at the same distance from it.
1108
1109 emitGetArg(instruction[i + 1].u.operand, X86::eax);
1110 emitGetArg(instruction[i + 3].u.operand, X86::edx);
1111
1112 ASSERT(m_codeBlock->propertyAccessInstructions[propertyAccessInstructionIndex].opcodeIndex == i);
1113 X86Assembler::JmpDst hotPathBegin = m_jit.label();
1114 m_propertyAccessCompilationInfo[propertyAccessInstructionIndex].hotPathBegin = hotPathBegin;
1115 ++propertyAccessInstructionIndex;
1116
1117 // Jump to a slow case if either the base object is an immediate, or if the StructureID does not match.
1118 emitJumpSlowCaseIfNotJSCell(X86::eax, i);
1119 // It is important that the following instruction plants a 32bit immediate, in order that it can be patched over.
1120 m_jit.cmpl_i32m(repatchGetByIdDefaultStructureID, OBJECT_OFFSET(JSCell, m_structureID), X86::eax);
1121 ASSERT(X86Assembler::getDifferenceBetweenLabels(hotPathBegin, m_jit.label()) == repatchOffsetPutByIdStructureID);
1122 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
1123
1124 // Plant a store to a bogus offset in the object's property map; we will patch this later, if it is to be used.
1125 m_jit.movl_mr(OBJECT_OFFSET(JSObject, m_propertyStorage), X86::eax, X86::eax);
1126 m_jit.movl_rm(X86::edx, repatchGetByIdDefaultOffset, X86::eax);
1127 ASSERT(X86Assembler::getDifferenceBetweenLabels(hotPathBegin, m_jit.label()) == repatchOffsetPutByIdPropertyMapOffset);
1128
1129 i += 8;
1130 break;
1131 }
1132 case op_get_by_id: {
1133 // As for put_by_id, get_by_id requires the offset of the StructureID and the offset of the access to be repatched.
1134 // Additionally, for get_by_id we need to repatch the offset of the branch to the slow case (we repatch this to jump
1135 // to array-length / prototype access trampolines), and finally we also use the property-map access offset as a label
1136 // to jump back to if one of these trampolines finds a match.
1137
1138 emitGetArg(instruction[i + 2].u.operand, X86::eax);
1139
1140 ASSERT(m_codeBlock->propertyAccessInstructions[propertyAccessInstructionIndex].opcodeIndex == i);
1141
1142 X86Assembler::JmpDst hotPathBegin = m_jit.label();
1143 m_propertyAccessCompilationInfo[propertyAccessInstructionIndex].hotPathBegin = hotPathBegin;
1144 ++propertyAccessInstructionIndex;
1145
1146 emitJumpSlowCaseIfNotJSCell(X86::eax, i);
1147 m_jit.cmpl_i32m(repatchGetByIdDefaultStructureID, OBJECT_OFFSET(JSCell, m_structureID), X86::eax);
1148 ASSERT(X86Assembler::getDifferenceBetweenLabels(hotPathBegin, m_jit.label()) == repatchOffsetGetByIdStructureID);
1149 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
1150 ASSERT(X86Assembler::getDifferenceBetweenLabels(hotPathBegin, m_jit.label()) == repatchOffsetGetByIdBranchToSlowCase);
1151
1152 m_jit.movl_mr(OBJECT_OFFSET(JSObject, m_propertyStorage), X86::eax, X86::eax);
1153 m_jit.movl_mr(repatchGetByIdDefaultOffset, X86::eax, X86::ecx);
1154 ASSERT(X86Assembler::getDifferenceBetweenLabels(hotPathBegin, m_jit.label()) == repatchOffsetGetByIdPropertyMapOffset);
1155 emitPutResult(instruction[i + 1].u.operand, X86::ecx);
1156
1157 i += 8;
1158 break;
1159 }
1160 case op_instanceof: {
1161 emitGetArg(instruction[i + 2].u.operand, X86::eax); // value
1162 emitGetArg(instruction[i + 3].u.operand, X86::ecx); // baseVal
1163 emitGetArg(instruction[i + 4].u.operand, X86::edx); // proto
1164
1165 // check if any are immediates
1166 m_jit.orl_rr(X86::eax, X86::ecx);
1167 m_jit.orl_rr(X86::edx, X86::ecx);
1168 m_jit.testl_i32r(JSImmediate::TagMask, X86::ecx);
1169
1170 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJnz(), i));
1171
1172 // check that all are object type - this is a bit of a bithack to avoid excess branching;
1173 // we check that the sum of the three type codes from StructureIDs is exactly 3 * ObjectType,
1174 // this works because NumberType and StringType are smaller
1175 m_jit.movl_i32r(3 * ObjectType, X86::ecx);
1176 m_jit.movl_mr(OBJECT_OFFSET(JSCell, m_structureID), X86::eax, X86::eax);
1177 m_jit.movl_mr(OBJECT_OFFSET(JSCell, m_structureID), X86::edx, X86::edx);
1178 m_jit.subl_mr(OBJECT_OFFSET(StructureID, m_typeInfo.m_type), X86::eax, X86::ecx);
1179 m_jit.subl_mr(OBJECT_OFFSET(StructureID, m_typeInfo.m_type), X86::edx, X86::ecx);
1180 emitGetArg(instruction[i + 3].u.operand, X86::edx); // reload baseVal
1181 m_jit.movl_mr(OBJECT_OFFSET(JSCell, m_structureID), X86::edx, X86::edx);
1182 m_jit.cmpl_rm(X86::ecx, OBJECT_OFFSET(StructureID, m_typeInfo.m_type), X86::edx);
1183
1184 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
1185
1186 // check that baseVal's flags include ImplementsHasInstance but not OverridesHasInstance
1187 m_jit.movl_mr(OBJECT_OFFSET(StructureID, m_typeInfo.m_flags), X86::edx, X86::ecx);
1188 m_jit.andl_i32r(ImplementsHasInstance | OverridesHasInstance, X86::ecx);
1189 m_jit.cmpl_i32r(ImplementsHasInstance, X86::ecx);
1190
1191 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
1192
1193 emitGetArg(instruction[i + 2].u.operand, X86::ecx); // reload value
1194 emitGetArg(instruction[i + 4].u.operand, X86::edx); // reload proto
1195
1196 // optimistically load true result
1197 m_jit.movl_i32r(asInteger(jsBoolean(true)), X86::eax);
1198
1199 X86Assembler::JmpDst loop = m_jit.label();
1200
1201 // load value's prototype
1202 m_jit.movl_mr(OBJECT_OFFSET(JSCell, m_structureID), X86::ecx, X86::ecx);
1203 m_jit.movl_mr(OBJECT_OFFSET(StructureID, m_prototype), X86::ecx, X86::ecx);
1204
1205 m_jit.cmpl_rr(X86::ecx, X86::edx);
1206 X86Assembler::JmpSrc exit = m_jit.emitUnlinkedJe();
1207
1208 m_jit.cmpl_i32r(asInteger(jsNull()), X86::ecx);
1209 X86Assembler::JmpSrc goToLoop = m_jit.emitUnlinkedJne();
1210 m_jit.link(goToLoop, loop);
1211
1212 m_jit.movl_i32r(asInteger(jsBoolean(false)), X86::eax);
1213
1214 m_jit.link(exit, m_jit.label());
1215
1216 emitPutResult(instruction[i + 1].u.operand);
1217
1218 i += 5;
1219 break;
1220 }
1221 case op_del_by_id: {
1222 emitGetPutArg(instruction[i + 2].u.operand, 0, X86::ecx);
1223 Identifier* ident = &(m_codeBlock->identifiers[instruction[i + 3].u.operand]);
1224 emitPutArgConstant(reinterpret_cast<unsigned>(ident), 4);
1225 emitCTICall(instruction + i, i, Machine::cti_op_del_by_id);
1226 emitPutResult(instruction[i + 1].u.operand);
1227 i += 4;
1228 break;
1229 }
1230 case op_mul: {
1231 unsigned dst = instruction[i + 1].u.operand;
1232 unsigned src1 = instruction[i + 2].u.operand;
1233 unsigned src2 = instruction[i + 3].u.operand;
1234
1235 // For now, only plant a fast int case if the constant operand is greater than zero.
1236 JSValue* src1Value = getConstantImmediateNumericArg(src1);
1237 JSValue* src2Value = getConstantImmediateNumericArg(src2);
1238 int32_t value;
1239 if (src1Value && ((value = JSImmediate::intValue(src1Value)) > 0)) {
1240 emitGetArg(src2, X86::eax);
1241 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1242 emitFastArithDeTagImmediate(X86::eax);
1243 m_jit.imull_i32r(X86::eax, value, X86::eax);
1244 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJo(), i));
1245 emitFastArithReTagImmediate(X86::eax);
1246 emitPutResult(dst);
1247 } else if (src2Value && ((value = JSImmediate::intValue(src2Value)) > 0)) {
1248 emitGetArg(src1, X86::eax);
1249 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1250 emitFastArithDeTagImmediate(X86::eax);
1251 m_jit.imull_i32r(X86::eax, value, X86::eax);
1252 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJo(), i));
1253 emitFastArithReTagImmediate(X86::eax);
1254 emitPutResult(dst);
1255 } else
1256 compileBinaryArithOp(op_mul, instruction[i + 1].u.operand, instruction[i + 2].u.operand, instruction[i + 3].u.operand, OperandTypes::fromInt(instruction[i + 4].u.operand), i);
1257
1258 i += 5;
1259 break;
1260 }
1261 case op_new_func: {
1262 FuncDeclNode* func = (m_codeBlock->functions[instruction[i + 2].u.operand]).get();
1263 emitPutArgConstant(reinterpret_cast<unsigned>(func), 0);
1264 emitCTICall(instruction + i, i, Machine::cti_op_new_func);
1265 emitPutResult(instruction[i + 1].u.operand);
1266 i += 3;
1267 break;
1268 }
1269 case op_call: {
1270 compileOpCall(opcodeID, instruction + i, i, callLinkInfoIndex++);
1271 i += 7;
1272 break;
1273 }
1274 case op_get_global_var: {
1275 JSVariableObject* globalObject = static_cast<JSVariableObject*>(instruction[i + 2].u.jsCell);
1276 m_jit.movl_i32r(asInteger(globalObject), X86::eax);
1277 emitGetVariableObjectRegister(X86::eax, instruction[i + 3].u.operand, X86::eax);
1278 emitPutResult(instruction[i + 1].u.operand, X86::eax);
1279 i += 4;
1280 break;
1281 }
1282 case op_put_global_var: {
1283 JSVariableObject* globalObject = static_cast<JSVariableObject*>(instruction[i + 1].u.jsCell);
1284 m_jit.movl_i32r(asInteger(globalObject), X86::eax);
1285 emitGetArg(instruction[i + 3].u.operand, X86::edx);
1286 emitPutVariableObjectRegister(X86::edx, X86::eax, instruction[i + 2].u.operand);
1287 i += 4;
1288 break;
1289 }
1290 case op_get_scoped_var: {
1291 int skip = instruction[i + 3].u.operand + m_codeBlock->needsFullScopeChain;
1292
1293 emitGetArg(RegisterFile::ScopeChain, X86::eax);
1294 while (skip--)
1295 m_jit.movl_mr(OBJECT_OFFSET(ScopeChainNode, next), X86::eax, X86::eax);
1296
1297 m_jit.movl_mr(OBJECT_OFFSET(ScopeChainNode, object), X86::eax, X86::eax);
1298 emitGetVariableObjectRegister(X86::eax, instruction[i + 2].u.operand, X86::eax);
1299 emitPutResult(instruction[i + 1].u.operand);
1300 i += 4;
1301 break;
1302 }
1303 case op_put_scoped_var: {
1304 int skip = instruction[i + 2].u.operand + m_codeBlock->needsFullScopeChain;
1305
1306 emitGetArg(RegisterFile::ScopeChain, X86::edx);
1307 emitGetArg(instruction[i + 3].u.operand, X86::eax);
1308 while (skip--)
1309 m_jit.movl_mr(OBJECT_OFFSET(ScopeChainNode, next), X86::edx, X86::edx);
1310
1311 m_jit.movl_mr(OBJECT_OFFSET(ScopeChainNode, object), X86::edx, X86::edx);
1312 emitPutVariableObjectRegister(X86::eax, X86::edx, instruction[i + 1].u.operand);
1313 i += 4;
1314 break;
1315 }
1316 case op_tear_off_activation: {
1317 emitGetPutArg(instruction[i + 1].u.operand, 0, X86::ecx);
1318 emitCTICall(instruction + i, i, Machine::cti_op_tear_off_activation);
1319 i += 2;
1320 break;
1321 }
1322 case op_tear_off_arguments: {
1323 emitCTICall(instruction + i, i, Machine::cti_op_tear_off_arguments);
1324 i += 1;
1325 break;
1326 }
1327 case op_ret: {
1328 // We could JIT generate the deref, only calling out to C when the refcount hits zero.
1329 if (m_codeBlock->needsFullScopeChain)
1330 emitCTICall(instruction + i, i, Machine::cti_op_ret_scopeChain);
1331
1332 // Return the result in %eax.
1333 emitGetArg(instruction[i + 1].u.operand, X86::eax);
1334
1335 // Grab the return address.
1336 emitGetArg(RegisterFile::ReturnPC, X86::edx);
1337
1338 // Restore our caller's "r".
1339 emitGetArg(RegisterFile::CallerFrame, X86::edi);
1340
1341 // Return.
1342 m_jit.pushl_r(X86::edx);
1343 m_jit.ret();
1344
1345 i += 2;
1346 break;
1347 }
1348 case op_new_array: {
1349 m_jit.leal_mr(sizeof(Register) * instruction[i + 2].u.operand, X86::edi, X86::edx);
1350 emitPutArg(X86::edx, 0);
1351 emitPutArgConstant(instruction[i + 3].u.operand, 4);
1352 emitCTICall(instruction + i, i, Machine::cti_op_new_array);
1353 emitPutResult(instruction[i + 1].u.operand);
1354 i += 4;
1355 break;
1356 }
1357 case op_resolve: {
1358 Identifier* ident = &(m_codeBlock->identifiers[instruction[i + 2].u.operand]);
1359 emitPutArgConstant(reinterpret_cast<unsigned>(ident), 0);
1360 emitCTICall(instruction + i, i, Machine::cti_op_resolve);
1361 emitPutResult(instruction[i + 1].u.operand);
1362 i += 3;
1363 break;
1364 }
1365 case op_construct: {
1366 compileOpCall(opcodeID, instruction + i, i, callLinkInfoIndex++);
1367 i += 7;
1368 break;
1369 }
1370 case op_construct_verify: {
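                // If the value produced by the construct call is not an object (an immediate, or a cell whose type is
                // not ObjectType), replace it with the value in the second operand.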
1371 emitGetArg(instruction[i + 1].u.operand, X86::eax);
1372
1373 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
1374 X86Assembler::JmpSrc isImmediate = m_jit.emitUnlinkedJne();
1375 m_jit.movl_mr(OBJECT_OFFSET(JSCell, m_structureID), X86::eax, X86::ecx);
1376 m_jit.cmpl_i32m(ObjectType, OBJECT_OFFSET(StructureID, m_typeInfo) + OBJECT_OFFSET(TypeInfo, m_type), X86::ecx);
1377 X86Assembler::JmpSrc isObject = m_jit.emitUnlinkedJe();
1378
1379 m_jit.link(isImmediate, m_jit.label());
1380 emitGetArg(instruction[i + 2].u.operand, X86::ecx);
1381 emitPutResult(instruction[i + 1].u.operand, X86::ecx);
1382 m_jit.link(isObject, m_jit.label());
1383
1384 i += 3;
1385 break;
1386 }
1387 case op_get_by_val: {
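                // Fast path: the index must be an immediate integer and the base must be a cell whose vptr matches
                // JSArray; anything else bails to the slow case.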
1388 emitGetArg(instruction[i + 2].u.operand, X86::eax);
1389 emitGetArg(instruction[i + 3].u.operand, X86::edx);
1390 emitJumpSlowCaseIfNotImmNum(X86::edx, i);
1391 emitFastArithImmToInt(X86::edx);
1392 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
1393 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
1394 m_jit.cmpl_i32m(reinterpret_cast<unsigned>(m_machine->m_jsArrayVptr), X86::eax);
1395 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
1396
1397 // This is an array; get the m_storage pointer into ecx, then check if the index is below the fast cutoff
1398 m_jit.movl_mr(OBJECT_OFFSET(JSArray, m_storage), X86::eax, X86::ecx);
1399 m_jit.cmpl_rm(X86::edx, OBJECT_OFFSET(JSArray, m_fastAccessCutoff), X86::eax);
1400 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJbe(), i));
1401
1402 // Get the value from the vector
1403 m_jit.movl_mr(OBJECT_OFFSET(ArrayStorage, m_vector[0]), X86::ecx, X86::edx, sizeof(JSValue*), X86::eax);
1404 emitPutResult(instruction[i + 1].u.operand);
1405 i += 4;
1406 break;
1407 }
1408 case op_resolve_func: {
1409 Identifier* ident = &(m_codeBlock->identifiers[instruction[i + 3].u.operand]);
1410 emitPutArgConstant(reinterpret_cast<unsigned>(ident), 0);
1411 emitCTICall(instruction + i, i, Machine::cti_op_resolve_func);
1412 emitPutResult(instruction[i + 1].u.operand);
1413 emitPutResult(instruction[i + 2].u.operand, X86::edx);
1414 i += 4;
1415 break;
1416 }
1417 case op_sub: {
1418 compileBinaryArithOp(op_sub, instruction[i + 1].u.operand, instruction[i + 2].u.operand, instruction[i + 3].u.operand, OperandTypes::fromInt(instruction[i + 4].u.operand), i);
1419 i += 5;
1420 break;
1421 }
1422 case op_put_by_val: {
1423 emitGetArg(instruction[i + 1].u.operand, X86::eax);
1424 emitGetArg(instruction[i + 2].u.operand, X86::edx);
1425 emitJumpSlowCaseIfNotImmNum(X86::edx, i);
1426 emitFastArithImmToInt(X86::edx);
1427 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
1428 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
1429 m_jit.cmpl_i32m(reinterpret_cast<unsigned>(m_machine->m_jsArrayVptr), X86::eax);
1430 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
1431
1432 // This is an array; get the m_storage pointer into ecx, then check if the index is below the fast cutoff
1433 m_jit.movl_mr(OBJECT_OFFSET(JSArray, m_storage), X86::eax, X86::ecx);
1434 m_jit.cmpl_rm(X86::edx, OBJECT_OFFSET(JSArray, m_fastAccessCutoff), X86::eax);
1435 X86Assembler::JmpSrc inFastVector = m_jit.emitUnlinkedJa();
1436            // No; oh well, check if the access is within the vector - if so, we may still be okay.
1437 m_jit.cmpl_rm(X86::edx, OBJECT_OFFSET(ArrayStorage, m_vectorLength), X86::ecx);
1438 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJbe(), i));
1439
1440 // This is a write to the slow part of the vector; first, we have to check if this would be the first write to this location.
1441            // FIXME: should be able to handle an initial write to the array; increment the number of items in the array, and potentially update the fast access cutoff.
1442 m_jit.cmpl_i8m(0, OBJECT_OFFSET(ArrayStorage, m_vector[0]), X86::ecx, X86::edx, sizeof(JSValue*));
1443 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJe(), i));
1444
1445 // All good - put the value into the array.
1446 m_jit.link(inFastVector, m_jit.label());
1447 emitGetArg(instruction[i + 3].u.operand, X86::eax);
1448 m_jit.movl_rm(X86::eax, OBJECT_OFFSET(ArrayStorage, m_vector[0]), X86::ecx, X86::edx, sizeof(JSValue*));
1449 i += 4;
1450 break;
1451 }
1452 CTI_COMPILE_BINARY_OP(op_lesseq)
1453 case op_loop_if_true: {
1454 emitSlowScriptCheck(instruction, i);
1455
1456 unsigned target = instruction[i + 2].u.operand;
1457 emitGetArg(instruction[i + 1].u.operand, X86::eax);
1458
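                // Inline truth test: immediate zero falls through; any other immediate integer jumps; 'true' jumps,
                // 'false' falls through, and anything else goes to the slow case.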
1459 m_jit.cmpl_i32r(asInteger(JSImmediate::zeroImmediate()), X86::eax);
1460 X86Assembler::JmpSrc isZero = m_jit.emitUnlinkedJe();
1461 m_jit.testl_i32r(JSImmediate::TagBitTypeInteger, X86::eax);
1462 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJne(), i + 2 + target));
1463
1464 m_jit.cmpl_i32r(asInteger(JSImmediate::trueImmediate()), X86::eax);
1465 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJe(), i + 2 + target));
1466 m_jit.cmpl_i32r(asInteger(JSImmediate::falseImmediate()), X86::eax);
1467 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
1468
1469 m_jit.link(isZero, m_jit.label());
1470 i += 3;
1471 break;
1472        }
1473 case op_resolve_base: {
1474 Identifier* ident = &(m_codeBlock->identifiers[instruction[i + 2].u.operand]);
1475 emitPutArgConstant(reinterpret_cast<unsigned>(ident), 0);
1476 emitCTICall(instruction + i, i, Machine::cti_op_resolve_base);
1477 emitPutResult(instruction[i + 1].u.operand);
1478 i += 3;
1479 break;
1480 }
1481 case op_negate: {
1482 emitGetPutArg(instruction[i + 2].u.operand, 0, X86::ecx);
1483 emitCTICall(instruction + i, i, Machine::cti_op_negate);
1484 emitPutResult(instruction[i + 1].u.operand);
1485 i += 3;
1486 break;
1487 }
1488 case op_resolve_skip: {
1489 Identifier* ident = &(m_codeBlock->identifiers[instruction[i + 2].u.operand]);
1490 emitPutArgConstant(reinterpret_cast<unsigned>(ident), 0);
1491 emitPutArgConstant(instruction[i + 3].u.operand + m_codeBlock->needsFullScopeChain, 4);
1492 emitCTICall(instruction + i, i, Machine::cti_op_resolve_skip);
1493 emitPutResult(instruction[i + 1].u.operand);
1494 i += 4;
1495 break;
1496 }
1497 case op_resolve_global: {
1498 // Fast case
1499 unsigned globalObject = asInteger(instruction[i + 2].u.jsCell);
1500 Identifier* ident = &(m_codeBlock->identifiers[instruction[i + 3].u.operand]);
1501 void* structureIDAddr = reinterpret_cast<void*>(instruction + i + 4);
1502 void* offsetAddr = reinterpret_cast<void*>(instruction + i + 5);
1503
1504 // Check StructureID of global object
1505 m_jit.movl_i32r(globalObject, X86::eax);
1506 m_jit.movl_mr(structureIDAddr, X86::edx);
1507 m_jit.cmpl_rm(X86::edx, OBJECT_OFFSET(JSCell, m_structureID), X86::eax);
1508 X86Assembler::JmpSrc noMatch = m_jit.emitUnlinkedJne(); // StructureIDs don't match
1509
1510 // Load cached property
1511 m_jit.movl_mr(OBJECT_OFFSET(JSGlobalObject, m_propertyStorage), X86::eax, X86::eax);
1512 m_jit.movl_mr(offsetAddr, X86::edx);
1513 m_jit.movl_mr(0, X86::eax, X86::edx, sizeof(JSValue*), X86::eax);
1514 emitPutResult(instruction[i + 1].u.operand);
1515 X86Assembler::JmpSrc end = m_jit.emitUnlinkedJmp();
1516
1517 // Slow case
1518 m_jit.link(noMatch, m_jit.label());
1519 emitPutArgConstant(globalObject, 0);
1520 emitPutArgConstant(reinterpret_cast<unsigned>(ident), 4);
1521 emitPutArgConstant(reinterpret_cast<unsigned>(instruction + i), 8);
1522 emitCTICall(instruction + i, i, Machine::cti_op_resolve_global);
1523 emitPutResult(instruction[i + 1].u.operand);
1524 m_jit.link(end, m_jit.label());
1525 i += 6;
1526 break;
1527 }
1528 CTI_COMPILE_BINARY_OP(op_div)
1529 case op_pre_dec: {
1530 int srcDst = instruction[i + 1].u.operand;
1531 emitGetArg(srcDst, X86::eax);
1532 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1533 m_jit.subl_i8r(getDeTaggedConstantImmediate(JSImmediate::oneImmediate()), X86::eax);
1534 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJo(), i));
1535 emitPutResult(srcDst, X86::eax);
1536 i += 2;
1537 break;
1538 }
1539 case op_jnless: {
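                // Branch to the target when src1 is not less than src2. A constant immediate integer on the right is
                // compared directly; non-immediate operands fall back to the slow case.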
1540 unsigned target = instruction[i + 3].u.operand;
1541 JSValue* src2imm = getConstantImmediateNumericArg(instruction[i + 2].u.operand);
1542 if (src2imm) {
1543 emitGetArg(instruction[i + 1].u.operand, X86::edx);
1544 emitJumpSlowCaseIfNotImmNum(X86::edx, i);
1545 m_jit.cmpl_i32r(asInteger(src2imm), X86::edx);
1546 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJge(), i + 3 + target));
1547 } else {
1548 emitGetArg(instruction[i + 1].u.operand, X86::eax);
1549 emitGetArg(instruction[i + 2].u.operand, X86::edx);
1550 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1551 emitJumpSlowCaseIfNotImmNum(X86::edx, i);
1552 m_jit.cmpl_rr(X86::edx, X86::eax);
1553 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJge(), i + 3 + target));
1554 }
1555 i += 4;
1556 break;
1557 }
1558 case op_not: {
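                // Fast path for booleans: xor away the bool tag, bail to the slow case if any tag bits remain, then
                // xor the tag back in with the payload bit flipped.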
1559 emitGetArg(instruction[i + 2].u.operand, X86::eax);
1560 m_jit.xorl_i8r(JSImmediate::FullTagTypeBool, X86::eax);
1561 m_jit.testl_i32r(JSImmediate::FullTagTypeMask, X86::eax); // i8?
1562 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
1563 m_jit.xorl_i8r((JSImmediate::FullTagTypeBool | JSImmediate::ExtendedPayloadBitBoolValue), X86::eax);
1564 emitPutResult(instruction[i + 1].u.operand);
1565 i += 3;
1566 break;
1567 }
1568 case op_jfalse: {
1569 unsigned target = instruction[i + 2].u.operand;
1570 emitGetArg(instruction[i + 1].u.operand, X86::eax);
1571
1572 m_jit.cmpl_i32r(asInteger(JSImmediate::zeroImmediate()), X86::eax);
1573 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJe(), i + 2 + target));
1574 m_jit.testl_i32r(JSImmediate::TagBitTypeInteger, X86::eax);
1575 X86Assembler::JmpSrc isNonZero = m_jit.emitUnlinkedJne();
1576
1577 m_jit.cmpl_i32r(asInteger(JSImmediate::falseImmediate()), X86::eax);
1578 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJe(), i + 2 + target));
1579 m_jit.cmpl_i32r(asInteger(JSImmediate::trueImmediate()), X86::eax);
1580 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
1581
1582 m_jit.link(isNonZero, m_jit.label());
1583 i += 3;
1584 break;
1585        }
1586 case op_jeq_null: {
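                // A cell compares equal to null only if its StructureID is flagged MasqueradesAsUndefined; an immediate
                // compares equal if it is null or undefined (the undefined tag bit is masked off before comparing against null).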
1587 unsigned src = instruction[i + 1].u.operand;
1588 unsigned target = instruction[i + 2].u.operand;
1589
1590 emitGetArg(src, X86::eax);
1591 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
1592 X86Assembler::JmpSrc isImmediate = m_jit.emitUnlinkedJnz();
1593
1594 m_jit.movl_mr(OBJECT_OFFSET(JSCell, m_structureID), X86::eax, X86::ecx);
1595 m_jit.testl_i32m(MasqueradesAsUndefined, OBJECT_OFFSET(StructureID, m_typeInfo.m_flags), X86::ecx);
1596 m_jit.setnz_r(X86::eax);
1597
1598 X86Assembler::JmpSrc wasNotImmediate = m_jit.emitUnlinkedJmp();
1599
1600 m_jit.link(isImmediate, m_jit.label());
1601
1602 m_jit.movl_i32r(~JSImmediate::ExtendedTagBitUndefined, X86::ecx);
1603 m_jit.andl_rr(X86::eax, X86::ecx);
1604 m_jit.cmpl_i32r(JSImmediate::FullTagTypeNull, X86::ecx);
1605 m_jit.sete_r(X86::eax);
1606
1607 m_jit.link(wasNotImmediate, m_jit.label());
1608
1609 m_jit.movzbl_rr(X86::eax, X86::eax);
1610 m_jit.cmpl_i32r(0, X86::eax);
1611 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJnz(), i + 2 + target));
1612
1613 i += 3;
1614 break;
1615        }
1616 case op_jneq_null: {
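                // Identical to op_jeq_null above, but with the comparison result inverted (setz / setne).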
1617 unsigned src = instruction[i + 1].u.operand;
1618 unsigned target = instruction[i + 2].u.operand;
1619
1620 emitGetArg(src, X86::eax);
1621 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
1622 X86Assembler::JmpSrc isImmediate = m_jit.emitUnlinkedJnz();
1623
1624 m_jit.movl_mr(OBJECT_OFFSET(JSCell, m_structureID), X86::eax, X86::ecx);
1625 m_jit.testl_i32m(MasqueradesAsUndefined, OBJECT_OFFSET(StructureID, m_typeInfo.m_flags), X86::ecx);
1626 m_jit.setz_r(X86::eax);
1627
1628 X86Assembler::JmpSrc wasNotImmediate = m_jit.emitUnlinkedJmp();
1629
1630 m_jit.link(isImmediate, m_jit.label());
1631
1632 m_jit.movl_i32r(~JSImmediate::ExtendedTagBitUndefined, X86::ecx);
1633 m_jit.andl_rr(X86::eax, X86::ecx);
1634 m_jit.cmpl_i32r(JSImmediate::FullTagTypeNull, X86::ecx);
1635 m_jit.setne_r(X86::eax);
1636
1637 m_jit.link(wasNotImmediate, m_jit.label());
1638
1639 m_jit.movzbl_rr(X86::eax, X86::eax);
1640 m_jit.cmpl_i32r(0, X86::eax);
1641 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJnz(), i + 2 + target));
1642
1643 i += 3;
1644 break;
1645 }
1646 case op_post_inc: {
1647 int srcDst = instruction[i + 2].u.operand;
1648 emitGetArg(srcDst, X86::eax);
1649 m_jit.movl_rr(X86::eax, X86::edx);
1650 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1651 m_jit.addl_i8r(getDeTaggedConstantImmediate(JSImmediate::oneImmediate()), X86::edx);
1652 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJo(), i));
1653 emitPutResult(srcDst, X86::edx);
1654 emitPutResult(instruction[i + 1].u.operand);
1655 i += 3;
1656 break;
1657 }
1658 case op_unexpected_load: {
1659 JSValue* v = m_codeBlock->unexpectedConstants[instruction[i + 2].u.operand];
1660 m_jit.movl_i32r(asInteger(v), X86::eax);
1661 emitPutResult(instruction[i + 1].u.operand);
1662 i += 3;
1663 break;
1664 }
1665 case op_jsr: {
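                // Store a placeholder (0) return address into retAddrDst and jump; the recorded JSRInfo lets the real
                // address of the label following the jump be patched in once code is copied, and op_sret jumps back through it.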
1666 int retAddrDst = instruction[i + 1].u.operand;
1667 int target = instruction[i + 2].u.operand;
1668 m_jit.movl_i32m(0, sizeof(Register) * retAddrDst, X86::edi);
1669 X86Assembler::JmpDst addrPosition = m_jit.label();
1670 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJmp(), i + 2 + target));
1671 X86Assembler::JmpDst sretTarget = m_jit.label();
1672 m_jsrSites.append(JSRInfo(addrPosition, sretTarget));
1673 i += 3;
1674 break;
1675 }
1676 case op_sret: {
1677 m_jit.jmp_m(sizeof(Register) * instruction[i + 1].u.operand, X86::edi);
1678 i += 2;
1679 break;
1680 }
1681 case op_eq: {
1682 emitGetArg(instruction[i + 2].u.operand, X86::eax);
1683 emitGetArg(instruction[i + 3].u.operand, X86::edx);
1684 emitJumpSlowCaseIfNotImmNums(X86::eax, X86::edx, i);
1685 m_jit.cmpl_rr(X86::edx, X86::eax);
1686 m_jit.sete_r(X86::eax);
1687 m_jit.movzbl_rr(X86::eax, X86::eax);
1688 emitTagAsBoolImmediate(X86::eax);
1689 emitPutResult(instruction[i + 1].u.operand);
1690 i += 4;
1691 break;
1692 }
1693 case op_lshift: {
1694 emitGetArg(instruction[i + 2].u.operand, X86::eax);
1695 emitGetArg(instruction[i + 3].u.operand, X86::ecx);
1696 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1697 emitJumpSlowCaseIfNotImmNum(X86::ecx, i);
1698 emitFastArithImmToInt(X86::eax);
1699 emitFastArithImmToInt(X86::ecx);
1700 m_jit.shll_CLr(X86::eax);
1701 emitFastArithIntToImmOrSlowCase(X86::eax, i);
1702 emitPutResult(instruction[i + 1].u.operand);
1703 i += 4;
1704 break;
1705 }
1706 case op_bitand: {
1707 unsigned src1 = instruction[i + 2].u.operand;
1708 unsigned src2 = instruction[i + 3].u.operand;
1709 unsigned dst = instruction[i + 1].u.operand;
1710 if (JSValue* value = getConstantImmediateNumericArg(src1)) {
1711 emitGetArg(src2, X86::eax);
1712 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1713 m_jit.andl_i32r(asInteger(value), X86::eax); // FIXME: make it more obvious this is relying on the format of JSImmediate
1714 emitPutResult(dst);
1715 } else if (JSValue* value = getConstantImmediateNumericArg(src2)) {
1716 emitGetArg(src1, X86::eax);
1717 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1718 m_jit.andl_i32r(asInteger(value), X86::eax);
1719 emitPutResult(dst);
1720 } else {
1721 emitGetArg(src1, X86::eax);
1722 emitGetArg(src2, X86::edx);
1723 m_jit.andl_rr(X86::edx, X86::eax);
1724 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1725 emitPutResult(dst);
1726 }
1727 i += 5;
1728 break;
1729 }
1730 case op_rshift: {
1731 emitGetArg(instruction[i + 2].u.operand, X86::eax);
1732 emitGetArg(instruction[i + 3].u.operand, X86::ecx);
1733 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1734 emitJumpSlowCaseIfNotImmNum(X86::ecx, i);
1735 emitFastArithImmToInt(X86::ecx);
1736 m_jit.sarl_CLr(X86::eax);
1737 emitFastArithPotentiallyReTagImmediate(X86::eax);
1738 emitPutResult(instruction[i + 1].u.operand);
1739 i += 4;
1740 break;
1741 }
1742 case op_bitnot: {
1743 emitGetArg(instruction[i + 2].u.operand, X86::eax);
1744 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1745 m_jit.xorl_i8r(~JSImmediate::TagBitTypeInteger, X86::eax);
1746 emitPutResult(instruction[i + 1].u.operand);
1747 i += 3;
1748 break;
1749 }
1750 case op_resolve_with_base: {
1751 Identifier* ident = &(m_codeBlock->identifiers[instruction[i + 3].u.operand]);
1752 emitPutArgConstant(reinterpret_cast<unsigned>(ident), 0);
1753 emitCTICall(instruction + i, i, Machine::cti_op_resolve_with_base);
1754 emitPutResult(instruction[i + 1].u.operand);
1755 emitPutResult(instruction[i + 2].u.operand, X86::edx);
1756 i += 4;
1757 break;
1758 }
1759 case op_new_func_exp: {
1760 FuncExprNode* func = (m_codeBlock->functionExpressions[instruction[i + 2].u.operand]).get();
1761 emitPutArgConstant(reinterpret_cast<unsigned>(func), 0);
1762 emitCTICall(instruction + i, i, Machine::cti_op_new_func_exp);
1763 emitPutResult(instruction[i + 1].u.operand);
1764 i += 3;
1765 break;
1766 }
1767 case op_mod: {
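                // Integer fast path: detag both operands, bail to the slow case if the divisor detags to zero, divide
                // with idiv, and re-tag the remainder (left in edx) as the result.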
1768 emitGetArg(instruction[i + 2].u.operand, X86::eax);
1769 emitGetArg(instruction[i + 3].u.operand, X86::ecx);
1770 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1771 emitJumpSlowCaseIfNotImmNum(X86::ecx, i);
1772 emitFastArithDeTagImmediate(X86::eax);
1773 m_slowCases.append(SlowCaseEntry(emitFastArithDeTagImmediateJumpIfZero(X86::ecx), i));
1774 m_jit.cdq();
1775 m_jit.idivl_r(X86::ecx);
1776 emitFastArithReTagImmediate(X86::edx);
1777 m_jit.movl_rr(X86::edx, X86::eax);
1778 emitPutResult(instruction[i + 1].u.operand);
1779 i += 4;
1780 break;
1781 }
1782 case op_jtrue: {
1783 unsigned target = instruction[i + 2].u.operand;
1784 emitGetArg(instruction[i + 1].u.operand, X86::eax);
1785
1786 m_jit.cmpl_i32r(asInteger(JSImmediate::zeroImmediate()), X86::eax);
1787 X86Assembler::JmpSrc isZero = m_jit.emitUnlinkedJe();
1788 m_jit.testl_i32r(JSImmediate::TagBitTypeInteger, X86::eax);
1789 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJne(), i + 2 + target));
1790
1791 m_jit.cmpl_i32r(asInteger(JSImmediate::trueImmediate()), X86::eax);
1792 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJe(), i + 2 + target));
1793 m_jit.cmpl_i32r(asInteger(JSImmediate::falseImmediate()), X86::eax);
1794 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
1795
1796 m_jit.link(isZero, m_jit.label());
1797 i += 3;
1798 break;
1799 }
1800 CTI_COMPILE_BINARY_OP(op_less)
1801 case op_neq: {
1802 emitGetArg(instruction[i + 2].u.operand, X86::eax);
1803 emitGetArg(instruction[i + 3].u.operand, X86::edx);
1804 emitJumpSlowCaseIfNotImmNums(X86::eax, X86::edx, i);
1805 m_jit.cmpl_rr(X86::eax, X86::edx);
1806
1807 m_jit.setne_r(X86::eax);
1808 m_jit.movzbl_rr(X86::eax, X86::eax);
1809 emitTagAsBoolImmediate(X86::eax);
1810
1811 emitPutResult(instruction[i + 1].u.operand);
1812
1813 i += 4;
1814 break;
1815 }
1816 case op_post_dec: {
1817 int srcDst = instruction[i + 2].u.operand;
1818 emitGetArg(srcDst, X86::eax);
1819 m_jit.movl_rr(X86::eax, X86::edx);
1820 emitJumpSlowCaseIfNotImmNum(X86::eax, i);
1821 m_jit.subl_i8r(getDeTaggedConstantImmediate(JSImmediate::oneImmediate()), X86::edx);
1822 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJo(), i));
1823 emitPutResult(srcDst, X86::edx);
1824 emitPutResult(instruction[i + 1].u.operand);
1825 i += 3;
1826 break;
1827 }
1828 CTI_COMPILE_BINARY_OP(op_urshift)
1829 case op_bitxor: {
1830 emitGetArg(instruction[i + 2].u.operand, X86::eax);
1831 emitGetArg(instruction[i + 3].u.operand, X86::edx);
1832 emitJumpSlowCaseIfNotImmNums(X86::eax, X86::edx, i);
1833 m_jit.xorl_rr(X86::edx, X86::eax);
1834 emitFastArithReTagImmediate(X86::eax);
1835 emitPutResult(instruction[i + 1].u.operand);
1836 i += 5;
1837 break;
1838 }
1839 case op_new_regexp: {
1840 RegExp* regExp = m_codeBlock->regexps[instruction[i + 2].u.operand].get();
1841 emitPutArgConstant(reinterpret_cast<unsigned>(regExp), 0);
1842 emitCTICall(instruction + i, i, Machine::cti_op_new_regexp);
1843 emitPutResult(instruction[i + 1].u.operand);
1844 i += 3;
1845 break;
1846 }
1847 case op_bitor: {
1848 emitGetArg(instruction[i + 2].u.operand, X86::eax);
1849 emitGetArg(instruction[i + 3].u.operand, X86::edx);
1850 emitJumpSlowCaseIfNotImmNums(X86::eax, X86::edx, i);
1851 m_jit.orl_rr(X86::edx, X86::eax);
1852 emitPutResult(instruction[i + 1].u.operand);
1853 i += 5;
1854 break;
1855 }
1856 case op_call_eval: {
1857 compileOpCall(opcodeID, instruction + i, i, callLinkInfoIndex++);
1858 i += 7;
1859 break;
1860 }
1861 case op_throw: {
1862 emitGetPutArg(instruction[i + 1].u.operand, 0, X86::ecx);
1863 emitCTICall(instruction + i, i, Machine::cti_op_throw);
1864 m_jit.addl_i8r(0x20, X86::esp);
1865 m_jit.popl_r(X86::ebx);
1866 m_jit.popl_r(X86::edi);
1867 m_jit.popl_r(X86::esi);
1868 m_jit.ret();
1869 i += 2;
1870 break;
1871 }
1872 case op_get_pnames: {
1873 emitGetPutArg(instruction[i + 2].u.operand, 0, X86::ecx);
1874 emitCTICall(instruction + i, i, Machine::cti_op_get_pnames);
1875 emitPutResult(instruction[i + 1].u.operand);
1876 i += 3;
1877 break;
1878 }
1879 case op_next_pname: {
1880 emitGetPutArg(instruction[i + 2].u.operand, 0, X86::ecx);
1881 unsigned target = instruction[i + 3].u.operand;
1882 emitCTICall(instruction + i, i, Machine::cti_op_next_pname);
1883 m_jit.testl_rr(X86::eax, X86::eax);
1884 X86Assembler::JmpSrc endOfIter = m_jit.emitUnlinkedJe();
1885 emitPutResult(instruction[i + 1].u.operand);
1886 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJmp(), i + 3 + target));
1887 m_jit.link(endOfIter, m_jit.label());
1888 i += 4;
1889 break;
1890 }
1891 case op_push_scope: {
1892 emitGetPutArg(instruction[i + 1].u.operand, 0, X86::ecx);
1893 emitCTICall(instruction + i, i, Machine::cti_op_push_scope);
1894 i += 2;
1895 break;
1896 }
1897 case op_pop_scope: {
1898 emitCTICall(instruction + i, i, Machine::cti_op_pop_scope);
1899 i += 1;
1900 break;
1901 }
1902 CTI_COMPILE_UNARY_OP(op_typeof)
1903 CTI_COMPILE_UNARY_OP(op_is_undefined)
1904 CTI_COMPILE_UNARY_OP(op_is_boolean)
1905 CTI_COMPILE_UNARY_OP(op_is_number)
1906 CTI_COMPILE_UNARY_OP(op_is_string)
1907 CTI_COMPILE_UNARY_OP(op_is_object)
1908 CTI_COMPILE_UNARY_OP(op_is_function)
1909 case op_stricteq: {
1910 compileOpStrictEq(instruction, i, OpStrictEq);
1911 i += 4;
1912 break;
1913 }
1914 case op_nstricteq: {
1915 compileOpStrictEq(instruction, i, OpNStrictEq);
1916 i += 4;
1917 break;
1918 }
1919 case op_to_jsnumber: {
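                // Immediate integers and number cells pass through unchanged; all other values take the slow case.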
1920 emitGetArg(instruction[i + 2].u.operand, X86::eax);
1921
1922 m_jit.testl_i32r(JSImmediate::TagBitTypeInteger, X86::eax);
1923 X86Assembler::JmpSrc wasImmediate = m_jit.emitUnlinkedJnz();
1924
1925 emitJumpSlowCaseIfNotJSCell(X86::eax, i);
1926
1927 m_jit.movl_mr(OBJECT_OFFSET(JSCell, m_structureID), X86::eax, X86::ecx);
1928 m_jit.cmpl_i32m(NumberType, OBJECT_OFFSET(StructureID, m_typeInfo.m_type), X86::ecx);
1929
1930 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJne(), i));
1931
1932 m_jit.link(wasImmediate, m_jit.label());
1933
1934 emitPutResult(instruction[i + 1].u.operand);
1935 i += 3;
1936 break;
1937 }
1938 case op_in: {
1939 emitGetPutArg(instruction[i + 2].u.operand, 0, X86::ecx);
1940 emitGetPutArg(instruction[i + 3].u.operand, 4, X86::ecx);
1941 emitCTICall(instruction + i, i, Machine::cti_op_in);
1942 emitPutResult(instruction[i + 1].u.operand);
1943 i += 4;
1944 break;
1945 }
1946 case op_push_new_scope: {
1947 Identifier* ident = &(m_codeBlock->identifiers[instruction[i + 2].u.operand]);
1948 emitPutArgConstant(reinterpret_cast<unsigned>(ident), 0);
1949 emitGetPutArg(instruction[i + 3].u.operand, 4, X86::ecx);
1950 emitCTICall(instruction + i, i, Machine::cti_op_push_new_scope);
1951 emitPutResult(instruction[i + 1].u.operand);
1952 i += 4;
1953 break;
1954 }
1955 case op_catch: {
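                // The exception value arrives in eax; reload the callFrame pointer from the CTI arguments and store
                // the exception into the destination register.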
1956 emitGetCTIParam(CTI_ARGS_callFrame, X86::edi); // edi := r
1957 emitPutResult(instruction[i + 1].u.operand);
1958 i += 2;
1959 break;
1960 }
1961 case op_jmp_scopes: {
1962 unsigned count = instruction[i + 1].u.operand;
1963 emitPutArgConstant(count, 0);
1964 emitCTICall(instruction + i, i, Machine::cti_op_jmp_scopes);
1965 unsigned target = instruction[i + 2].u.operand;
1966 m_jmpTable.append(JmpTable(m_jit.emitUnlinkedJmp(), i + 2 + target));
1967 i += 3;
1968 break;
1969 }
1970 case op_put_by_index: {
1971 emitGetPutArg(instruction[i + 1].u.operand, 0, X86::ecx);
1972 emitPutArgConstant(instruction[i + 2].u.operand, 4);
1973 emitGetPutArg(instruction[i + 3].u.operand, 8, X86::ecx);
1974 emitCTICall(instruction + i, i, Machine::cti_op_put_by_index);
1975 i += 4;
1976 break;
1977 }
1978 case op_switch_imm: {
1979 unsigned tableIndex = instruction[i + 1].u.operand;
1980 unsigned defaultOffset = instruction[i + 2].u.operand;
1981 unsigned scrutinee = instruction[i + 3].u.operand;
1982
1983 // create jump table for switch destinations, track this switch statement.
1984 SimpleJumpTable* jumpTable = &m_codeBlock->immediateSwitchJumpTables[tableIndex];
1985 m_switches.append(SwitchRecord(jumpTable, i, defaultOffset, SwitchRecord::Immediate));
1986 jumpTable->ctiOffsets.grow(jumpTable->branchOffsets.size());
1987
1988 emitGetPutArg(scrutinee, 0, X86::ecx);
1989 emitPutArgConstant(tableIndex, 4);
1990 emitCTICall(instruction + i, i, Machine::cti_op_switch_imm);
1991 m_jit.jmp_r(X86::eax);
1992 i += 4;
1993 break;
1994 }
1995 case op_switch_char: {
1996 unsigned tableIndex = instruction[i + 1].u.operand;
1997 unsigned defaultOffset = instruction[i + 2].u.operand;
1998 unsigned scrutinee = instruction[i + 3].u.operand;
1999
2000 // create jump table for switch destinations, track this switch statement.
2001 SimpleJumpTable* jumpTable = &m_codeBlock->characterSwitchJumpTables[tableIndex];
2002 m_switches.append(SwitchRecord(jumpTable, i, defaultOffset, SwitchRecord::Character));
2003 jumpTable->ctiOffsets.grow(jumpTable->branchOffsets.size());
2004
2005 emitGetPutArg(scrutinee, 0, X86::ecx);
2006 emitPutArgConstant(tableIndex, 4);
2007 emitCTICall(instruction + i, i, Machine::cti_op_switch_char);
2008 m_jit.jmp_r(X86::eax);
2009 i += 4;
2010 break;
2011 }
2012 case op_switch_string: {
2013 unsigned tableIndex = instruction[i + 1].u.operand;
2014 unsigned defaultOffset = instruction[i + 2].u.operand;
2015 unsigned scrutinee = instruction[i + 3].u.operand;
2016
2017 // create jump table for switch destinations, track this switch statement.
2018 StringJumpTable* jumpTable = &m_codeBlock->stringSwitchJumpTables[tableIndex];
2019 m_switches.append(SwitchRecord(jumpTable, i, defaultOffset));
2020
2021 emitGetPutArg(scrutinee, 0, X86::ecx);
2022 emitPutArgConstant(tableIndex, 4);
2023 emitCTICall(instruction + i, i, Machine::cti_op_switch_string);
2024 m_jit.jmp_r(X86::eax);
2025 i += 4;
2026 break;
2027 }
2028 case op_del_by_val: {
2029 emitGetPutArg(instruction[i + 2].u.operand, 0, X86::ecx);
2030 emitGetPutArg(instruction[i + 3].u.operand, 4, X86::ecx);
2031 emitCTICall(instruction + i, i, Machine::cti_op_del_by_val);
2032 emitPutResult(instruction[i + 1].u.operand);
2033 i += 4;
2034 break;
2035 }
2036 case op_put_getter: {
2037 emitGetPutArg(instruction[i + 1].u.operand, 0, X86::ecx);
2038 Identifier* ident = &(m_codeBlock->identifiers[instruction[i + 2].u.operand]);
2039 emitPutArgConstant(reinterpret_cast<unsigned>(ident), 4);
2040 emitGetPutArg(instruction[i + 3].u.operand, 8, X86::ecx);
2041 emitCTICall(instruction + i, i, Machine::cti_op_put_getter);
2042 i += 4;
2043 break;
2044 }
2045 case op_put_setter: {
2046 emitGetPutArg(instruction[i + 1].u.operand, 0, X86::ecx);
2047 Identifier* ident = &(m_codeBlock->identifiers[instruction[i + 2].u.operand]);
2048 emitPutArgConstant(reinterpret_cast<unsigned>(ident), 4);
2049 emitGetPutArg(instruction[i + 3].u.operand, 8, X86::ecx);
2050 emitCTICall(instruction + i, i, Machine::cti_op_put_setter);
2051 i += 4;
2052 break;
2053 }
2054 case op_new_error: {
2055 JSValue* message = m_codeBlock->unexpectedConstants[instruction[i + 3].u.operand];
2056 emitPutArgConstant(instruction[i + 2].u.operand, 0);
2057 emitPutArgConstant(asInteger(message), 4);
2058 emitPutArgConstant(m_codeBlock->lineNumberForVPC(&instruction[i]), 8);
2059 emitCTICall(instruction + i, i, Machine::cti_op_new_error);
2060 emitPutResult(instruction[i + 1].u.operand);
2061 i += 4;
2062 break;
2063 }
2064 case op_debug: {
2065 emitPutArgConstant(instruction[i + 1].u.operand, 0);
2066 emitPutArgConstant(instruction[i + 2].u.operand, 4);
2067 emitPutArgConstant(instruction[i + 3].u.operand, 8);
2068 emitCTICall(instruction + i, i, Machine::cti_op_debug);
2069 i += 4;
2070 break;
2071 }
2072 case op_eq_null: {
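                // Same null/undefined test as op_jeq_null, but the boolean outcome is tagged as a JS immediate and
                // stored in dst rather than branched on.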
2073 unsigned dst = instruction[i + 1].u.operand;
2074 unsigned src1 = instruction[i + 2].u.operand;
2075
2076 emitGetArg(src1, X86::eax);
2077 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
2078 X86Assembler::JmpSrc isImmediate = m_jit.emitUnlinkedJnz();
2079
2080 m_jit.movl_mr(OBJECT_OFFSET(JSCell, m_structureID), X86::eax, X86::ecx);
2081 m_jit.testl_i32m(MasqueradesAsUndefined, OBJECT_OFFSET(StructureID, m_typeInfo.m_flags), X86::ecx);
2082 m_jit.setnz_r(X86::eax);
2083
2084 X86Assembler::JmpSrc wasNotImmediate = m_jit.emitUnlinkedJmp();
2085
2086 m_jit.link(isImmediate, m_jit.label());
2087
2088 m_jit.movl_i32r(~JSImmediate::ExtendedTagBitUndefined, X86::ecx);
2089 m_jit.andl_rr(X86::eax, X86::ecx);
2090 m_jit.cmpl_i32r(JSImmediate::FullTagTypeNull, X86::ecx);
2091 m_jit.sete_r(X86::eax);
2092
2093 m_jit.link(wasNotImmediate, m_jit.label());
2094
2095 m_jit.movzbl_rr(X86::eax, X86::eax);
2096 emitTagAsBoolImmediate(X86::eax);
2097 emitPutResult(dst);
2098
2099 i += 3;
2100 break;
2101 }
2102 case op_neq_null: {
2103 unsigned dst = instruction[i + 1].u.operand;
2104 unsigned src1 = instruction[i + 2].u.operand;
2105
2106 emitGetArg(src1, X86::eax);
2107 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
2108 X86Assembler::JmpSrc isImmediate = m_jit.emitUnlinkedJnz();
2109
2110 m_jit.movl_mr(OBJECT_OFFSET(JSCell, m_structureID), X86::eax, X86::ecx);
2111 m_jit.testl_i32m(MasqueradesAsUndefined, OBJECT_OFFSET(StructureID, m_typeInfo.m_flags), X86::ecx);
2112 m_jit.setz_r(X86::eax);
2113
2114 X86Assembler::JmpSrc wasNotImmediate = m_jit.emitUnlinkedJmp();
2115
2116 m_jit.link(isImmediate, m_jit.label());
2117
2118 m_jit.movl_i32r(~JSImmediate::ExtendedTagBitUndefined, X86::ecx);
2119 m_jit.andl_rr(X86::eax, X86::ecx);
2120 m_jit.cmpl_i32r(JSImmediate::FullTagTypeNull, X86::ecx);
2121 m_jit.setne_r(X86::eax);
2122
2123 m_jit.link(wasNotImmediate, m_jit.label());
2124
2125 m_jit.movzbl_rr(X86::eax, X86::eax);
2126 emitTagAsBoolImmediate(X86::eax);
2127 emitPutResult(dst);
2128
2129 i += 3;
2130 break;
2131 }
2132 case op_enter: {
2133 // Even though CTI doesn't use them, we initialize our constant
2134 // registers to zap stale pointers, to avoid unnecessarily prolonging
2135 // object lifetime and increasing GC pressure.
2136 size_t count = m_codeBlock->numVars + m_codeBlock->constantRegisters.size();
2137 for (size_t j = 0; j < count; ++j)
2138 emitInitRegister(j);
2139
2140            i += 1;
2141 break;
2142 }
2143 case op_enter_with_activation: {
2144 // Even though CTI doesn't use them, we initialize our constant
2145 // registers to zap stale pointers, to avoid unnecessarily prolonging
2146 // object lifetime and increasing GC pressure.
2147 size_t count = m_codeBlock->numVars + m_codeBlock->constantRegisters.size();
2148 for (size_t j = 0; j < count; ++j)
2149 emitInitRegister(j);
2150
2151 emitCTICall(instruction + i, i, Machine::cti_op_push_activation);
2152 emitPutResult(instruction[i + 1].u.operand);
2153
2154            i += 2;
2155 break;
2156 }
2157 case op_create_arguments: {
2158 emitCTICall(instruction + i, i, (m_codeBlock->numParameters == 1) ? Machine::cti_op_create_arguments_no_params : Machine::cti_op_create_arguments);
2159 i += 1;
2160 break;
2161 }
2162 case op_convert_this: {
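                // 'this' is left alone unless it is a non-cell immediate or a cell whose StructureID is flagged
                // NeedsThisConversion; those cases fall through to the slow path, which calls cti_op_convert_this.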
2163 emitGetArg(instruction[i + 1].u.operand, X86::eax);
2164
2165 emitJumpSlowCaseIfNotJSCell(X86::eax, i);
2166 m_jit.movl_mr(OBJECT_OFFSET(JSCell, m_structureID), X86::eax, X86::edx);
2167 m_jit.testl_i32m(NeedsThisConversion, OBJECT_OFFSET(StructureID, m_typeInfo.m_flags), X86::edx);
2168 m_slowCases.append(SlowCaseEntry(m_jit.emitUnlinkedJnz(), i));
2169
2170 i += 2;
2171 break;
2172 }
2173 case op_profile_will_call: {
2174 emitGetCTIParam(CTI_ARGS_profilerReference, X86::eax);
2175 m_jit.cmpl_i32m(0, X86::eax);
2176 X86Assembler::JmpSrc noProfiler = m_jit.emitUnlinkedJe();
2177 emitGetPutArg(instruction[i + 1].u.operand, 0, X86::eax);
2178 emitCTICall(instruction + i, i, Machine::cti_op_profile_will_call);
2179 m_jit.link(noProfiler, m_jit.label());
2180
2181 i += 2;
2182 break;
2183 }
2184 case op_profile_did_call: {
2185 emitGetCTIParam(CTI_ARGS_profilerReference, X86::eax);
2186 m_jit.cmpl_i32m(0, X86::eax);
2187 X86Assembler::JmpSrc noProfiler = m_jit.emitUnlinkedJe();
2188 emitGetPutArg(instruction[i + 1].u.operand, 0, X86::eax);
2189 emitCTICall(instruction + i, i, Machine::cti_op_profile_did_call);
2190 m_jit.link(noProfiler, m_jit.label());
2191
2192 i += 2;
2193 break;
2194 }
2195 case op_get_array_length:
2196 case op_get_by_id_chain:
2197 case op_get_by_id_generic:
2198 case op_get_by_id_proto:
2199 case op_get_by_id_self:
2200 case op_get_string_length:
2201 case op_put_by_id_generic:
2202 case op_put_by_id_replace:
2203 case op_put_by_id_transition:
2204 ASSERT_NOT_REACHED();
2205 }
2206 }
2207
2208 ASSERT(propertyAccessInstructionIndex == m_codeBlock->propertyAccessInstructions.size());
2209 ASSERT(callLinkInfoIndex == m_codeBlock->callLinkInfos.size());
2210}
2211
2212
2213void CTI::privateCompileLinkPass()
2214{
2215 unsigned jmpTableCount = m_jmpTable.size();
2216 for (unsigned i = 0; i < jmpTableCount; ++i)
2217 m_jit.link(m_jmpTable[i].from, m_labels[m_jmpTable[i].to]);
2218 m_jmpTable.clear();
2219}
2220
2221#define CTI_COMPILE_BINARY_OP_SLOW_CASE(name) \
2222 case name: { \
2223 m_jit.link(iter->from, m_jit.label()); \
2224 emitGetPutArg(instruction[i + 2].u.operand, 0, X86::ecx); \
2225 emitGetPutArg(instruction[i + 3].u.operand, 4, X86::ecx); \
2226 emitCTICall(instruction + i, i, Machine::cti_##name); \
2227 emitPutResult(instruction[i + 1].u.operand); \
2228 i += 4; \
2229 break; \
2230 }
2231
2232void CTI::privateCompileSlowCases()
2233{
2234 unsigned propertyAccessInstructionIndex = 0;
2235 unsigned callLinkInfoIndex = 0;
2236
2237 Instruction* instruction = m_codeBlock->instructions.begin();
2238 for (Vector<SlowCaseEntry>::iterator iter = m_slowCases.begin(); iter != m_slowCases.end(); ++iter) {
2239 unsigned i = iter->to;
2240 switch (OpcodeID opcodeID = m_machine->getOpcodeID(instruction[i].u.opcode)) {
2241 case op_convert_this: {
2242 m_jit.link(iter->from, m_jit.label());
2243 m_jit.link((++iter)->from, m_jit.label());
2244 emitPutArg(X86::eax, 0);
2245 emitCTICall(instruction + i, i, Machine::cti_op_convert_this);
2246 emitPutResult(instruction[i + 1].u.operand);
2247 i += 2;
2248 break;
2249 }
2250 case op_add: {
2251 unsigned dst = instruction[i + 1].u.operand;
2252 unsigned src1 = instruction[i + 2].u.operand;
2253 unsigned src2 = instruction[i + 3].u.operand;
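                // The second slow-case entry is the overflow case: the register already had the detagged constant
                // added to it on the fast path, so subtract it back out before passing the original operands to cti_op_add.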
2254 if (JSValue* value = getConstantImmediateNumericArg(src1)) {
2255 X86Assembler::JmpSrc notImm = iter->from;
2256 m_jit.link((++iter)->from, m_jit.label());
2257 m_jit.subl_i32r(getDeTaggedConstantImmediate(value), X86::edx);
2258 m_jit.link(notImm, m_jit.label());
2259 emitGetPutArg(src1, 0, X86::ecx);
2260 emitPutArg(X86::edx, 4);
2261 emitCTICall(instruction + i, i, Machine::cti_op_add);
2262 emitPutResult(dst);
2263 } else if (JSValue* value = getConstantImmediateNumericArg(src2)) {
2264 X86Assembler::JmpSrc notImm = iter->from;
2265 m_jit.link((++iter)->from, m_jit.label());
2266 m_jit.subl_i32r(getDeTaggedConstantImmediate(value), X86::eax);
2267 m_jit.link(notImm, m_jit.label());
2268 emitPutArg(X86::eax, 0);
2269 emitGetPutArg(src2, 4, X86::ecx);
2270 emitCTICall(instruction + i, i, Machine::cti_op_add);
2271 emitPutResult(dst);
2272 } else {
2273 OperandTypes types = OperandTypes::fromInt(instruction[i + 4].u.operand);
2274 if (types.first().mightBeNumber() && types.second().mightBeNumber())
2275 compileBinaryArithOpSlowCase(instruction, op_add, iter, dst, src1, src2, types, i);
2276 else
2277 ASSERT_NOT_REACHED();
2278 }
2279
2280 i += 5;
2281 break;
2282 }
2283 case op_get_by_val: {
2284 // The slow case that handles accesses to arrays (below) may jump back up to here.
2285 X86Assembler::JmpDst beginGetByValSlow = m_jit.label();
2286
2287 X86Assembler::JmpSrc notImm = iter->from;
2288 m_jit.link((++iter)->from, m_jit.label());
2289 m_jit.link((++iter)->from, m_jit.label());
2290 emitFastArithIntToImmNoCheck(X86::edx);
2291 m_jit.link(notImm, m_jit.label());
2292 emitPutArg(X86::eax, 0);
2293 emitPutArg(X86::edx, 4);
2294 emitCTICall(instruction + i, i, Machine::cti_op_get_by_val);
2295 emitPutResult(instruction[i + 1].u.operand);
2296 m_jit.link(m_jit.emitUnlinkedJmp(), m_labels[i + 4]);
2297
2298            // This is the slow case that handles accesses to arrays above the fast cut-off.
2299 // First, check if this is an access to the vector
2300 m_jit.link((++iter)->from, m_jit.label());
2301 m_jit.cmpl_rm(X86::edx, OBJECT_OFFSET(ArrayStorage, m_vectorLength), X86::ecx);
2302 m_jit.link(m_jit.emitUnlinkedJbe(), beginGetByValSlow);
2303
2304 // okay, missed the fast region, but it is still in the vector. Get the value.
2305 m_jit.movl_mr(OBJECT_OFFSET(ArrayStorage, m_vector[0]), X86::ecx, X86::edx, sizeof(JSValue*), X86::ecx);
2306 // Check whether the value loaded is zero; if so we need to return undefined.
2307 m_jit.testl_rr(X86::ecx, X86::ecx);
2308 m_jit.link(m_jit.emitUnlinkedJe(), beginGetByValSlow);
2309 emitPutResult(instruction[i + 1].u.operand, X86::ecx);
2310
2311 i += 4;
2312 break;
2313 }
2314 case op_sub: {
2315 compileBinaryArithOpSlowCase(instruction, op_sub, iter, instruction[i + 1].u.operand, instruction[i + 2].u.operand, instruction[i + 3].u.operand, OperandTypes::fromInt(instruction[i + 4].u.operand), i);
2316 i += 5;
2317 break;
2318 }
2319 case op_rshift: {
2320 m_jit.link(iter->from, m_jit.label());
2321 m_jit.link((++iter)->from, m_jit.label());
2322 emitPutArg(X86::eax, 0);
2323 emitPutArg(X86::ecx, 4);
2324 emitCTICall(instruction + i, i, Machine::cti_op_rshift);
2325 emitPutResult(instruction[i + 1].u.operand);
2326 i += 4;
2327 break;
2328 }
2329 case op_lshift: {
2330 X86Assembler::JmpSrc notImm1 = iter->from;
2331 X86Assembler::JmpSrc notImm2 = (++iter)->from;
2332 m_jit.link((++iter)->from, m_jit.label());
2333 emitGetArg(instruction[i + 2].u.operand, X86::eax);
2334 emitGetArg(instruction[i + 3].u.operand, X86::ecx);
2335 m_jit.link(notImm1, m_jit.label());
2336 m_jit.link(notImm2, m_jit.label());
2337 emitPutArg(X86::eax, 0);
2338 emitPutArg(X86::ecx, 4);
2339 emitCTICall(instruction + i, i, Machine::cti_op_lshift);
2340 emitPutResult(instruction[i + 1].u.operand);
2341 i += 4;
2342 break;
2343 }
2344 case op_loop_if_less: {
2345 emitSlowScriptCheck(instruction, i);
2346
2347 unsigned target = instruction[i + 3].u.operand;
2348 JSValue* src2imm = getConstantImmediateNumericArg(instruction[i + 2].u.operand);
2349 if (src2imm) {
2350 m_jit.link(iter->from, m_jit.label());
2351 emitPutArg(X86::edx, 0);
2352 emitGetPutArg(instruction[i + 2].u.operand, 4, X86::ecx);
2353 emitCTICall(instruction + i, i, Machine::cti_op_loop_if_less);
2354 m_jit.testl_rr(X86::eax, X86::eax);
2355 m_jit.link(m_jit.emitUnlinkedJne(), m_labels[i + 3 + target]);
2356 } else {
2357 m_jit.link(iter->from, m_jit.label());
2358 m_jit.link((++iter)->from, m_jit.label());
2359 emitPutArg(X86::eax, 0);
2360 emitPutArg(X86::edx, 4);
2361 emitCTICall(instruction + i, i, Machine::cti_op_loop_if_less);
2362 m_jit.testl_rr(X86::eax, X86::eax);
2363 m_jit.link(m_jit.emitUnlinkedJne(), m_labels[i + 3 + target]);
2364 }
2365 i += 4;
2366 break;
2367 }
2368 case op_put_by_id: {
2369 m_jit.link(iter->from, m_jit.label());
2370 m_jit.link((++iter)->from, m_jit.label());
2371
2372 Identifier* ident = &(m_codeBlock->identifiers[instruction[i + 2].u.operand]);
2373 emitPutArgConstant(reinterpret_cast<unsigned>(ident), 4);
2374 emitPutArg(X86::eax, 0);
2375 emitPutArg(X86::edx, 8);
2376 X86Assembler::JmpSrc call = emitCTICall(instruction + i, i, Machine::cti_op_put_by_id);
2377
2378 // Track the location of the call; this will be used to recover repatch information.
2379 ASSERT(m_codeBlock->propertyAccessInstructions[propertyAccessInstructionIndex].opcodeIndex == i);
2380 m_propertyAccessCompilationInfo[propertyAccessInstructionIndex].callReturnLocation = call;
2381 ++propertyAccessInstructionIndex;
2382
2383 i += 8;
2384 break;
2385 }
2386 case op_get_by_id: {
2387            // As for the hot path of get_by_id, above, we ensure that we can use an architecture-specific offset
2388            // so that we only need to track one pointer into the slow case code - we track a pointer to the location
2389            // of the call (which we can use to look up the repatch information), but should an array-length or
2390            // prototype access trampoline fail we want to bail out back to here.  To do so we can subtract back
2391            // the distance from the call to the head of the slow case.
2392
2393 m_jit.link(iter->from, m_jit.label());
2394 m_jit.link((++iter)->from, m_jit.label());
2395
2396#ifndef NDEBUG
2397 X86Assembler::JmpDst coldPathBegin = m_jit.label();
2398#endif
2399 emitPutArg(X86::eax, 0);
2400 Identifier* ident = &(m_codeBlock->identifiers[instruction[i + 3].u.operand]);
2401 emitPutArgConstant(reinterpret_cast<unsigned>(ident), 4);
2402 X86Assembler::JmpSrc call = emitCTICall(instruction + i, i, Machine::cti_op_get_by_id);
2403 ASSERT(X86Assembler::getDifferenceBetweenLabels(coldPathBegin, call) == repatchOffsetGetByIdSlowCaseCall);
2404 emitPutResult(instruction[i + 1].u.operand);
2405
2406 // Track the location of the call; this will be used to recover repatch information.
2407 ASSERT(m_codeBlock->propertyAccessInstructions[propertyAccessInstructionIndex].opcodeIndex == i);
2408 m_propertyAccessCompilationInfo[propertyAccessInstructionIndex].callReturnLocation = call;
2409 ++propertyAccessInstructionIndex;
2410
2411 i += 8;
2412 break;
2413 }
2414 case op_loop_if_lesseq: {
2415 emitSlowScriptCheck(instruction, i);
2416
2417 unsigned target = instruction[i + 3].u.operand;
2418 JSValue* src2imm = getConstantImmediateNumericArg(instruction[i + 2].u.operand);
2419 if (src2imm) {
2420 m_jit.link(iter->from, m_jit.label());
2421 emitPutArg(X86::edx, 0);
2422 emitGetPutArg(instruction[i + 2].u.operand, 4, X86::ecx);
2423 emitCTICall(instruction + i, i, Machine::cti_op_loop_if_lesseq);
2424 m_jit.testl_rr(X86::eax, X86::eax);
2425 m_jit.link(m_jit.emitUnlinkedJne(), m_labels[i + 3 + target]);
2426 } else {
2427 m_jit.link(iter->from, m_jit.label());
2428 m_jit.link((++iter)->from, m_jit.label());
2429 emitPutArg(X86::eax, 0);
2430 emitPutArg(X86::edx, 4);
2431 emitCTICall(instruction + i, i, Machine::cti_op_loop_if_lesseq);
2432 m_jit.testl_rr(X86::eax, X86::eax);
2433 m_jit.link(m_jit.emitUnlinkedJne(), m_labels[i + 3 + target]);
2434 }
2435 i += 4;
2436 break;
2437 }
2438 case op_pre_inc: {
2439 unsigned srcDst = instruction[i + 1].u.operand;
2440 X86Assembler::JmpSrc notImm = iter->from;
2441 m_jit.link((++iter)->from, m_jit.label());
2442 m_jit.subl_i8r(getDeTaggedConstantImmediate(JSImmediate::oneImmediate()), X86::eax);
2443 m_jit.link(notImm, m_jit.label());
2444 emitPutArg(X86::eax, 0);
2445 emitCTICall(instruction + i, i, Machine::cti_op_pre_inc);
2446 emitPutResult(srcDst);
2447 i += 2;
2448 break;
2449 }
2450 case op_put_by_val: {
2451            // Normal slow cases - either the index is not an immediate int, or the base is not an array.
2452 X86Assembler::JmpSrc notImm = iter->from;
2453 m_jit.link((++iter)->from, m_jit.label());
2454 m_jit.link((++iter)->from, m_jit.label());
2455 emitFastArithIntToImmNoCheck(X86::edx);
2456 m_jit.link(notImm, m_jit.label());
2457 emitGetArg(instruction[i + 3].u.operand, X86::ecx);
2458 emitPutArg(X86::eax, 0);
2459 emitPutArg(X86::edx, 4);
2460 emitPutArg(X86::ecx, 8);
2461 emitCTICall(instruction + i, i, Machine::cti_op_put_by_val);
2462 m_jit.link(m_jit.emitUnlinkedJmp(), m_labels[i + 4]);
2463
2464 // slow cases for immediate int accesses to arrays
2465 m_jit.link((++iter)->from, m_jit.label());
2466 m_jit.link((++iter)->from, m_jit.label());
2467 emitGetArg(instruction[i + 3].u.operand, X86::ecx);
2468 emitPutArg(X86::eax, 0);
2469 emitPutArg(X86::edx, 4);
2470 emitPutArg(X86::ecx, 8);
2471 emitCTICall(instruction + i, i, Machine::cti_op_put_by_val_array);
2472
2473 i += 4;
2474 break;
2475 }
2476 case op_loop_if_true: {
2477 emitSlowScriptCheck(instruction, i);
2478
2479 m_jit.link(iter->from, m_jit.label());
2480 emitPutArg(X86::eax, 0);
2481 emitCTICall(instruction + i, i, Machine::cti_op_jtrue);
2482 m_jit.testl_rr(X86::eax, X86::eax);
2483 unsigned target = instruction[i + 2].u.operand;
2484 m_jit.link(m_jit.emitUnlinkedJne(), m_labels[i + 2 + target]);
2485 i += 3;
2486 break;
2487 }
2488 case op_pre_dec: {
2489 unsigned srcDst = instruction[i + 1].u.operand;
2490 X86Assembler::JmpSrc notImm = iter->from;
2491 m_jit.link((++iter)->from, m_jit.label());
2492 m_jit.addl_i8r(getDeTaggedConstantImmediate(JSImmediate::oneImmediate()), X86::eax);
2493 m_jit.link(notImm, m_jit.label());
2494 emitPutArg(X86::eax, 0);
2495 emitCTICall(instruction + i, i, Machine::cti_op_pre_dec);
2496 emitPutResult(srcDst);
2497 i += 2;
2498 break;
2499 }
2500 case op_jnless: {
2501 unsigned target = instruction[i + 3].u.operand;
2502 JSValue* src2imm = getConstantImmediateNumericArg(instruction[i + 2].u.operand);
2503 if (src2imm) {
2504 m_jit.link(iter->from, m_jit.label());
2505 emitPutArg(X86::edx, 0);
2506 emitGetPutArg(instruction[i + 2].u.operand, 4, X86::ecx);
2507 emitCTICall(instruction + i, i, Machine::cti_op_jless);
2508 m_jit.testl_rr(X86::eax, X86::eax);
2509 m_jit.link(m_jit.emitUnlinkedJe(), m_labels[i + 3 + target]);
2510 } else {
2511 m_jit.link(iter->from, m_jit.label());
2512 m_jit.link((++iter)->from, m_jit.label());
2513 emitPutArg(X86::eax, 0);
2514 emitPutArg(X86::edx, 4);
2515 emitCTICall(instruction + i, i, Machine::cti_op_jless);
2516 m_jit.testl_rr(X86::eax, X86::eax);
2517 m_jit.link(m_jit.emitUnlinkedJe(), m_labels[i + 3 + target]);
2518 }
2519 i += 4;
2520 break;
2521 }
2522 case op_not: {
2523 m_jit.link(iter->from, m_jit.label());
2524 m_jit.xorl_i8r(JSImmediate::FullTagTypeBool, X86::eax);
2525 emitPutArg(X86::eax, 0);
2526 emitCTICall(instruction + i, i, Machine::cti_op_not);
2527 emitPutResult(instruction[i + 1].u.operand);
2528 i += 3;
2529 break;
2530 }
2531 case op_jfalse: {
2532 m_jit.link(iter->from, m_jit.label());
2533 emitPutArg(X86::eax, 0);
2534 emitCTICall(instruction + i, i, Machine::cti_op_jtrue);
2535 m_jit.testl_rr(X86::eax, X86::eax);
2536 unsigned target = instruction[i + 2].u.operand;
2537 m_jit.link(m_jit.emitUnlinkedJe(), m_labels[i + 2 + target]); // inverted!
2538 i += 3;
2539 break;
2540 }
2541 case op_post_inc: {
2542 unsigned srcDst = instruction[i + 2].u.operand;
2543 m_jit.link(iter->from, m_jit.label());
2544 m_jit.link((++iter)->from, m_jit.label());
2545 emitPutArg(X86::eax, 0);
2546 emitCTICall(instruction + i, i, Machine::cti_op_post_inc);
2547 emitPutResult(instruction[i + 1].u.operand);
2548 emitPutResult(srcDst, X86::edx);
2549 i += 3;
2550 break;
2551 }
2552 case op_bitnot: {
2553 m_jit.link(iter->from, m_jit.label());
2554 emitPutArg(X86::eax, 0);
2555 emitCTICall(instruction + i, i, Machine::cti_op_bitnot);
2556 emitPutResult(instruction[i + 1].u.operand);
2557 i += 3;
2558 break;
2559 }
2560 case op_bitand: {
2561 unsigned src1 = instruction[i + 2].u.operand;
2562 unsigned src2 = instruction[i + 3].u.operand;
2563 unsigned dst = instruction[i + 1].u.operand;
2564 if (getConstantImmediateNumericArg(src1)) {
2565 m_jit.link(iter->from, m_jit.label());
2566 emitGetPutArg(src1, 0, X86::ecx);
2567 emitPutArg(X86::eax, 4);
2568 emitCTICall(instruction + i, i, Machine::cti_op_bitand);
2569 emitPutResult(dst);
2570 } else if (getConstantImmediateNumericArg(src2)) {
2571 m_jit.link(iter->from, m_jit.label());
2572 emitPutArg(X86::eax, 0);
2573 emitGetPutArg(src2, 4, X86::ecx);
2574 emitCTICall(instruction + i, i, Machine::cti_op_bitand);
2575 emitPutResult(dst);
2576 } else {
2577 m_jit.link(iter->from, m_jit.label());
2578 emitGetPutArg(src1, 0, X86::ecx);
2579 emitPutArg(X86::edx, 4);
2580 emitCTICall(instruction + i, i, Machine::cti_op_bitand);
2581 emitPutResult(dst);
2582 }
2583 i += 5;
2584 break;
2585 }
2586 case op_jtrue: {
2587 m_jit.link(iter->from, m_jit.label());
2588 emitPutArg(X86::eax, 0);
2589 emitCTICall(instruction + i, i, Machine::cti_op_jtrue);
2590 m_jit.testl_rr(X86::eax, X86::eax);
2591 unsigned target = instruction[i + 2].u.operand;
2592 m_jit.link(m_jit.emitUnlinkedJne(), m_labels[i + 2 + target]);
2593 i += 3;
2594 break;
2595 }
2596 case op_post_dec: {
2597 unsigned srcDst = instruction[i + 2].u.operand;
2598 m_jit.link(iter->from, m_jit.label());
2599 m_jit.link((++iter)->from, m_jit.label());
2600 emitPutArg(X86::eax, 0);
2601 emitCTICall(instruction + i, i, Machine::cti_op_post_dec);
2602 emitPutResult(instruction[i + 1].u.operand);
2603 emitPutResult(srcDst, X86::edx);
2604 i += 3;
2605 break;
2606 }
2607 case op_bitxor: {
2608 m_jit.link(iter->from, m_jit.label());
2609 emitPutArg(X86::eax, 0);
2610 emitPutArg(X86::edx, 4);
2611 emitCTICall(instruction + i, i, Machine::cti_op_bitxor);
2612 emitPutResult(instruction[i + 1].u.operand);
2613 i += 5;
2614 break;
2615 }
2616 case op_bitor: {
2617 m_jit.link(iter->from, m_jit.label());
2618 emitPutArg(X86::eax, 0);
2619 emitPutArg(X86::edx, 4);
2620 emitCTICall(instruction + i, i, Machine::cti_op_bitor);
2621 emitPutResult(instruction[i + 1].u.operand);
2622 i += 5;
2623 break;
2624 }
2625 case op_eq: {
2626 m_jit.link(iter->from, m_jit.label());
2627 emitPutArg(X86::eax, 0);
2628 emitPutArg(X86::edx, 4);
2629 emitCTICall(instruction + i, i, Machine::cti_op_eq);
2630 emitPutResult(instruction[i + 1].u.operand);
2631 i += 4;
2632 break;
2633 }
2634 case op_neq: {
2635 m_jit.link(iter->from, m_jit.label());
2636 emitPutArg(X86::eax, 0);
2637 emitPutArg(X86::edx, 4);
2638 emitCTICall(instruction + i, i, Machine::cti_op_neq);
2639 emitPutResult(instruction[i + 1].u.operand);
2640 i += 4;
2641 break;
2642 }
2643 CTI_COMPILE_BINARY_OP_SLOW_CASE(op_stricteq);
2644 CTI_COMPILE_BINARY_OP_SLOW_CASE(op_nstricteq);
2645 case op_instanceof: {
2646 m_jit.link(iter->from, m_jit.label());
2647 emitGetPutArg(instruction[i + 2].u.operand, 0, X86::ecx);
2648 emitGetPutArg(instruction[i + 3].u.operand, 4, X86::ecx);
2649 emitGetPutArg(instruction[i + 4].u.operand, 8, X86::ecx);
2650 emitCTICall(instruction + i, i, Machine::cti_op_instanceof);
2651 emitPutResult(instruction[i + 1].u.operand);
2652 i += 5;
2653 break;
2654 }
2655 case op_mod: {
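                // The zero-divisor slow case (third entry) arrives with both operands detagged by the fast path;
                // re-tag them before joining the not-immediate paths, which pass the original values to cti_op_mod.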
2656 X86Assembler::JmpSrc notImm1 = iter->from;
2657 X86Assembler::JmpSrc notImm2 = (++iter)->from;
2658 m_jit.link((++iter)->from, m_jit.label());
2659 emitFastArithReTagImmediate(X86::eax);
2660 emitFastArithReTagImmediate(X86::ecx);
2661 m_jit.link(notImm1, m_jit.label());
2662 m_jit.link(notImm2, m_jit.label());
2663 emitPutArg(X86::eax, 0);
2664 emitPutArg(X86::ecx, 4);
2665 emitCTICall(instruction + i, i, Machine::cti_op_mod);
2666 emitPutResult(instruction[i + 1].u.operand);
2667 i += 4;
2668 break;
2669 }
2670 case op_mul: {
2671 int dst = instruction[i + 1].u.operand;
2672 int src1 = instruction[i + 2].u.operand;
2673 int src2 = instruction[i + 3].u.operand;
2674 JSValue* src1Value = getConstantImmediateNumericArg(src1);
2675 JSValue* src2Value = getConstantImmediateNumericArg(src2);
2676 int32_t value;
2677 if (src1Value && ((value = JSImmediate::intValue(src1Value)) > 0)) {
2678 m_jit.link(iter->from, m_jit.label());
2679 // There is an extra slow case for (op1 * -N) or (-N * op2), to check for 0 since this should produce a result of -0.
2680 emitGetPutArg(src1, 0, X86::ecx);
2681 emitGetPutArg(src2, 4, X86::ecx);
2682 emitCTICall(instruction + i, i, Machine::cti_op_mul);
2683 emitPutResult(dst);
2684 } else if (src2Value && ((value = JSImmediate::intValue(src2Value)) > 0)) {
2685 m_jit.link(iter->from, m_jit.label());
2686 // There is an extra slow case for (op1 * -N) or (-N * op2), to check for 0 since this should produce a result of -0.
2687 emitGetPutArg(src1, 0, X86::ecx);
2688 emitGetPutArg(src2, 4, X86::ecx);
2689 emitCTICall(instruction + i, i, Machine::cti_op_mul);
2690 emitPutResult(dst);
2691 } else
2692 compileBinaryArithOpSlowCase(instruction, op_mul, iter, dst, src1, src2, OperandTypes::fromInt(instruction[i + 4].u.operand), i);
2693 i += 5;
2694 break;
2695 }
2696
2697 case op_call:
2698 case op_call_eval:
2699 case op_construct: {
2700 int dst = instruction[i + 1].u.operand;
2701 int callee = instruction[i + 2].u.operand;
2702 int argCount = instruction[i + 5].u.operand;
2703
2704 m_jit.link(iter->from, m_jit.label());
2705
2706 // The arguments have been set up on the hot path for op_call_eval
2707 if (opcodeID != op_call_eval)
2708 compileOpCallSetupArgs(instruction + i, (opcodeID == op_construct), false);
2709
2710 // Fast check for JS function.
2711 m_jit.testl_i32r(JSImmediate::TagMask, X86::ecx);
2712 X86Assembler::JmpSrc callLinkFailNotObject = m_jit.emitUnlinkedJne();
2713 m_jit.cmpl_i32m(reinterpret_cast<unsigned>(m_machine->m_jsFunctionVptr), X86::ecx);
2714 X86Assembler::JmpSrc callLinkFailNotJSFunction = m_jit.emitUnlinkedJne();
2715
2716 // This handles JSFunctions
2717 emitCTICall(instruction + i, i, (opcodeID == op_construct) ? Machine::cti_op_construct_JSConstruct : Machine::cti_op_call_JSFunction);
2718 // initialize the new call frame (pointed to by edx, after the last call), then set edi to point to it.
2719 compileOpCallInitializeCallFrame(callee, argCount);
2720 m_jit.movl_rr(X86::edx, X86::edi);
2721
2722 // Try to link & repatch this call.
2723 CallLinkInfo* info = &(m_codeBlock->callLinkInfos[callLinkInfoIndex]);
2724 emitPutArgConstant(reinterpret_cast<unsigned>(info), 4);
2725 m_callStructureStubCompilationInfo[callLinkInfoIndex].callReturnLocation =
2726 emitCTICall(instruction + i, i, Machine::cti_vm_lazyLinkCall);
2727 emitNakedCall(i, X86::eax);
2728 X86Assembler::JmpSrc storeResultForFirstRun = m_jit.emitUnlinkedJmp();
2729
2730 // This is the address for the cold path *after* the first run (which tries to link the call).
2731 m_callStructureStubCompilationInfo[callLinkInfoIndex].coldPathOther = m_jit.label();
2732
2733 // The arguments have been set up on the hot path for op_call_eval
2734 if (opcodeID != op_call_eval)
2735 compileOpCallSetupArgs(instruction + i, (opcodeID == op_construct), false);
2736
2737 // Check for JSFunctions.
2738 m_jit.testl_i32r(JSImmediate::TagMask, X86::ecx);
2739 X86Assembler::JmpSrc isNotObject = m_jit.emitUnlinkedJne();
2740 m_jit.cmpl_i32m(reinterpret_cast<unsigned>(m_machine->m_jsFunctionVptr), X86::ecx);
2741 X86Assembler::JmpSrc isJSFunction = m_jit.emitUnlinkedJe();
2742
2743 // This handles host functions
2744 X86Assembler::JmpDst notJSFunctionlabel = m_jit.label();
2745 m_jit.link(isNotObject, notJSFunctionlabel);
2746 m_jit.link(callLinkFailNotObject, notJSFunctionlabel);
2747 m_jit.link(callLinkFailNotJSFunction, notJSFunctionlabel);
2748 emitCTICall(instruction + i, i, ((opcodeID == op_construct) ? Machine::cti_op_construct_NotJSConstruct : Machine::cti_op_call_NotJSFunction));
2749 X86Assembler::JmpSrc wasNotJSFunction = m_jit.emitUnlinkedJmp();
2750
2751 // Next, handle JSFunctions...
2752 m_jit.link(isJSFunction, m_jit.label());
2753 emitCTICall(instruction + i, i, (opcodeID == op_construct) ? Machine::cti_op_construct_JSConstruct : Machine::cti_op_call_JSFunction);
2754 // initialize the new call frame (pointed to by edx, after the last call).
2755 compileOpCallInitializeCallFrame(callee, argCount);
2756 m_jit.movl_rr(X86::edx, X86::edi);
2757
2758 // load ctiCode from the new codeBlock.
2759 m_jit.movl_mr(OBJECT_OFFSET(CodeBlock, ctiCode), X86::eax, X86::eax);
2760
2761 // Move the new callframe into edi.
2762 m_jit.movl_rr(X86::edx, X86::edi);
2763
2764 // Check the ctiCode has been generated (if not compile it now), and make the call.
2765 m_jit.testl_rr(X86::eax, X86::eax);
2766 X86Assembler::JmpSrc hasCode = m_jit.emitUnlinkedJne();
2767 emitCTICall(instruction + i, i, Machine::cti_vm_compile);
2768 m_jit.link(hasCode, m_jit.label());
2769
2770 emitNakedCall(i, X86::eax);
2771
2772 // Put the return value in dst. In the interpreter, op_ret does this.
2773 X86Assembler::JmpDst storeResult = m_jit.label();
2774 m_jit.link(wasNotJSFunction, storeResult);
2775 m_jit.link(storeResultForFirstRun, storeResult);
2776 emitPutResult(dst);
2777
2778#if ENABLE(CODEBLOCK_SAMPLING)
2779 m_jit.movl_i32m(reinterpret_cast<unsigned>(m_codeBlock), m_machine->sampler()->codeBlockSlot());
2780#endif
2781 ++callLinkInfoIndex;
2782
2783 i += 7;
2784 break;
2785 }
2786 case op_to_jsnumber: {
2787 m_jit.link(iter->from, m_jit.label());
2788            m_jit.link((++iter)->from, m_jit.label());
2789
2790 emitPutArg(X86::eax, 0);
2791 emitCTICall(instruction + i, i, Machine::cti_op_to_jsnumber);
2792
2793 emitPutResult(instruction[i + 1].u.operand);
2794 i += 3;
2795 break;
2796 }
2797
2798 default:
2799 ASSERT_NOT_REACHED();
2800 break;
2801 }
2802
2803 m_jit.link(m_jit.emitUnlinkedJmp(), m_labels[i]);
2804 }
2805
2806 ASSERT(propertyAccessInstructionIndex == m_codeBlock->propertyAccessInstructions.size());
2807 ASSERT(callLinkInfoIndex == m_codeBlock->callLinkInfos.size());
2808}
2809
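// privateCompile: the top-level code generator for this CodeBlock. It emits the sampling hooks
// and the prologue (spilling the return PC, bounds-checking the register file for function code),
// runs the main, link and slow-case passes, copies the generated code out of the assembly buffer,
// and then resolves all relocations: switch jump tables, exception handler targets, call sites,
// and the property-access / call-link info used later for repatching.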
2810void CTI::privateCompile()
2811{
2812#if ENABLE(CODEBLOCK_SAMPLING)
2813 m_jit.movl_i32m(reinterpret_cast<unsigned>(m_codeBlock), m_machine->sampler()->codeBlockSlot());
2814#endif
2815#if ENABLE(OPCODE_SAMPLING)
2816 m_jit.movl_i32m(m_machine->sampler()->encodeSample(m_codeBlock->instructions.begin()), m_machine->sampler()->sampleSlot());
2817#endif
2818
2819 // Could use a popl_m, but would need to offset the following instruction if so.
2820 m_jit.popl_r(X86::ecx);
2821 emitPutToCallFrameHeader(X86::ecx, RegisterFile::ReturnPC);
2822
2823 X86Assembler::JmpSrc slowRegisterFileCheck;
2824 X86Assembler::JmpDst afterRegisterFileCheck;
2825 if (m_codeBlock->codeType == FunctionCode) {
2826 // In the case of a fast linked call, we do not set this up in the caller.
2827 m_jit.movl_i32m(reinterpret_cast<unsigned>(m_codeBlock), RegisterFile::CodeBlock * static_cast<int>(sizeof(Register)), X86::edi);
2828
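        // Bounds-check the register file: edx = the top of the new frame (edi plus this code
        // block's callee registers); if that lies beyond RegisterFile::m_end, take the slow path
        // below, which calls into Machine::cti_register_file_check.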
2829 emitGetCTIParam(CTI_ARGS_registerFile, X86::eax);
2830 m_jit.leal_mr(m_codeBlock->numCalleeRegisters * sizeof(Register), X86::edi, X86::edx);
2831 m_jit.cmpl_mr(OBJECT_OFFSET(RegisterFile, m_end), X86::eax, X86::edx);
2832 slowRegisterFileCheck = m_jit.emitUnlinkedJg();
2833 afterRegisterFileCheck = m_jit.label();
2834 }
2835
2836 privateCompileMainPass();
2837 privateCompileLinkPass();
2838 privateCompileSlowCases();
2839
2840 if (m_codeBlock->codeType == FunctionCode) {
2841 m_jit.link(slowRegisterFileCheck, m_jit.label());
2842 emitCTICall(m_codeBlock->instructions.begin(), 0, Machine::cti_register_file_check);
2843 X86Assembler::JmpSrc backToBody = m_jit.emitUnlinkedJmp();
2844 m_jit.link(backToBody, afterRegisterFileCheck);
2845 }
2846
2847 ASSERT(m_jmpTable.isEmpty());
2848
2849 void* code = m_jit.copy();
2850 ASSERT(code);
2851
2852 // Translate vPC offsets into addresses in JIT generated code, for switch tables.
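    // Note: the branch offsets stored in the jump tables are relative to a point three words past
    // the switch opcode (hence the fixed "+ 3" below); an offset of zero means "use the default".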
2853 for (unsigned i = 0; i < m_switches.size(); ++i) {
2854 SwitchRecord record = m_switches[i];
2855 unsigned opcodeIndex = record.m_opcodeIndex;
2856
2857 if (record.m_type != SwitchRecord::String) {
2858 ASSERT(record.m_type == SwitchRecord::Immediate || record.m_type == SwitchRecord::Character);
2859 ASSERT(record.m_jumpTable.m_simpleJumpTable->branchOffsets.size() == record.m_jumpTable.m_simpleJumpTable->ctiOffsets.size());
2860
2861 record.m_jumpTable.m_simpleJumpTable->ctiDefault = m_jit.getRelocatedAddress(code, m_labels[opcodeIndex + 3 + record.m_defaultOffset]);
2862
2863 for (unsigned j = 0; j < record.m_jumpTable.m_simpleJumpTable->branchOffsets.size(); ++j) {
2864 unsigned offset = record.m_jumpTable.m_simpleJumpTable->branchOffsets[j];
2865 record.m_jumpTable.m_simpleJumpTable->ctiOffsets[j] = offset ? m_jit.getRelocatedAddress(code, m_labels[opcodeIndex + 3 + offset]) : record.m_jumpTable.m_simpleJumpTable->ctiDefault;
2866 }
2867 } else {
2868 ASSERT(record.m_type == SwitchRecord::String);
2869
2870 record.m_jumpTable.m_stringJumpTable->ctiDefault = m_jit.getRelocatedAddress(code, m_labels[opcodeIndex + 3 + record.m_defaultOffset]);
2871
2872 StringJumpTable::StringOffsetTable::iterator end = record.m_jumpTable.m_stringJumpTable->offsetTable.end();
2873 for (StringJumpTable::StringOffsetTable::iterator it = record.m_jumpTable.m_stringJumpTable->offsetTable.begin(); it != end; ++it) {
2874 unsigned offset = it->second.branchOffset;
2875 it->second.ctiOffset = offset ? m_jit.getRelocatedAddress(code, m_labels[opcodeIndex + 3 + offset]) : record.m_jumpTable.m_stringJumpTable->ctiDefault;
2876 }
2877 }
2878 }
2879
2880 for (Vector<HandlerInfo>::iterator iter = m_codeBlock->exceptionHandlers.begin(); iter != m_codeBlock->exceptionHandlers.end(); ++iter)
2881 iter->nativeCode = m_jit.getRelocatedAddress(code, m_labels[iter->target]);
2882
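    // For every emitted call, link it to its target (if known) and record the mapping from the
    // native return address back to the originating bytecode index, so the runtime can recover
    // the vPC for a given return address (for example, when an exception is thrown).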
2883 for (Vector<CallRecord>::iterator iter = m_calls.begin(); iter != m_calls.end(); ++iter) {
2884 if (iter->to)
2885 X86Assembler::link(code, iter->from, iter->to);
2886 m_codeBlock->ctiReturnAddressVPCMap.add(m_jit.getRelocatedAddress(code, iter->from), iter->opcodeIndex);
2887 }
2888
2889 // Link absolute addresses for jsr
2890 for (Vector<JSRInfo>::iterator iter = m_jsrSites.begin(); iter != m_jsrSites.end(); ++iter)
2891 X86Assembler::linkAbsoluteAddress(code, iter->addrPosition, iter->target);
2892
2893 for (unsigned i = 0; i < m_codeBlock->propertyAccessInstructions.size(); ++i) {
2894 StructureStubInfo& info = m_codeBlock->propertyAccessInstructions[i];
2895 info.callReturnLocation = X86Assembler::getRelocatedAddress(code, m_propertyAccessCompilationInfo[i].callReturnLocation);
2896 info.hotPathBegin = X86Assembler::getRelocatedAddress(code, m_propertyAccessCompilationInfo[i].hotPathBegin);
2897 }
2898 for (unsigned i = 0; i < m_codeBlock->callLinkInfos.size(); ++i) {
2899 CallLinkInfo& info = m_codeBlock->callLinkInfos[i];
2900 info.callReturnLocation = X86Assembler::getRelocatedAddress(code, m_callStructureStubCompilationInfo[i].callReturnLocation);
2901 info.hotPathBegin = X86Assembler::getRelocatedAddress(code, m_callStructureStubCompilationInfo[i].hotPathBegin);
2902 info.hotPathOther = X86Assembler::getRelocatedAddress(code, m_callStructureStubCompilationInfo[i].hotPathOther);
2903 info.coldPathOther = X86Assembler::getRelocatedAddress(code, m_callStructureStubCompilationInfo[i].coldPathOther);
2904 }
2905
2906 m_codeBlock->ctiCode = code;
2907}
2908
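// Generates the fast-path stub for a get_by_id that hits directly on the base object: verify
// that eax holds a cell with the expected StructureID, then load the value straight out of the
// object's property storage at the cached offset.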
2909void CTI::privateCompileGetByIdSelf(StructureID* structureID, size_t cachedOffset, void* returnAddress)
2910{
2911 // Check eax is an object of the right StructureID.
2912 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
2913 X86Assembler::JmpSrc failureCases1 = m_jit.emitUnlinkedJne();
2914 m_jit.cmpl_i32m(reinterpret_cast<uint32_t>(structureID), OBJECT_OFFSET(JSCell, m_structureID), X86::eax);
2915 X86Assembler::JmpSrc failureCases2 = m_jit.emitUnlinkedJne();
2916
2917 // Checks out okay! - getDirectOffset
2918 m_jit.movl_mr(OBJECT_OFFSET(JSObject, m_propertyStorage), X86::eax, X86::eax);
2919 m_jit.movl_mr(cachedOffset * sizeof(JSValue*), X86::eax, X86::eax);
2920 m_jit.ret();
2921
2922 void* code = m_jit.copy();
2923 ASSERT(code);
2924
2925 X86Assembler::link(code, failureCases1, reinterpret_cast<void*>(Machine::cti_op_get_by_id_fail));
2926 X86Assembler::link(code, failureCases2, reinterpret_cast<void*>(Machine::cti_op_get_by_id_fail));
2927
2928 m_codeBlock->getStubInfo(returnAddress).stubRoutine = code;
2929
2930 ctiRepatchCallByReturnAddress(returnAddress, code);
2931}
2932
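// Generates a stub for a get_by_id that finds the property on the base object's prototype:
// verify the base object's StructureID and the prototype's StructureID, then load the value from
// the prototype's property storage. With CTI_REPATCH_PIC the stub jumps back into the hot path on
// success; otherwise it returns directly and is installed via the call site's return address.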
2933void CTI::privateCompileGetByIdProto(StructureID* structureID, StructureID* prototypeStructureID, size_t cachedOffset, void* returnAddress)
2934{
2935#if USE(CTI_REPATCH_PIC)
2936 StructureStubInfo& info = m_codeBlock->getStubInfo(returnAddress);
2937
2938    // We don't want to repatch more than once - in future go to cti_op_get_by_id_generic.
2939 ctiRepatchCallByReturnAddress(returnAddress, reinterpret_cast<void*>(Machine::cti_op_get_by_id_fail));
2940
2941 // The prototype object definitely exists (if this stub exists the CodeBlock is referencing a StructureID that is
2942    // referencing the prototype object - let's speculatively load its table nice and early!)
2943 JSObject* protoObject = asObject(structureID->prototypeForLookup(m_callFrame));
2944 PropertyStorage* protoPropertyStorage = &protoObject->m_propertyStorage;
2945 m_jit.movl_mr(static_cast<void*>(protoPropertyStorage), X86::edx);
2946
2947 // check eax is an object of the right StructureID.
2948 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
2949 X86Assembler::JmpSrc failureCases1 = m_jit.emitUnlinkedJne();
2950 m_jit.cmpl_i32m(reinterpret_cast<uint32_t>(structureID), OBJECT_OFFSET(JSCell, m_structureID), X86::eax);
2951 X86Assembler::JmpSrc failureCases2 = m_jit.emitUnlinkedJne();
2952
2953    // Check the prototype object's StructureID has not changed.
2954 StructureID** protoStructureIDAddress = &(protoObject->m_structureID);
2955 m_jit.cmpl_i32m(reinterpret_cast<uint32_t>(prototypeStructureID), static_cast<void*>(protoStructureIDAddress));
2956 X86Assembler::JmpSrc failureCases3 = m_jit.emitUnlinkedJne();
2957
2958 // Checks out okay! - getDirectOffset
2959 m_jit.movl_mr(cachedOffset * sizeof(JSValue*), X86::edx, X86::ecx);
2960
2961 X86Assembler::JmpSrc success = m_jit.emitUnlinkedJmp();
2962
2963 void* code = m_jit.copy();
2964 ASSERT(code);
2965
2966 // Use the repatch information to link the failure cases back to the original slow case routine.
2967 void* slowCaseBegin = reinterpret_cast<char*>(info.callReturnLocation) - repatchOffsetGetByIdSlowCaseCall;
2968 X86Assembler::link(code, failureCases1, slowCaseBegin);
2969 X86Assembler::link(code, failureCases2, slowCaseBegin);
2970 X86Assembler::link(code, failureCases3, slowCaseBegin);
2971
2972    // On success, return to the hot path code, at the point where it will perform the store to dst for us.
2973 intptr_t successDest = (intptr_t)(info.hotPathBegin) + repatchOffsetGetByIdPropertyMapOffset;
2974 X86Assembler::link(code, success, reinterpret_cast<void*>(successDest));
2975
2976 // Track the stub we have created so that it will be deleted later.
2977 m_codeBlock->getStubInfo(returnAddress).stubRoutine = code;
2978
2979    // Finally, repatch the jump to the slow case in the hot path so that it jumps here instead.
2980 // FIXME: should revert this repatching, on failure.
2981 intptr_t jmpLocation = reinterpret_cast<intptr_t>(info.hotPathBegin) + repatchOffsetGetByIdBranchToSlowCase;
2982 X86Assembler::repatchBranchOffset(jmpLocation, code);
2983#else
2984 // The prototype object definitely exists (if this stub exists the CodeBlock is referencing a StructureID that is
2985    // referencing the prototype object - let's speculatively load its table nice and early!)
2986 JSObject* protoObject = asObject(structureID->prototypeForLookup(m_callFrame));
2987 PropertyStorage* protoPropertyStorage = &protoObject->m_propertyStorage;
2988 m_jit.movl_mr(static_cast<void*>(protoPropertyStorage), X86::edx);
2989
2990 // check eax is an object of the right StructureID.
2991 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
2992 X86Assembler::JmpSrc failureCases1 = m_jit.emitUnlinkedJne();
2993 m_jit.cmpl_i32m(reinterpret_cast<uint32_t>(structureID), OBJECT_OFFSET(JSCell, m_structureID), X86::eax);
2994 X86Assembler::JmpSrc failureCases2 = m_jit.emitUnlinkedJne();
2995
2996    // Check the prototype object's StructureID has not changed.
2997 StructureID** protoStructureIDAddress = &(protoObject->m_structureID);
2998 m_jit.cmpl_i32m(reinterpret_cast<uint32_t>(prototypeStructureID), static_cast<void*>(protoStructureIDAddress));
2999 X86Assembler::JmpSrc failureCases3 = m_jit.emitUnlinkedJne();
3000
3001 // Checks out okay! - getDirectOffset
3002 m_jit.movl_mr(cachedOffset * sizeof(JSValue*), X86::edx, X86::eax);
3003
3004 m_jit.ret();
3005
3006 void* code = m_jit.copy();
3007 ASSERT(code);
3008
3009 X86Assembler::link(code, failureCases1, reinterpret_cast<void*>(Machine::cti_op_get_by_id_fail));
3010 X86Assembler::link(code, failureCases2, reinterpret_cast<void*>(Machine::cti_op_get_by_id_fail));
3011 X86Assembler::link(code, failureCases3, reinterpret_cast<void*>(Machine::cti_op_get_by_id_fail));
3012
3013 m_codeBlock->getStubInfo(returnAddress).stubRoutine = code;
3014
3015 ctiRepatchCallByReturnAddress(returnAddress, code);
3016#endif
3017}
3018
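// As privateCompileGetByIdProto, but for properties found further up the prototype chain: after
// the base object's StructureID check, verify the StructureID of each of the 'count' prototypes
// in the chain before loading from the final prototype's property storage.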
3019void CTI::privateCompileGetByIdChain(StructureID* structureID, StructureIDChain* chain, size_t count, size_t cachedOffset, void* returnAddress)
3020{
3021 ASSERT(count);
3022
3023 Vector<X86Assembler::JmpSrc> bucketsOfFail;
3024
3025 // Check eax is an object of the right StructureID.
3026 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
3027 bucketsOfFail.append(m_jit.emitUnlinkedJne());
3028 m_jit.cmpl_i32m(reinterpret_cast<uint32_t>(structureID), OBJECT_OFFSET(JSCell, m_structureID), X86::eax);
3029 bucketsOfFail.append(m_jit.emitUnlinkedJne());
3030
3031 StructureID* currStructureID = structureID;
3032 RefPtr<StructureID>* chainEntries = chain->head();
3033 JSObject* protoObject = 0;
3034 for (unsigned i = 0; i<count; ++i) {
3035 protoObject = asObject(currStructureID->prototypeForLookup(m_callFrame));
3036 currStructureID = chainEntries[i].get();
3037
3038        // Check the prototype object's StructureID has not changed.
3039 StructureID** protoStructureIDAddress = &(protoObject->m_structureID);
3040 m_jit.cmpl_i32m(reinterpret_cast<uint32_t>(currStructureID), static_cast<void*>(protoStructureIDAddress));
3041 bucketsOfFail.append(m_jit.emitUnlinkedJne());
3042 }
3043 ASSERT(protoObject);
3044
3045 PropertyStorage* protoPropertyStorage = &protoObject->m_propertyStorage;
3046 m_jit.movl_mr(static_cast<void*>(protoPropertyStorage), X86::edx);
3047 m_jit.movl_mr(cachedOffset * sizeof(JSValue*), X86::edx, X86::eax);
3048 m_jit.ret();
3049
3050 bucketsOfFail.append(m_jit.emitUnlinkedJmp());
3051
3052 void* code = m_jit.copy();
3053 ASSERT(code);
3054
3055 for (unsigned i = 0; i < bucketsOfFail.size(); ++i)
3056 X86Assembler::link(code, bucketsOfFail[i], reinterpret_cast<void*>(Machine::cti_op_get_by_id_fail));
3057
3058 m_codeBlock->getStubInfo(returnAddress).stubRoutine = code;
3059
3060 ctiRepatchCallByReturnAddress(returnAddress, code);
3061}
3062
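// Generates the fast-path stub for a put_by_id that overwrites an existing property: verify that
// eax holds a cell with the expected StructureID, then store the value (passed in edx) into the
// cached slot of the object's property storage.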
3063void CTI::privateCompilePutByIdReplace(StructureID* structureID, size_t cachedOffset, void* returnAddress)
3064{
3065 // check eax is an object of the right StructureID.
3066 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
3067 X86Assembler::JmpSrc failureCases1 = m_jit.emitUnlinkedJne();
3068 m_jit.cmpl_i32m(reinterpret_cast<uint32_t>(structureID), OBJECT_OFFSET(JSCell, m_structureID), X86::eax);
3069 X86Assembler::JmpSrc failureCases2 = m_jit.emitUnlinkedJne();
3070
3071 // checks out okay! - putDirectOffset
3072 m_jit.movl_mr(OBJECT_OFFSET(JSObject, m_propertyStorage), X86::eax, X86::eax);
3073 m_jit.movl_rm(X86::edx, cachedOffset * sizeof(JSValue*), X86::eax);
3074 m_jit.ret();
3075
3076 void* code = m_jit.copy();
3077 ASSERT(code);
3078
3079 X86Assembler::link(code, failureCases1, reinterpret_cast<void*>(Machine::cti_op_put_by_id_fail));
3080 X86Assembler::link(code, failureCases2, reinterpret_cast<void*>(Machine::cti_op_put_by_id_fail));
3081
3082 m_codeBlock->getStubInfo(returnAddress).stubRoutine = code;
3083
3084 ctiRepatchCallByReturnAddress(returnAddress, code);
3085}
3086
3087extern "C" {
3088
3089 static JSObject* resizePropertyStorage(JSObject* baseObject, size_t oldSize, size_t newSize)
3090 {
3091 baseObject->allocatePropertyStorageInline(oldSize, newSize);
3092 return baseObject;
3093 }
3094
3095}
3096
3097static inline bool transitionWillNeedStorageRealloc(StructureID* oldStructureID, StructureID* newStructureID)
3098{
3099 return oldStructureID->propertyStorageCapacity() != newStructureID->propertyStorageCapacity();
3100}
3101
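// Generates a stub for a put_by_id that adds a new property, transitioning the object from
// oldStructureID to newStructureID: verify the old StructureID, walk the prototype chain checking
// that every StructureID on it is unchanged, reallocate the property storage (via
// resizePropertyStorage) if the capacity differs, swap in the new StructureID while adjusting
// both refcounts, and finally store the value.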
3102void CTI::privateCompilePutByIdTransition(StructureID* oldStructureID, StructureID* newStructureID, size_t cachedOffset, StructureIDChain* sIDC, void* returnAddress)
3103{
3104 Vector<X86Assembler::JmpSrc, 16> failureCases;
3105 // check eax is an object of the right StructureID.
3106 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
3107 failureCases.append(m_jit.emitUnlinkedJne());
3108 m_jit.cmpl_i32m(reinterpret_cast<uint32_t>(oldStructureID), OBJECT_OFFSET(JSCell, m_structureID), X86::eax);
3109 failureCases.append(m_jit.emitUnlinkedJne());
3110 Vector<X86Assembler::JmpSrc> successCases;
3111
3112 // ecx = baseObject
3113 m_jit.movl_mr(OBJECT_OFFSET(JSCell, m_structureID), X86::eax, X86::ecx);
3114 // proto(ecx) = baseObject->structureID()->prototype()
3115 m_jit.cmpl_i32m(ObjectType, OBJECT_OFFSET(StructureID, m_typeInfo) + OBJECT_OFFSET(TypeInfo, m_type), X86::ecx);
3116 failureCases.append(m_jit.emitUnlinkedJne());
3117 m_jit.movl_mr(OBJECT_OFFSET(StructureID, m_prototype), X86::ecx, X86::ecx);
3118
3119 // ecx = baseObject->m_structureID
3120 for (RefPtr<StructureID>* it = sIDC->head(); *it; ++it) {
3121 // null check the prototype
3122 m_jit.cmpl_i32r(asInteger(jsNull()), X86::ecx);
3123 successCases.append(m_jit.emitUnlinkedJe());
3124
3125 // Check the structure id
3126 m_jit.cmpl_i32m(reinterpret_cast<uint32_t>(it->get()), OBJECT_OFFSET(JSCell, m_structureID), X86::ecx);
3127 failureCases.append(m_jit.emitUnlinkedJne());
3128
3129 m_jit.movl_mr(OBJECT_OFFSET(JSCell, m_structureID), X86::ecx, X86::ecx);
3130 m_jit.cmpl_i32m(ObjectType, OBJECT_OFFSET(StructureID, m_typeInfo) + OBJECT_OFFSET(TypeInfo, m_type), X86::ecx);
3131 failureCases.append(m_jit.emitUnlinkedJne());
3132 m_jit.movl_mr(OBJECT_OFFSET(StructureID, m_prototype), X86::ecx, X86::ecx);
3133 }
3134
3135 failureCases.append(m_jit.emitUnlinkedJne());
3136 for (unsigned i = 0; i < successCases.size(); ++i)
3137 m_jit.link(successCases[i], m_jit.label());
3138
3139 X86Assembler::JmpSrc callTarget;
3140
3141 // emit a call only if storage realloc is needed
3142 if (transitionWillNeedStorageRealloc(oldStructureID, newStructureID)) {
3143 m_jit.pushl_r(X86::edx);
3144 m_jit.pushl_i32(newStructureID->propertyStorageCapacity());
3145 m_jit.pushl_i32(oldStructureID->propertyStorageCapacity());
3146 m_jit.pushl_r(X86::eax);
3147 callTarget = m_jit.emitCall();
3148 m_jit.addl_i32r(3 * sizeof(void*), X86::esp);
3149 m_jit.popl_r(X86::edx);
3150 }
3151
3152    // Adjust the StructureID refcounts directly in memory. The decrement is safe because the
3153    // CodeBlock holds a reference to oldStructureID, so m_refCount cannot reach zero here.
3154 m_jit.subl_i8m(1, reinterpret_cast<void*>(oldStructureID));
3155 m_jit.addl_i8m(1, reinterpret_cast<void*>(newStructureID));
3156 m_jit.movl_i32m(reinterpret_cast<uint32_t>(newStructureID), OBJECT_OFFSET(JSCell, m_structureID), X86::eax);
3157
3158 // write the value
3159 m_jit.movl_mr(OBJECT_OFFSET(JSObject, m_propertyStorage), X86::eax, X86::eax);
3160 m_jit.movl_rm(X86::edx, cachedOffset * sizeof(JSValue*), X86::eax);
3161
3162 m_jit.ret();
3163
3164 X86Assembler::JmpSrc failureJump;
3165 if (failureCases.size()) {
3166 for (unsigned i = 0; i < failureCases.size(); ++i)
3167 m_jit.link(failureCases[i], m_jit.label());
3168 m_jit.emitRestoreArgumentReferenceForTrampoline();
3169 failureJump = m_jit.emitUnlinkedJmp();
3170 }
3171
3172 void* code = m_jit.copy();
3173 ASSERT(code);
3174
3175 if (failureCases.size())
3176 X86Assembler::link(code, failureJump, reinterpret_cast<void*>(Machine::cti_op_put_by_id_fail));
3177
3178 if (transitionWillNeedStorageRealloc(oldStructureID, newStructureID))
3179 X86Assembler::link(code, callTarget, reinterpret_cast<void*>(resizePropertyStorage));
3180
3181 m_codeBlock->getStubInfo(returnAddress).stubRoutine = code;
3182
3183 ctiRepatchCallByReturnAddress(returnAddress, code);
3184}
3185
3186void CTI::unlinkCall(CallLinkInfo* callLinkInfo)
3187{
3188 // When the JSFunction is deleted the pointer embedded in the instruction stream will no longer be valid
3189 // (and, if a new JSFunction happened to be constructed at the same location, we could get a false positive
3190 // match). Reset the check so it no longer matches.
3191 reinterpret_cast<void**>(callLinkInfo->hotPathBegin)[-1] = asPointer(JSImmediate::impossibleValue());
3192}
3193
3194void CTI::linkCall(JSFunction* callee, CodeBlock* calleeCodeBlock, void* ctiCode, CallLinkInfo* callLinkInfo, int callerArgCount)
3195{
3196 // Currently we only link calls with the exact number of arguments.
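    // (If the argument counts do not match, the call site is left unlinked; the repatch below
    // still sends future calls straight to the cold path generated in privateCompileSlowCases.)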
3197 if (callerArgCount == calleeCodeBlock->numParameters) {
3198 ASSERT(!callLinkInfo->isLinked());
3199
3200 calleeCodeBlock->addCaller(callLinkInfo);
3201
3202 reinterpret_cast<void**>(callLinkInfo->hotPathBegin)[-1] = callee;
3203 ctiRepatchCallByReturnAddress(callLinkInfo->hotPathOther, ctiCode);
3204 }
3205
3206 // repatch the instruction that jumps out to the cold path, so that we only try to link once.
3207 void* repatchCheck = reinterpret_cast<void*>(reinterpret_cast<ptrdiff_t>(callLinkInfo->hotPathBegin) + repatchOffsetOpCallCall);
3208 ctiRepatchCallByReturnAddress(repatchCheck, callLinkInfo->coldPathOther);
3209}
3210
3211void* CTI::privateCompileArrayLengthTrampoline()
3212{
3213 // Check eax is an array
3214 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
3215 X86Assembler::JmpSrc failureCases1 = m_jit.emitUnlinkedJne();
3216 m_jit.cmpl_i32m(reinterpret_cast<unsigned>(m_machine->m_jsArrayVptr), X86::eax);
3217 X86Assembler::JmpSrc failureCases2 = m_jit.emitUnlinkedJne();
3218
3219 // Checks out okay! - get the length from the storage
3220 m_jit.movl_mr(OBJECT_OFFSET(JSArray, m_storage), X86::eax, X86::eax);
3221 m_jit.movl_mr(OBJECT_OFFSET(ArrayStorage, m_length), X86::eax, X86::eax);
3222
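    // Box the length as an immediate integer: doubling eax shifts the value left by one, the jo
    // catches lengths too large to encode as an immediate, and adding 1 sets the tag bit.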
3223 m_jit.addl_rr(X86::eax, X86::eax);
3224 X86Assembler::JmpSrc failureCases3 = m_jit.emitUnlinkedJo();
3225 m_jit.addl_i8r(1, X86::eax);
3226
3227 m_jit.ret();
3228
3229 void* code = m_jit.copy();
3230 ASSERT(code);
3231
3232 X86Assembler::link(code, failureCases1, reinterpret_cast<void*>(Machine::cti_op_get_by_id_fail));
3233 X86Assembler::link(code, failureCases2, reinterpret_cast<void*>(Machine::cti_op_get_by_id_fail));
3234 X86Assembler::link(code, failureCases3, reinterpret_cast<void*>(Machine::cti_op_get_by_id_fail));
3235
3236 return code;
3237}
3238
3239void* CTI::privateCompileStringLengthTrampoline()
3240{
3241 // Check eax is a string
3242 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
3243 X86Assembler::JmpSrc failureCases1 = m_jit.emitUnlinkedJne();
3244 m_jit.cmpl_i32m(reinterpret_cast<unsigned>(m_machine->m_jsStringVptr), X86::eax);
3245 X86Assembler::JmpSrc failureCases2 = m_jit.emitUnlinkedJne();
3246
3247    // Checks out okay! - get the length from the UString.
3248 m_jit.movl_mr(OBJECT_OFFSET(JSString, m_value) + OBJECT_OFFSET(UString, m_rep), X86::eax, X86::eax);
3249 m_jit.movl_mr(OBJECT_OFFSET(UString::Rep, len), X86::eax, X86::eax);
3250
3251 m_jit.addl_rr(X86::eax, X86::eax);
3252 X86Assembler::JmpSrc failureCases3 = m_jit.emitUnlinkedJo();
3253 m_jit.addl_i8r(1, X86::eax);
3254
3255 m_jit.ret();
3256
3257 void* code = m_jit.copy();
3258 ASSERT(code);
3259
3260 X86Assembler::link(code, failureCases1, reinterpret_cast<void*>(Machine::cti_op_get_by_id_fail));
3261 X86Assembler::link(code, failureCases2, reinterpret_cast<void*>(Machine::cti_op_get_by_id_fail));
3262 X86Assembler::link(code, failureCases3, reinterpret_cast<void*>(Machine::cti_op_get_by_id_fail));
3263
3264 return code;
3265}
3266
3267void CTI::patchGetByIdSelf(CodeBlock* codeBlock, StructureID* structureID, size_t cachedOffset, void* returnAddress)
3268{
3269 StructureStubInfo& info = codeBlock->getStubInfo(returnAddress);
3270
3271 // We don't want to repatch more than once - in future go to cti_op_get_by_id_generic.
3272 // Should probably go to Machine::cti_op_get_by_id_fail, but that doesn't do anything interesting right now.
3273 ctiRepatchCallByReturnAddress(returnAddress, reinterpret_cast<void*>(Machine::cti_op_get_by_id_generic));
3274
3275    // Repatch the offset into the property map to load from, then repatch the StructureID to look for.
3276 X86Assembler::repatchDisplacement(reinterpret_cast<intptr_t>(info.hotPathBegin) + repatchOffsetGetByIdPropertyMapOffset, cachedOffset * sizeof(JSValue*));
3277 X86Assembler::repatchImmediate(reinterpret_cast<intptr_t>(info.hotPathBegin) + repatchOffsetGetByIdStructureID, reinterpret_cast<uint32_t>(structureID));
3278}
3279
3280void CTI::patchPutByIdReplace(CodeBlock* codeBlock, StructureID* structureID, size_t cachedOffset, void* returnAddress)
3281{
3282 StructureStubInfo& info = codeBlock->getStubInfo(returnAddress);
3283
3284 // We don't want to repatch more than once - in future go to cti_op_put_by_id_generic.
3285 // Should probably go to Machine::cti_op_put_by_id_fail, but that doesn't do anything interesting right now.
3286 ctiRepatchCallByReturnAddress(returnAddress, reinterpret_cast<void*>(Machine::cti_op_put_by_id_generic));
3287
3288    // Repatch the offset into the property map to load from, then repatch the StructureID to look for.
3289 X86Assembler::repatchDisplacement(reinterpret_cast<intptr_t>(info.hotPathBegin) + repatchOffsetPutByIdPropertyMapOffset, cachedOffset * sizeof(JSValue*));
3290 X86Assembler::repatchImmediate(reinterpret_cast<intptr_t>(info.hotPathBegin) + repatchOffsetPutByIdStructureID, reinterpret_cast<uint32_t>(structureID));
3291}
3292
3293void CTI::privateCompilePatchGetArrayLength(void* returnAddress)
3294{
3295 StructureStubInfo& info = m_codeBlock->getStubInfo(returnAddress);
3296
3297    // We don't want to repatch more than once - in future go to cti_op_get_by_id_generic.
3298 ctiRepatchCallByReturnAddress(returnAddress, reinterpret_cast<void*>(Machine::cti_op_get_by_id_fail));
3299
3300 // Check eax is an array
3301 m_jit.testl_i32r(JSImmediate::TagMask, X86::eax);
3302 X86Assembler::JmpSrc failureCases1 = m_jit.emitUnlinkedJne();
3303 m_jit.cmpl_i32m(reinterpret_cast<unsigned>(m_machine->m_jsArrayVptr), X86::eax);
3304 X86Assembler::JmpSrc failureCases2 = m_jit.emitUnlinkedJne();
3305
3306 // Checks out okay! - get the length from the storage
3307 m_jit.movl_mr(OBJECT_OFFSET(JSArray, m_storage), X86::eax, X86::ecx);
3308 m_jit.movl_mr(OBJECT_OFFSET(ArrayStorage, m_length), X86::ecx, X86::ecx);
3309
3310 m_jit.addl_rr(X86::ecx, X86::ecx);
3311 X86Assembler::JmpSrc failureClobberedECX = m_jit.emitUnlinkedJo();
3312 m_jit.addl_i8r(1, X86::ecx);
3313
3314 X86Assembler::JmpSrc success = m_jit.emitUnlinkedJmp();
3315
3316 m_jit.link(failureClobberedECX, m_jit.label());
3317 m_jit.emitRestoreArgumentReference();
3318 X86Assembler::JmpSrc failureCases3 = m_jit.emitUnlinkedJmp();
3319
3320 void* code = m_jit.copy();
3321 ASSERT(code);
3322
3323 // Use the repatch information to link the failure cases back to the original slow case routine.
3324 void* slowCaseBegin = reinterpret_cast<char*>(info.callReturnLocation) - repatchOffsetGetByIdSlowCaseCall;
3325 X86Assembler::link(code, failureCases1, slowCaseBegin);
3326 X86Assembler::link(code, failureCases2, slowCaseBegin);
3327 X86Assembler::link(code, failureCases3, slowCaseBegin);
3328
3329    // On success, return to the hot path code, at the point where it will perform the store to dst for us.
3330 intptr_t successDest = (intptr_t)(info.hotPathBegin) + repatchOffsetGetByIdPropertyMapOffset;
3331 X86Assembler::link(code, success, reinterpret_cast<void*>(successDest));
3332
3333 // Track the stub we have created so that it will be deleted later.
3334 m_codeBlock->getStubInfo(returnAddress).stubRoutine = code;
3335
3336    // Finally, repatch the jump to the slow case in the hot path so that it jumps here instead.
3337 // FIXME: should revert this repatching, on failure.
3338 intptr_t jmpLocation = reinterpret_cast<intptr_t>(info.hotPathBegin) + repatchOffsetGetByIdBranchToSlowCase;
3339 X86Assembler::repatchBranchOffset(jmpLocation, code);
3340}
3341
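// Helpers for reading and writing a register slot of a JSVariableObject: chase the d pointer,
// then the registers array, then index into it by register number.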
3342void CTI::emitGetVariableObjectRegister(X86Assembler::RegisterID variableObject, int index, X86Assembler::RegisterID dst)
3343{
3344 m_jit.movl_mr(JSVariableObject::offsetOf_d(), variableObject, dst);
3345 m_jit.movl_mr(JSVariableObject::offsetOf_Data_registers(), dst, dst);
3346 m_jit.movl_mr(index * sizeof(Register), dst, dst);
3347}
3348
3349void CTI::emitPutVariableObjectRegister(X86Assembler::RegisterID src, X86Assembler::RegisterID variableObject, int index)
3350{
3351 m_jit.movl_mr(JSVariableObject::offsetOf_d(), variableObject, variableObject);
3352 m_jit.movl_mr(JSVariableObject::offsetOf_Data_registers(), variableObject, variableObject);
3353 m_jit.movl_rm(src, index * sizeof(Register), variableObject);
3354}
3355
3356#if ENABLE(WREC)
3357
3358void* CTI::compileRegExp(Machine* machine, const UString& pattern, unsigned* numSubpatterns_ptr, const char** error_ptr, bool ignoreCase, bool multiline)
3359{
3360 // TODO: better error messages
3361 if (pattern.size() > MaxPatternSize) {
3362 *error_ptr = "regular expression too large";
3363 return 0;
3364 }
3365
3366 X86Assembler jit(machine->jitCodeBuffer());
3367 WRECParser parser(pattern, ignoreCase, multiline, jit);
3368
3369 jit.emitConvertToFastCall();
3370 // (0) Setup:
3371 // Preserve regs & initialize outputRegister.
3372 jit.pushl_r(WRECGenerator::outputRegister);
3373 jit.pushl_r(WRECGenerator::currentValueRegister);
3374 // push pos onto the stack, both to preserve and as a parameter available to parseDisjunction
3375 jit.pushl_r(WRECGenerator::currentPositionRegister);
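    // Note: this saved position at the top of the stack doubles as the match start; the failure
    // path below reloads it, bumps it by one, and retries the whole disjunction from there.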
3376 // load output pointer
3377 jit.movl_mr(16
3378#if COMPILER(MSVC)
3379 + 3 * sizeof(void*)
3380#endif
3381 , X86::esp, WRECGenerator::outputRegister);
3382
3383 // restart point on match fail.
3384 WRECGenerator::JmpDst nextLabel = jit.label();
3385
3386 // (1) Parse Disjunction:
3387
3388 // Parsing the disjunction should fully consume the pattern.
3389 JmpSrcVector failures;
3390 parser.parseDisjunction(failures);
3391    if (!parser.isEndOfPattern()) {
3392 parser.m_err = WRECParser::Error_malformedPattern;
3393 }
3394 if (parser.m_err) {
3395 // TODO: better error messages
3396 *error_ptr = "TODO: better error messages";
3397 return 0;
3398 }
3399
3400 // (2) Success:
3401 // Set return value & pop registers from the stack.
3402
3403 jit.testl_rr(WRECGenerator::outputRegister, WRECGenerator::outputRegister);
3404 WRECGenerator::JmpSrc noOutput = jit.emitUnlinkedJe();
3405
3406 jit.movl_rm(WRECGenerator::currentPositionRegister, 4, WRECGenerator::outputRegister);
3407 jit.popl_r(X86::eax);
3408 jit.movl_rm(X86::eax, WRECGenerator::outputRegister);
3409 jit.popl_r(WRECGenerator::currentValueRegister);
3410 jit.popl_r(WRECGenerator::outputRegister);
3411 jit.ret();
3412
3413 jit.link(noOutput, jit.label());
3414
3415 jit.popl_r(X86::eax);
3416 jit.movl_rm(X86::eax, WRECGenerator::outputRegister);
3417 jit.popl_r(WRECGenerator::currentValueRegister);
3418 jit.popl_r(WRECGenerator::outputRegister);
3419 jit.ret();
3420
3421 // (3) Failure:
3422 // All fails link to here. Progress the start point & if it is within scope, loop.
3423 // Otherwise, return fail value.
3424 WRECGenerator::JmpDst here = jit.label();
3425 for (unsigned i = 0; i < failures.size(); ++i)
3426 jit.link(failures[i], here);
3427 failures.clear();
3428
3429 jit.movl_mr(X86::esp, WRECGenerator::currentPositionRegister);
3430 jit.addl_i8r(1, WRECGenerator::currentPositionRegister);
3431 jit.movl_rm(WRECGenerator::currentPositionRegister, X86::esp);
3432 jit.cmpl_rr(WRECGenerator::lengthRegister, WRECGenerator::currentPositionRegister);
3433 jit.link(jit.emitUnlinkedJle(), nextLabel);
3434
3435 jit.addl_i8r(4, X86::esp);
3436
3437 jit.movl_i32r(-1, X86::eax);
3438 jit.popl_r(WRECGenerator::currentValueRegister);
3439 jit.popl_r(WRECGenerator::outputRegister);
3440 jit.ret();
3441
3442 *numSubpatterns_ptr = parser.m_numSubpatterns;
3443
3444 void* code = jit.copy();
3445 ASSERT(code);
3446 return code;
3447}
3448
3449#endif // ENABLE(WREC)
3450
3451} // namespace JSC
3452
3453#endif // ENABLE(CTI)