Changeset 172940 in webkit for trunk/Source/JavaScriptCore
- Timestamp: Aug 25, 2014, 3:35:40 PM
- Location: trunk/Source/JavaScriptCore
- Files: 10 added, 54 edited
trunk/Source/JavaScriptCore/CMakeLists.txt (r172930 → r172940)

     bytecode/BytecodeBasicBlock.cpp
     bytecode/BytecodeLivenessAnalysis.cpp
+    bytecode/CallEdge.cpp
+    bytecode/CallEdgeProfile.cpp
     bytecode/CallLinkInfo.cpp
     bytecode/CallLinkStatus.cpp
+    bytecode/CallVariant.cpp
     bytecode/CodeBlock.cpp
     bytecode/CodeBlockHash.cpp
trunk/Source/JavaScriptCore/ChangeLog (r172932 → r172940)

+2014-08-24  Filip Pizlo  <[email protected]>
+
+        FTL should be able to do polymorphic call inlining
+        https://bugs.webkit.org/show_bug.cgi?id=135145
+
+        Reviewed by Geoffrey Garen.
+
+        Added a log-based high-fidelity call edge profiler that runs in DFG JIT (and optionally
+        baseline JIT) code. Used it to do precise polymorphic inlining in the FTL. Potential
+        inlining sites use the call edge profile if it is available, but they will still fall back
+        on the call inline cache and rare case counts if it's not. Polymorphic inlining means that
+        multiple possible callees can be inlined with a switch to guard them. The slow path may
+        either be an OSR exit or a virtual call.
+
+        The call edge profiling added in this patch is very precise - it will tell you about every
+        call that has ever happened. It took some effort to reduce the overhead of this profiling.
+        This mostly involved ensuring that we don't do it unnecessarily. For example, we avoid it
+        in the baseline JIT (you can conditionally enable it but it's off by default) and we only do
+        it in the DFG JIT if we know that the regular inline cache profiling wasn't precise enough.
+        I also experimented with reducing the precision of the profiling. This led to a significant
+        reduction in the speed-up, so I avoided this approach. I also explored making log processing
+        concurrent, but that didn't help. Also, I tested the overhead of the log processing and
+        found that most of the overhead of this profiling is actually in putting things into the log
+        rather than in processing the log - that part appears to be surprisingly cheap.
+
+        Polymorphic inlining could be enabled in the DFG if we enabled baseline call edge profiling,
+        and if we guarded such inlining sites with some profiling mechanism to detect polyvariant
+        monomorphisation opportunities (where the callsite being inlined reveals that it's actually
+        monomorphic).
+
+        This is a ~28% speed-up on deltablue and a ~7% speed-up on richards, with small speed-ups on
+        other programs as well. It's about a 2% speed-up on Octane version 2, and never a regression
+        on anything we care about. Some aggregates, like V8Spider, see a regression. This is
+        highlighting the increase in profiling overhead. But since this doesn't show up on any major
+        score (code-load or SunSpider), it's probably not relevant.
+
+        * CMakeLists.txt:
+        * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * bytecode/CallEdge.cpp: Added.
+        (JSC::CallEdge::dump):
+        * bytecode/CallEdge.h: Added.
+        (JSC::CallEdge::operator!):
+        (JSC::CallEdge::callee):
+        (JSC::CallEdge::count):
+        (JSC::CallEdge::despecifiedClosure):
+        (JSC::CallEdge::CallEdge):
+        * bytecode/CallEdgeProfile.cpp: Added.
+        (JSC::CallEdgeProfile::callEdges):
+        (JSC::CallEdgeProfile::numCallsToKnownCells):
+        (JSC::worthDespecifying):
+        (JSC::CallEdgeProfile::worthDespecifying):
+        (JSC::CallEdgeProfile::visitWeak):
+        (JSC::CallEdgeProfile::addSlow):
+        (JSC::CallEdgeProfile::mergeBack):
+        (JSC::CallEdgeProfile::fadeByHalf):
+        (JSC::CallEdgeLog::CallEdgeLog):
+        (JSC::CallEdgeLog::~CallEdgeLog):
+        (JSC::CallEdgeLog::isEnabled):
+        (JSC::operationProcessCallEdgeLog):
+        (JSC::CallEdgeLog::emitLogCode):
+        (JSC::CallEdgeLog::processLog):
+        * bytecode/CallEdgeProfile.h: Added.
+        (JSC::CallEdgeProfile::numCallsToNotCell):
+        (JSC::CallEdgeProfile::numCallsToUnknownCell):
+        (JSC::CallEdgeProfile::totalCalls):
+        * bytecode/CallEdgeProfileInlines.h: Added.
+        (JSC::CallEdgeProfile::CallEdgeProfile):
+        (JSC::CallEdgeProfile::add):
+        * bytecode/CallLinkInfo.cpp:
+        (JSC::CallLinkInfo::visitWeak):
+        * bytecode/CallLinkInfo.h:
+        * bytecode/CallLinkStatus.cpp:
+        (JSC::CallLinkStatus::CallLinkStatus):
+        (JSC::CallLinkStatus::computeFromLLInt):
+        (JSC::CallLinkStatus::computeFor):
+        (JSC::CallLinkStatus::computeExitSiteData):
+        (JSC::CallLinkStatus::computeFromCallLinkInfo):
+        (JSC::CallLinkStatus::computeFromCallEdgeProfile):
+        (JSC::CallLinkStatus::computeDFGStatuses):
+        (JSC::CallLinkStatus::isClosureCall):
+        (JSC::CallLinkStatus::makeClosureCall):
+        (JSC::CallLinkStatus::dump):
+        (JSC::CallLinkStatus::function): Deleted.
+        (JSC::CallLinkStatus::internalFunction): Deleted.
+        (JSC::CallLinkStatus::intrinsicFor): Deleted.
+        * bytecode/CallLinkStatus.h:
+        (JSC::CallLinkStatus::CallLinkStatus):
+        (JSC::CallLinkStatus::isSet):
+        (JSC::CallLinkStatus::couldTakeSlowPath):
+        (JSC::CallLinkStatus::edges):
+        (JSC::CallLinkStatus::size):
+        (JSC::CallLinkStatus::at):
+        (JSC::CallLinkStatus::operator[]):
+        (JSC::CallLinkStatus::canOptimize):
+        (JSC::CallLinkStatus::canTrustCounts):
+        (JSC::CallLinkStatus::isClosureCall): Deleted.
+        (JSC::CallLinkStatus::callTarget): Deleted.
+        (JSC::CallLinkStatus::executable): Deleted.
+        (JSC::CallLinkStatus::makeClosureCall): Deleted.
+        * bytecode/CallVariant.cpp: Added.
+        (JSC::CallVariant::dump):
+        * bytecode/CallVariant.h: Added.
+        (JSC::CallVariant::CallVariant):
+        (JSC::CallVariant::operator!):
+        (JSC::CallVariant::despecifiedClosure):
+        (JSC::CallVariant::rawCalleeCell):
+        (JSC::CallVariant::internalFunction):
+        (JSC::CallVariant::function):
+        (JSC::CallVariant::isClosureCall):
+        (JSC::CallVariant::executable):
+        (JSC::CallVariant::nonExecutableCallee):
+        (JSC::CallVariant::intrinsicFor):
+        (JSC::CallVariant::functionExecutable):
+        (JSC::CallVariant::isHashTableDeletedValue):
+        (JSC::CallVariant::operator==):
+        (JSC::CallVariant::operator!=):
+        (JSC::CallVariant::operator<):
+        (JSC::CallVariant::operator>):
+        (JSC::CallVariant::operator<=):
+        (JSC::CallVariant::operator>=):
+        (JSC::CallVariant::hash):
+        (JSC::CallVariant::deletedToken):
+        (JSC::CallVariantHash::hash):
+        (JSC::CallVariantHash::equal):
+        * bytecode/CodeOrigin.h:
+        (JSC::InlineCallFrame::isNormalCall):
+        * bytecode/ExitKind.cpp:
+        (JSC::exitKindToString):
+        * bytecode/ExitKind.h:
+        * bytecode/GetByIdStatus.cpp:
+        (JSC::GetByIdStatus::computeForStubInfo):
+        * bytecode/PutByIdStatus.cpp:
+        (JSC::PutByIdStatus::computeForStubInfo):
+        * dfg/DFGAbstractInterpreterInlines.h:
+        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
+        * dfg/DFGBackwardsPropagationPhase.cpp:
+        (JSC::DFG::BackwardsPropagationPhase::propagate):
+        * dfg/DFGBasicBlock.cpp:
+        (JSC::DFG::BasicBlock::~BasicBlock):
+        * dfg/DFGBasicBlock.h:
+        (JSC::DFG::BasicBlock::takeLast):
+        (JSC::DFG::BasicBlock::didLink):
+        * dfg/DFGByteCodeParser.cpp:
+        (JSC::DFG::ByteCodeParser::processSetLocalQueue):
+        (JSC::DFG::ByteCodeParser::removeLastNodeFromGraph):
+        (JSC::DFG::ByteCodeParser::addCallWithoutSettingResult):
+        (JSC::DFG::ByteCodeParser::addCall):
+        (JSC::DFG::ByteCodeParser::handleCall):
+        (JSC::DFG::ByteCodeParser::emitFunctionChecks):
+        (JSC::DFG::ByteCodeParser::undoFunctionChecks):
+        (JSC::DFG::ByteCodeParser::inliningCost):
+        (JSC::DFG::ByteCodeParser::inlineCall):
+        (JSC::DFG::ByteCodeParser::cancelLinkingForBlock):
+        (JSC::DFG::ByteCodeParser::attemptToInlineCall):
+        (JSC::DFG::ByteCodeParser::handleInlining):
+        (JSC::DFG::ByteCodeParser::handleConstantInternalFunction):
+        (JSC::DFG::ByteCodeParser::prepareToParseBlock):
+        (JSC::DFG::ByteCodeParser::clearCaches):
+        (JSC::DFG::ByteCodeParser::parseBlock):
+        (JSC::DFG::ByteCodeParser::linkBlock):
+        (JSC::DFG::ByteCodeParser::linkBlocks):
+        (JSC::DFG::ByteCodeParser::parseCodeBlock):
+        * dfg/DFGCPSRethreadingPhase.cpp:
+        (JSC::DFG::CPSRethreadingPhase::freeUnnecessaryNodes):
+        * dfg/DFGClobberize.h:
+        (JSC::DFG::clobberize):
+        * dfg/DFGCommon.h:
+        * dfg/DFGConstantFoldingPhase.cpp:
+        (JSC::DFG::ConstantFoldingPhase::foldConstants):
+        * dfg/DFGDoesGC.cpp:
+        (JSC::DFG::doesGC):
+        * dfg/DFGDriver.cpp:
+        (JSC::DFG::compileImpl):
+        * dfg/DFGFixupPhase.cpp:
+        (JSC::DFG::FixupPhase::fixupNode):
+        * dfg/DFGGraph.cpp:
+        (JSC::DFG::Graph::dump):
+        (JSC::DFG::Graph::visitChildren):
+        * dfg/DFGJITCompiler.cpp:
+        (JSC::DFG::JITCompiler::link):
+        * dfg/DFGLazyJSValue.cpp:
+        (JSC::DFG::LazyJSValue::switchLookupValue):
+        * dfg/DFGLazyJSValue.h:
+        (JSC::DFG::LazyJSValue::switchLookupValue): Deleted.
+        * dfg/DFGNode.cpp:
+        (WTF::printInternal):
+        * dfg/DFGNode.h:
+        (JSC::DFG::OpInfo::OpInfo):
+        (JSC::DFG::Node::hasHeapPrediction):
+        (JSC::DFG::Node::hasCellOperand):
+        (JSC::DFG::Node::cellOperand):
+        (JSC::DFG::Node::setCellOperand):
+        (JSC::DFG::Node::canBeKnownFunction): Deleted.
+        (JSC::DFG::Node::hasKnownFunction): Deleted.
+        (JSC::DFG::Node::knownFunction): Deleted.
+        (JSC::DFG::Node::giveKnownFunction): Deleted.
+        (JSC::DFG::Node::hasFunction): Deleted.
+        (JSC::DFG::Node::function): Deleted.
+        (JSC::DFG::Node::hasExecutable): Deleted.
+        (JSC::DFG::Node::executable): Deleted.
+        * dfg/DFGNodeType.h:
+        * dfg/DFGPhantomCanonicalizationPhase.cpp:
+        (JSC::DFG::PhantomCanonicalizationPhase::run):
+        * dfg/DFGPhantomRemovalPhase.cpp:
+        (JSC::DFG::PhantomRemovalPhase::run):
+        * dfg/DFGPredictionPropagationPhase.cpp:
+        (JSC::DFG::PredictionPropagationPhase::propagate):
+        * dfg/DFGSafeToExecute.h:
+        (JSC::DFG::safeToExecute):
+        * dfg/DFGSpeculativeJIT.cpp:
+        (JSC::DFG::SpeculativeJIT::emitSwitch):
+        * dfg/DFGSpeculativeJIT32_64.cpp:
+        (JSC::DFG::SpeculativeJIT::emitCall):
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGSpeculativeJIT64.cpp:
+        (JSC::DFG::SpeculativeJIT::emitCall):
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGStructureRegistrationPhase.cpp:
+        (JSC::DFG::StructureRegistrationPhase::run):
+        * dfg/DFGTierUpCheckInjectionPhase.cpp:
+        (JSC::DFG::TierUpCheckInjectionPhase::run):
+        (JSC::DFG::TierUpCheckInjectionPhase::removeFTLProfiling):
+        * dfg/DFGValidate.cpp:
+        (JSC::DFG::Validate::validate):
+        * dfg/DFGWatchpointCollectionPhase.cpp:
+        (JSC::DFG::WatchpointCollectionPhase::handle):
+        * ftl/FTLCapabilities.cpp:
+        (JSC::FTL::canCompile):
+        * ftl/FTLLowerDFGToLLVM.cpp:
+        (JSC::FTL::ftlUnreachable):
+        (JSC::FTL::LowerDFGToLLVM::lower):
+        (JSC::FTL::LowerDFGToLLVM::compileNode):
+        (JSC::FTL::LowerDFGToLLVM::compileCheckCell):
+        (JSC::FTL::LowerDFGToLLVM::compileCheckBadCell):
+        (JSC::FTL::LowerDFGToLLVM::compileGetExecutable):
+        (JSC::FTL::LowerDFGToLLVM::compileNativeCallOrConstruct):
+        (JSC::FTL::LowerDFGToLLVM::compileSwitch):
+        (JSC::FTL::LowerDFGToLLVM::buildSwitch):
+        (JSC::FTL::LowerDFGToLLVM::compileCheckFunction): Deleted.
+        (JSC::FTL::LowerDFGToLLVM::compileCheckExecutable): Deleted.
+        * heap/Heap.cpp:
+        (JSC::Heap::collect):
+        * jit/AssemblyHelpers.h:
+        (JSC::AssemblyHelpers::storeValue):
+        (JSC::AssemblyHelpers::loadValue):
+        * jit/CCallHelpers.h:
+        (JSC::CCallHelpers::setupArguments):
+        * jit/GPRInfo.h:
+        (JSC::JSValueRegs::uses):
+        * jit/JITCall.cpp:
+        (JSC::JIT::compileOpCall):
+        * jit/JITCall32_64.cpp:
+        (JSC::JIT::compileOpCall):
+        * runtime/Options.h:
+        * runtime/VM.cpp:
+        (JSC::VM::ensureCallEdgeLog):
+        * runtime/VM.h:
+        * tests/stress/new-array-then-exit.js: Added.
+        (foo):
+        * tests/stress/poly-call-exit-this.js: Added.
+        * tests/stress/poly-call-exit.js: Added.
+
 2014-08-22  Michael Saboff  <[email protected]>
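The fading scheme described in the entry above is easy to see in miniature. The sketch below is illustrative only - it is not the CallEdgeProfile code added by this changeset, and the names and the 2^15-call fading period are assumptions taken from the entry's own description:

    // Illustrative sketch of a fading call edge counter (hypothetical names;
    // not the CallEdgeProfile implementation from this changeset).
    #include <cstdint>
    #include <unordered_map>

    class FadingEdgeCounts {
    public:
        void recordCall(const void* callee)
        {
            uint16_t& count = m_counts[callee];
            if (count < UINT16_MAX)
                count++; // full fidelity for recent calls
            if (!(++m_totalCalls & ((1u << 15) - 1))) {
                // Every 2^15 calls, halve all counts. An edge recorded long
                // ago has been halved many times, so its weight compounds
                // down, while recent edges keep precise counts.
                for (auto& entry : m_counts)
                    entry.second >>= 1;
            }
        }

        uint16_t count(const void* callee) const
        {
            auto it = m_counts.find(callee);
            return it == m_counts.end() ? 0 : it->second;
        }

    private:
        std::unordered_map<const void*, uint16_t> m_counts;
        uint32_t m_totalCalls { 0 };
    };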
trunk/Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj (r172930 → r172940)

     <ClCompile Include="..\bytecode\BytecodeBasicBlock.cpp" />
     <ClCompile Include="..\bytecode\BytecodeLivenessAnalysis.cpp" />
+    <ClCompile Include="..\bytecode\CallEdge.cpp" />
+    <ClCompile Include="..\bytecode\CallEdgeProfile.cpp" />
     <ClCompile Include="..\bytecode\CallLinkInfo.cpp" />
     <ClCompile Include="..\bytecode\CallLinkStatus.cpp" />
+    <ClCompile Include="..\bytecode\CallVariant.cpp" />
     <ClCompile Include="..\bytecode\CodeBlock.cpp" />
     <ClCompile Include="..\bytecode\CodeBlockHash.cpp" />
…
     <ClInclude Include="..\bytecode\BytecodeLivenessAnalysis.h" />
     <ClInclude Include="..\bytecode\BytecodeUseDef.h" />
+    <ClInclude Include="..\bytecode\CallEdge.h" />
+    <ClInclude Include="..\bytecode\CallEdgeProfile.h" />
+    <ClInclude Include="..\bytecode\CallEdgeProfileInlines.h" />
     <ClInclude Include="..\bytecode\CallLinkInfo.h" />
     <ClInclude Include="..\bytecode\CallLinkStatus.h" />
     <ClInclude Include="..\bytecode\CallReturnOffsetToBytecodeOffset.h" />
+    <ClInclude Include="..\bytecode\CallVariant.h" />
     <ClInclude Include="..\bytecode\CodeBlock.h" />
     <ClInclude Include="..\bytecode\CodeBlockHash.h" />
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj (r172930 → r172940)

     0F3B3A2B15475000003ED0FF /* DFGValidate.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F3B3A2915474FF4003ED0FF /* DFGValidate.cpp */; };
     0F3B3A2C15475002003ED0FF /* DFGValidate.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3B3A2A15474FF4003ED0FF /* DFGValidate.h */; settings = {ATTRIBUTES = (Private, ); }; };
+    0F3B7E2619A11B8000D9BC56 /* CallEdge.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3B7E2019A11B8000D9BC56 /* CallEdge.h */; settings = {ATTRIBUTES = (Private, ); }; };
+    0F3B7E2719A11B8000D9BC56 /* CallEdgeProfile.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F3B7E2119A11B8000D9BC56 /* CallEdgeProfile.cpp */; };
+    0F3B7E2819A11B8000D9BC56 /* CallEdgeProfile.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3B7E2219A11B8000D9BC56 /* CallEdgeProfile.h */; settings = {ATTRIBUTES = (Private, ); }; };
+    0F3B7E2919A11B8000D9BC56 /* CallEdgeProfileInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3B7E2319A11B8000D9BC56 /* CallEdgeProfileInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
+    0F3B7E2A19A11B8000D9BC56 /* CallVariant.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F3B7E2419A11B8000D9BC56 /* CallVariant.cpp */; };
+    0F3B7E2B19A11B8000D9BC56 /* CallVariant.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3B7E2519A11B8000D9BC56 /* CallVariant.h */; settings = {ATTRIBUTES = (Private, ); }; };
+    0F3B7E2D19A12AAE00D9BC56 /* CallEdge.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F3B7E2C19A12AAE00D9BC56 /* CallEdge.cpp */; };
     0F3D0BBC194A414300FC9CF9 /* ConstantStructureCheck.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F3D0BBA194A414300FC9CF9 /* ConstantStructureCheck.cpp */; };
     0F3D0BBD194A414300FC9CF9 /* ConstantStructureCheck.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3D0BBB194A414300FC9CF9 /* ConstantStructureCheck.h */; settings = {ATTRIBUTES = (Private, ); }; };
…
     0F3B3A2915474FF4003ED0FF /* DFGValidate.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGValidate.cpp; path = dfg/DFGValidate.cpp; sourceTree = "<group>"; };
     0F3B3A2A15474FF4003ED0FF /* DFGValidate.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGValidate.h; path = dfg/DFGValidate.h; sourceTree = "<group>"; };
+    0F3B7E2019A11B8000D9BC56 /* CallEdge.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallEdge.h; sourceTree = "<group>"; };
+    0F3B7E2119A11B8000D9BC56 /* CallEdgeProfile.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallEdgeProfile.cpp; sourceTree = "<group>"; };
+    0F3B7E2219A11B8000D9BC56 /* CallEdgeProfile.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallEdgeProfile.h; sourceTree = "<group>"; };
+    0F3B7E2319A11B8000D9BC56 /* CallEdgeProfileInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallEdgeProfileInlines.h; sourceTree = "<group>"; };
+    0F3B7E2419A11B8000D9BC56 /* CallVariant.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallVariant.cpp; sourceTree = "<group>"; };
+    0F3B7E2519A11B8000D9BC56 /* CallVariant.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallVariant.h; sourceTree = "<group>"; };
+    0F3B7E2C19A12AAE00D9BC56 /* CallEdge.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallEdge.cpp; sourceTree = "<group>"; };
     0F3D0BBA194A414300FC9CF9 /* ConstantStructureCheck.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ConstantStructureCheck.cpp; sourceTree = "<group>"; };
     0F3D0BBB194A414300FC9CF9 /* ConstantStructureCheck.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ConstantStructureCheck.h; sourceTree = "<group>"; };
…
     0F885E101849A3BE00F1E3FA /* BytecodeUseDef.h */,
     0F8023E91613832300A0BA45 /* ByValInfo.h */,
+    0F3B7E2C19A12AAE00D9BC56 /* CallEdge.cpp */,
+    0F3B7E2019A11B8000D9BC56 /* CallEdge.h */,
+    0F3B7E2119A11B8000D9BC56 /* CallEdgeProfile.cpp */,
+    0F3B7E2219A11B8000D9BC56 /* CallEdgeProfile.h */,
+    0F3B7E2319A11B8000D9BC56 /* CallEdgeProfileInlines.h */,
     0F0B83AE14BCF71400885B4F /* CallLinkInfo.cpp */,
     0F0B83AF14BCF71400885B4F /* CallLinkInfo.h */,
…
     0F93329414CA7DC10085F3C6 /* CallLinkStatus.h */,
     0F0B83B814BCF95B00885B4F /* CallReturnOffsetToBytecodeOffset.h */,
+    0F3B7E2419A11B8000D9BC56 /* CallVariant.cpp */,
+    0F3B7E2519A11B8000D9BC56 /* CallVariant.h */,
     969A07900ED1D3AE00F1F681 /* CodeBlock.cpp */,
     969A07910ED1D3AE00F1F681 /* CodeBlock.h */,
…
     0F2FC77316E12F740038D976 /* DFGDCEPhase.h in Headers */,
     0F8F2B9A172F0501007DBDA5 /* DFGDesiredIdentifiers.h in Headers */,
+    0F3B7E2819A11B8000D9BC56 /* CallEdgeProfile.h in Headers */,
     C2C0F7CE17BBFC5B00464FE4 /* DFGDesiredTransitions.h in Headers */,
     0FE8534C1723CDA500B618F5 /* DFGDesiredWatchpoints.h in Headers */,
…
     0F766D2C15A8CC3A008F363E /* JITStubRoutineSet.h in Headers */,
     14C5242B0F5355E900BA3D04 /* JITStubs.h in Headers */,
+    0F3B7E2B19A11B8000D9BC56 /* CallVariant.h in Headers */,
     FEF6835E174343CC00A32E25 /* JITStubsARM.h in Headers */,
     FEF6835F174343CC00A32E25 /* JITStubsARMv7.h in Headers */,
…
     BC18C4160E16F5CD00B34460 /* JSActivation.h in Headers */,
     840480131021A1D9008E7F01 /* JSAPIValueWrapper.h in Headers */,
+    0F3B7E2919A11B8000D9BC56 /* CallEdgeProfileInlines.h in Headers */,
     C2CF39C216E15A8100DD69BE /* JSAPIWrapperObject.h in Headers */,
     A76140D2182982CB00750624 /* JSArgumentsIterator.h in Headers */,
…
     E49DC16D12EF295300184A1F /* SourceProviderCacheItem.h in Headers */,
     0FB7F39E15ED8E4600F167B2 /* SparseArrayValueMap.h in Headers */,
+    0F3B7E2619A11B8000D9BC56 /* CallEdge.h in Headers */,
     A7386554118697B400540279 /* SpecializedThunkJIT.h in Headers */,
     0F5541B21613C1FB00CE3E25 /* SpecialPointer.h in Headers */,
…
     0F235BD817178E1C00690C7F /* FTLExitThunkGenerator.cpp in Sources */,
     0F235BDA17178E1C00690C7F /* FTLExitValue.cpp in Sources */,
+    0F3B7E2719A11B8000D9BC56 /* CallEdgeProfile.cpp in Sources */,
     A7F2996B17A0BB670010417A /* FTLFail.cpp in Sources */,
     0FD8A31917D51F2200CA2C40 /* FTLForOSREntryJITCode.cpp in Sources */,
…
     0F38B01117CF078000B144D3 /* LLIntEntrypoint.cpp in Sources */,
     0F4680A814BA7FAB00BFE272 /* LLIntExceptions.cpp in Sources */,
+    0F3B7E2D19A12AAE00D9BC56 /* CallEdge.cpp in Sources */,
     0F4680A414BA7F8D00BFE272 /* LLIntSlowPaths.cpp in Sources */,
     0F0B839C14BCF46300885B4F /* LLIntThunks.cpp in Sources */,
…
     2A4EC90B1860D6C20094F782 /* WriteBarrierBuffer.cpp in Sources */,
     0FC8150B14043C0E00CFA603 /* WriteBarrierSupport.cpp in Sources */,
+    0F3B7E2A19A11B8000D9BC56 /* CallVariant.cpp in Sources */,
     A7E5AB3A1799E4B200D2833D /* X86Disassembler.cpp in Sources */,
     863C6D9C1521111A00585E4E /* YarrCanonicalizeUCS2.cpp in Sources */,
trunk/Source/JavaScriptCore/bytecode/CallLinkInfo.cpp (r172176 → r172940)

     if (!!lastSeenCallee && !Heap::isMarked(lastSeenCallee.get()))
         lastSeenCallee.clear();
+
+    if (callEdgeProfile) {
+        WTF::loadLoadFence();
+        callEdgeProfile->visitWeak();
+    }
 }
 
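The loadLoadFence() above pairs with a store-store fence on whichever thread installs the profile: the writer must finish constructing the CallEdgeProfile before publishing the pointer, and the reader must fence between loading the pointer and dereferencing it. A minimal sketch of that publication pattern, using std::atomic in place of the WTF fences (hypothetical names; not this changeset's code):

    // Publish/consume pattern behind the fence above (illustrative only).
    #include <atomic>

    struct Profile { int data; };

    static std::atomic<Profile*> s_profile { nullptr };

    void installProfile()
    {
        Profile* profile = new Profile { 42 }; // fully construct first...
        // ...then publish; release ordering serves as the store-store fence.
        s_profile.store(profile, std::memory_order_release);
    }

    void visitProfile()
    {
        // Acquire ordering plays the role of WTF::loadLoadFence(): if the
        // pointer is seen, the initialized contents are seen too.
        Profile* profile = s_profile.load(std::memory_order_acquire);
        if (profile)
            (void)profile->data;
    }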
trunk/Source/JavaScriptCore/bytecode/CallLinkInfo.h (r166392 → r172940)

 #define CallLinkInfo_h
 
+#include "CallEdgeProfile.h"
 #include "ClosureCallStubRoutine.h"
 #include "CodeLocation.h"
…
 #include "Opcode.h"
 #include "WriteBarrier.h"
+#include <wtf/OwnPtr.h>
 #include <wtf/SentinelLinkedList.h>
 
…
     unsigned slowPathCount;
     CodeOrigin codeOrigin;
+    OwnPtr<CallEdgeProfile> callEdgeProfile;
 
     bool isLinked() { return stub || callee; }
trunk/Source/JavaScriptCore/bytecode/CallLinkStatus.cpp (r172176 → r172940)

 #include "JSCInlines.h"
 #include <wtf/CommaPrinter.h>
+#include <wtf/ListDump.h>
 
 namespace JSC {
…
 
 CallLinkStatus::CallLinkStatus(JSValue value)
-    : m_callTarget(value)
-    , m_executable(0)
-    , m_couldTakeSlowPath(false)
+    : m_couldTakeSlowPath(false)
     , m_isProved(false)
 {
-    if (!value || !value.isCell())
+    if (!value || !value.isCell()) {
+        m_couldTakeSlowPath = true;
         return;
-    
-    if (!value.asCell()->inherits(JSFunction::info()))
-        return;
-    
-    m_executable = jsCast<JSFunction*>(value.asCell())->executable();
-}
-
-JSFunction* CallLinkStatus::function() const
-{
-    if (!m_callTarget || !m_callTarget.isCell())
-        return 0;
-    
-    if (!m_callTarget.asCell()->inherits(JSFunction::info()))
-        return 0;
-    
-    return jsCast<JSFunction*>(m_callTarget.asCell());
-}
-
-InternalFunction* CallLinkStatus::internalFunction() const
-{
-    if (!m_callTarget || !m_callTarget.isCell())
-        return 0;
-    
-    if (!m_callTarget.asCell()->inherits(InternalFunction::info()))
-        return 0;
-    
-    return jsCast<InternalFunction*>(m_callTarget.asCell());
-}
-
-Intrinsic CallLinkStatus::intrinsicFor(CodeSpecializationKind kind) const
-{
-    if (!m_executable)
-        return NoIntrinsic;
-    
-    return m_executable->intrinsicFor(kind);
+    }
+
+    m_edges.append(CallEdge(CallVariant(value.asCell()), 1));
 }
 
…
     UNUSED_PARAM(bytecodeIndex);
 #if ENABLE(DFG_JIT)
-    if (profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadFunction))) {
+    if (profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadCell))) {
         // We could force this to be a closure call, but instead we'll just assume that it
         // takes slow path.
…
         return computeFromLLInt(locker, profiledBlock, bytecodeIndex);
 
-    return computeFor(locker, *callLinkInfo, exitSiteData);
+    return computeFor(locker, profiledBlock, *callLinkInfo, exitSiteData);
 #else
     return CallLinkStatus();
…
 #if ENABLE(DFG_JIT)
     exitSiteData.m_takesSlowPath =
-        profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadCache, exitingJITType))
+        profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadType, exitingJITType))
         || profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadExecutable, exitingJITType));
     exitSiteData.m_badFunction =
-        profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadFunction, exitingJITType));
+        profiledBlock->hasExitSite(locker, DFG::FrequentExitSite(bytecodeIndex, BadCell, exitingJITType));
 #else
     UNUSED_PARAM(locker);
…
 
 #if ENABLE(JIT)
-CallLinkStatus CallLinkStatus::computeFor(const ConcurrentJITLocker&, CallLinkInfo& callLinkInfo)
+CallLinkStatus CallLinkStatus::computeFor(
+    const ConcurrentJITLocker& locker, CodeBlock* profiledBlock, CallLinkInfo& callLinkInfo)
+{
+    // We don't really need this, but anytime we have to debug this code, it becomes indispensable.
+    UNUSED_PARAM(profiledBlock);
+
+    if (Options::callStatusShouldUseCallEdgeProfile()) {
+        // Always trust the call edge profile over anything else since this has precise counts.
+        // It can make the best possible decision because it never "forgets" what happened for any
+        // call, with the exception of fading out the counts of old calls (for example if the
+        // counter type is 16-bit then calls that happened more than 2^16 calls ago are given half
+        // weight, and this compounds for every 2^15 [sic] calls after that). The combination of
+        // high fidelity for recent calls and fading for older calls makes this the most useful
+        // mechanism of choosing how to optimize future calls.
+        CallEdgeProfile* edgeProfile = callLinkInfo.callEdgeProfile.get();
+        WTF::loadLoadFence();
+        if (edgeProfile) {
+            CallLinkStatus result = computeFromCallEdgeProfile(edgeProfile);
+            if (!!result)
+                return result;
+        }
+    }
+
+    return computeFromCallLinkInfo(locker, callLinkInfo);
+}
+
+CallLinkStatus CallLinkStatus::computeFromCallLinkInfo(
+    const ConcurrentJITLocker&, CallLinkInfo& callLinkInfo)
 {
     // Note that despite requiring that the locker is held, this code is racy with respect
…
     JSFunction* target = callLinkInfo.lastSeenCallee.get();
     if (!target)
-        return CallLinkStatus();
+        return takesSlowPath();
 
     if (callLinkInfo.hasSeenClosure)
…
 }
 
+CallLinkStatus CallLinkStatus::computeFromCallEdgeProfile(CallEdgeProfile* edgeProfile)
+{
+    // In cases where the call edge profile saw nothing, use the CallLinkInfo instead.
+    if (!edgeProfile->totalCalls())
+        return CallLinkStatus();
+
+    // To do anything meaningful, we require that the majority of calls are to something we
+    // know how to handle.
+    unsigned numCallsToKnown = edgeProfile->numCallsToKnownCells();
+    unsigned numCallsToUnknown = edgeProfile->numCallsToNotCell() + edgeProfile->numCallsToUnknownCell();
+
+    // We require that the majority of calls were to something that we could possibly inline.
+    if (numCallsToKnown <= numCallsToUnknown)
+        return takesSlowPath();
+
+    // We require that the number of such calls is greater than some minimal threshold, so that we
+    // avoid inlining completely cold calls.
+    if (numCallsToKnown < Options::frequentCallThreshold())
+        return takesSlowPath();
+
+    CallLinkStatus result;
+    result.m_edges = edgeProfile->callEdges();
+    result.m_couldTakeSlowPath = !!numCallsToUnknown;
+    result.m_canTrustCounts = true;
+
+    return result;
+}
+
 CallLinkStatus CallLinkStatus::computeFor(
-    const ConcurrentJITLocker& locker, CallLinkInfo& callLinkInfo, ExitSiteData exitSiteData)
-{
-    if (exitSiteData.m_takesSlowPath)
-        return takesSlowPath();
-    
-    CallLinkStatus result = computeFor(locker, callLinkInfo);
+    const ConcurrentJITLocker& locker, CodeBlock* profiledBlock, CallLinkInfo& callLinkInfo,
+    ExitSiteData exitSiteData)
+{
+    CallLinkStatus result = computeFor(locker, profiledBlock, callLinkInfo);
     if (exitSiteData.m_badFunction)
         result.makeClosureCall();
+    if (exitSiteData.m_takesSlowPath)
+        result.m_couldTakeSlowPath = true;
 
     return result;
…
     {
         ConcurrentJITLocker locker(dfgCodeBlock->m_lock);
-        map.add(info.codeOrigin, computeFor(locker, info, exitSiteData));
+        map.add(info.codeOrigin, computeFor(locker, dfgCodeBlock, info, exitSiteData));
     }
 }
…
 }
 
+bool CallLinkStatus::isClosureCall() const
+{
+    for (unsigned i = m_edges.size(); i--;) {
+        if (m_edges[i].callee().isClosureCall())
+            return true;
+    }
+    return false;
+}
+
+void CallLinkStatus::makeClosureCall()
+{
+    ASSERT(!m_isProved);
+    for (unsigned i = m_edges.size(); i--;)
+        m_edges[i] = m_edges[i].despecifiedClosure();
+
+    if (!ASSERT_DISABLED) {
+        // Doing this should not have created duplicates, because the CallEdgeProfile
+        // should despecify closures if doing so would reduce the number of known callees.
+        for (unsigned i = 0; i < m_edges.size(); ++i) {
+            for (unsigned j = i + 1; j < m_edges.size(); ++j)
+                ASSERT(m_edges[i].callee() != m_edges[j].callee());
+        }
+    }
+}
+
 void CallLinkStatus::dump(PrintStream& out) const
 {
…
         out.print(comma, "Could Take Slow Path");
 
-    if (m_callTarget)
-        out.print(comma, "Known target: ", m_callTarget);
-    
-    if (m_executable) {
-        out.print(comma, "Executable/CallHash: ", RawPointer(m_executable));
-        if (!isCompilationThread())
-            out.print("/", m_executable->hashFor(CodeForCall));
-    }
+    out.print(listDump(m_edges));
 }
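The acceptance rule in computeFromCallEdgeProfile() above reduces to three gates: an empty profile defers to the CallLinkInfo, a majority of unprofilable calls forces the slow path, and a cold call site forces the slow path too. The same rule restated as a standalone function (a sketch with hypothetical types; the real threshold comes from Options::frequentCallThreshold()):

    // Standalone restatement of the profile-acceptance rule (illustrative).
    #include <cstdint>

    enum class ProfileDecision { UseCallLinkInfo, TakesSlowPath, UseEdges };

    ProfileDecision classifyProfile(
        uint32_t callsToKnownCells, uint32_t callsToNotCell,
        uint32_t callsToUnknownCell, uint32_t frequentCallThreshold)
    {
        if (!(callsToKnownCells + callsToNotCell + callsToUnknownCell))
            return ProfileDecision::UseCallLinkInfo; // profile saw nothing

        uint32_t callsToUnknown = callsToNotCell + callsToUnknownCell;
        if (callsToKnownCells <= callsToUnknown)
            return ProfileDecision::TakesSlowPath; // majority not inlinable

        if (callsToKnownCells < frequentCallThreshold)
            return ProfileDecision::TakesSlowPath; // too cold to bother

        // Trust the edge list; couldTakeSlowPath stays set if any calls
        // went to unknown callees.
        return ProfileDecision::UseEdges;
    }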
trunk/Source/JavaScriptCore/bytecode/CallLinkStatus.h (r172176 → r172940)

 public:
     CallLinkStatus()
-        : m_executable(0)
-        , m_couldTakeSlowPath(false)
+        : m_couldTakeSlowPath(false)
         , m_isProved(false)
+        , m_canTrustCounts(false)
     {
     }
…
     explicit CallLinkStatus(JSValue);
 
-    CallLinkStatus(ExecutableBase* executable)
-        : m_executable(executable)
+    CallLinkStatus(CallVariant variant)
+        : m_edges(1, CallEdge(variant, 1))
         , m_couldTakeSlowPath(false)
         , m_isProved(false)
+        , m_canTrustCounts(false)
     {
     }
…
     // Computes the status assuming that we never took slow path and never previously
     // exited.
-    static CallLinkStatus computeFor(const ConcurrentJITLocker&, CallLinkInfo&);
-    static CallLinkStatus computeFor(const ConcurrentJITLocker&, CallLinkInfo&, ExitSiteData);
+    static CallLinkStatus computeFor(const ConcurrentJITLocker&, CodeBlock*, CallLinkInfo&);
+    static CallLinkStatus computeFor(
+        const ConcurrentJITLocker&, CodeBlock*, CallLinkInfo&, ExitSiteData);
 #endif
…
         CodeBlock*, CodeOrigin, const CallLinkInfoMap&, const ContextMap&);
 
-    bool isSet() const { return m_callTarget || m_executable || m_couldTakeSlowPath; }
+    bool isSet() const { return !m_edges.isEmpty() || m_couldTakeSlowPath; }
 
     bool operator!() const { return !isSet(); }
 
     bool couldTakeSlowPath() const { return m_couldTakeSlowPath; }
-    bool isClosureCall() const { return m_executable && !m_callTarget; }
 
-    JSValue callTarget() const { return m_callTarget; }
-    JSFunction* function() const;
-    InternalFunction* internalFunction() const;
-    Intrinsic intrinsicFor(CodeSpecializationKind) const;
-    ExecutableBase* executable() const { return m_executable; }
+    CallEdgeList edges() const { return m_edges; }
+    unsigned size() const { return m_edges.size(); }
+    CallEdge at(unsigned i) const { return m_edges[i]; }
+    CallEdge operator[](unsigned i) const { return at(i); }
     bool isProved() const { return m_isProved; }
-    bool canOptimize() const { return (m_callTarget || m_executable) && !m_couldTakeSlowPath; }
+    bool canOptimize() const { return !m_edges.isEmpty(); }
+    bool canTrustCounts() const { return m_canTrustCounts; }
+
+    bool isClosureCall() const; // Returns true if any callee is a closure call.
 
     void dump(PrintStream&) const;
 
 private:
-    void makeClosureCall()
-    {
-        ASSERT(!m_isProved);
-        // Turn this into a closure call.
-        m_callTarget = JSValue();
-    }
+    void makeClosureCall();
 
     static CallLinkStatus computeFromLLInt(const ConcurrentJITLocker&, CodeBlock*, unsigned bytecodeIndex);
+#if ENABLE(JIT)
+    static CallLinkStatus computeFromCallEdgeProfile(CallEdgeProfile*);
+    static CallLinkStatus computeFromCallLinkInfo(
+        const ConcurrentJITLocker&, CallLinkInfo&);
+#endif
 
-    JSValue m_callTarget;
-    ExecutableBase* m_executable;
+    CallEdgeList m_edges;
     bool m_couldTakeSlowPath;
     bool m_isProved;
+    bool m_canTrustCounts;
 };
trunk/Source/JavaScriptCore/bytecode/CodeOrigin.h (r172853 → r172940)

     }
 
+    static bool isNormalCall(Kind kind)
+    {
+        switch (kind) {
+        case Call:
+        case Construct:
+            return true;
+        default:
+            return false;
+        }
+    }
+
     Vector<ValueRecovery> arguments; // Includes 'this'.
     WriteBarrier<ScriptExecutable> executable;
trunk/Source/JavaScriptCore/bytecode/ExitKind.cpp (r171613 → r172940)

     case BadType:
         return "BadType";
-    case BadFunction:
-        return "BadFunction";
+    case BadCell:
+        return "BadCell";
     case BadExecutable:
         return "BadExecutable";
trunk/Source/JavaScriptCore/bytecode/ExitKind.h (r171613 → r172940)

     ExitKindUnset,
     BadType, // We exited because a type prediction was wrong.
-    BadFunction, // We exited because we made an incorrect assumption about what function we would see.
+    BadCell, // We exited because we made an incorrect assumption about what cell we would see. Usually used for function checks.
     BadExecutable, // We exited because we made an incorrect assumption about what executable we would see.
     BadCache, // We exited because an inline cache was wrong.
trunk/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp (r172129 → r172940)

                 list->at(listIndex).stubRoutine());
             callLinkStatus = std::make_unique<CallLinkStatus>(
-                CallLinkStatus::computeFor(locker, *stub->m_callLinkInfo, callExitSiteData));
+                CallLinkStatus::computeFor(
+                    locker, profiledBlock, *stub->m_callLinkInfo, callExitSiteData));
             break;
         }
trunk/Source/JavaScriptCore/bytecode/PutByIdStatus.cpp (r172129 → r172940)

                 std::make_unique<CallLinkStatus>(
                     CallLinkStatus::computeFor(
-                        locker, *stub->m_callLinkInfo, callExitSiteData));
+                        locker, profiledBlock, *stub->m_callLinkInfo, callExitSiteData));
 
             variant = PutByIdVariant::setter(
trunk/Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h (r172808 → r172940)

         break;
 
-    case CheckExecutable: {
-        // FIXME: We could track executables in AbstractValue, which would allow us to get rid of these checks
-        // more thoroughly. https://bugs.webkit.org/show_bug.cgi?id=106200
-        // FIXME: We could eliminate these entirely if we know the exact value that flows into this.
-        // https://bugs.webkit.org/show_bug.cgi?id=106201
-        break;
-    }
-
     case CheckStructure: {
         // FIXME: We should be able to propagate the structure sets of constants (i.e. prototypes).
…
         break;
     }
+
+    case GetExecutable: {
+        JSValue value = forNode(node->child1()).value();
+        if (value) {
+            JSFunction* function = jsDynamicCast<JSFunction*>(value);
+            if (function) {
+                setConstant(node, *m_graph.freeze(function->executable()));
+                break;
+            }
+        }
+        forNode(node).setType(SpecCellOther);
+        break;
+    }
 
-    case CheckFunction: {
+    case CheckCell: {
         JSValue value = forNode(node->child1()).value();
-        if (value == node->function()->value()) {
+        if (value == node->cellOperand()->value()) {
             m_state.setFoundConstants(true);
             ASSERT(value);
…
         }
 
-        filterByValue(node->child1(), *node->function());
+        filterByValue(node->child1(), *node->cellOperand());
         break;
     }
…
     case VariableWatchpoint:
     case VarInjectionWatchpoint:
-        break;
-
     case PutGlobalVar:
     case NotifyWrite:
…
         break;
 
+    case ProfiledCall:
+    case ProfiledConstruct:
+        if (forNode(m_graph.varArgChild(node, 0)).m_value)
+            m_state.setFoundConstants(true);
+        clobberWorld(node->origin.semantic, clobberLimit);
+        forNode(node).makeHeapTop();
+        break;
+
     case ForceOSRExit:
+    case CheckBadCell:
         m_state.setIsValid(false);
         break;
…
     case ArithIMul:
     case FiatInt52:
-        RELEASE_ASSERT_NOT_REACHED();
+    case BottomValue:
+        DFG_CRASH(m_graph, node, "Unexpected node type");
         break;
     }
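The new GetExecutable case above is a textbook abstract-interpretation fold: when the abstract value of the child has been proven to be one specific JSFunction, the node's result is pinned to that function's executable at compile time; otherwise the result type stays a conservative SpecCellOther. The same pattern in miniature (hypothetical stand-in types, not the DFG's AbstractValue machinery):

    // Miniature of the GetExecutable constant fold (illustrative only).
    #include <optional>

    struct Function { const void* executable; };

    struct AbstractValue {
        // Non-null when the interpreter has proven a single concrete value.
        const Function* provenFunction { nullptr };
    };

    std::optional<const void*> foldGetExecutable(const AbstractValue& input)
    {
        if (input.provenFunction)
            return input.provenFunction->executable; // becomes a constant
        return std::nullopt; // keep the conservative cell type
    }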
trunk/Source/JavaScriptCore/dfg/DFGBackwardsPropagationPhase.cpp (r171613 → r172940)

             node->child1()->mergeFlags(NodeBytecodeUsesAsNumber | NodeBytecodeUsesAsOther);
             break;
+        case SwitchCell:
+            // There is currently no point to being clever here since this is used for switching
+            // on objects.
+            mergeDefaultFlags(node);
+            break;
         }
         break;
trunk/Source/JavaScriptCore/dfg/DFGBasicBlock.cpp (r171613 → r172940)

 }
 
-BasicBlock::~BasicBlock() { }
+BasicBlock::~BasicBlock()
+{
+}
 
 void BasicBlock::ensureLocals(unsigned newNumLocals)
trunk/Source/JavaScriptCore/dfg/DFGBasicBlock.h (r172129 → r172940)

     Node* operator[](size_t i) const { return at(i); }
     Node* last() const { return at(size() - 1); }
+    Node* takeLast() { return m_nodes.takeLast(); }
     void resize(size_t size) { m_nodes.resize(size); }
     void grow(size_t size) { m_nodes.grow(size); }
…
 
     void dump(PrintStream& out) const;
+
+    void didLink()
+    {
+#if !ASSERT_DISABLED
+        isLinked = true;
+#endif
+    }
 
     // This value is used internally for block linking and OSR entry. It is mostly meaningless
trunk/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp (r172853 → r172940)

 namespace JSC { namespace DFG {
 
+static const bool verbose = false;
+
 class ConstantBufferKey {
…
     void handleCall(int result, NodeType op, CodeSpecializationKind, unsigned instructionSize, int callee, int argCount, int registerOffset);
     void handleCall(Instruction* pc, NodeType op, CodeSpecializationKind);
-    void emitFunctionChecks(const CallLinkStatus&, Node* callTarget, int registerOffset, CodeSpecializationKind);
+    void emitFunctionChecks(CallVariant, Node* callTarget, int registerOffset, CodeSpecializationKind);
+    void undoFunctionChecks(CallVariant);
     void emitArgumentPhantoms(int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind);
+    unsigned inliningCost(CallVariant, int argumentCountIncludingThis, CodeSpecializationKind); // Return UINT_MAX if it's not an inlining candidate. By convention, intrinsics have a cost of 1.
     // Handle inlining. Return true if it succeeded, false if we need to plant a call.
-    bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind);
+    bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind, SpeculatedType prediction);
+    enum CallerLinkability { CallerDoesNormalLinking, CallerLinksManually };
+    bool attemptToInlineCall(Node* callTargetNode, int resultOperand, CallVariant, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind, CallerLinkability, SpeculatedType prediction, unsigned& inliningBalance);
+    void inlineCall(Node* callTargetNode, int resultOperand, CallVariant, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind, CallerLinkability);
+    void cancelLinkingForBlock(InlineStackEntry*, BasicBlock*); // Only works when the given block is the last one to have been added for that inline stack entry.
     // Handle intrinsic functions. Return true if it succeeded, false if we need to plant a call.
     bool handleIntrinsic(int resultOperand, Intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction);
     bool handleTypedArrayConstructor(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, TypedArrayType);
-    bool handleConstantInternalFunction(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, CodeSpecializationKind);
+    bool handleConstantInternalFunction(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind);
     Node* handlePutByOffset(Node* base, unsigned identifier, PropertyOffset, Node* value);
     Node* handleGetByOffset(SpeculatedType, Node* base, const StructureSet&, unsigned identifierNumber, PropertyOffset, NodeType op = GetByOffset);
…
     Node* getScope(unsigned skipCount);
 
-    // Prepare to parse a block.
     void prepareToParseBlock();
+    void clearCaches();
+
     // Parse a single basic block of bytecode instructions.
     bool parseBlock(unsigned limit);
…
         return delayed.execute(this, setMode);
     }
+
+    void processSetLocalQueue()
+    {
+        for (unsigned i = 0; i < m_setLocalQueue.size(); ++i)
+            m_setLocalQueue[i].execute(this);
+        m_setLocalQueue.resize(0);
+    }
 
     Node* set(VirtualRegister operand, Node* value, SetMode setMode = NormalSet)
…
         return result;
     }
+
+    void removeLastNodeFromGraph(NodeType expectedNodeType)
+    {
+        Node* node = m_currentBlock->takeLast();
+        RELEASE_ASSERT(node->op() == expectedNodeType);
+        m_graph.m_allocator.free(node);
+    }
 
     void addVarArgChild(Node* child)
…
 
     Node* addCallWithoutSettingResult(
-        NodeType op, Node* callee, int argCount, int registerOffset,
+        NodeType op, OpInfo opInfo, Node* callee, int argCount, int registerOffset,
         SpeculatedType prediction)
     {
…
             m_parameterSlots = parameterSlots;
 
-        int dummyThisArgument = op == Call || op == NativeCall ? 0 : 1;
+        int dummyThisArgument = op == Call || op == NativeCall || op == ProfiledCall ? 0 : 1;
         for (int i = 0 + dummyThisArgument; i < argCount; ++i)
             addVarArgChild(get(virtualRegisterForArgument(i, registerOffset)));
 
-        return addToGraph(Node::VarArg, op, OpInfo(0), OpInfo(prediction));
+        return addToGraph(Node::VarArg, op, opInfo, OpInfo(prediction));
     }
 
     Node* addCall(
-        int result, NodeType op, Node* callee, int argCount, int registerOffset,
+        int result, NodeType op, OpInfo opInfo, Node* callee, int argCount, int registerOffset,
         SpeculatedType prediction)
     {
         Node* call = addCallWithoutSettingResult(
-            op, callee, argCount, registerOffset, prediction);
+            op, opInfo, callee, argCount, registerOffset, prediction);
         VirtualRegister resultReg(result);
         if (resultReg.isValid())
…
 
     // Potential block linking targets. Must be sorted by bytecodeBegin, and
-    // cannot have two blocks that have the same bytecodeBegin. For this very
-    // reason, this is not equivalent to
+    // cannot have two blocks that have the same bytecodeBegin.
     Vector<BasicBlock*> m_blockLinkingTargets;
…
 {
     ASSERT(registerOffset <= 0);
-    CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
 
     if (callTarget->hasConstant())
         callLinkStatus = CallLinkStatus(callTarget->asJSValue()).setIsProved(true);
+
+    if ((!callLinkStatus.canOptimize() || callLinkStatus.size() != 1)
+        && !isFTL(m_graph.m_plan.mode) && Options::useFTLJIT()
+        && InlineCallFrame::isNormalCall(kind)
+        && CallEdgeLog::isEnabled()
+        && Options::dfgDoesCallEdgeProfiling()) {
+        ASSERT(op == Call || op == Construct);
+        if (op == Call)
+            op = ProfiledCall;
+        else
+            op = ProfiledConstruct;
+    }
 
     if (!callLinkStatus.canOptimize()) {
…
         // that we cannot optimize them.
 
-        addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset, prediction);
+        addCall(result, op, OpInfo(), callTarget, argumentCountIncludingThis, registerOffset, prediction);
         return;
     }
 
     unsigned nextOffset = m_currentIndex + instructionSize;
-
-    if (InternalFunction* function = callLinkStatus.internalFunction()) {
-        if (handleConstantInternalFunction(result, function, registerOffset, argumentCountIncludingThis, prediction, specializationKind)) {
-            // This phantoming has to be *after* the code for the intrinsic, to signify that
-            // the inputs must be kept alive whatever exits the intrinsic may do.
-            addToGraph(Phantom, callTarget);
-            emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
-            return;
-        }
-
-        // Can only handle this using the generic call handler.
-        addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset, prediction);
-        return;
-    }
-
-    Intrinsic intrinsic = callLinkStatus.intrinsicFor(specializationKind);
-
-    JSFunction* knownFunction = nullptr;
-    if (intrinsic != NoIntrinsic) {
-        emitFunctionChecks(callLinkStatus, callTarget, registerOffset, specializationKind);
-
-        if (handleIntrinsic(result, intrinsic, registerOffset, argumentCountIncludingThis, prediction)) {
-            // This phantoming has to be *after* the code for the intrinsic, to signify that
-            // the inputs must be kept alive whatever exits the intrinsic may do.
-            addToGraph(Phantom, callTarget);
-            emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
-            if (m_graph.compilation())
-                m_graph.compilation()->noticeInlinedCall();
-            return;
-        }
-    } else if (handleInlining(callTarget, result, callLinkStatus, registerOffset, argumentCountIncludingThis, nextOffset, kind)) {
+
+    OpInfo callOpInfo;
+
+    if (handleInlining(callTarget, result, callLinkStatus, registerOffset, argumentCountIncludingThis, nextOffset, op, kind, prediction)) {
         if (m_graph.compilation())
             m_graph.compilation()->noticeInlinedCall();
         return;
+    }
+
 #if ENABLE(FTL_NATIVE_CALL_INLINING)
-    } else if (isFTL(m_graph.m_plan.mode) && Options::optimizeNativeCalls()) {
-        JSFunction* function = callLinkStatus.function();
+    if (isFTL(m_graph.m_plan.mode) && Options::optimizeNativeCalls() && callLinkStatus.size() == 1 && !callLinkStatus.couldTakeSlowPath()) {
+        CallVariant callee = callLinkStatus[0].callee();
+        JSFunction* function = callee.function();
+        CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
         if (function && function->isHostFunction()) {
-            emitFunctionChecks(callLinkStatus, callTarget, registerOffset, specializationKind);
-            knownFunction = function;
+            emitFunctionChecks(callee, callTarget, registerOffset, specializationKind);
+            callOpInfo = OpInfo(m_graph.freeze(function));
 
-            if (op == Call)
+            if (op == Call || op == ProfiledCall)
                 op = NativeCall;
             else {
-                ASSERT(op == Construct);
+                ASSERT(op == Construct || op == ProfiledConstruct);
                 op = NativeConstruct;
             }
         }
+    }
 #endif
-    }
-    Node* call = addCall(result, op, callTarget, argumentCountIncludingThis, registerOffset, prediction);
-
-    if (knownFunction)
-        call->giveKnownFunction(knownFunction);
+
+    addCall(result, op, callOpInfo, callTarget, argumentCountIncludingThis, registerOffset, prediction);
 }
 
-void ByteCodeParser::emitFunctionChecks(const CallLinkStatus& callLinkStatus, Node* callTarget, int registerOffset, CodeSpecializationKind kind)
+void ByteCodeParser::emitFunctionChecks(CallVariant callee, Node* callTarget, int registerOffset, CodeSpecializationKind kind)
 {
     Node* thisArgument;
…
         thisArgument = 0;
 
-    if (callLinkStatus.isProved()) {
-        addToGraph(Phantom, callTarget, thisArgument);
-        return;
-    }
-
-    ASSERT(callLinkStatus.canOptimize());
-
-    if (JSFunction* function = callLinkStatus.function())
-        addToGraph(CheckFunction, OpInfo(m_graph.freeze(function)), callTarget, thisArgument);
-    else {
-        ASSERT(callLinkStatus.executable());
-
-        addToGraph(CheckExecutable, OpInfo(callLinkStatus.executable()), callTarget, thisArgument);
-    }
+    JSCell* calleeCell;
+    Node* callTargetForCheck;
+    if (callee.isClosureCall()) {
+        calleeCell = callee.executable();
+        callTargetForCheck = addToGraph(GetExecutable, callTarget);
+    } else {
+        calleeCell = callee.nonExecutableCallee();
+        callTargetForCheck = callTarget;
+    }
+
+    ASSERT(calleeCell);
+    addToGraph(CheckCell, OpInfo(m_graph.freeze(calleeCell)), callTargetForCheck, thisArgument);
+}
+
+void ByteCodeParser::undoFunctionChecks(CallVariant callee)
+{
+    removeLastNodeFromGraph(CheckCell);
+    if (callee.isClosureCall())
+        removeLastNodeFromGraph(GetExecutable);
 }
 
…
 }
 
-bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind)
+unsigned ByteCodeParser::inliningCost(CallVariant callee, int argumentCountIncludingThis, CodeSpecializationKind kind)
 {
-    static const bool verbose = false;
-
-    CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
-
     if (verbose)
-        dataLog("Considering inlining ", callLinkStatus, " into ", currentCodeOrigin(), "\n");
-
-    // First, the really simple checks: do we have an actual JS function?
-    if (!callLinkStatus.executable()) {
+        dataLog("Considering inlining ", callee, " into ", currentCodeOrigin(), "\n");
+
+    FunctionExecutable* executable = callee.functionExecutable();
+    if (!executable) {
         if (verbose)
-            dataLog("    Failing because there is no executable.\n");
-        return false;
-    }
-    if (callLinkStatus.executable()->isHostFunction()) {
-        if (verbose)
-            dataLog("    Failing because it's a host function.\n");
-        return false;
-    }
-
-    FunctionExecutable* executable = jsCast<FunctionExecutable*>(callLinkStatus.executable());
+            dataLog("    Failing because there is no function executable.");
+        return UINT_MAX;
+    }
 
     // Does the number of arguments we're passing match the arity of the target? We currently
…
         if (verbose)
             dataLog("    Failing because of arity mismatch.\n");
-        return false;
+        return UINT_MAX;
     }
 
…
     // global function, where watchpointing gives us static information. Overall, it's a rare case
     // because we expect that any hot callees would have already been compiled.
-    CodeBlock* codeBlock = executable->baselineCodeBlockFor(specializationKind);
+    CodeBlock* codeBlock = executable->baselineCodeBlockFor(kind);
     if (!codeBlock) {
         if (verbose)
             dataLog("    Failing because no code block available.\n");
-        return false;
+        return UINT_MAX;
     }
     CapabilityLevel capabilityLevel = inlineFunctionForCapabilityLevel(
-        codeBlock, specializationKind, callLinkStatus.isClosureCall());
+        codeBlock, kind, callee.isClosureCall());
     if (!canInline(capabilityLevel)) {
         if (verbose)
             dataLog("    Failing because the function is not inlineable.\n");
-        return false;
+        return UINT_MAX;
     }
 
…
         if (verbose)
             dataLog("    Failing because the caller is too large.\n");
-        return false;
+        return UINT_MAX;
     }
 
…
             if (verbose)
                 dataLog("    Failing because depth exceeded.\n");
-            return false;
+            return UINT_MAX;
         }
 
…
             if (verbose)
                 dataLog("    Failing because recursion detected.\n");
-            return false;
+            return UINT_MAX;
         }
     }
…
 
     if (verbose)
-        dataLog("    Committing to inlining.\n");
-
-    // Now we know without a doubt that we are committed to inlining. So begin the process
-    // by checking the callee (if necessary) and making sure that arguments and the callee
-    // are flushed.
-    emitFunctionChecks(callLinkStatus, callTargetNode, registerOffset, specializationKind);
-
+        dataLog("    Inlining should be possible.\n");
+
+    // It might be possible to inline.
+    return codeBlock->instructionCount();
+}
+
+void ByteCodeParser::inlineCall(Node* callTargetNode, int resultOperand, CallVariant callee, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind, CallerLinkability callerLinkability)
+{
+    CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
+
+    ASSERT(inliningCost(callee, argumentCountIncludingThis, specializationKind) != UINT_MAX);
+
+    CodeBlock* codeBlock = callee.functionExecutable()->baselineCodeBlockFor(specializationKind);
+
     // FIXME: Don't flush constants!
 
…
 
     InlineStackEntry inlineStackEntry(
-        this, codeBlock, codeBlock, m_graph.lastBlock(), callLinkStatus.function(), resultReg,
+        this, codeBlock, codeBlock, m_graph.lastBlock(), callee.function(), resultReg,
         (VirtualRegister)inlineCallFrameStart, argumentCountIncludingThis, kind);
 
…
     RELEASE_ASSERT(
         m_inlineStackTop->m_inlineCallFrame->isClosureCall
-        == callLinkStatus.isClosureCall());
-    if (callLinkStatus.isClosureCall()) {
+        == callee.isClosureCall());
+    if (callee.isClosureCall()) {
         VariableAccessData* calleeVariable =
             set(VirtualRegister(JSStack::Callee), callTargetNode, ImmediateNakedSet)->variableAccessData();
…
 
     parseCodeBlock();
-    prepareToParseBlock(); // Reset our state now that we're back to the outer code.
+    clearCaches(); // Reset our state now that we're back to the outer code.
 
     m_currentIndex = oldIndex;
…
         ASSERT(inlineStackEntry.m_callsiteBlockHead->isLinked);
 
-        // It's possible that the callsite block head is not owned by the caller.
-        if (!inlineStackEntry.m_caller->m_unlinkedBlocks.isEmpty()) {
-            // It's definitely owned by the caller, because the caller created new blocks.
-            // Assert that this all adds up.
-            ASSERT(inlineStackEntry.m_caller->m_unlinkedBlocks.last().m_block == inlineStackEntry.m_callsiteBlockHead);
-            ASSERT(inlineStackEntry.m_caller->m_unlinkedBlocks.last().m_needsNormalLinking);
-            inlineStackEntry.m_caller->m_unlinkedBlocks.last().m_needsNormalLinking = false;
-        } else {
-            // It's definitely not owned by the caller. Tell the caller that he does not
-            // need to link his callsite block head, because we did it for him.
-            ASSERT(inlineStackEntry.m_caller->m_callsiteBlockHeadNeedsLinking);
-            ASSERT(inlineStackEntry.m_caller->m_callsiteBlockHead == inlineStackEntry.m_callsiteBlockHead);
-            inlineStackEntry.m_caller->m_callsiteBlockHeadNeedsLinking = false;
-        }
+        if (callerLinkability == CallerDoesNormalLinking)
+            cancelLinkingForBlock(inlineStackEntry.m_caller, inlineStackEntry.m_callsiteBlockHead);
 
         linkBlocks(inlineStackEntry.m_unlinkedBlocks, inlineStackEntry.m_blockLinkingTargets);
…
             // in the linker's binary search.
             lastBlock->bytecodeBegin = m_currentIndex;
-            m_inlineStackTop->m_caller->m_unlinkedBlocks.append(UnlinkedBlock(m_graph.lastBlock()));
+            if (callerLinkability == CallerDoesNormalLinking) {
+                if (verbose)
+                    dataLog("Adding unlinked block ", RawPointer(m_graph.lastBlock()), " (one return)\n");
+                m_inlineStackTop->m_caller->m_unlinkedBlocks.append(UnlinkedBlock(m_graph.lastBlock()));
+            }
         }
 
         m_currentBlock = m_graph.lastBlock();
-        return true;
+        return;
     }
 
     // If we get to this point then all blocks must end in some sort of terminals.
     ASSERT(lastBlock->last()->isTerminal());
 
-
     // Need to create a new basic block for the continuation at the caller.
…
             node->targetBlock() = block.get();
             inlineStackEntry.m_unlinkedBlocks[i].m_needsEarlyReturnLinking = false;
-#if !ASSERT_DISABLED
-            blockToLink->isLinked = true;
-#endif
+            if (verbose)
+                dataLog("Marking ", RawPointer(blockToLink), " as linked (jumps to return)\n");
+            blockToLink->didLink();
         }
 
     m_currentBlock = block.get();
     ASSERT(m_inlineStackTop->m_caller->m_blockLinkingTargets.isEmpty() || m_inlineStackTop->m_caller->m_blockLinkingTargets.last()->bytecodeBegin < nextOffset);
-    m_inlineStackTop->m_caller->m_unlinkedBlocks.append(UnlinkedBlock(block.get()));
-    m_inlineStackTop->m_caller->m_blockLinkingTargets.append(block.get());
+    if (verbose)
+        dataLog("Adding unlinked block ", RawPointer(block.get()), " (many returns)\n");
+    if (callerLinkability == CallerDoesNormalLinking) {
+        m_inlineStackTop->m_caller->m_unlinkedBlocks.append(UnlinkedBlock(block.get()));
+        m_inlineStackTop->m_caller->m_blockLinkingTargets.append(block.get());
+    }
     m_graph.appendBlock(block);
     prepareToParseBlock();
-
-    // At this point we return and continue to generate code for the caller, but
-    // in the new basic block.
+}
+
+void ByteCodeParser::cancelLinkingForBlock(InlineStackEntry* inlineStackEntry, BasicBlock* block)
+{
+    // It's possible that the callsite block head is not owned by the caller.
+    if (!inlineStackEntry->m_unlinkedBlocks.isEmpty()) {
+        // It's definitely owned by the caller, because the caller created new blocks.
+        // Assert that this all adds up.
+        ASSERT_UNUSED(block, inlineStackEntry->m_unlinkedBlocks.last().m_block == block);
+        ASSERT(inlineStackEntry->m_unlinkedBlocks.last().m_needsNormalLinking);
+        inlineStackEntry->m_unlinkedBlocks.last().m_needsNormalLinking = false;
+    } else {
+        // It's definitely not owned by the caller. Tell the caller that he does not
+        // need to link his callsite block head, because we did it for him.
+        ASSERT(inlineStackEntry->m_callsiteBlockHeadNeedsLinking);
+        ASSERT_UNUSED(block, inlineStackEntry->m_callsiteBlockHead == block);
+        inlineStackEntry->m_callsiteBlockHeadNeedsLinking = false;
+    }
+}
+
+bool ByteCodeParser::attemptToInlineCall(Node* callTargetNode, int resultOperand, CallVariant callee, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind, CallerLinkability callerLinkability, SpeculatedType prediction, unsigned& inliningBalance)
+{
+    CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
+
+    if (!inliningBalance)
+        return false;
+
+    if (InternalFunction* function = callee.internalFunction()) {
+        if (handleConstantInternalFunction(resultOperand, function, registerOffset, argumentCountIncludingThis, specializationKind)) {
+            addToGraph(Phantom, callTargetNode);
+            emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
+            inliningBalance--;
+            return true;
+        }
+        return false;
+    }
+
+    Intrinsic intrinsic = callee.intrinsicFor(specializationKind);
+    if (intrinsic != NoIntrinsic) {
+        if (handleIntrinsic(resultOperand, intrinsic, registerOffset, argumentCountIncludingThis, prediction)) {
+            addToGraph(Phantom, callTargetNode);
+            emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
+            inliningBalance--;
+            return true;
+        }
+        return false;
+    }
+
+    unsigned myInliningCost = inliningCost(callee, argumentCountIncludingThis, specializationKind);
+    if (myInliningCost > inliningBalance)
+        return false;
+
+    inlineCall(callTargetNode, resultOperand, callee, registerOffset, argumentCountIncludingThis, nextOffset, kind, callerLinkability);
+    inliningBalance -= myInliningCost;
+    return true;
+}
+
+bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind kind, SpeculatedType prediction)
+{
+    if (verbose) {
+        dataLog("Handling inlining...\n");
+        dataLog("Stack: ", currentCodeOrigin(), "\n");
+    }
+    CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
+
+    if (!callLinkStatus.size()) {
+        if (verbose)
+            dataLog("Bailing inlining.\n");
+        return false;
+    }
+
+    unsigned inliningBalance = Options::maximumFunctionForCallInlineCandidateInstructionCount();
+    if (specializationKind == CodeForConstruct)
+        inliningBalance = std::min(inliningBalance, Options::maximumFunctionForConstructInlineCandidateInstructionCount());
+    if (callLinkStatus.isClosureCall())
+        inliningBalance = std::min(inliningBalance, Options::maximumFunctionForClosureCallInlineCandidateInstructionCount());
+
+    // First check if we can avoid creating control flow. Our inliner does some CFG
+    // simplification on the fly and this helps reduce compile times, but we can only leverage
+    // this in cases where we don't need control flow diamonds to check the callee.
+    if (!callLinkStatus.couldTakeSlowPath() && callLinkStatus.size() == 1) {
+        emitFunctionChecks(
+            callLinkStatus[0].callee(), callTargetNode, registerOffset, specializationKind);
+        bool result = attemptToInlineCall(
+            callTargetNode, resultOperand, callLinkStatus[0].callee(), registerOffset,
+            argumentCountIncludingThis, nextOffset, kind, CallerDoesNormalLinking, prediction,
+            inliningBalance);
+        if (!result && !callLinkStatus.isProved())
+            undoFunctionChecks(callLinkStatus[0].callee());
+        if (verbose) {
+            dataLog("Done inlining (simple).\n");
+            dataLog("Stack: ", currentCodeOrigin(), "\n");
+        }
+        return result;
+    }
+
+    // We need to create some kind of switch over callee. For now we only do this if we believe that
+    // we're in the top tier. We have two reasons for this: first, it provides us an opportunity to
+    // do more detailed polyvariant/polymorphic profiling; and second, it reduces compile times in
+    // the DFG. And by polyvariant profiling we mean polyvariant profiling of *this* call. Note that
+    // we could improve that aspect of this by doing polymorphic inlining but having the profiling
+    // also. Currently we opt against this, but it could be interesting. That would require having a
+    // separate node for call edge profiling.
+    // FIXME: Introduce the notion of a separate call edge profiling node.
+    // https://bugs.webkit.org/show_bug.cgi?id=136033
+    if (!isFTL(m_graph.m_plan.mode) || !Options::enablePolymorphicCallInlining()) {
+        if (verbose) {
+            dataLog("Bailing inlining (hard).\n");
+            dataLog("Stack: ", currentCodeOrigin(), "\n");
+        }
+        return false;
+    }
+
+    unsigned oldOffset = m_currentIndex;
+
+    bool allAreClosureCalls = true;
+    bool allAreDirectCalls = true;
+    for (unsigned i = callLinkStatus.size(); i--;) {
+        if (callLinkStatus[i].callee().isClosureCall())
+            allAreDirectCalls = false;
+        else
+            allAreClosureCalls = false;
+    }
+
+    Node* thingToSwitchOn;
+    if (allAreDirectCalls)
+        thingToSwitchOn = callTargetNode;
+    else if (allAreClosureCalls)
+        thingToSwitchOn = addToGraph(GetExecutable, callTargetNode);
+    else {
+        // FIXME: We should be able to handle this case, but it's tricky and we don't know of cases
+        // where it would be beneficial. Also, CallLinkStatus would make all callees appear like
+        // closure calls if any calls were closure calls - except for calls to internal functions.
+        // So this will only arise if some callees are internal functions and others are closures.
+        // https://bugs.webkit.org/show_bug.cgi?id=136020
+        if (verbose) {
+            dataLog("Bailing inlining (mix).\n");
+            dataLog("Stack: ", currentCodeOrigin(), "\n");
+        }
+        return false;
+    }
+
+    if (verbose) {
+        dataLog("Doing hard inlining...\n");
+        dataLog("Stack: ", currentCodeOrigin(), "\n");
+    }
+
+    // This makes me wish that we were in SSA all the time. We need to pick a variable into which to
+    // store the callee so that it will be accessible to all of the blocks we're about to create. We
+    // get away with doing an immediate-set here because we wouldn't have performed any side effects
+    // yet.
1502 if (verbose) 1503 dataLog("Register offset: ", registerOffset); 1504 VirtualRegister calleeReg(registerOffset + JSStack::Callee); 1505 calleeReg = m_inlineStackTop->remapOperand(calleeReg); 1506 if (verbose) 1507 dataLog("Callee is going to be ", calleeReg, "\n"); 1508 setDirect(calleeReg, callTargetNode, ImmediateSetWithFlush); 1509 1510 SwitchData& data = *m_graph.m_switchData.add(); 1511 data.kind = SwitchCell; 1512 addToGraph(Switch, OpInfo(&data), thingToSwitchOn); 1513 1514 BasicBlock* originBlock = m_currentBlock; 1515 if (verbose) 1516 dataLog("Marking ", RawPointer(originBlock), " as linked (origin of poly inline)\n"); 1517 originBlock->didLink(); 1518 cancelLinkingForBlock(m_inlineStackTop, originBlock); 1519 1520 // Each inlined callee will have a landing block that it returns at. They should all have jumps 1521 // to the continuation block, which we create last. 1522 Vector<BasicBlock*> landingBlocks; 1523 1524 // We may force this to true if we give up on inlining any of the edges. 1525 bool couldTakeSlowPath = callLinkStatus.couldTakeSlowPath(); 1526 1527 if (verbose) 1528 dataLog("About to loop over functions at ", currentCodeOrigin(), ".\n"); 1529 1530 for (unsigned i = 0; i < callLinkStatus.size(); ++i) { 1531 m_currentIndex = oldOffset; 1532 RefPtr<BasicBlock> block = adoptRef(new BasicBlock(UINT_MAX, m_numArguments, m_numLocals, PNaN)); 1533 m_currentBlock = block.get(); 1534 m_graph.appendBlock(block); 1535 prepareToParseBlock(); 1536 1537 Node* myCallTargetNode = getDirect(calleeReg); 1538 1539 bool inliningResult = attemptToInlineCall( 1540 myCallTargetNode, resultOperand, callLinkStatus[i].callee(), registerOffset, 1541 argumentCountIncludingThis, nextOffset, kind, CallerLinksManually, prediction, 1542 inliningBalance); 1543 1544 if (!inliningResult) { 1545 // That failed so we let the block die. Nothing interesting should have been added to 1546 // the block. We also give up on inlining any of the (less frequent) callees. 1547 ASSERT(m_currentBlock == block.get()); 1548 ASSERT(m_graph.m_blocks.last() == block); 1549 m_graph.killBlockAndItsContents(block.get()); 1550 m_graph.m_blocks.removeLast(); 1551 1552 // The fact that inlining failed means we need a slow path. 1553 couldTakeSlowPath = true; 1554 break; 1555 } 1556 1557 JSCell* thingToCaseOn; 1558 if (allAreDirectCalls) 1559 thingToCaseOn = callLinkStatus[i].callee().nonExecutableCallee(); 1560 else { 1561 ASSERT(allAreClosureCalls); 1562 thingToCaseOn = callLinkStatus[i].callee().executable(); 1563 } 1564 data.cases.append(SwitchCase(m_graph.freeze(thingToCaseOn), block.get())); 1565 m_currentIndex = nextOffset; 1566 processSetLocalQueue(); // This only comes into play for intrinsics, since normal inlined code will leave an empty queue.
1567 addToGraph(Jump); 1568 if (verbose) 1569 dataLog("Marking ", RawPointer(m_currentBlock), " as linked (tail of poly inlinee)\n"); 1570 m_currentBlock->didLink(); 1571 landingBlocks.append(m_currentBlock); 1572 1573 if (verbose) 1574 dataLog("Finished inlining ", callLinkStatus[i].callee(), " at ", currentCodeOrigin(), ".\n"); 1575 } 1576 1577 RefPtr<BasicBlock> slowPathBlock = adoptRef( 1578 new BasicBlock(UINT_MAX, m_numArguments, m_numLocals, PNaN)); 1579 m_currentIndex = oldOffset; 1580 data.fallThrough = BranchTarget(slowPathBlock.get()); 1581 m_graph.appendBlock(slowPathBlock); 1582 if (verbose) 1583 dataLog("Marking ", RawPointer(slowPathBlock.get()), " as linked (slow path block)\n"); 1584 slowPathBlock->didLink(); 1585 prepareToParseBlock(); 1586 m_currentBlock = slowPathBlock.get(); 1587 Node* myCallTargetNode = getDirect(calleeReg); 1588 if (couldTakeSlowPath) { 1589 addCall( 1590 resultOperand, callOp, OpInfo(), myCallTargetNode, argumentCountIncludingThis, 1591 registerOffset, prediction); 1592 } else { 1593 addToGraph(CheckBadCell); 1594 addToGraph(Phantom, myCallTargetNode); 1595 emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind); 1596 1597 set(VirtualRegister(resultOperand), addToGraph(BottomValue)); 1598 } 1599 1600 m_currentIndex = nextOffset; 1601 processSetLocalQueue(); 1602 addToGraph(Jump); 1603 landingBlocks.append(m_currentBlock); 1604 1605 RefPtr<BasicBlock> continuationBlock = adoptRef( 1606 new BasicBlock(UINT_MAX, m_numArguments, m_numLocals, PNaN)); 1607 m_graph.appendBlock(continuationBlock); 1608 if (verbose) 1609 dataLog("Adding unlinked block ", RawPointer(continuationBlock.get()), " (continuation)\n"); 1610 m_inlineStackTop->m_unlinkedBlocks.append(UnlinkedBlock(continuationBlock.get())); 1611 prepareToParseBlock(); 1612 m_currentBlock = continuationBlock.get(); 1613 1614 for (unsigned i = landingBlocks.size(); i--;) 1615 landingBlocks[i]->last()->targetBlock() = continuationBlock.get(); 1616 1617 m_currentIndex = oldOffset; 1618 1619 if (verbose) { 1620 dataLog("Done inlining (hard).\n"); 1621 dataLog("Stack: ", currentCodeOrigin(), "\n"); 1622 } 1350 1623 return true; 1351 1624 } … … 1646 1919 bool ByteCodeParser::handleConstantInternalFunction( 1647 1920 int resultOperand, InternalFunction* function, int registerOffset, 1648 int argumentCountIncludingThis, SpeculatedType prediction,CodeSpecializationKind kind)1921 int argumentCountIncludingThis, CodeSpecializationKind kind) 1649 1922 { 1650 1923 // If we ever find that we have a lot of internal functions that we specialize for, … … 1654 1927 // we know about is small enough, that having just a linear cascade of if statements 1655 1928 // is good enough. 1656 1657 UNUSED_PARAM(prediction); // Remove this once we do more things.1658 1929 1659 1930 if (function->classInfo() == ArrayConstructor::info()) { … … 2021 2292 void ByteCodeParser::prepareToParseBlock() 2022 2293 { 2294 clearCaches(); 2295 ASSERT(m_setLocalQueue.isEmpty()); 2296 } 2297 2298 void ByteCodeParser::clearCaches() 2299 { 2023 2300 m_constants.resize(0); 2024 2301 } … … 2060 2337 2061 2338 while (true) { 2062 for (unsigned i = 0; i < m_setLocalQueue.size(); ++i) 2063 m_setLocalQueue[i].execute(this); 2064 m_setLocalQueue.resize(0); 2339 processSetLocalQueue(); 2065 2340 2066 2341 // Don't extend over jump destinations. 
… … 2206 2481 if (!cachedFunction 2207 2482 || m_inlineStackTop->m_profiledBlock->couldTakeSlowCase(m_currentIndex) 2208 || m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, Bad Function)) {2483 || m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadCell)) { 2209 2484 set(VirtualRegister(currentInstruction[1].u.operand), get(VirtualRegister(JSStack::Callee))); 2210 2485 } else { … … 2212 2487 ASSERT(cachedFunction->inherits(JSFunction::info())); 2213 2488 Node* actualCallee = get(VirtualRegister(JSStack::Callee)); 2214 addToGraph(Check Function, OpInfo(frozen), actualCallee);2489 addToGraph(CheckCell, OpInfo(frozen), actualCallee); 2215 2490 set(VirtualRegister(currentInstruction[1].u.operand), addToGraph(JSConstant, OpInfo(frozen))); 2216 2491 } … … 2894 3169 ASSERT(pointerIsFunction(currentInstruction[2].u.specialPointer)); 2895 3170 addToGraph( 2896 Check Function,3171 CheckCell, 2897 3172 OpInfo(m_graph.freeze(static_cast<JSCell*>(actualPointerFor( 2898 3173 m_inlineStackTop->m_codeBlock, currentInstruction[2].u.specialPointer)))), … … 3318 3593 } 3319 3594 3320 #if !ASSERT_DISABLED 3321 block->isLinked = true;3322 #endif 3595 if (verbose) 3596 dataLog("Marking ", RawPointer(block), " as linked (actually did linking)\n"); 3597 block->didLink(); 3323 3598 } 3324 3599 … … 3326 3601 { 3327 3602 for (size_t i = 0; i < unlinkedBlocks.size(); ++i) { 3603 if (verbose) 3604 dataLog("Attempting to link ", RawPointer(unlinkedBlocks[i].m_block), "\n"); 3328 3605 if (unlinkedBlocks[i].m_needsNormalLinking) { 3606 if (verbose) 3607 dataLog(" Does need normal linking.\n"); 3329 3608 linkBlock(unlinkedBlocks[i].m_block, possibleTargets); 3330 3609 unlinkedBlocks[i].m_needsNormalLinking = false; … … 3493 3772 void ByteCodeParser::parseCodeBlock() 3494 3773 { 3495 prepareToParseBlock();3774 clearCaches(); 3496 3775 3497 3776 CodeBlock* codeBlock = m_inlineStackTop->m_codeBlock; … … 3559 3838 // a peephole coalescing of this block in the if statement above. So, we're 3560 3839 // generating suboptimal code and leaving more work for the CFG simplifier. 3561 ASSERT(m_inlineStackTop->m_unlinkedBlocks.isEmpty() || m_inlineStackTop->m_unlinkedBlocks.last().m_block->bytecodeBegin < m_currentIndex); 3840 if (!m_inlineStackTop->m_unlinkedBlocks.isEmpty()) { 3841 unsigned lastBegin = 3842 m_inlineStackTop->m_unlinkedBlocks.last().m_block->bytecodeBegin; 3843 ASSERT_UNUSED( 3844 lastBegin, lastBegin == UINT_MAX || lastBegin < m_currentIndex); 3845 } 3562 3846 m_inlineStackTop->m_unlinkedBlocks.append(UnlinkedBlock(block.get())); 3563 3847 m_inlineStackTop->m_blockLinkingTargets.append(block.get()); -
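For orientation, here is a minimal standalone model (illustrative only, not code from this patch) of the control-flow shape that the handleInlining() changes above build at a polymorphic call site: a switch on callee identity, one block per inlined callee, a slow-path block that either makes a virtual call or exits, and a shared continuation.

    #include <cstdio>

    // Stand-ins for two profiled callees and the generic (virtual) call path.
    using Callee = int (*)(int);
    int addOne(int x) { return x + 1; }
    int timesTwo(int x) { return x * 2; }
    int virtualCall(Callee f, int x) { return f(x); } // the slow path

    int polymorphicCallSite(Callee f, int x)
    {
        // SwitchCell: dispatch on the callee's identity.
        if (f == addOne)
            return x + 1;         // inlined body of addOne
        if (f == timesTwo)
            return x * 2;         // inlined body of timesTwo
        return virtualCall(f, x); // data.fallThrough: couldTakeSlowPath
    }

    int main()
    {
        std::printf("%d %d\n", polymorphicCallSite(addOne, 3), polymorphicCallSite(timesTwo, 3));
        return 0;
    }

When couldTakeSlowPath is false, the fall-through instead becomes the CheckBadCell/BottomValue pair seen above, i.e. a guaranteed OSR exit rather than a virtual call.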
trunk/Source/JavaScriptCore/dfg/DFGCPSRethreadingPhase.cpp
r172129 r172940 91 91 break; 92 92 case Phantom: 93 if (!node->child1()) 93 if (!node->child1()) { 94 m_graph.m_allocator.free(node); 94 95 continue; 96 } 95 97 switch (node->child1()->op()) { 96 98 case Phi: -
trunk/Source/JavaScriptCore/dfg/DFGClobberize.h
r172808 r172940 145 145 case MakeRope: 146 146 case ValueToInt32: 147 case GetExecutable: 148 case BottomValue: 147 149 def(PureValue(node)); 148 150 return; … … 240 242 return; 241 243 242 case CheckFunction: 243 def(PureValue(CheckFunction, AdjacencyList(AdjacencyList::Fixed, node->child1()), node->function())); 244 return; 245 246 case CheckExecutable: 247 def(PureValue(node, node->executable())); 244 case CheckCell: 245 def(PureValue(CheckCell, AdjacencyList(AdjacencyList::Fixed, node->child1()), node->cellOperand())); 248 246 return; 249 247 … … 264 262 case Throw: 265 263 case ForceOSRExit: 264 case CheckBadCell: 266 265 case Return: 267 266 case Unreachable: … … 359 358 case Call: 360 359 case Construct: 360 case ProfiledCall: 361 case ProfiledConstruct: 361 362 case NativeCall: 362 363 case NativeConstruct: -
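Why def(PureValue(...)) matters for the new GetExecutable node, reduced to a simplified model (hypothetical names, not JSC code): declaring a node pure lets value numbering key it by (opcode, operands) and reuse an earlier identical computation.

    #include <map>
    #include <utility>

    using Key = std::pair<int, int>; // (opcode, operand)

    // Value-number a pure node: identical (opcode, operand) pairs collapse to
    // the same id, which is what lets two GetExecutable nodes on the same
    // function child fold into one.
    int valueNumber(std::map<Key, int>& table, Key key, int& nextId)
    {
        auto it = table.find(key);
        if (it != table.end())
            return it->second;
        return table[key] = nextId++;
    }

    int main()
    {
        std::map<Key, int> table;
        int nextId = 0;
        int a = valueNumber(table, Key(1, 7), nextId);
        int b = valueNumber(table, Key(1, 7), nextId);
        return a == b ? 0 : 1; // same id: the second node can be eliminated
    }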
trunk/Source/JavaScriptCore/dfg/DFGCommon.h
r172737 r172940 62 62 RefNode, 63 63 DontRefNode 64 }; 65 66 enum SwitchKind { 67 SwitchImm, 68 SwitchChar, 69 SwitchString, 70 SwitchCell 64 71 }; 65 72 -
trunk/Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp
r172737 r172940 143 143 } 144 144 145 case Check Function: {146 if (m_state.forNode(node->child1()).value() != node-> function()->value())145 case CheckCell: { 146 if (m_state.forNode(node->child1()).value() != node->cellOperand()->value()) 147 147 break; 148 148 node->convertToPhantom(); … … 385 385 break; 386 386 } 387 388 case ProfiledCall: 389 case ProfiledConstruct: { 390 if (!m_state.forNode(m_graph.varArgChild(node, 0)).m_value) 391 break; 392 393 // If we were able to prove that the callee is a constant then the normal call 394 // inline cache will record this callee. This means that there is no need to do any 395 // additional profiling. 396 node->setOp(node->op() == ProfiledCall ? Call : Construct); 397 eliminated = true; 398 break; 399 } 387 400 388 401 default: -
trunk/Source/JavaScriptCore/dfg/DFGDoesGC.cpp
r172808 r172940 92 92 case PutByIdDirect: 93 93 case CheckStructure: 94 case CheckExecutable:94 case GetExecutable: 95 95 case GetButterfly: 96 96 case CheckArray: … … 105 105 case VariableWatchpoint: 106 106 case VarInjectionWatchpoint: 107 case Check Function:107 case CheckCell: 108 108 case AllocationProfileWatchpoint: 109 109 case RegExpExec: … … 120 120 case NativeCall: 121 121 case NativeConstruct: 122 case ProfiledCall: 123 case ProfiledConstruct: 122 124 case Breakpoint: 123 125 case ProfileWillCall: … … 196 198 case FiatInt52: 197 199 case BooleanToNumber: 200 case CheckBadCell: 201 case BottomValue: 198 202 return false; 199 203 -
trunk/Source/JavaScriptCore/dfg/DFGDriver.cpp
r165405 r172940 90 90 } 91 91 92 if (CallEdgeLog::isEnabled()) 93 vm.ensureCallEdgeLog().processLog(); 94 92 95 RefPtr<Plan> plan = adoptRef( 93 96 new Plan(codeBlock, profiledDFGCodeBlock, mode, osrEntryBytecodeIndex, mustHandleValues)); -
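The driver drains the log before planning a compile, presumably because the raw log is only a buffer of pending call events; the compiler's view of the call graph is the per-site profiles, which see an edge only once it has been merged in. A sketch of that drain-before-read pattern, with hypothetical types:

    #include <vector>

    struct CallEdgeProfileStub { int calls = 0; };
    struct LogEntryStub { CallEdgeProfileStub* profile; };

    // Merge pending events into their profiles, then empty the buffer; the
    // compiler consults the profiles only after this has run.
    void processLog(std::vector<LogEntryStub>& log)
    {
        for (LogEntryStub& entry : log)
            entry.profile->calls++;
        log.clear();
    }

    int main()
    {
        CallEdgeProfileStub profile;
        std::vector<LogEntryStub> log { { &profile }, { &profile } };
        processLog(log);
        return profile.calls == 2 ? 0 : 1;
    }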
trunk/Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
r172808 r172940 737 737 fixEdge<StringUse>(node->child1()); 738 738 break; 739 case SwitchCell: 740 if (node->child1()->shouldSpeculateCell()) 741 fixEdge<CellUse>(node->child1()); 742 // else it's fine for this to have UntypedUse; we will handle this by just making 743 // non-cells take the default case. 744 break; 739 745 } 740 746 break; … … 898 904 } 899 905 900 case CheckExecutable: {906 case GetExecutable: { 901 907 fixEdge<FunctionUse>(node->child1()); 902 908 break; … … 904 910 905 911 case CheckStructure: 906 case Check Function:912 case CheckCell: 907 913 case CheckHasInstance: 908 914 case CreateThis: … … 1121 1127 case Call: 1122 1128 case Construct: 1129 case ProfiledCall: 1130 case ProfiledConstruct: 1123 1131 case NativeCall: 1124 1132 case NativeConstruct: … … 1150 1158 case CountExecution: 1151 1159 case ForceOSRExit: 1160 case CheckBadCell: 1152 1161 case CheckWatchdogTimer: 1153 1162 case Unreachable: … … 1160 1169 case MovHint: 1161 1170 case ZombieHint: 1171 case BottomValue: 1162 1172 break; 1163 1173 #else -
trunk/Source/JavaScriptCore/dfg/DFGGraph.cpp
r172737 r172940 223 223 if (node->hasTransition()) 224 224 out.print(comma, pointerDumpInContext(node->transition(), context)); 225 if (node->hasFunction()) { 226 out.print(comma, "function(", pointerDump(node->function()), ", "); 227 if (node->function()->value().isCell() 228 && node->function()->value().asCell()->inherits(JSFunction::info())) { 229 JSFunction* function = jsCast<JSFunction*>(node->function()->value().asCell()); 230 if (function->isHostFunction()) 231 out.print("<host function>"); 232 else 233 out.print(FunctionExecutableDump(function->jsExecutable())); 234 } else 235 out.print("<not JSFunction>"); 236 out.print(")"); 237 } 238 if (node->hasExecutable()) { 239 if (node->executable()->inherits(FunctionExecutable::info())) 240 out.print(comma, "executable(", FunctionExecutableDump(jsCast<FunctionExecutable*>(node->executable())), ")"); 241 else 242 out.print(comma, "executable(not function: ", RawPointer(node->executable()), ")"); 225 if (node->hasCellOperand()) { 226 if (!node->cellOperand()->value() || !node->cellOperand()->value().isCell()) 227 out.print(comma, "invalid cell operand: ", node->cellOperand()->value()); 228 else { 229 out.print(comma, pointerDump(node->cellOperand()->value().asCell())); 230 if (node->cellOperand()->value().isCell()) { 231 CallVariant variant(node->cellOperand()->value().asCell()); 232 if (ExecutableBase* executable = variant.executable()) { 233 if (executable->isHostFunction()) 234 out.print(comma, "<host function>"); 235 else if (FunctionExecutable* functionExecutable = jsDynamicCast<FunctionExecutable*>(executable)) 236 out.print(comma, FunctionExecutableDump(functionExecutable)); 237 else 238 out.print(comma, "<non-function executable>"); 239 } 240 } 241 } 243 242 } 244 243 if (node->hasFunctionDeclIndex()) { … … 986 985 987 986 switch (node->op()) { 988 case CheckExecutable:989 visitor.appendUnbarrieredReadOnlyPointer(node->executable());990 break;991 992 987 case CheckStructure: 993 988 for (unsigned i = node->structureSet().size(); i--;) -
trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
r172867 r172940 189 189 for (unsigned j = data.cases.size(); j--;) { 190 190 SwitchCase& myCase = data.cases[j]; 191 table.ctiOffsets[myCase.value.switchLookupValue( ) - table.min] =191 table.ctiOffsets[myCase.value.switchLookupValue(data.kind) - table.min] = 192 192 linkBuffer.locationOf(m_blockHeads[myCase.target.block->index]); 193 193 } -
trunk/Source/JavaScriptCore/dfg/DFGLazyJSValue.cpp
r171613 r172940 114 114 } 115 115 116 uintptr_t LazyJSValue::switchLookupValue(SwitchKind kind) const 117 { 118 // NB. Not every kind of JSValue will be able to give you a switch lookup 119 // value, and this method will assert, or do bad things, if you use it 120 // for a kind of value that can't. 121 switch (m_kind) { 122 case KnownValue: 123 switch (kind) { 124 case SwitchImm: 125 return value()->value().asInt32(); 126 case SwitchCell: 127 return bitwise_cast<uintptr_t>(value()->value().asCell()); 128 default: 129 RELEASE_ASSERT_NOT_REACHED(); 130 return 0; 131 } 132 case SingleCharacterString: 133 switch (kind) { 134 case SwitchChar: 135 return character(); 136 default: 137 RELEASE_ASSERT_NOT_REACHED(); 138 return 0; 139 } 140 default: 141 RELEASE_ASSERT_NOT_REACHED(); 142 return 0; 143 } 144 } 145 116 146 void LazyJSValue::dumpInContext(PrintStream& out, DumpContext* context) const 117 147 { -
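The SwitchCell case above amounts to treating the cell pointer's bits as the switch key, exactly what the bitwise_cast does. A standalone illustration (names hypothetical):

    #include <cstdint>

    struct CellStub { int payload; };

    // A cell's identity is its address, so the pointer bits serve directly as
    // the lookup value for a SwitchCell jump table.
    uintptr_t switchLookupValueForCell(const CellStub* cell)
    {
        return reinterpret_cast<uintptr_t>(cell);
    }

    int main()
    {
        CellStub a, b;
        // Distinct cells yield distinct keys; the same cell always yields the same key.
        return (switchLookupValueForCell(&a) != switchLookupValueForCell(&b)
            && switchLookupValueForCell(&a) == switchLookupValueForCell(&a)) ? 0 : 1;
    }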
trunk/Source/JavaScriptCore/dfg/DFGLazyJSValue.h
r171613 r172940 29 29 #if ENABLE(DFG_JIT) 30 30 31 #include "DFGCommon.h" 31 32 #include "DFGFrozenValue.h" 32 33 #include <wtf/text/StringImpl.h> … … 96 97 TriState strictEqual(const LazyJSValue& other) const; 97 98 98 unsigned switchLookupValue() const 99 { 100 // NB. Not every kind of JSValue will be able to give you a switch lookup 101 // value, and this method will assert, or do bad things, if you use it 102 // for a kind of value that can't. 103 switch (m_kind) { 104 case KnownValue: 105 return value()->value().asInt32(); 106 case SingleCharacterString: 107 return character(); 108 default: 109 RELEASE_ASSERT_NOT_REACHED(); 110 return 0; 111 } 112 } 99 uintptr_t switchLookupValue(SwitchKind) const; 113 100 114 101 void dump(PrintStream&) const; -
trunk/Source/JavaScriptCore/dfg/DFGNode.cpp
r171660 r172940 114 114 out.print("SwitchString"); 115 115 return; 116 case SwitchCell: 117 out.print("SwitchCell"); 118 return; 116 119 } 117 120 RELEASE_ASSERT_NOT_REACHED(); -
trunk/Source/JavaScriptCore/dfg/DFGNode.h
r172176 r172940 158 158 }; 159 159 160 enum SwitchKind {161 SwitchImm,162 SwitchChar,163 SwitchString164 };165 166 160 struct SwitchData { 167 161 // Initializes most fields to obviously invalid values. Anyone … … 186 180 // a constant index, argument, or identifier) from a Node*. 187 181 struct OpInfo { 182 OpInfo() : m_value(0) { } 188 183 explicit OpInfo(int32_t value) : m_value(static_cast<uintptr_t>(value)) { } 189 184 explicit OpInfo(uint32_t value) : m_value(static_cast<uintptr_t>(value)) { } … … 1010 1005 case Call: 1011 1006 case Construct: 1007 case ProfiledCall: 1008 case ProfiledConstruct: 1012 1009 case NativeCall: 1013 1010 case NativeConstruct: … … 1045 1042 } 1046 1043 1047 bool canBeKnownFunction() 1048 { 1049 switch (op()) { 1044 bool hasCellOperand() 1045 { 1046 switch (op()) { 1047 case AllocationProfileWatchpoint: 1048 case CheckCell: 1050 1049 case NativeConstruct: 1051 1050 case NativeCall: … … 1056 1055 } 1057 1056 1058 bool hasKnownFunction() 1059 { 1060 switch (op()) { 1061 case NativeConstruct: 1062 case NativeCall: 1063 return (bool)m_opInfo; 1064 default: 1065 return false; 1066 } 1067 } 1068 1069 JSFunction* knownFunction() 1070 { 1071 ASSERT(canBeKnownFunction()); 1072 return bitwise_cast<JSFunction*>(m_opInfo); 1073 } 1074 1075 void giveKnownFunction(JSFunction* callData) 1076 { 1077 ASSERT(canBeKnownFunction()); 1078 m_opInfo = bitwise_cast<uintptr_t>(callData); 1079 } 1080 1081 bool hasFunction() 1082 { 1083 switch (op()) { 1084 case CheckFunction: 1085 case AllocationProfileWatchpoint: 1086 return true; 1087 default: 1088 return false; 1089 } 1090 } 1091 1092 FrozenValue* function() 1093 { 1094 ASSERT(hasFunction()); 1057 FrozenValue* cellOperand() 1058 { 1059 ASSERT(hasCellOperand()); 1095 1060 return reinterpret_cast<FrozenValue*>(m_opInfo); 1096 1061 } 1097 1062 1098 bool hasExecutable() 1099 { 1100 return op() == CheckExecutable; 1101 } 1102 1103 ExecutableBase* executable() 1104 { 1105 return jsCast<ExecutableBase*>(reinterpret_cast<JSCell*>(m_opInfo)); 1063 void setCellOperand(FrozenValue* value) 1064 { 1065 ASSERT(hasCellOperand()); 1066 m_opInfo = bitwise_cast<uintptr_t>(value); 1106 1067 } 1107 1068 -
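The shape of this refactor, reduced to a standalone sketch (all types hypothetical): the per-kind accessors (function(), executable(), knownFunction()) collapse into one cellOperand() accessor guarded by an opcode predicate.

    #include <cassert>
    #include <cstdint>

    enum class Op { CheckCell, AllocationProfileWatchpoint, ArithAdd };

    struct FrozenValueStub { const void* cell; };

    struct NodeStub {
        Op op;
        uintptr_t opInfo;

        // One predicate replaces hasFunction()/hasExecutable()/hasKnownFunction().
        bool hasCellOperand() const
        {
            return op == Op::CheckCell || op == Op::AllocationProfileWatchpoint;
        }
        FrozenValueStub* cellOperand() const
        {
            assert(hasCellOperand());
            return reinterpret_cast<FrozenValueStub*>(opInfo);
        }
    };

    int main()
    {
        FrozenValueStub frozen { nullptr };
        NodeStub node { Op::CheckCell, reinterpret_cast<uintptr_t>(&frozen) };
        return node.cellOperand() == &frozen ? 0 : 1;
    }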
trunk/Source/JavaScriptCore/dfg/DFGNodeType.h
r172808 r172940 154 154 macro(PutByIdDirect, NodeMustGenerate | NodeClobbersWorld) \ 155 155 macro(CheckStructure, NodeMustGenerate) \ 156 macro( CheckExecutable, NodeMustGenerate) \156 macro(GetExecutable, NodeResultJS) \ 157 157 macro(PutStructure, NodeMustGenerate) \ 158 158 macro(AllocatePropertyStorage, NodeMustGenerate | NodeResultStorage) \ … … 186 186 macro(VarInjectionWatchpoint, NodeMustGenerate) \ 187 187 macro(FunctionReentryWatchpoint, NodeMustGenerate) \ 188 macro(CheckFunction, NodeMustGenerate) \ 188 macro(CheckCell, NodeMustGenerate) \ 189 macro(CheckBadCell, NodeMustGenerate) \ 189 190 macro(AllocationProfileWatchpoint, NodeMustGenerate) \ 190 191 macro(CheckInBounds, NodeMustGenerate) \ … … 215 216 macro(Call, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \ 216 217 macro(Construct, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \ 218 macro(ProfiledCall, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \ 219 macro(ProfiledConstruct, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \ 217 220 macro(NativeCall, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \ 218 221 macro(NativeConstruct, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \ … … 287 290 macro(ForceOSRExit, NodeMustGenerate) \ 288 291 \ 292 /* Vends a bottom JS value. It is invalid to ever execute this. Useful for cases */\ 293 /* where we know that we would have exited but we'd like to still track the control */\ 294 /* flow. */\ 295 macro(BottomValue, NodeResultJS) \ 296 \ 289 297 /* Checks the watchdog timer. If the timer has fired, we OSR exit to the */ \ 290 298 /* baseline JIT to redo the watchdog timer check, and service the timer. */ \ -
trunk/Source/JavaScriptCore/dfg/DFGPhantomCanonicalizationPhase.cpp
r172176 r172940 93 93 } 94 94 95 if (node->children.isEmpty()) 95 if (node->children.isEmpty()) { 96 m_graph.m_allocator.free(node); 96 97 continue; 98 } 97 99 98 100 node->convertToCheck(); -
trunk/Source/JavaScriptCore/dfg/DFGPhantomRemovalPhase.cpp
r172176 r172940 126 126 127 127 if (node->children.isEmpty()) { 128 m_graph.m_allocator.free(node); 128 129 changed = true; 129 130 continue; … … 143 144 } 144 145 if (node->children.isEmpty()) { 146 m_graph.m_allocator.free(node); 145 147 changed = true; 146 148 continue; … … 150 152 151 153 case HardPhantom: { 152 if (node->children.isEmpty()) 154 if (node->children.isEmpty()) { 155 m_graph.m_allocator.free(node); 153 156 continue; 157 } 154 158 break; 155 159 } -
trunk/Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
r172808 r172940 189 189 case Call: 190 190 case Construct: 191 case ProfiledCall: 192 case ProfiledConstruct: 191 193 case NativeCall: 192 194 case NativeConstruct: … … 197 199 } 198 200 199 case GetGetterSetterByOffset: { 201 case GetGetterSetterByOffset: 202 case GetExecutable: { 200 203 changed |= setPrediction(SpecCellOther); 201 204 break; … … 643 646 case SetArgument: 644 647 case CheckStructure: 645 case Check Executable:646 case Check Function:648 case CheckCell: 649 case CheckBadCell: 647 650 case PutStructure: 648 651 case TearOffActivation: … … 666 669 break; 667 670 671 // This gets ignored because it only pretends to produce a value. 672 case BottomValue: 673 break; 674 668 675 // This gets ignored because it already has a prediction. 669 676 case ExtractOSREntryLocal: -
trunk/Source/JavaScriptCore/dfg/DFGSafeToExecute.h
r172808 r172940 160 160 case PutByIdDirect: 161 161 case CheckStructure: 162 case CheckExecutable:162 case GetExecutable: 163 163 case GetButterfly: 164 164 case CheckArray: … … 175 175 case VariableWatchpoint: 176 176 case VarInjectionWatchpoint: 177 case CheckFunction: 177 case CheckCell: 178 case CheckBadCell: 178 179 case AllocationProfileWatchpoint: 179 180 case RegExpExec: … … 188 189 case Call: 189 190 case Construct: 191 case ProfiledCall: 192 case ProfiledConstruct: 190 193 case NewObject: 191 194 case NewArray: … … 274 277 return false; // TODO: add a check for already checked. https://bugs.webkit.org/show_bug.cgi?id=133769 275 278 279 case BottomValue: 280 // If in doubt, assume that this isn't safe to execute, just because we have no way of 281 // compiling this node. 282 return false; 283 276 284 case GetByVal: 277 285 case GetIndexedPropertyStorage: -
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
r172176 r172940 5355 5355 emitSwitchString(node, data); 5356 5356 return; 5357 } 5358 case SwitchCell: { 5359 DFG_CRASH(m_jit.graph(), node, "Bad switch kind"); 5360 return; 5357 5361 } } 5358 5362 RELEASE_ASSERT_NOT_REACHED(); -
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
r172808 r172940 641 641 void SpeculativeJIT::emitCall(Node* node) 642 642 { 643 bool isCall = node->op() == Call ;643 bool isCall = node->op() == Call || node->op() == ProfiledCall; 644 644 if (!isCall) 645 ASSERT(node->op() == Construct );645 ASSERT(node->op() == Construct || node->op() == ProfiledConstruct); 646 646 647 647 // For constructors, the this argument is not passed but we have to make space … … 689 689 690 690 m_jit.emitStoreCodeOrigin(node->origin.semantic); 691 692 CallLinkInfo* info = m_jit.codeBlock()->addCallLinkInfo(); 693 694 if (node->op() == ProfiledCall || node->op() == ProfiledConstruct) { 695 m_jit.vm()->callEdgeLog->emitLogCode( 696 m_jit, info->callEdgeProfile, callee.jsValueRegs()); 697 } 691 698 692 699 slowPath.append(branchNotCell(callee.jsValueRegs())); … … 714 721 m_jit.move(calleeTagGPR, GPRInfo::regT1); 715 722 } 716 CallLinkInfo* info = m_jit.codeBlock()->addCallLinkInfo();717 723 m_jit.move(MacroAssembler::TrustedImmPtr(info), GPRInfo::regT2); 718 724 JITCompiler::Call slowCall = m_jit.nearCall(); … … 3676 3682 break; 3677 3683 3678 case CheckFunction: { 3684 case CheckCell: { 3685 SpeculateCellOperand cell(this, node->child1()); 3686 speculationCheck(BadCell, JSValueSource::unboxedCell(cell.gpr()), node->child1(), m_jit.branchWeakPtr(JITCompiler::NotEqual, cell.gpr(), node->cellOperand()->value().asCell())); 3687 noResult(node); 3688 break; 3689 } 3690 3691 case GetExecutable: { 3679 3692 SpeculateCellOperand function(this, node->child1()); 3680 speculationCheck(BadFunction, JSValueSource::unboxedCell(function.gpr()), node->child1(), m_jit.branchWeakPtr(JITCompiler::NotEqual, function.gpr(), node->function()->value().asCell())); 3681 noResult(node); 3682 break; 3683 } 3684 3685 case CheckExecutable: { 3686 SpeculateCellOperand function(this, node->child1()); 3687 speculateCellType(node->child1(), function.gpr(), SpecFunction, JSFunctionType); 3688 speculationCheck(BadExecutable, JSValueSource::unboxedCell(function.gpr()), node->child1(), m_jit.branchWeakPtr(JITCompiler::NotEqual, JITCompiler::Address(function.gpr(), JSFunction::offsetOfExecutable()), node->executable())); 3689 noResult(node); 3693 GPRTemporary result(this, Reuse, function); 3694 GPRReg functionGPR = function.gpr(); 3695 GPRReg resultGPR = result.gpr(); 3696 speculateCellType(node->child1(), functionGPR, SpecFunction, JSFunctionType); 3697 m_jit.loadPtr(JITCompiler::Address(functionGPR, JSFunction::offsetOfExecutable()), resultGPR); 3698 cellResult(resultGPR, node); 3690 3699 break; 3691 3700 } … … 4157 4166 case Call: 4158 4167 case Construct: 4168 case ProfiledCall: 4169 case ProfiledConstruct: 4159 4170 emitCall(node); 4160 4171 break; … … 4893 4904 case NativeCall: 4894 4905 case NativeConstruct: 4906 case CheckBadCell: 4907 case BottomValue: 4895 4908 RELEASE_ASSERT_NOT_REACHED(); 4896 4909 break; -
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
r172808 r172940 627 627 void SpeculativeJIT::emitCall(Node* node) 628 628 { 629 630 bool isCall = node->op() == Call; 629 bool isCall = node->op() == Call || node->op() == ProfiledCall; 631 630 if (!isCall) 632 DFG_ASSERT(m_jit.graph(), node, node->op() == Construct );631 DFG_ASSERT(m_jit.graph(), node, node->op() == Construct || node->op() == ProfiledConstruct); 633 632 634 633 // For constructors, the this argument is not passed but we have to make space … … 671 670 m_jit.emitStoreCodeOrigin(node->origin.semantic); 672 671 672 CallLinkInfo* callLinkInfo = m_jit.codeBlock()->addCallLinkInfo(); 673 674 if (node->op() == ProfiledCall || node->op() == ProfiledConstruct) { 675 m_jit.vm()->callEdgeLog->emitLogCode( 676 m_jit, callLinkInfo->callEdgeProfile, JSValueRegs(calleeGPR)); 677 } 678 673 679 slowPath = m_jit.branchPtrWithPatch(MacroAssembler::NotEqual, calleeGPR, targetToCheck, MacroAssembler::TrustedImmPtr(0)); 674 680 … … 683 689 684 690 m_jit.move(calleeGPR, GPRInfo::regT0); // Callee needs to be in regT0 685 CallLinkInfo* callLinkInfo = m_jit.codeBlock()->addCallLinkInfo();686 691 m_jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::regT2); // Link info needs to be in regT2 687 692 JITCompiler::Call slowCall = m_jit.nearCall(); … … 3769 3774 break; 3770 3775 3771 case CheckFunction: { 3776 case CheckCell: { 3777 SpeculateCellOperand cell(this, node->child1()); 3778 speculationCheck(BadCell, JSValueSource::unboxedCell(cell.gpr()), node->child1(), m_jit.branchWeakPtr(JITCompiler::NotEqual, cell.gpr(), node->cellOperand()->value().asCell())); 3779 noResult(node); 3780 break; 3781 } 3782 3783 case GetExecutable: { 3772 3784 SpeculateCellOperand function(this, node->child1()); 3773 speculationCheck(BadFunction, JSValueSource::unboxedCell(function.gpr()), node->child1(), m_jit.branchWeakPtr(JITCompiler::NotEqual, function.gpr(), node->function()->value().asCell())); 3774 noResult(node); 3775 break; 3776 } 3777 3778 case CheckExecutable: { 3779 SpeculateCellOperand function(this, node->child1()); 3780 speculateCellType(node->child1(), function.gpr(), SpecFunction, JSFunctionType); 3781 speculationCheck(BadExecutable, JSValueSource::unboxedCell(function.gpr()), node->child1(), m_jit.branchWeakPtr(JITCompiler::NotEqual, JITCompiler::Address(function.gpr(), JSFunction::offsetOfExecutable()), node->executable())); 3782 noResult(node); 3785 GPRTemporary result(this, Reuse, function); 3786 GPRReg functionGPR = function.gpr(); 3787 GPRReg resultGPR = result.gpr(); 3788 speculateCellType(node->child1(), functionGPR, SpecFunction, JSFunctionType); 3789 m_jit.loadPtr(JITCompiler::Address(functionGPR, JSFunction::offsetOfExecutable()), resultGPR); 3790 cellResult(resultGPR, node); 3783 3791 break; 3784 3792 } … … 4220 4228 case Call: 4221 4229 case Construct: 4230 case ProfiledCall: 4231 case ProfiledConstruct: 4222 4232 emitCall(node); 4223 4233 break; 4224 4234 4225 4235 case CreateActivation: { 4226 4236 DFG_ASSERT(m_jit.graph(), node, !node->origin.semantic.inlineCallFrame); … … 4971 4981 case MultiPutByOffset: 4972 4982 case FiatInt52: 4973 DFG_CRASH(m_jit.graph(), node, "Unexpected FTL node"); 4983 case CheckBadCell: 4984 case BottomValue: 4985 DFG_CRASH(m_jit.graph(), node, "Unexpected node"); 4974 4986 break; 4975 4987 } -
trunk/Source/JavaScriptCore/dfg/DFGStructureRegistrationPhase.cpp
r172737 r172940 63 63 64 64 switch (node->op()) { 65 case CheckExecutable:66 registerStructure(node->executable()->structure());67 break;68 69 65 case CheckStructure: 70 66 registerStructures(node->structureSet()); -
trunk/Source/JavaScriptCore/dfg/DFGTierUpCheckInjectionPhase.cpp
r164229 r172940 51 51 return false; 52 52 53 if (m_graph.m_profiledBlock->m_didFailFTLCompilation) 53 if (m_graph.m_profiledBlock->m_didFailFTLCompilation) { 54 removeFTLProfiling(); 54 55 return false; 56 } 55 57 56 58 #if ENABLE(FTL_JIT) 57 59 FTL::CapabilityLevel level = FTL::canCompile(m_graph); 58 if (level == FTL::CannotCompile) 60 if (level == FTL::CannotCompile) { 61 removeFTLProfiling(); 59 62 return false; 63 } 60 64 61 65 if (!Options::enableOSREntryToFTL()) … … 119 123 #endif // ENABLE(FTL_JIT) 120 124 } 125 126 private: 127 void removeFTLProfiling() 128 { 129 for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) { 130 BasicBlock* block = m_graph.block(blockIndex); 131 if (!block) 132 continue; 133 134 for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) { 135 Node* node = block->at(nodeIndex); 136 switch (node->op()) { 137 case ProfiledCall: 138 node->setOp(Call); 139 break; 140 141 case ProfiledConstruct: 142 node->setOp(Construct); 143 break; 144 145 default: 146 break; 147 } 148 } 149 } 150 } 121 151 }; 122 152 -
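The demotion pass in miniature (hypothetical graph representation): when FTL compilation is off the table, every ProfiledCall/ProfiledConstruct is rewritten back to its plain form so the extra logging never runs.

    #include <vector>

    enum class Op { Call, Construct, ProfiledCall, ProfiledConstruct, Other };

    void removeFTLProfiling(std::vector<std::vector<Op>>& blocks)
    {
        for (std::vector<Op>& block : blocks) {
            for (Op& op : block) {
                if (op == Op::ProfiledCall)
                    op = Op::Call;          // keep the call, drop the logging
                else if (op == Op::ProfiledConstruct)
                    op = Op::Construct;
            }
        }
    }

    int main()
    {
        std::vector<std::vector<Op>> graph { { Op::ProfiledCall, Op::Other } };
        removeFTLProfiling(graph);
        return graph[0][0] == Op::Call ? 0 : 1;
    }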
trunk/Source/JavaScriptCore/dfg/DFGValidate.cpp
r171660 r172940 201 201 VALIDATE((node), !mayExit(m_graph, node) || node->origin.forExit.isSet()); 202 202 VALIDATE((node), !node->hasStructure() || !!node->structure()); 203 VALIDATE((node), !node->hasFunction() || node->function()->value().isFunction()); 203 VALIDATE((node), !node->hasCellOperand() || node->cellOperand()->value().isCell()); 204 VALIDATE((node), !node->hasCellOperand() || !!node->cellOperand()->value()); 204 205 205 206 if (!(node->flags() & NodeHasVarArgs)) { -
trunk/Source/JavaScriptCore/dfg/DFGWatchpointCollectionPhase.cpp
r171613 r172940 1 1 /* 2 * Copyright (C) 2013 Apple Inc. All rights reserved.2 * Copyright (C) 2013, 2014 Apple Inc. All rights reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … … 115 115 116 116 case AllocationProfileWatchpoint: 117 addLazily(jsCast<JSFunction*>(m_node-> function()->value())->allocationProfileWatchpointSet());117 addLazily(jsCast<JSFunction*>(m_node->cellOperand()->value())->allocationProfileWatchpointSet()); 118 118 break; 119 119 -
trunk/Source/JavaScriptCore/ftl/FTLCapabilities.cpp
r172176 r172940 105 105 case InvalidationPoint: 106 106 case StringCharAt: 107 case CheckFunction: 107 case CheckCell: 108 case CheckBadCell: 108 109 case StringCharCodeAt: 109 110 case AllocatePropertyStorage: … … 127 128 case Check: 128 129 case CountExecution: 129 case CheckExecutable:130 case GetExecutable: 130 131 case GetScope: 131 132 case AllocationProfileWatchpoint: … … 167 168 case GetEnumeratorPname: 168 169 case ToIndexString: 170 case BottomValue: 169 171 // These are OK. 172 break; 173 case ProfiledCall: 174 case ProfiledConstruct: 175 // These are OK not because the FTL can support them, but because if the DFG sees one of 176 // these then the FTL will see a normal Call/Construct. 170 177 break; 171 178 case Identity: … … 327 334 case SwitchImm: 328 335 case SwitchChar: 336 case SwitchCell: 329 337 break; 330 338 default: -
trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
r172756 r172940 68 68 CodeBlock* codeBlock, BlockIndex blockIndex, unsigned nodeIndex) 69 69 { 70 71 70 dataLog("Crashing in thought-to-be-unreachable FTL-generated code for ", pointerDump(codeBlock), " at basic block #", blockIndex); 72 71 if (nodeIndex != UINT_MAX) … … 154 153 BasicBlock* block = depthFirst[blockIndex]; 155 154 for (unsigned nodeIndex = block->size(); nodeIndex--; ) { 156 Node* m_node = block->at(nodeIndex); 157 if (m_node->hasKnownFunction()) { 155 Node* node = block->at(nodeIndex); 156 switch (node->op()) { 157 case NativeCall: 158 case NativeConstruct: { 158 159 int numArgs = m_node->numChildren(); 159 160 if (numArgs > maxNumberOfArguments) 160 161 maxNumberOfArguments = numArgs; 162 break; 163 } 164 default: 165 break; 161 166 } 162 167 } … … 469 474 compileCheckStructure(); 470 475 break; 471 case CheckFunction: 472 compileCheckFunction(); 473 break; 474 case CheckExecutable: 475 compileCheckExecutable(); 476 case CheckCell: 477 compileCheckCell(); 478 break; 479 case CheckBadCell: 480 compileCheckBadCell(); 481 break; 482 case GetExecutable: 483 compileGetExecutable(); 476 484 break; 477 485 case ArrayifyToStructure: … … 1744 1752 } 1745 1753 1746 void compileCheck Function()1754 void compileCheckCell() 1747 1755 { 1748 1756 LValue cell = lowCell(m_node->child1()); 1749 1757 1750 1758 speculate( 1751 BadFunction, jsValueValue(cell), m_node->child1().node(), 1752 m_out.notEqual(cell, weakPointer(m_node->function()->value().asCell()))); 1753 } 1754 1755 void compileCheckExecutable() 1759 BadCell, jsValueValue(cell), m_node->child1().node(), 1760 m_out.notEqual(cell, weakPointer(m_node->cellOperand()->value().asCell()))); 1761 } 1762 1763 void compileCheckBadCell() 1764 { 1765 terminate(BadCell); 1766 } 1767 1768 void compileGetExecutable() 1756 1769 { 1757 1770 LValue cell = lowCell(m_node->child1()); 1758 1759 1771 speculateFunction(m_node->child1(), cell); 1760 1761 speculate( 1762 BadExecutable, jsValueValue(cell), m_node->child1().node(), 1763 m_out.notEqual( 1764 m_out.loadPtr(cell, m_heaps.JSFunction_executable), 1765 weakPointer(m_node->executable()))); 1772 setJSValue(m_out.loadPtr(cell, m_heaps.JSFunction_executable)); 1766 1773 } 1767 1774 … … 3674 3681 int numArgs = numPassedArgs + dummyThisArgument; 3675 3682 3676 ASSERT(m_node->hasKnownFunction()); 3677 3678 JSFunction* knownFunction = m_node->knownFunction(); 3683 JSFunction* knownFunction = jsCast<JSFunction*>(m_node->cellOperand()->value().asCell()); 3679 3684 NativeFunction function = knownFunction->nativeFunction(); 3680 3685 … … 3919 3924 } 3920 3925 3921 case SwitchString: 3926 case SwitchString: { 3922 3927 DFG_CRASH(m_graph, m_node, "Unimplemented"); 3923 break; 3924 } 3928 return; 3929 } 3930 3931 case SwitchCell: { 3932 LValue cell; 3933 switch (m_node->child1().useKind()) { 3934 case CellUse: { 3935 cell = lowCell(m_node->child1()); 3936 break; 3937 } 3938 3939 case UntypedUse: { 3940 LValue value = lowJSValue(m_node->child1()); 3941 LBasicBlock cellCase = FTL_NEW_BLOCK(m_out, ("Switch/SwitchCell cell case")); 3942 m_out.branch( 3943 isCell(value), unsure(cellCase), unsure(lowBlock(data->fallThrough.block))); 3944 m_out.appendTo(cellCase); 3945 cell = value; 3946 break; 3947 } 3948 3949 default: 3950 DFG_CRASH(m_graph, m_node, "Bad use kind"); 3951 return; 3952 } 3953 3954 buildSwitch(m_node->switchData(), m_out.intPtr, cell); 3955 return; 3956 } } 3925 3957 3926 3958 DFG_CRASH(m_graph, m_node, "Bad switch kind"); … … 5187 5219 for (unsigned i = 0; i < data->cases.size(); ++i) { 5188 5220 
cases.append(SwitchCase( 5189 constInt(type, data->cases[i].value.switchLookupValue( )),5221 constInt(type, data->cases[i].value.switchLookupValue(data->kind)), 5190 5222 lowBlock(data->cases[i].target.block), Weight(data->cases[i].target.count))); 5191 5223 } -
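What the UntypedUse path of the SwitchCell lowering above amounts to, as a plain-C++ model (stub types): values that are not cells branch straight to the default target, and cells are then compared by identity against the frozen case values.

    struct CellStub { int tag; };

    // Returns the matched case index, or -1 for the fall-through target.
    int switchCell(bool isCell, const CellStub* value, const CellStub* case0)
    {
        if (!isCell)
            return -1;    // non-cells take the default case
        if (value == case0)
            return 0;     // identity match against a frozen cell
        return -1;        // unknown cell: also the default case
    }

    int main()
    {
        CellStub a;
        CellStub b;
        return (switchCell(true, &a, &a) == 0
            && switchCell(true, &b, &a) == -1
            && switchCell(false, nullptr, &a) == -1) ? 0 : 1;
    }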
trunk/Source/JavaScriptCore/heap/Heap.cpp
r172820 r172940 985 985 } 986 986 987 if (vm()->callEdgeLog) { 988 DeferGCForAWhile awhile(*this); 989 vm()->callEdgeLog->processLog(); 990 } 991 987 992 RELEASE_ASSERT(!m_deferralDepth); 988 993 ASSERT(vm()->currentThreadIsHoldingAPILock()); -
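A minimal RAII sketch of the deferral idiom used above (the real DeferGCForAWhile has its own lifetime rules; this only shows the depth-counting idea): collection is held off while the log is being merged.

    struct HeapStub {
        int deferralDepth = 0;
        bool canCollect() const { return !deferralDepth; }
    };

    struct DeferGCStub {
        HeapStub& heap;
        explicit DeferGCStub(HeapStub& h) : heap(h) { ++heap.deferralDepth; }
        ~DeferGCStub() { --heap.deferralDepth; }
    };

    int main()
    {
        HeapStub heap;
        {
            DeferGCStub defer(heap);
            // ... merge the call edge log here; heap.canCollect() is false ...
            if (heap.canCollect())
                return 1;
        }
        return heap.canCollect() ? 0 : 1;
    }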
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h
r171213 r172940 89 89 } 90 90 91 void storeValue(JSValueRegs regs, void* address) 92 { 93 #if USE(JSVALUE64) 94 store64(regs.gpr(), address); 95 #else 96 store32(regs.payloadGPR(), bitwise_cast<void*>(bitwise_cast<uintptr_t>(address) + PayloadOffset)); 97 store32(regs.tagGPR(), bitwise_cast<void*>(bitwise_cast<uintptr_t>(address) + TagOffset)); 98 #endif 99 } 100 101 void loadValue(Address address, JSValueRegs regs) 102 { 103 #if USE(JSVALUE64) 104 load64(address, regs.gpr()); 105 #else 106 if (address.base == regs.payloadGPR()) { 107 load32(address.withOffset(TagOffset), regs.tagGPR()); 108 load32(address.withOffset(PayloadOffset), regs.payloadGPR()); 109 } else { 110 load32(address.withOffset(PayloadOffset), regs.payloadGPR()); 111 load32(address.withOffset(TagOffset), regs.tagGPR()); 112 } 113 #endif 114 } 115 91 116 void moveTrustedValue(JSValue value, JSValueRegs regs) 92 117 { -
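The aliasing hazard that loadValue() guards against on 32-bit platforms, modeled in plain C++ (registers become variables): a JSValue is two loads there, and if the address base register doubles as the payload destination, the payload store must come last or the tag load would read through a clobbered base.

    #include <cstdint>

    struct ValueSlot { uint32_t payload; uint32_t tag; };

    // base doubles as the payload destination, as when address.base == payloadGPR.
    void loadValue(uintptr_t& base, uint32_t& tag)
    {
        const ValueSlot* address = reinterpret_cast<const ValueSlot*>(base);
        tag = address->tag;       // read while base still holds the address
        base = address->payload;  // only now is it safe to clobber base
    }

    int main()
    {
        ValueSlot slot { 42, 7 };
        uintptr_t reg = reinterpret_cast<uintptr_t>(&slot);
        uint32_t tag = 0;
        loadValue(reg, tag);
        return (reg == 42 && tag == 7) ? 0 : 1;
    }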
trunk/Source/JavaScriptCore/jit/CCallHelpers.h
r172867 r172940 1667 1667 } 1668 1668 #endif 1669 1670 void setupArguments(JSValueRegs arg1) 1671 { 1672 #if USE(JSVALUE64) 1673 setupArguments(arg1.gpr()); 1674 #else 1675 setupArguments(arg1.payloadGPR(), arg1.tagGPR()); 1676 #endif 1677 } 1669 1678 1670 1679 void setupResults(GPRReg destA, GPRReg destB) -
trunk/Source/JavaScriptCore/jit/GPRInfo.h
r172734 r172940 61 61 GPRReg payloadGPR() const { return m_gpr; } 62 62 63 bool uses(GPRReg gpr) const { return m_gpr == gpr; } 64 63 65 private: 64 66 GPRReg m_gpr; … … 170 172 } 171 173 174 bool uses(GPRReg gpr) const { return m_tagGPR == gpr || m_payloadGPR == gpr; } 175 172 176 private: 173 177 int8_t m_tagGPR; -
trunk/Source/JavaScriptCore/jit/JITCall.cpp
r172176 r172940 213 213 214 214 store64(regT0, Address(stackPointerRegister, JSStack::Callee * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC))); 215 216 CallLinkInfo* info = m_codeBlock->addCallLinkInfo(); 217 218 if (CallEdgeLog::isEnabled() && shouldEmitProfiling() 219 && Options::baselineDoesCallEdgeProfiling()) 220 m_vm->ensureCallEdgeLog().emitLogCode(*this, info->callEdgeProfile, JSValueRegs(regT0)); 215 221 216 222 if (opcodeID == op_call_eval) { … … 224 230 225 231 ASSERT(m_callCompilationInfo.size() == callLinkInfoIndex); 226 CallLinkInfo* info = m_codeBlock->addCallLinkInfo();227 232 info->callType = CallLinkInfo::callTypeFor(opcodeID); 228 233 info->codeOrigin = CodeOrigin(m_bytecodeOffset); -
trunk/Source/JavaScriptCore/jit/JITCall32_64.cpp
r172176 r172940 301 301 store32(regT1, Address(stackPointerRegister, JSStack::Callee * static_cast<int>(sizeof(Register)) + TagOffset - sizeof(CallerFrameAndPC))); 302 302 303 CallLinkInfo* info = m_codeBlock->addCallLinkInfo(); 304 305 if (CallEdgeLog::isEnabled() && shouldEmitProfiling() 306 && Options::baselineDoesCallEdgeProfiling()) { 307 m_vm->ensureCallEdgeLog().emitLogCode( 308 *this, info->callEdgeProfile, JSValueRegs(regT1, regT0)); 309 } 310 303 311 if (opcodeID == op_call_eval) { 304 312 compileCallEval(instruction); … … 314 322 315 323 ASSERT(m_callCompilationInfo.size() == callLinkInfoIndex); 316 CallLinkInfo* info = m_codeBlock->addCallLinkInfo();317 324 info->callType = CallLinkInfo::callTypeFor(opcodeID); 318 325 info->codeOrigin = CodeOrigin(m_bytecodeOffset); -
trunk/Source/JavaScriptCore/runtime/Options.h
r172820 r172940 168 168 v(bool, enablePolyvariantDevirtualization, true) \ 169 169 v(bool, enablePolymorphicAccessInlining, true) \ 170 v(bool, enablePolymorphicCallInlining, true) \ 171 v(bool, callStatusShouldUseCallEdgeProfile, true) \ 172 v(bool, callEdgeProfileReallyProcessesLog, true) \ 173 v(bool, baselineDoesCallEdgeProfiling, false) \ 174 v(bool, dfgDoesCallEdgeProfiling, true) \ 175 v(bool, enableCallEdgeProfiling, true) \ 176 v(unsigned, frequentCallThreshold, 2) \ 170 177 v(bool, optimizeNativeCalls, false) \ 171 178 \ -
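How the new knobs compose, as a standalone model with stubbed option values taken from the defaults above (the real values come from JSC's option machinery): profiling must be globally enabled, and each tier additionally has its own switch, off by default for the baseline JIT and on by default for the DFG.

    namespace OptionsStub {
        bool enableCallEdgeProfiling() { return true; }        // master switch
        bool baselineDoesCallEdgeProfiling() { return false; } // default: off
        bool dfgDoesCallEdgeProfiling() { return true; }       // default: on
    }

    bool baselineEmitsLog()
    {
        return OptionsStub::enableCallEdgeProfiling()
            && OptionsStub::baselineDoesCallEdgeProfiling();
    }

    bool dfgEmitsLog()
    {
        return OptionsStub::enableCallEdgeProfiling()
            && OptionsStub::dfgDoesCallEdgeProfiling();
    }

    int main()
    {
        return (!baselineEmitsLog() && dfgEmitsLog()) ? 0 : 1;
    }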
trunk/Source/JavaScriptCore/runtime/VM.cpp
r172820 r172940 374 374 } 375 375 376 CallEdgeLog& VM::ensureCallEdgeLog() 377 { 378 if (!callEdgeLog) 379 callEdgeLog = std::make_unique<CallEdgeLog>(); 380 return *callEdgeLog; 381 } 382 376 383 #if ENABLE(JIT) 377 384 static ThunkGenerator thunkGeneratorForIntrinsic(Intrinsic intrinsic) -
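ensureCallEdgeLog() is the usual create-on-first-use idiom; here it is in isolation (hypothetical Log type) for readers new to the pattern:

    #include <memory>

    struct LogStub { int entries = 0; };

    struct VMStub {
        std::unique_ptr<LogStub> log;

        LogStub& ensureLog()
        {
            if (!log)
                log = std::make_unique<LogStub>(); // allocated only when needed
            return *log;
        }
    };

    int main()
    {
        VMStub vm;
        LogStub& first = vm.ensureLog();
        LogStub& second = vm.ensureLog();
        return (&first == &second) ? 0 : 1; // same instance on every call
    }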
trunk/Source/JavaScriptCore/runtime/VM.h
r172867 r172940 73 73 class ArityCheckFailReturnThunks; 74 74 class BuiltinExecutables; 75 class CallEdgeLog; 75 76 class CodeBlock; 76 77 class CodeCache; … … 234 235 OwnPtr<DFG::LongLivedState> dfgState; 235 236 #endif // ENABLE(DFG_JIT) 237 238 std::unique_ptr<CallEdgeLog> callEdgeLog; 239 CallEdgeLog& ensureCallEdgeLog(); 236 240 237 241 VMType vmType;