14 Sep. 2010, ICSM 2010 ERA
Tokyo Institute of Technology, Department of Computer Science




                 iFL: An Interactive
Environment for Understanding
   Feature Implementations

        Shinpei Hayashi, Katsuyuki Sekine,
                and Motoshi Saeki
             Department of Computer Science,
            Tokyo Institute of Technology, Japan
Abstract
   We have developed iFL
    − An environment for program understanding
    − Interactively supports the understanding of feature
      implementations using a feature location technique
    − Can reduce understanding costs
Background
   Program understanding is costly
     − When extending or fixing existing features,
       understanding the implementation of the target
       feature is necessary
     − It is a dominant part of maintenance costs [Vestdam 04]

   Our focus: feature/concept location (FL)
     − Locating/extracting the code fragments that
       implement a given feature/concept

[Vestdam 04]: “Maintaining Program Understanding – Issues, Tools, and Future Directions”,
Nordic Journal of Computing, 2004.
FL Example (Search-based)
   A new maintainer of a scheduler wants to understand
    the feature converting input time strings to schedule objects
   With a search-based FL approach, the maintainer searches
    the source code with the query “schedule time”
   The search returns methods to read for understanding:

    public Time(String hour, …) {
        ......
    }
    …
    public void createSchedule() {
        ......
    }
    public void updateSchedule(…) {
        ……
    }
Problem 1:
How to Find Appropriate Queries?
   Constructing appropriate queries requires rich
    knowledge of the implementation
     − Times: time, date, hour/minute/second
     − Images: image, picture, figure
   In practice, developers try several keywords
    for FL through trial and error
Problem 2:
How to Fix FL Results?
   Complete (optimum) FL results are rare
     − Due to the accuracy of the FL technique used
     − Due to individual differences in which code is appropriate
   An FL result (code fragments) only partially overlaps the
    optimum result (the code fragments that should be understood)
     − Unnecessary code in the result: false positives
     − Necessary code missing from the result: false negatives
Our Solution: Feedback
   We added two feedback processes to the FL loop
     − Query input (e.g. “schedule”)
     − Feature location (calculating scores), producing a ranked list:
       1st: ScheduleManager.addSchedule(), 2nd: EditSchedule.inputCheck(), …
     − Selection and understanding of code fragments
   Two feedback paths lead back into the loop
     − Updating queries
     − Relevance feedback (addition of hints)
   The process finishes when the user judges that he/she
    has read all the necessary code fragments
Query Expansion
   Wide query for the initial FL
     − By expanding query terms into their synonyms
   Narrow queries for subsequent FLs
     − By using concrete identifiers from the source code
   Example:
     − 1st FL: “schedule* date*”, expanded via a thesaurus
       (schedule, agenda, plan, list, time, date)
     − 2nd FL: “schedule time”, taken from identifiers in a code
       fragment of the FL result:

       public void createSchedule() {
           …
           String hour = …
           Time time = new Time(hour, …);
           …
       }
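The expansion step above can be sketched as follows. This is a minimal sketch with hypothetical class and method names; the real tool queries WordNet and matches wildcard queries such as "schedule*" against identifiers, which is approximated here by substring matching over a hard-coded toy thesaurus.

```java
import java.util.*;

// Sketch of iFL-style query expansion (hypothetical helper names;
// the real tool queries WordNet for synonyms).
public class QueryExpansion {
    // Toy thesaurus standing in for WordNet.
    static final Map<String, List<String>> THESAURUS = Map.of(
        "schedule", List.of("agenda", "plan", "list", "time", "date"));

    // Widen the initial query: each term plus its synonyms.
    static Set<String> expand(List<String> query) {
        Set<String> expanded = new LinkedHashSet<>();
        for (String term : query) {
            expanded.add(term);
            expanded.addAll(THESAURUS.getOrDefault(term, List.of()));
        }
        return expanded;
    }

    // Substring match against an identifier, approximating the
    // wildcard search ("schedule*") shown on the slide.
    static boolean matches(String identifier, Set<String> expandedQuery) {
        String id = identifier.toLowerCase();
        return expandedQuery.stream().anyMatch(id::contains);
    }
}
```

With this sketch, expanding "schedule" makes the method createSchedule match the widened query while an unrelated method such as initWindow does not.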
Relevance Feedback
   Improving FL results via user feedback
     − The user adds a hint marking a selected code fragment
       as relevant or irrelevant to the feature
     − Hints are propagated to other fragments
       through dependencies
   Example: in the i-th FL result, four fragments have scores
    1, 2, 9, 6; after a hint, propagation by dependencies
    updates them to 1, 8, 11, 6 in the (i+1)-th result
Supporting Tool: iFL
   Implemented as an Eclipse plug-in
     − For static analysis: Eclipse JDT (syntactic information)
     − For dynamic analysis: Reticella [Noda 09]
       (execution traces and dependencies)
     − For a thesaurus: WordNet (synonym information)
   The iFL core combines these three inputs inside Eclipse
How iFL Works
   Inputting a query
   Calculating scores: evaluated code fragments are
    listed with their scores
   Selecting a code fragment shows the associated
    method in the code editor
   Adding hints makes iFL calculate the scores again
   The scores are updated
   The user alternates between code reading and further FL
Preliminary Evaluation
   A user (familiar with Java and iFL) actually
    understood feature implementations
     − S1–S5: 5 change requirements and related features
       from Sched (home-grown, small-sized)
     − J1–J2: 2 change requirements and related features
       from JDraw (open-source, medium-sized)

      # Correct  # FL        Interactive  Non-interactive           Over-  # Query
      events     executions  costs        costs            Δ Costs  heads  updates
S1    19         5           20           31               0.92     1      2
S2    7          5           8            10               0.67     1      1
S3    1          2           2            2                0.00     1      0
S4    10         6           10           13               1.00     0      2
S5    3          6           6            15               0.75     3      2
J1    10         4           20           156              0.93     10     2
J2    4          6           18           173              0.92     14     3
Evaluation Criteria
   Overheads: the number of selected but unnecessary code fragments
   Δ Costs: the reduced ratio of overheads between the
    interactive and non-interactive approaches
      # Correct  # FL        Interactive  Non-interactive           Over-  # Query
      events     executions  costs        costs            Δ Costs  heads  updates
S1    19         5           20           31               0.92     1      2
S2    7          5           8            10               0.67     1      1
S3    1          2           2            2                0.00     1      0
S4    10         6           10           13               1.00     0      2
S5    3          6           6            15               0.75     3      2
J1    10         4           20           156              0.93     10     2
J2    4          6           18           173              0.92     14     3
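The Δ Costs column fits a simple relationship with the other columns. The following sketch encodes our reading of the table (an inferred interpretation, not a formula stated on the slides): overhead = understanding cost − # correct events, and Δ costs = 1 − interactive overhead / non-interactive overhead.

```java
// Sketch of how the Overheads and Δ Costs columns relate, assuming
// (our reading of the table, not a stated formula):
//   overhead = understanding cost − # correct events
//   Δ costs  = 1 − interactiveOverhead / nonInteractiveOverhead
public class DeltaCosts {
    // Fragments read beyond the truly necessary ones.
    static int overhead(int cost, int correctEvents) {
        return cost - correctEvents;
    }

    // Reduced ratio of overheads between the two approaches.
    static double delta(int correct, int interactiveCost, int nonInteractiveCost) {
        int oi = overhead(interactiveCost, correct);
        int on = overhead(nonInteractiveCost, correct);
        return on == 0 ? 0.0 : 1.0 - (double) oi / on;
    }
}
```

For S1 this gives an interactive overhead of 20 − 19 = 1 versus a non-interactive overhead of 31 − 19 = 12, so Δ = 1 − 1/12 ≈ 0.92, matching the table; the other rows reproduce the same way.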
Evaluation Results
   Costs were reduced in 6 out of 7 cases
     − In particular, overhead costs were reduced by
       roughly 90% or more in 4 cases
      # Correct  # FL        Interactive  Non-interactive           Over-  # Query
      events     executions  costs        costs            Δ Costs  heads  updates
S1    19         5           20           31               0.92     1      2
S2    7          5           8            10               0.67     1      1
S3    1          2           2            2                0.00     1      0
S4    10         6           10           13               1.00     0      2
S5    3          6           6            15               0.75     3      2
J1    10         4           20           156              0.93     10     2
J2    4          6           18           173              0.92     14     3
Evaluation Results
   Small overheads
     − Sched: 1.2, JDraw: 12 on average
      # Correct  # FL        Interactive  Non-interactive           Over-  # Query
      events     executions  costs        costs            Δ Costs  heads  updates
S1    19         5           20           31               0.92     1      2
S2    7          5           8            10               0.67     1      1
S3    1          2           2            2                0.00     1      0
S4    10         6           10           13               1.00     0      2
S5    3          6           6            15               0.75     3      2
J1    10         4           20           156              0.93     10     2
J2    4          6           18           173              0.92     14     3
Evaluation Results
    No effect in S3
     − Because non-interactive approach is sufficient for understanding
     − Not because of the fault in interactive approach
      # Correct  # FL        Interactive  Non-interactive           Over-  # Query
      events     executions  costs        costs            Δ Costs  heads  updates
S1    19         5           20           31               0.92     1      2
S2    7          5           8            10               0.67     1      1
S3    1          2           2            2                0.00     1      0
S4    10         6           10           13               1.00     0      2
S5    3          6           6            15               0.75     3      2
J1    10         4           20           156              0.93     10     2
J2    4          6           18           173              0.92     14     3
Conclusion
   Summary
    − We developed iFL, which interactively supports the
      understanding of feature implementations using FL
    − iFL reduced understanding costs in 6 out of 7 cases
   Future Work
    − Evaluation++: evaluation on larger-scale projects
    − Feedback++: more efficient relevance feedback
      • Observing code-browsing activities in the IDE
Additional Slides
The FL Approach
   Based on search-based FL
   Uses static + dynamic analyses
     − Static analysis matches the query (e.g. “schedule”)
       against the source code, yielding methods with their scores
     − Dynamic analysis executes the source code with a test
       case, yielding an execution trace and dependencies
     − Evaluating events with these scores, the dependencies,
       and user hints yields events with their scores (the FL result)
Static Analysis: Evaluating Methods
   Matching queries to identifiers
     − The query “schedule time” is expanded via the thesaurus
       into {schedule, agenda, time, date}
     − The basic score (BS) of createSchedule is 21:
       20 for method-name matches, 1 for local-variable matches

       public void createSchedule() {
           …                                // “Schedule” in the name: 20
           String hour = …
           Time time = new Time(hour, …);   // local variable “time”: 1
           …
       }
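The weighting above can be sketched as follows; the helper names are ours, and only the two weights stated on the slide (20 per method-name match, 1 per matching local variable) are taken from the deck.

```java
import java.util.*;

// Sketch of the basic-score (BS) computation from the slide:
// 20 points per expanded query term matched in the method name,
// 1 point per local variable matching some term.
public class BasicScore {
    static final int METHOD_NAME_WEIGHT = 20;
    static final int LOCAL_VAR_WEIGHT = 1;

    static int basicScore(String methodName, List<String> localVars,
                          Set<String> expandedQuery) {
        int score = 0;
        String name = methodName.toLowerCase();
        // Method-name matches carry most of the weight.
        for (String term : expandedQuery) {
            if (name.contains(term)) score += METHOD_NAME_WEIGHT;
        }
        // Each matching local variable adds a small bonus.
        for (String var : localVars) {
            String v = var.toLowerCase();
            if (expandedQuery.stream().anyMatch(v::contains)) {
                score += LOCAL_VAR_WEIGHT;
            }
        }
        return score;
    }
}
```

Under the expanded query {schedule, agenda, time, date}, createSchedule with locals hour and time scores 20 + 1 = 21, reproducing the slide's example.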
Dynamic Analysis
   Extracting an execution trace and its dependencies by
    executing the source code with a test case
     − Execution trace: e1: loadSchedule(), e2: initWindow(),
       e3: createSchedule(), e4: Time(), e5: ScheduleModel(),
       e6: updateList()
     − Dependencies (method-invocation relations):
       e1 → e2, e3, e6;  e3 → e4;  e4 → e5
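The invocation relations can be stored in a small adjacency structure. This is a structural sketch only (the class name is ours; the real tool obtains the trace and dependencies from Reticella):

```java
import java.util.*;

// Sketch of recording the method-invocation relations extracted
// by the dynamic analysis.
public class InvocationGraph {
    final Map<String, List<String>> calls = new LinkedHashMap<>();

    // Record one caller -> callee edge observed in the trace.
    void record(String caller, String callee) {
        calls.computeIfAbsent(caller, k -> new ArrayList<>()).add(callee);
    }

    // Events invoked by the given event (empty if it is a leaf).
    List<String> callees(String event) {
        return calls.getOrDefault(event, List.of());
    }
}
```

The slide's trace yields the edges e1 → {e2, e3, e6}, e3 → {e4}, and e4 → {e5}.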
Evaluating Methods
   Highly scored events are:
     − Events executing methods with high basic scores
     − Events close to other highly scored events

Event   Method             BS   Score
e1      loadSchedule()     20   52.2
e2      initWindow()        0   18.7
e3      createSchedule()   21   38.7
e4      Time()             20   66.0
e5      ScheduleModel()    31   42.6
e6      updateList()        2   19.0
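The slide states the two properties of highly scored events but not iFL's exact scoring formula (the values 52.2, 66.0, etc. come from the tool's internal computation). The following is therefore only a generic sketch of neighbor-aware scoring: a single pass where each event's score is its basic score plus a damped contribution from its neighbors, with the damping factor `alpha` being an assumption of ours.

```java
import java.util.*;

// Generic sketch of neighbor-aware scoring. The single-pass damped
// sum and the alpha parameter are assumptions, not iFL's formula.
public class ScorePropagation {
    static Map<String, Double> propagate(Map<String, Integer> basicScores,
                                         Map<String, List<String>> neighbors,
                                         double alpha) {
        Map<String, Double> scores = new HashMap<>();
        for (Map.Entry<String, Integer> e : basicScores.entrySet()) {
            double s = e.getValue();
            // Add a damped contribution from each neighboring event.
            for (String n : neighbors.getOrDefault(e.getKey(), List.of())) {
                s += alpha * basicScores.getOrDefault(n, 0);
            }
            scores.put(e.getKey(), s);
        }
        return scores;
    }
}
```

With alpha = 0.5 and two mutually dependent events with BS 20 and 21, the pass yields 30.5 and 31.0: each event is lifted by its high-scoring neighbor, the qualitative behavior the slide describes.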
Selection and Understanding
   The user selects highly ranked events
     − iFL extracts the associated code fragment (the method
       body), and the user reads it to understand

Event   Method             Score   Rank
e1      loadSchedule()     52.2    2
e2      initWindow()       18.7    6
e3      createSchedule()   38.7    4
e4      Time()             66.0    1
e5      ScheduleModel()    42.6    3
e6      updateList()       19.0    5

   Extracted code fragment (for the top-ranked event e4, Time()):

    public Time(String hour, …) {
        …
        String hour = …
        String minute = … …
    }
Relevance Feedback
   The user reads the selected code fragments and adds hints
     − Relevant: maximum basic score
     − Irrelevant to the feature: the score becomes 0
   Example (e4 is hinted; before → after):

Event   Method             Hint       BS          Score          Rank
e1      loadSchedule()                20          52.2 → 77.6    2
e2      initWindow()                   0          18.7           6
e3      createSchedule()              21          38.7 → 96.4    4 → 1
e4      Time()             relevant   20 → 46.5   66.0 → 70.2    1 → 3
e5      ScheduleModel()               31          42.6 → 51.0    3 → 4
e6      updateList()                   2          19.0           5
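Applying hints before re-running the evaluation can be sketched as follows. This encodes our assumed reading of the slide (a "relevant" hint raises a fragment's basic score toward the maximum basic score seen so far, an "irrelevant" hint sets it to 0); the class and method names are ours.

```java
import java.util.*;

// Sketch of applying relevance-feedback hints to basic scores.
// Assumed reading of the slide: "relevant" raises the BS to the
// maximum BS observed; "irrelevant" drops it to 0.
public class RelevanceHints {
    static Map<String, Integer> applyHints(Map<String, Integer> basicScores,
                                           Set<String> relevant,
                                           Set<String> irrelevant) {
        int max = basicScores.values().stream().max(Integer::compare).orElse(0);
        Map<String, Integer> updated = new HashMap<>(basicScores);
        for (String e : relevant) updated.put(e, max);
        for (String e : irrelevant) updated.put(e, 0);
        return updated;
    }
}
```

The updated basic scores then feed back into the score calculation, shifting the ranking as in the table above.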
