John's Review
Problem
This paper presents a tool that allows developers to determine who (a person or organizational group) has relevant experience with a particular code element (a single module or a subsystem).
Contributions
- Presents a tool (the Experience Browser) that aggregates experience atoms (e.g. source code deltas) for individual developers and developer groups.
- Presents a visualization of experience levels and supports simple queries, such as which developers have experience with a particular subsystem.
- Presents results from the use of the Experience Browser on two telecommunication network element projects in Europe.
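To make the first contribution concrete: the core aggregation the tool performs can be sketched in a few lines. This is my own minimal sketch, not the authors' implementation; the `deltas` records, the size-weighting, and the function names are all assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical pre-extracted change records: (developer, module, lines_changed).
# The paper's "experience atoms" are deltas mined from version control;
# weighting atoms by delta size is an assumption of this sketch.
deltas = [
    ("alice", "billing", 120),
    ("bob", "billing", 30),
    ("alice", "routing", 15),
    ("carol", "routing", 200),
]

def aggregate_experience(deltas):
    """Accumulate experience atoms per (developer, module) pair."""
    experience = defaultdict(int)
    for developer, module, lines in deltas:
        experience[(developer, module)] += lines
    return dict(experience)

def experts_for(module, experience, top=3):
    """Rank developers by accumulated experience in a given module."""
    ranked = [(dev, atoms) for (dev, mod), atoms in experience.items() if mod == module]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)[:top]

exp = aggregate_experience(deltas)
print(experts_for("billing", exp))  # → [('alice', 120), ('bob', 30)]
```

A query like the one in the second contribution then reduces to ranking the aggregated counts for a subsystem.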
Weaknesses
- Using the timing of different developers' interactions with a modification request is not a very accurate way of determining how hard it is to find an expert, which makes the argument that finding experts is a critical problem less credible.
- Using the vertical and horizontal dimensions of text to indicate the number of experience atoms and the number of contributing people, respectively, seems a very odd way to represent this information.
- The independence model that is used to calculate Table 2 is never explained.
Questions
- The authors use change deltas as a measure of experience. What developer experience is not captured by change deltas? How could this experience be quantified?
- What are people's thoughts about the quantitative resume?
- What is the "OA&M interface"? (Section 2, first paragraph)
- What is the difference between a 'patch' and a 'bug fix'? (Section 2.4, first bullet point)
- Why would a subsystem node have to be expanded in order to display the experience for the subsystem? (Section 3.2, third paragraph)
- Do developers really spend 70% of the time communicating? (This result is not from the authors, but from a study that they are quoting. I just want to know if others find this number surprisingly high).
Belief
Overall the paper is well written and demonstrates an interesting tool for showing people's experience. I have little trouble believing that the tool is useful, and the authors provide data showing that it helps both newcomers and experienced project members. However, I found their use of the timing of changes to modification requests to be a poor technique for showing that finding experts is a critical problem.
Brett's Review
Problem
When working on a large project in a large team, it can be difficult to find an expert either in a specific area or for a specific piece of code. This paper presents a tool, the Expertise Browser (ExB), to help automate identifying people within a team who are experts in various areas.
Contributions
- A tool that automates the identification of who is an expert for a specific piece of code or area
- Results on how different groups within a team, based on their level of experience with a code base, tend to seek out experts (either for a piece of code or for what a specific person has done)
Weaknesses
- The managerial structure of a project is not considered, which could skew how a person works with the code base and their apparent level of expertise (e.g., a manager telling a subordinate to make changes blindly, making the manager the de facto expert without ever touching the code directly)
- The authors examine only two projects, both of which appear to be major corporate projects, and do not analyze usage in a project whose membership is less structured (e.g., an open source project)
- The paper also does not consider smaller development teams, for which the usefulness of the tool might dwindle
Questions
- How would the tool handle an automated refactoring of a code base and not have the committer be flagged as an expert because of this automated change?
- Are results skewed if someone changes a piece of code multiple times because of constant revisions stemming from the original author's lack of knowledge?
- How would the tool be used in a team with a completely (or almost completely) flat managerial hierarchy?
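On the automated-refactoring question above, one plausible mitigation is a heuristic filter that withholds expertise credit from bulk or mechanical commits. This is my own sketch of such a filter, not anything the paper proposes; the commit records, the threshold value, and the keyword markers are all assumptions for illustration.

```python
# Hypothetical commit records: (developer, files_touched, message).
commits = [
    ("dana", 3, "fix null check in session handler"),
    ("dana", 412, "automated rename: OldAPI -> NewAPI"),
    ("eli", 5, "tune retry backoff"),
]

BULK_THRESHOLD = 50  # assumed cutoff; a real tool would calibrate this per project

def is_bulk_change(files_touched, message):
    """Flag likely automated/mechanical commits so they earn no expertise credit."""
    mechanical_markers = ("automated", "rename", "reformat")
    return files_touched > BULK_THRESHOLD or any(
        marker in message.lower() for marker in mechanical_markers
    )

# Only non-bulk commits would be counted as experience atoms.
credited = [(dev, n) for dev, n, msg in commits if not is_bulk_change(n, msg)]
print(credited)  # → [('dana', 3), ('eli', 5)]
```

The same filter could partially address the repeated-revision concern, though distinguishing rework driven by someone else's mistakes would need more than commit metadata.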
Belief
I have no problem believing that a tool for discovering who is an expert for a certain chunk of code is useful, especially in a large project. Having this automated would definitely simplify discovering this knowledge and keep it from going stale. But the authors do need to address how this tool might be used in different team structures, along with code changes that stem from automated refactoring or people performing "code monkey"-like changes for someone else.