
Scrum Experience Report for CSP

This is my experience report for the Scrum Alliance's Certified Scrum Practitioner (CSP) requirements. I am publishing it online now because of the debate over the CSM Exam. It was written in April 2009, in reference to a project that lasted over one year.

----

1. Through the questions below, please describe one project on which you have used Scrum over the past twelve months.

1.1. What was the project’s purpose? What business goal was the project intended to deliver?

To introduce Agile practices to an existing organization while developing a new VoIP product for home use by consumers. The software to be developed created the new user's account and provisioned the consumer's hardware on the network and the VoIP Broadsoft server.


1.2. What was the project length? What was the duration of the project?

I was on site for about one year (Oct 2007 - Oct 2008); the project continued until approximately Feb 2009, when it was canceled after having delivered a product to the marketplace for a regional market test.

1.3. What was the cost of the project? How did budgeted costs compare to actual costs?

I'm not familiar with the cost/budget numbers. However, the total including hardware was around $10 million for the first year.

1.4. Discuss the value of the project? How did projected benefits compare to actual (if measured) benefits?

The value was a product in the new market space of consumer voice over IP to be sold in a major consumer electronics store. One estimate of the market space in 2007 was $1.2 billion and growing rapidly.

1.5. Discuss the project’s size. How many people were on the project team(s)? How were they organized into teams?

In Oct 2007 the project started with about 5 people, who became the team leads and project leaders. Hiring of team members began, and the first few sprints focused on infrastructure and on working through the large backlog of features. Hiring continued for approximately 6 months, culminating in about 30 developers (Java programmers, QA, Web UI programmers), not counting the 3-5 business analysts, 4-8 systems engineers, managers, accountants, directors, and VP. We started with two very small teams of about 4 or 5 people each (Java devs and QA), which grew to 3 on-site teams of 7 to 9 people and one off-shore team of about 5 developers.

1.6. Describe the project’s teams. Were the teams cross-functional and self-organizing? Were the teams collocated in an open space? Were the teams physically separated within one location, or located in more than one physical location?

The original team was collocated in one open space that we frequently rearranged physically to suit our needs. As the space was consumed by the growing team, these physical (desk location) changes were reduced to individuals swapping desks to sit with a team.
The teams were cross-functional up to a point. The QA specialists were embedded on a team with the Java developers, and for quite a few sprints all UI work was done by these developers. Later, Web UI specialists were hired and embedded on a team; later still they floated among the 4 development teams, as there were only 2 UI devs plus a UI architect and a graphic designer. Toward the release date the UI team members acted more like their own team, with part-time dev team responsibilities.
The systems engineers were always a separate team, though collocated physically in an adjacent, easily accessed room. The business analysts were physically in a separate room but interacted frequently with the developers.
The development teams were fairly self-organizing, several times electing to redistribute team members and shuffle the team/squad make-up to share knowledge and skills. The leadership group remained core to the group, but as the group expanded the leadership was also developed, and several times interim leads were appointed because of long-term vacations.
The off-shore team sent two developers for an initial 2 or 3 week session before starting work. They then returned and were managed by an on-site Scrum Master/delivery manager. Except for this group of 5 developers, all development was done in a collocated open space. The space was not perfect and not large enough toward the end, but it did include meeting spaces and enough open rooms for private conversations most of the time.

1.7. Tell us about the project’s initiation. How was the project initiated? How was the team trained to use the Scrum process?

The project was envisioned as a large effort to be grown from just a few existing employees; most of the people would be new hires or consultants. The original employees had done some agile-like practices on other projects and were interested in learning and applying Agile practices. They were given Scrum Master training, and several others were given Product Owner training by SolutionsIQ. In addition to this training, the consultants brought on-site by SolutionsIQ were well versed in Agile/Scrum/XP practices. Embedding Scrum/XP-knowledgeable people on the team, combined with the core employees' strong desire to practice Agile, created a core group of Agile-minded people.
The new hires were screened for a desire to use Agile practices, but originally very few had strong experience with Scrum/Agile. Much of the training was done on the job, just in time. Explanation and practice of Scrum and later XP techniques by the core group were the major learning transfer mechanisms.

1.8. Discuss project reporting. How did you report progress to management? To customers?

The most visible progress reporting was done at sprint demos for management, where working software was demonstrated; for example, the first phone call placed using the software was demoed at a sprint review with the CEO taking the call. Reports were also given to management in meetings covering progress, challenges, and dependencies. Dependencies on other internal systems and external vendors were some of the most challenging issues throughout the project.
Customers (end users) were not informed.
Internal customers (users of internal facing systems) were consulted during design and development of stories and present at demos.

1.9. How was change handled? What difficulties were surfaced by Scrum that had to be resolved? How were these resolved?

Interfaces with other groups, such as systems engineering, that required long lead times and large planning windows for hardware purchases and configuration changes were a challenge. In many cases Scrum functioned very well within the development teams; however, at the boundaries, the other groups not using Agile process/planning were often problems. These problems were often handled in more traditional ways - overtime to get the job done, with changes made to meet deadlines for other teams and sometimes for the development team.
A sprint (3 weeks) using overtime was tried to increase development throughput, but upon completion of the sprint it was found to have produced very little additional output and less satisfaction among the developers. The cost vs. benefit didn't work out, and management never asked for overtime again. Instead they faced the facts and decided to postpone the launch date, while at the same time deciding to cut the scope of the minimal feature set required for launch.

1.10. Discuss management. What was the previous role of the ScrumMaster? Who took on the role of Product Owner? To what degree were they successful in fulfilling their roles?

Having a new team to build was an advantage: the role of Scrum Master was new, and the organization hired an experienced consultant to play this role for all teams. Previous development groups at the organization were less defined than 'teams' - more of a 'work group' led by a Director with 'lead developers'. The Scrum Master was instrumental in the project's success, alerting management to impediments and helping the teams to function and practice Scrum well.
The Product Owner role was given to a product manager who was trained in Scrum/PO, and after several months she was doing a very fine job. The PO and her team converted a backlog of many stories, estimated at over 2000 story points, into a release plan of around 750 story points, and then held to that basic release plan by removing or reducing scope as additional work was discovered. This was key to the success of the project!

1.11. Discuss engineering. What environmental factors or software engineering practices had to be changed?

The teams adopted many of the XP practices, although there was resistance to some. Over time the resistance faded on some practices. For example, pair programming was an issue for quite some time, but eventually most developers were very comfortable with pairing and it became a "standard" practice. TDD was recognized but not practiced well and never became the "standard". However, writing unit tests, integration tests, and acceptance tests during story development was "standard" practice. Acceptance tests were noted as a great benefit in systems integration testing of the pre-production systems - having these automated tests (StoryTestIQ) allowed the integration team to reach a very productive state in less than one day (assumed to need a week).
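To make this concrete, here is a minimal sketch of the kind of story-level acceptance test written alongside story development. It is only an illustration: the ProvisioningService below is a tiny hypothetical in-file stand-in, not the project's actual code and not StoryTestIQ's format.

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    // A minimal sketch only; ProvisioningService is a hypothetical stand-in
    // for the real provisioning code, not the project's actual classes.
    public class ProvisionSubscriberAcceptanceTest {

        // Hypothetical fake representing the system under test.
        static class ProvisioningService {
            private boolean provisioned;

            String createAccount(String email) { return "acct-" + email; }

            void provisionDevice(String accountId, String macAddress) { provisioned = true; }

            boolean canPlaceCall(String accountId) { return provisioned; }
        }

        @Test
        public void newSubscriberCanPlaceACallAfterProvisioning() {
            ProvisioningService service = new ProvisioningService();

            // Story: create the new user's account and provision their hardware.
            String account = service.createAccount("jane.doe@example.com");
            service.provisionDevice(account, "00:11:22:33:44:55");

            // Acceptance criterion: a provisioned subscriber can place a call.
            assertTrue(service.canPlaceCall(account));
        }
    }

Because tests of this style were automated, the suite could be re-run cheaply against the pre-production systems and at sprint demos.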
The practice of developers writing QA tests was an issue for quite some time. The QA members of the team were viewed (at the beginning of the project) as "tainted", and external QA was thought to be a "requirement" for quality. In the middle of the project (5 or 6 months in) the sheer size of the automated acceptance test suite became a maintenance and cost issue for the small QA group embedded in the teams, who also had to work on continuing stories. More and more responsibility for maintaining the suites was given to developers and shared across the teams.
The practice of not doing speculative work ("because we'll need it later") took many of the Business Analyst and Product Owner staff a while to be OK with. Many of the new dev team members also had to be cautioned against this. This became the DB write-only data issue: if we don't need to read that data yet... consider it written - it's DONE.
The practice of not breaking the build took a very long time to sink in. There had been an existing practice of breaking the builds of existing projects, viewed as OK by the current staff. This view was allowed to become "standard" practice, and for quite a few sprints there were days when the apps couldn't be built. It took quite a few months of pointing out the lost time, risk, and energy spent fixing the builds before the attitude that breaking the build was OK appeared to die out. It would have been much easier to institute that policy if the original core development team had believed in it from the beginning. So even with a good CI build box, it is the people and attitudes that matter.

1.12. Tell us about stabilization. For how long did the software have to be stabilized before it could be released? How did you structure this stabilization process?

We did new development for about 9 months, then about 5 weeks of stabilization/integration on production hardware and network. Production hardware was viewed as too expensive to bring in (purchase/setup) before it was "needed". In hindsight we wished we had had mock-production hardware in a mock-production network. Many of the stabilization/integration issues had to do not with software problems but with the networking systems - for example, firewall issues between servers and network open-connection timeout issues; the sort of things the development team took for granted as too low level to be a problem. The network/systems engineers had a different belief system about these issues than the software development teams; this led to the largest issues and the most finger pointing. Had we had mock hardware to start testing on earlier, these issues and differences could have been ironed out in a less stressful environment.
To adapt to the apparent need for quicker releases, we changed from a 3 week sprint cycle to a 1 week sprint cycle during the 5 week integration period. It turned out there were far fewer application releases than anticipated, as few issues were software bugs; most were system/software configuration issues. The quicker pace also produced unneeded stress on the people. However, the 5 week time frame reserved with no new feature development (just bug fixes) was a smart planning move. I believe that with mock hardware/network it could have been reduced, but other factors such as work load (some temp QA staff were brought in to run manual acceptance test suites, etc.) would have been impacted. We went back to the 3 week sprint cycle after this stabilization period, when working on the 1.1 release.

1.13. Discuss success. To what degree was the project successful? To what degree was Scrum instrumental in the project's success?

The original project scope and deadline were very unrealistic; Scrum allowed the PO and org management to understand this was not feasible and to envision what would be feasible. This didn't happen at the beginning of the project but about 3 months in. Scrum's empirical tracking and estimating allowed management to predict that the project would not be completed by the unrealistic date, and then to predict when a smaller project could be completed with a larger team. Scrum then allowed the PO a way of reducing scope to the "minimum releasable" feature set. Scrum allowed the PO to change what was important to the goal of a release during different "phases" of development. For example, at one phase it was important to work on user interface and usability stories, but after the usability testing was done and a few issues were fixed, the focus changed to billing system stories. Reprioritizing the backlog allowed the PO to change what the dev teams worked on and to communicate why that was important.
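As a simple illustration of that empirical forecasting, here is a minimal sketch; the velocity figure is hypothetical, and only the roughly 750-point release plan size comes from the project described above.

    // A minimal sketch of velocity-based release forecasting; the velocity is
    // a hypothetical figure, only the ~750-point release plan size comes from
    // the project described above.
    public class ReleaseForecast {
        public static void main(String[] args) {
            int remainingStoryPoints = 750; // groomed release plan
            int averageVelocity = 60;       // hypothetical points completed per sprint
            int sprintLengthWeeks = 3;      // our sprint length

            int sprintsRemaining =
                    (int) Math.ceil((double) remainingStoryPoints / averageVelocity);
            System.out.printf("Forecast: %d sprints (about %d weeks) to release%n",
                    sprintsRemaining, sprintsRemaining * sprintLengthWeeks);
        }
    }

Recomputing that forecast every sprint from the observed velocity is what made the "when could a smaller project ship" conversation possible.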
Whether the project is viewed by the company as successful, I don't know. The product made it to the market, but not in the timeframe or with the features originally desired. However, it did make it to market in the predicted timeframe (which was very important) and with the predicted features. It was pulled from the market, so it did not succeed in the marketplace. Would it have fared better with any other development methodology? No, not at all. Was the market failure Scrum's fault? No, not at all. The product's viability in the market had been questioned all along. Scrum, however, was seen as a winning software development framework by the organization, and they continue to use it.

1.14. Discuss the Scrum Process on this project. To what degree was the Scrum process implemented "out of the box?" To what degree did you have to modify the Scrum process for this project? For each modification, how did you formulate the modification so that the basic inspect/adapt mechanisms continued to function? What parts of Scrum couldn't be implemented, or failed, and why?

I believe the process was very close to the "box instructions". We did a 3 week sprint (not monthly), and our cross-functional teams did not include everyone required, but those people were very close to the teams and collocated in the same space. We used "team leads" per team and one Scrum Master for all teams, and that worked very well. The Scrum Master was present at all super-team functions and always available to the team leads. The team leads facilitated the "Scrum Master" role when he was not present, and sometimes when he was present. We had the typical meetings (planning parts A & B, daily Scrum stand-up, product review & retrospective). We included backlog grooming meetings and release planning meetings. Upon inspecting the testing needs for stabilization - needing to produce a releasable product once per week - we changed our process to a 1 week sprint and reduced the meeting overhead by shortening but not eliminating the planning/demo/retro meetings. We changed back when we realized that such frequent releases were not needed or practical and the teams were not sustainable at that pace.


2. How do you cause the accuracy of Product Backlog estimates to improve? To what degree does their accuracy matter?

Several ways: have multiple people estimate, have the estimates discussed, and allow the whole team to estimate. Allow time for the team to reconsider the estimate, and allow the team to preview the stories or do research on the stories or implementations before estimating. Develop domain knowledge in the product; by doing stories, the estimates for similar stories will improve. Track the performance of the team - measure the accuracy of the estimates.
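As one sketch of what tracking estimate accuracy could look like, the following compares committed and completed story points per sprint; the numbers are illustrative only, not data from the project described above.

    // A minimal sketch of tracking committed versus completed story points per
    // sprint as one rough measure of estimate accuracy; the numbers below are
    // illustrative, not data from the project described above.
    public class EstimateAccuracy {
        public static void main(String[] args) {
            int[] committed = {55, 60, 58, 62};
            int[] completed = {48, 57, 58, 60};

            for (int i = 0; i < committed.length; i++) {
                double pct = 100.0 * completed[i] / committed[i];
                System.out.printf("Sprint %d: committed %d, completed %d (%.0f%%)%n",
                        i + 1, committed[i], completed[i], pct);
            }
        }
    }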
Accuracy matters because people want to be correct. The team will not be happy with inaccurate estimates; someone will desire improvement. Inaccurate estimates will result in the team taking on too little or too much, both of which create inefficient story progress through the work queue. The accuracy will be reflected in the team's velocity, or lack of a steady velocity. It will matter to the PO, who would like to be able to predict releases and what features can be expected.


3. How do you ensure that what a team commits to for a Sprint is what the team actually delivers?

Make the commitment visible to all - write it down. Make sure both the team and the PO understand the stories, the goal of the sprint, and how each story helps achieve the goal. Understand the acceptance criteria of the stories. Make sure that the team tracks progress toward the sprint goal and the stories, and make sure that the team's work during the sprint is targeted toward the sprint goal/stories - not at distractions (other work).
Acceptance tests ensure that the understood commitments are working at the end of a sprint. Run some or all of the acceptance tests (if automated, it is easy) during the sprint demo.

4. What metrics do you use to track the development process? Which metrics have been changed, removed, or newly implemented as a result of using Scrum?

The release burn-down charts, and stories completed from the backlog versus stories in the backlog (added/removed stories). Cost of the development team per sprint. On a recent project I've used Agile EVM in addition to release plans to track progress, and the client found this helpful. Frequent releases allow the client and/or end users to track progress - it becomes much more visible, an implicit metric of success.
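For readers unfamiliar with Agile EVM, here is a minimal sketch of one common formulation (earned value derived from story points completed, planned value from sprints elapsed); all of the budget and progress numbers are illustrative, not figures from any client project.

    // A minimal sketch of AgileEVM-style metrics in one common formulation;
    // all numbers here are illustrative, not figures from a real project.
    public class AgileEvmSketch {
        public static void main(String[] args) {
            double budgetAtCompletion = 1_000_000; // planned release budget (BAC)
            int plannedSprints = 13, sprintsCompleted = 5;
            int plannedPoints = 750, completedPoints = 260;
            double actualCost = 420_000;           // spend to date (AC)

            double plannedValue = (double) sprintsCompleted / plannedSprints * budgetAtCompletion;
            double earnedValue  = (double) completedPoints / plannedPoints * budgetAtCompletion;

            double cpi = earnedValue / actualCost;   // > 1 means under budget so far
            double spi = earnedValue / plannedValue; // > 1 means ahead of the release plan

            System.out.printf("PV=%.0f EV=%.0f CPI=%.2f SPI=%.2f%n",
                    plannedValue, earnedValue, cpi, spi);
        }
    }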

5. What type of training, resources, or tools would best help you successfully employ Scrum in the future?

A world/land in Second Life that was running Scrum - i.e., a simulation that could be experienced by the team and stakeholders, which would help them understand how empirical process controls allow the possible to happen in improbable situations.

6. Describe the largest impediments you have encountered and how you have resolved it (or not!)

On a newly forming team, the concept of not breaking the build was undermined by habits carried over by some of the core members from previous groups in which the rule "don't break the build" was not followed. For quite a few sprints the teams would struggle with hours and sometimes days of having the build broken and not functional in some way. I discussed, preached, harassed, etc. to get the team to improve on this, but with the core team members setting a tone of "well, it was easier to let it break and then fix it later", it was hard to overcome this poor practice. I decided to make the issue a bit more visible, so at a few retrospectives I announced the amount of time the build was broken and the number of times, stats derived from the build machine. Shockingly, the build was broken something like 18-20 times in a 15 day sprint, with the mean time to fix in the several-hour range (I don't remember the exact figures now). Reporting these stats at the retro started to get some attention; more people started to pay attention to the build and its state (working/broken). We got a build monitor for the team room (this helped in identifying when it was broken), and this reduced the mean time to fix. However, the practice of breaking the build (number of occurrences) was still quite high. Getting the QA team members to loudly complain when the latest build would not function for their testing helped. At one retrospective I led the 25 person team in an initiative (group game) to experience an analogy to total team throughput when defects stop the process, and to discover who is best suited to fixing the defects (responsibility). Within a few more sprints of continual improvement we got to a point where the build was rarely broken.
However, this experience taught me that sometimes it is very important to fight early for the team norms I believe in, since changing norms later is much harder.
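For illustration, the build-machine stats reported at those retrospectives amounted to something like the following sketch; the break/fix timestamps here are made up, not our actual CI data.

    import java.time.Duration;
    import java.time.LocalDateTime;
    import java.util.List;

    // A minimal sketch of deriving broken-build stats (count and mean time to
    // fix) from break/fix timestamps; the timestamps are made up, not real
    // data from our CI build box.
    public class BuildBreakStats {

        record Breakage(LocalDateTime broken, LocalDateTime fixed) {}

        public static void main(String[] args) {
            List<Breakage> sprint = List.of(
                    new Breakage(LocalDateTime.parse("2008-05-05T10:15"),
                                 LocalDateTime.parse("2008-05-05T14:40")),
                    new Breakage(LocalDateTime.parse("2008-05-07T09:05"),
                                 LocalDateTime.parse("2008-05-07T11:30")),
                    new Breakage(LocalDateTime.parse("2008-05-12T16:20"),
                                 LocalDateTime.parse("2008-05-13T10:05")));

            long totalMinutes = sprint.stream()
                    .mapToLong(b -> Duration.between(b.broken(), b.fixed()).toMinutes())
                    .sum();

            System.out.printf("Build broken %d times; mean time to fix %.1f hours%n",
                    sprint.size(), totalMinutes / 60.0 / sprint.size());
        }
    }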

7. Describe how you have worked with other ScrumMasters to advance the use of Scrum within an organization and within the community.

On the project I'm writing about we had 3 or 4 CSMs; many times we would have lunch to discuss the project, the issues, and possible tacks to resolve those issues. By the end of the project (approx. 1 year) the organization had adopted Scrum as a development practice and was very happy with the process and the success of the project. They even had non-development groups wishing to adopt Scrum.
Within the community I participate in various user groups (XP, Scrum, Beyond Agile), attend and participate in lectures by various Agile luminaries, read and post to various Agile discussion groups online, and have started blogging on AgileIQ.

Comments

Andrej Ruckij said…
here are my thoughts about general question #3 - http://www.agilemindstorm.com/2010/06/how-do-you-ensure-that-commitments-are.html