tP week 10: mid-v1.3

  1. Do a postmortem of the previous iteration
  2. Adjust process rigor if necessary
  3. Start the next iteration
  4. Update the DG with design details
  5. Smoke-test CATcher COMPULSORY
  6. Do a trial JAR release

How much is enough to get full marks? Not surprisingly, a common question tutors receive around this time of the project is "can you look at our project and tell us if we have done enough to get full marks?". Here's the answer to that question:

The tP effort is graded primarily based on peer judgements (tutor judgements are used too). That means you will be judging the effort of another team later, which also means you should be able to make a similar judgement about your own project now. While we understand that estimating effort is hard for software projects, it is an essential SE skill, and we must practice it when we can.

The expected minimum bar to get full marks for effort:

  • For the team: an effort equivalent to the effort required to develop AB3 from scratch
  • For an individual: an effort equivalent to the effort the iP required

If you surpass the above bars (in your own estimation), you should be in a good position to receive full marks for the effort. But keep in mind that there are many other components in the tP grading, not just the effort.

1 Do a postmortem of the previous iteration

  • Discuss with the team how the iteration went (i.e., what worked well, what didn't), and your plans to improve the process (not the product) in the next iteration.
  • Keep notes about the discussion in your shared project notes document so that the tutor can check them later.

2 Adjust process rigor if necessary

  • Adjust process rigor, as explained in the panel below:

3 Start the next iteration

This iteration is your last chance to add features as a strict feature-freeze will be enforced in the next iteration (i.e., v1.4). That iteration (which is shorter than usual) is reserved for bug fixing and documentation work only. Any non-compliance with that restriction will be penalized. In other words, in terms of product design and implementation, treat this iteration as creating the final version of the product.

The version you deliver in this iteration (i.e., v1.3) will be subjected to peer testing (aka the PE Dry Run), and you will be informed of the bugs the testers find (no penalty for those bugs). The same peer testers will later be asked to check whether you have changed features during the feature freeze.

Given that you'll have to make important feature decisions in this iteration, it may be useful to know what kinds of feature flaws can cost you marks (you will not be allowed to fix feature flaws while a feature-freeze is in force). The panel below contains some excerpts from the guidelines your peers will use to identify feature flaws in your product after the final submission.

During the feature freeze, you will not be allowed to tweak even things such as error messages. The panel below gives some details on the feature-freeze that will be imposed in v1.4, to prepare you in advance:

As you did in the previous iteration,

  • Plan the next iteration (steps are given below as a reminder):
    • Decide which enhancements will be added to the product in this iteration, assuming this is the last iteration.
    • If possible, split that into two incremental versions v1.3 and v1.3b.
    • Divide the work among team members.
    • Reflect the above plan in the issue tracker.
  • Start implementing the features as per the plan made above.
  • Track the progress using GitHub issue tracker, milestones, labels, etc.

In addition,

  • Maintain the defensiveness of the code: Remember to use assertions, exceptions, and logging in your code, as well as other defensive programming measures when appropriate (a minimal sketch is given after this list).
    Remember to enable assertions in your IntelliJ IDEA run configurations and in the Gradle build file.
  • Recommended: Each PR should also update the relevant parts of the documentation and tests. That way, your documentation/testing work will not pile up towards the end.
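To make the defensive-programming point above concrete, here is a minimal sketch of how assertions, logging, and exceptions can work together in one method. This is not code from AB3; the class and method names are made up for illustration.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

/** Illustrative example only -- names are hypothetical, not from AB3. */
public class TaskDurationParser {
    private static final Logger logger = Logger.getLogger(TaskDurationParser.class.getName());

    /**
     * Parses a duration given in minutes, e.g., "90".
     *
     * @throws IllegalArgumentException if the input is not a positive integer.
     */
    public static int parseMinutes(String input) {
        // Assertion: documents an assumption about internal callers (not a user-input check).
        assert input != null : "callers are expected to pass a non-null string";

        // Logging: records what the code is doing, to help with debugging later.
        logger.log(Level.INFO, "Parsing duration: {0}", input);

        try {
            int minutes = Integer.parseInt(input.trim());
            if (minutes <= 0) {
                // Exception: signals invalid input to the caller instead of failing silently.
                throw new IllegalArgumentException("Duration must be a positive number of minutes.");
            }
            return minutes;
        } catch (NumberFormatException e) {
            logger.log(Level.WARNING, "Invalid duration received: {0}", input);
            throw new IllegalArgumentException("Duration must be a whole number, e.g., 90.", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(parseMinutes("90")); // prints 90
    }
}
```

Note that assertions have no effect unless the JVM is started with assertions enabled (the -ea flag), which is what the IntelliJ IDEA run configuration and Gradle settings mentioned above take care of.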

4 Update the DG with design details

This task is time-sensitive. If done later than the deadline, it will not be counted as 'done' (i.e., no grace period). Reason: This is 'an early draft'; if done late, it is the 'final version' already.

You are discouraged from moving sections currently in DeveloperGuide.md to additional markdown files. Reasons: 1. You need to submit the DG as a single PDF file at the end of the semester. 2. When checking DG-related tP increments, we only check your contributions to that file.

A similar requirement applies to the UserGuide.md too.

FAQ: Why not wait till the end to write documentation?


  • Update the Developer Guide as follows:
    • Each member should describe the implementation of at least one enhancement she has added (or is planning to add).
      Expected length: 1+ page per person
    • The description can contain things such as,
      • How the feature is implemented (or is going to be implemented).
      • Why it is implemented that way.
      • Alternatives considered.
The following tips are from Admin tP Deliverables → DG → Tips:
  • Aim to showcase your documentation skills. The primary objective of the DG is to explain the design/implementation to a future developer, but a secondary objective is to serve as evidence of your ability to document deeply-technical content using prose, examples, diagrams, code snippets, etc. appropriately. To that end, you may also describe features that you plan to implement in the future, even beyond v1.4 (hypothetically).
    For an example, see the description of the undo/redo feature implementation in the AddressBook-Level3 developer guide.
  • Use multiple UML diagram types. Following from the point above, try to include UML diagrams of multiple types to showcase your ability to use different UML diagrams.
  • Diagramming tools:
    • AB3 uses PlantUML (see the guide Using PlantUML @SE-EDU/guides for more info).
    • You may use any other tool too (e.g., PowerPoint). But if you do, note the following:
      • Choose a diagramming tool that has some 'source' format that can be version-controlled using git and updated incrementally (reason: diagrams need to evolve with the code, which is already being version-controlled using git). For example, if you use PowerPoint to draw diagrams, also commit the source PowerPoint files so that they can be reused when updating the diagrams later.
      • Use the same diagramming tool for the whole project, except in cases for which there is a strong need to use a different tool due to a shortcoming in the primary diagramming tool. Do not use a mix of different tools simply based on personal preferences.
    • Can UML diagrams reverse-engineered by a tool (e.g., an IDE) be used in project submissions? Not a good idea. Given below are three reasons; each of them can be reported by evaluators as 'bugs' in your diagrams, costing you marks:
      • They often don't follow the standard UML notation (e.g., they add extra icons).
      • They tend to include every little detail whereas we want to limit UML diagrams to important details only, to improve readability.
      • Diagrams reverse-engineered by an IDE might not represent the actual design as some design concepts cannot be deterministically identified from the code e.g., differentiating between multiplicities 0..1 vs 1, composition vs aggregation.
  • Keep diagrams simple. The aim is to make diagrams comprehensible, not necessarily comprehensive.
    Ways to simplify diagrams:
    • Omit less important details. Examples:
      • a class diagram can omit minor utility classes, private/unimportant members; some less-important associations can be shown as attributes instead.
      • a sequence diagram can omit less important interactions, self-calls.
    • Omit repetitive details e.g., a class diagram can show only a few representative ones in place of many similar classes (note how the AB3 Logic class diagram shows concrete *Command classes using a placeholder XYZCommand).
    • Limit the scope of a diagram. Decide the purpose of the diagram (i.e., what does it help to explain?) and omit details not related to it. In particular, avoid showing lower-level details of multiple components in the same diagram unless strictly necessary e.g., note how this sequence diagram shows only the detailed interactions within the Logic component i.e., it does not show detailed interactions within the Model component.
    • Break diagrams into smaller fragments when possible.
      • If a component has a lot of classes, consider further dividing into sub-components (e.g., a Parser sub-component inside the Logic component). After that, sub-components can be shown as black-boxes in the main diagram and their details can be shown as separate diagrams.
      • You can use ref frames to break sequence diagrams to multiple diagrams. Similarly, rakes can be used to divide activity diagrams.
    • Stay at the highest level of abstraction possible e.g., note how this sequence diagram shows only the interactions between architectural components, abstracting away the interactions that happen inside each component.
    • Use visual representations as much as possible. E.g., show associations and navigabilities using lines and arrows connecting classes, rather than adding a variable in one of the classes.
    • For some more examples of what NOT to do, see here.
  • Integrate diagrams into the description. Place the diagram close to where it is being described.
  • Use code snippets sparingly. The more code snippets you use in the DG, and the longer they are, the higher the risk of them becoming outdated quickly. Instead, use code snippets only when necessary, and cite only the strictly relevant parts. You can also use pseudo code instead of actual programming code.
  • Resize diagrams so that the text size in the diagram matches the text size of the main text of the document. See example.

5 Smoke-test CATcher COMPULSORY

  • This activity is compulsory and counts for 3 participation points. Please do it before the weekly deadline.

Some background: As you know, our PE (Practical Exam) includes peer-testing tP products under exam conditions. In the past, we used GitHub as the platform for that -- which was not optimal (e.g., it was hard to ensure the compulsory labels had been applied). As a remedy, some ex-students have been developing an app called CATcher that we'll be using for the PE this semester.

This week, we would like you to smoke-test the CATcher app to ensure it works with your OS, browser, and GitHub account, by following the steps given in the panel below.

  • [Heads up] Load-testing CATcher will be done during the first 15 minutes of the upcoming lecture (Fri, Oct 27th). This is different from the smoke-testing you did above, and it will count for participation separately.
    Therefore, remember to attend the live lecture (via Zoom or F2F) for at least the first 15 minutes (this activity cannot be done at any other time).

6 Do a trial JAR release

This task is time-sensitive. If done later than the deadline, it will not be counted as 'done' (i.e., no grace period). Reason: This is 'an early draft'; if done late, it is the 'final version' already.

  • Do a trial JAR release as described in the Developer Guide. You can name it something like v1.2.1 (or v1.3.trial). Ensure that the JAR file works as expected in an empty folder and using Java 11, by doing some manual testing. Reason: You are required to do a proper product release for v1.3. Doing a trial at this point will help you iron out any problems in advance. It may take additional effort to get the JAR working, especially if you use third-party libraries or additional assets such as images (see the sketch after this list).
  • If you want to smoke-test your JAR file on an OS that is not available within your team, you can post a request in the forum to see if anyone else in the class can help you smoke-test it on that OS.
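A common reason a JAR that works during development fails when run from an empty folder is that assets (e.g., images) are loaded via file paths instead of from the classpath. The sketch below is a quick, self-contained way to check that an asset is read from the classpath (and hence will also be found inside the JAR); the class name and asset path are hypothetical, so substitute your own.

```java
import java.io.InputStream;

/** Illustrative check only -- the asset path below is a placeholder. */
public class ResourceCheck {
    public static void main(String[] args) throws Exception {
        // Inside a JAR, assets must be read from the classpath; file paths such as
        // new File("src/main/resources/images/logo.png") only work in the source tree.
        try (InputStream in = ResourceCheck.class.getResourceAsStream("/images/logo.png")) {
            if (in == null) {
                System.out.println("Asset not found on the classpath -- it may be missing from the JAR.");
            } else {
                System.out.println("Asset found -- it should also be available when packaged into the JAR.");
            }
        }
    }
}
```

If an asset is found when running from the IDE but not from the JAR, also confirm that the file is actually packaged into the JAR (e.g., by listing the JAR's contents with jar tf yourApp.jar).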

