AI Operations Summary

Purpose

This document summarizes the review experience and operational patterns observed during AI-assisted development on the Offshore Campaign Service project. Steering improvement recommendations are included at the end for reference.


Code Review Role Transition

Role Change: Code Writer → Code Reviewer

Code Review Support Guidelines

When assisting with code reviews, the AI should:

  1. Pre-Review Analysis

    • Read and summarize code changes before review
    • Identify potential issues against design specs
    • Check alignment with requirements and acceptance criteria
    • Verify adherence to code standards
  2. Review Checklist Generation

    • Create task-specific review checklists
    • Reference relevant requirements and design sections
    • Highlight critical areas needing attention
  3. Spec Alignment Verification

    • Compare implementation against design document
    • Verify correctness properties are tested
    • Check for deviations from intended architecture
  4. Standards Compliance

    • Verify code follows project standards (code-standards.md)
    • Check for hardcoded values, magic strings
    • Verify constant usage and naming conventions
    • Check file size limits (150 source / 300 test lines)
  5. Test Coverage Analysis

    • Review test completeness against acceptance criteria
    • Check for missing edge cases and error scenarios
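The size-limit portion of step 4 is straightforward to automate. A minimal sketch (the 150/300 limits come from code-standards.md; the class and method names here are illustrative, not project code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Minimal sketch of the step-4 size check; limits are taken from code-standards.md.
public class FileSizeCheck {
    static final int SOURCE_LIMIT = 150; // max lines per source file
    static final int TEST_LIMIT = 300;   // max lines per test file

    static boolean withinLimit(Path file, boolean isTest) throws IOException {
        long lines = Files.readAllLines(file).size();
        return lines <= (isTest ? TEST_LIMIT : SOURCE_LIMIT);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("Sample", ".java");
        Files.write(tmp, List.of("class Sample {", "}"));
        System.out.println(withinLimit(tmp, false)); // 2 lines, well under 150
    }
}
```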

Code Review Checklist Template

Design Alignment:

  • Implementation matches design document specifications
  • No redundant logic across layers
  • Correctness properties are validated with tests

Code Quality:

  • No hardcoded configuration values
  • String literals extracted to constants where appropriate
  • File sizes within limits (150 source / 300 test)
  • Methods are focused and single-purpose

Documentation:

  • Javadoc is accurate and complete
  • HTML entities properly encoded
  • Comments reflect actual behavior

Testing:

  • All acceptance criteria have corresponding tests
  • Edge cases and error scenarios covered
  • Tests use constants, not string literals

Build & Standards:

  • Build passes without errors or warnings
  • No git merge conflict markers
  • Imports are organized and unused imports removed

Person’s Review Responsibilities

Role: Reviewer of Requirements, Design, Tasks, and CRs

The person (developer) is responsible for reviewing all artifacts that the AI produces or prepares to ensure quality. The AI generates drafts and surfaces issues, but the person owns the final approval at every stage. This review gate exists because the person understands the business context, system constraints, and production impact that the AI cannot fully assess on its own.

Review Workflow

AI generates/updates → Person reviews → Person approves or requests changes → AI incorporates feedback

1. Requirements Review

The person reviews requirements.md (or Quip/LLD source) for:

  • Completeness: All user stories have clear acceptance criteria
  • Correctness: Requirements accurately reflect the intended behavior
  • Feasibility: Requirements are technically achievable within the codebase
  • Edge cases: Error scenarios and boundary conditions are covered
  • Correctness properties: Formal properties are defined and testable

When problems are found: Direct the AI to update requirements.md. Changes cascade to design.md and tasks.md.

2. Design Review

The person reviews design.md for:

  • Requirements coverage: Every requirement maps to a design element
  • Architecture fit: Design follows existing project patterns and conventions
  • Data model correctness: Models support all required operations
  • Component interactions: APIs, contracts, and dependencies are clearly defined
  • Error handling: Failure modes are addressed

When problems are found: Direct the AI to update design.md. Check if requirements.md needs adjustment, then cascade to tasks.md.

3. Task Review

The person reviews tasks.md for:

  • Design coverage: Every design component has implementation tasks
  • Dependency ordering: Tasks are sequenced correctly
  • CR sizing: Each task fits within line limits (150 source / 300 test)
  • Independent buildability: Each task can build and pass tests on its own
  • Realistic estimates: Line estimates match actual complexity

When problems are found: Direct the AI to update tasks.md. Verify alignment with design.md still holds.

4. CR Review

The person reviews each CR for:

  • Spec alignment: Implementation matches requirements and design
  • Code quality: Follows project standards (code-standards.md)
  • Test coverage: Acceptance criteria have corresponding tests
  • Size compliance: Within CR line limits
  • Build status: Passes build and all tests

When problems are found: Direct the AI to fix the code, then re-review.

Cross-Document Consistency (Person Verifies)

At any review stage, the person checks:

  • Requirements → Design: No orphaned or missing design elements
  • Design → Tasks: No orphaned or missing tasks
  • Tasks → CRs: Tasks map to CRs within size limits
  • No contradictions across documents
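Each of these coverage checks reduces to a set difference: collect the IDs on one side of the mapping and report anything the other side does not cover. A minimal sketch, with hypothetical requirement IDs and design-element names:

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of the Requirements -> Design coverage check: every requirement ID
// should appear as a key in the design mapping. IDs and names are hypothetical.
public class CoverageCheck {
    static Set<String> uncovered(Set<String> requirements, Map<String, String> designByRequirement) {
        return requirements.stream()
                .filter(r -> !designByRequirement.containsKey(r))
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Set<String> reqs = Set.of("REQ-1", "REQ-2", "REQ-3");
        Map<String, String> design = Map.of("REQ-1", "WidgetModel", "REQ-2", "WidgetValidator");
        System.out.println(uncovered(reqs, design)); // REQ-3 has no design element
    }
}
```

The same helper, with the arguments swapped to the appropriate ID sets, covers the Design → Tasks and Tasks → CRs directions.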

Review Checklist Template

Requirements:

  • User stories are complete with acceptance criteria
  • Edge cases and error scenarios documented
  • Correctness properties defined and testable

Design:

  • Every requirement has a corresponding design element
  • Data models and APIs are correct
  • Error handling covers all failure modes

Tasks:

  • Every design element has implementation tasks
  • Dependencies ordered correctly
  • CR sizing within limits

CRs:

  • Implementation matches spec
  • Code standards followed
  • Tests cover acceptance criteria
  • Build passes

CR Planning: Task-to-CR Mapping with Line Estimation

CR Line Limits (from code-standards.md)

Category   Max Lines per CR
SOURCE     150 lines
TEST       300 lines
CONFIG     150 lines

Line Estimation Guidelines

Source Code Estimates:

  • Data model class (Lombok): ~30-50 lines
  • Utility/helper class: ~80-120 lines
  • Validator class: ~80-100 lines
  • DAO modification (add method): ~20-40 lines
  • Service/component modification: ~30-60 lines
  • Constants addition: ~5-10 lines

Test Code Estimates:

  • Model serialization tests: ~80-120 lines
  • Utility class tests: ~120-180 lines
  • Validator tests: ~80-120 lines
  • DAO integration tests: ~100-150 lines
  • Component tests: ~150-200 lines

CR Grouping Strategy

Rule 1: Merge small tasks into one CR

  • If Task A (50 source + 100 test) and Task B (80 source + 150 test) are related, merge into one CR (130 source + 250 test) ✅

Rule 2: Split large tasks across CRs

  • If Task C has 200 source lines, split by layer:
    • CR-C1: Data model + constants (80 source + 100 test)
    • CR-C2: Business logic (120 source + 200 test)

Rule 3: Keep dependencies in order

  • CR-1 must merge before CR-2 if CR-2 depends on CR-1

Rule 4: Group by reviewability

  • One logical change per CR
  • Each CR should build and pass tests independently

Multi-Person Task Allocation

When cooperating with another person on the same feature, use the AI to identify which tasks are independent so they can be worked on in parallel.

Step 1: AI Dependency Analysis

  • After tasks.md is finalized, ask the AI: “Which tasks are independent and can be worked on in parallel?”
  • The AI analyzes task dependencies and produces a dependency graph
  • Tasks with no shared dependencies are candidates for parallel work

Step 2: Classify Tasks

Category      Description                               Example
Independent   No dependency on other incomplete tasks   Data model + tests; constants addition
Sequential    Must wait for another task to complete    Service logic that depends on data model
Shared        Touches the same files as another task    Two tasks modifying the same DAO class
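The Independent/Sequential split can be computed mechanically from a dependency map (a sketch only; shared-file detection is not modeled here, and the task names and dependencies are hypothetical):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the Step 2 classification: a task is Independent when none of its
// declared dependencies is still incomplete.
public class TaskClassifier {
    static String classify(String task, Map<String, Set<String>> deps, Set<String> completed) {
        Set<String> remaining = deps.getOrDefault(task, Set.of());
        return remaining.stream().allMatch(completed::contains) ? "Independent" : "Sequential";
    }

    public static void main(String[] args) {
        Map<String, Set<String>> deps = Map.of(
                "Task 1", Set.of(),                     // data model + constants
                "Task 2", Set.of(),                     // validator + tests
                "Task 3", Set.of("Task 1", "Task 2"));  // service logic
        for (String t : List.of("Task 1", "Task 2", "Task 3")) {
            System.out.println(t + ": " + classify(t, deps, Set.of()));
        }
    }
}
```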

Step 3: Allocation Rules

  • Assign independent tasks to different people for parallel execution
  • Keep sequential tasks with the same person to avoid blocking
  • Avoid assigning shared-file tasks to different people (merge conflicts)
  • Each person’s task set should be independently buildable

Step 4: CR Coordination

  • Each person creates CRs for their own tasks
  • Merge independent CRs in any order
  • Sequential CRs follow dependency order regardless of who owns them
  • Communicate when a blocking CR is merged so the dependent person can proceed

Example:

Task 1: Add data model + constants (Independent)     → Person A
Task 2: Add validator + tests (Independent)           → Person B
Task 3: Add service logic (Depends on Task 1 & 2)    → Person A (after both merge)
Task 4: Add API integration (Depends on Task 3)      → Person B (after Task 3 merges)

UT Line Optimization Techniques

1. Consolidate similar test cases

// One focused test covers every invalid-input variant in a single assert each
@Test void testInvalidInputs() {
    assertNull(method(null));
    assertNull(method(""));
    assertNull(method("  "));
}

2. Extract test data helpers

WidgetInfo w1 = createWidget("id1", 1);
WidgetInfo w2 = createWidget("id2", 2);
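The createWidget factory referenced above is not shown in the snippet; a hypothetical version might look like this (WidgetInfo here is a stub standing in for the real project type, which the document elsewhere describes as Lombok-based):

```java
// Stub standing in for the project's WidgetInfo data model.
class WidgetInfo {
    private String id;
    private int priority;
    void setId(String id) { this.id = id; }
    void setPriority(int priority) { this.priority = priority; }
    String getId() { return id; }
    int getPriority() { return priority; }
}

// Hypothetical test-data factory: one line per fixture in each test instead of
// repeating several setter calls.
class WidgetTestData {
    static WidgetInfo createWidget(String id, int priority) {
        WidgetInfo widget = new WidgetInfo();
        widget.setId(id);
        widget.setPriority(priority);
        return widget;
    }
}
```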

3. Batch assertions

// assertAll reports every failed assertion at once instead of stopping at the first
assertAll(
    () -> assertEquals(1, result.size()),
    () -> assertEquals("expected", result.get(0).getName())
);

4. Use parameterized tests for variations

@ParameterizedTest
@NullAndEmptySource                        // covers the actual null and "" cases
@ValueSource(strings = {"HIDE", "DISPLAY"})
void testStatusValues(String status) { /* one body covers all variants */ }

5. Target line counts per test class

  • Aim for ~200 lines per test class (leaves buffer under 300)
  • If approaching 280+, split by test category