[Feat] Api Limit — daily call-quota guard for external APIs + cumulative cost alerts #69
📝 Walkthrough

Adds per-external-API daily quota checks, call-count aggregation, cost summation, and KRW-denominated alerts, and integrates them into the video-analysis queue consumer and the external API adapters as guards, counters, and notifications. Related DDL, JPA/QueryDSL adapters, configuration properties, and tests are added as well.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Client
    participant Adapter as External API Adapter<br/>(YouTube/Gemini/GooglePlaces)
    participant Guard as ApiQuotaGuardService
    participant External as External API
    participant Counter as ApiCallCounterService
    participant Persist as ApiCallCountPersistencePort
    participant Notifier as NotificationPort
    Client->>Adapter: Request
    Adapter->>Guard: isExceeded(apiType)?
    alt Quota exceeded
        Guard-->>Adapter: true
        Adapter-->>Client: Exception (BAD_GATEWAY_...)
    else Allowed
        Guard-->>Adapter: false
        Adapter->>External: HTTP call
        External-->>Adapter: Success response
        Adapter->>Counter: recordSuccess(apiType)
        Counter->>Persist: increment / findAllByDate
        Persist-->>Counter: Return current counts
        Counter->>Counter: Sum up cost
        alt Threshold crossed
            Counter->>Notifier: sendCostAlert(CostAlertEvent)
            Notifier-->>Counter: Send result
            Counter->>Persist: upsert latest threshold
        end
        Adapter-->>Client: Success response
    end
```
```mermaid
sequenceDiagram
    participant QueueConsumer as VideoAnalysisQueueConsumer
    participant Guard as ApiQuotaGuardService
    participant Queue as InMemory PriorityBlockingQueue
    loop Consume loop
        QueueConsumer->>Guard: isAnyApiExceeded()?
        alt Any API quota exceeded
            Guard-->>QueueConsumer: true
            QueueConsumer->>QueueConsumer: wait(QUOTA_BACKOFF_MS)
        else Normal
            Guard-->>QueueConsumer: false
            QueueConsumer->>Queue: dequeue task
            QueueConsumer->>QueueConsumer: process task
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues
Possibly related PRs
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/YouTubeAdapter.kt (1)
33-51: ⚠️ Potential issue | 🟠 Major

Move the counting to request time, or split success/failure counts, instead of counting only after response deserialization.

YouTube Data API v3 charges quota on every request, including ones that return 4xx/5xx errors. The current code calls `recordSuccess()` only after deserialization via `body<YouTubeSearchResponse>()` succeeds, so failed responses and parse errors consume quota cost without being counted. Daily cost is therefore under-aggregated and the guard kicks in late.

At lines 50, 86, 136, 163, 200, and 247, consider adjusting the counting point, or tracking HTTP attempts and deserialization successes separately.
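The suggested attempt/success split can be sketched with an in-memory counter (hypothetical `recordAttempt`/`callCounted` names; the real counter persists per-day rows, so this is only an illustration of the counting points):

```kotlin
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicLong

// In-memory sketch: every HTTP attempt is counted (YouTube charges quota
// either way), while deserialization successes are tracked separately.
class SketchApiCallCounter {
    private val attempts = ConcurrentHashMap<String, AtomicLong>()
    private val successes = ConcurrentHashMap<String, AtomicLong>()

    fun recordAttempt(apiType: String) {
        attempts.computeIfAbsent(apiType) { AtomicLong() }.incrementAndGet()
    }

    fun recordSuccess(apiType: String) {
        successes.computeIfAbsent(apiType) { AtomicLong() }.incrementAndGet()
    }

    fun attemptCount(apiType: String): Long = attempts[apiType]?.get() ?: 0L
    fun successCount(apiType: String): Long = successes[apiType]?.get() ?: 0L
}

// Wraps a call so the attempt is recorded before the HTTP call, and the
// success only after deserialization completed without throwing.
fun <T> callCounted(counter: SketchApiCallCounter, apiType: String, call: () -> T): T {
    counter.recordAttempt(apiType)
    val result = call()
    counter.recordSuccess(apiType)
    return result
}
```

With this shape, a parse failure still bumps the attempt count that quota accounting should be based on, while the success count stays an accurate measure of usable responses.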
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/YouTubeAdapter.kt` around lines 33 - 51, The API call success is being recorded only after deserialization (body<YouTubeSearchResponse>()), so request attempts that return 4xx/5xx or fail to parse are not counted; change the counting logic in the YouTubeAdapter by moving or duplicating the call to apiCallCounterService.recordSuccess(ApiType.YOUTUBE_DATA) to the moment the HTTP request is made (e.g., immediately after youtubeRestClient.get().uri(...).retrieve() is invoked) or by adding a separate apiCallCounterService.recordFailure(ApiType.YOUTUBE_DATA) in the error handling path so that both request attempts and parse successes/failures are tracked; ensure updates reference guardOrThrow(), youtubeRestClient.get()/retrieve(), body<YouTubeSearchResponse>(), and apiCallCounterService.recordSuccess/recordFailure so the change is localized and consistent across the other listed call sites.

linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/VideoAnalyzeAdapter.kt (1)
120-133: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

The quota check and the count increment are separate operations, so concurrent requests can exceed the limit.

The code currently checks `isExceeded`, performs the actual call, and only records `recordSuccess` afterwards, so several requests at the quota boundary can all pass at once. Switching to an atomic check-and-increment ("call-slot reservation") API is safer.

💡 Suggested direction

```diff
- if (apiQuotaGuardService.isExceeded(ApiType.GEMINI)) {
-     throw LinktripException(ExceptionCode.BAD_GATEWAY_GEMINI)
- }
+ if (!apiCallCounterService.tryAcquireSlot(ApiType.GEMINI)) {
+     throw LinktripException(ExceptionCode.BAD_GATEWAY_GEMINI)
+ }
@@
- apiCallCounterService.recordSuccess(ApiType.GEMINI)
+ // success count is replaced by the slot reservation (or document a compensation policy on failure)
```

Also applies to: 171-185
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/VideoAnalyzeAdapter.kt` around lines 120 - 133, The quota check and success-record are non-atomic causing race conditions; change usage of apiQuotaGuardService.isExceeded(...) plus later apiCallCounterService.recordSuccess(...) to an atomic "reserve or acquire" API (e.g., apiQuotaGuardService.tryReserve(ApiType.GEMINI) returning boolean or throwing) so a request obtains a slot before calling client.models.generateContent, and release/finalize the reservation on success/failure via a matching method (e.g., apiQuotaGuardService.release or apiQuotaGuardService.finalize). Update both call sites that use ApiType.GEMINI (the block around client.models.generateContent and the second occurrence noted) to use the new atomic reserve/finalize flow and remove the separate isExceeded/recordSuccess pattern.
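The atomic check-and-increment this comment asks for can be sketched in-memory with a CAS loop (hypothetical `tryAcquireSlot`/`release` names; a DB-backed version would use a conditional UPDATE instead):

```kotlin
import java.util.concurrent.atomic.AtomicLong

// Sketch: a slot is reserved atomically, so concurrent callers at the quota
// boundary cannot all pass a separate isExceeded check before any increment lands.
class SketchQuotaSlot(private val dailyLimit: Long) {
    private val used = AtomicLong(0)

    fun tryAcquireSlot(): Boolean {
        while (true) {
            val current = used.get()
            if (current >= dailyLimit) return false // quota exhausted
            if (used.compareAndSet(current, current + 1)) return true
        }
    }

    // Optional compensation if the reserved call ultimately failed and the
    // policy says failures should not consume quota.
    fun release() {
        used.decrementAndGet()
    }
}
```

The compareAndSet retry loop guarantees the read of the current count and the increment happen as one atomic step, which is exactly what the split isExceeded/recordSuccess pair cannot provide.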
🧹 Nitpick comments (6)
sql/ddl/youtube_recent_video.sql (2)
12-12: ⚡ Quick win

Recommend changing `published_at` to DATETIME.

Storing dates as `VARCHAR(64)` rules out MySQL's date functions, and date-based sorting/filtering degrades into string comparison, hurting performance.

📅 Suggestion: use a DATETIME type

```diff
- `published_at` VARCHAR(64) NOT NULL,
+ `published_at` DATETIME NOT NULL COMMENT 'video upload time (UTC)',
```

Note: the JPA entity's `publishedAt: String` also needs to change to `LocalDateTime` or `Instant`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sql/ddl/youtube_recent_video.sql` at line 12, Change the published_at column from VARCHAR(64) to a proper DATETIME (or TIMESTAMP) type so MySQL date functions, indexing, and date-based sorting/filtering work correctly; update any DDL references to `published_at` and adjust the corresponding JPA entity field `publishedAt` from String to an appropriate temporal type (e.g., java.time.LocalDateTime or java.time.Instant) and ensure the JPA mapping/column annotation reflects the new temporal type.
1-19: Operational concern: no cache-expiry strategy.

The current schema has no timestamp or TTL column for cache expiry. Consider the following:

- Cache-invalidation strategy: today only channel-level deletion via `deleteAllByChannelIdIn()` is possible. Review whether per-video or time-based expiry is needed.
- Storage growth: a batch job may be needed to clean up the cache when channels/videos are deleted.
- Freshness: the refresh cycle must be managed at the application level; consider adding a `created_at` column.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sql/ddl/youtube_recent_video.sql` around lines 1 - 19, Add a created_at timestamp column to youtube_recent_video and populate it on insert so the application can enforce cache TTL and support time-based eviction; update any insertion logic that writes to youtube_recent_video (and related repository/DAO methods) to set created_at, and add an index on (channel_id, created_at) if you need fast per-channel expiration queries, while keeping existing deleteAllByChannelIdIn() for channel-level purge and adding a new repository method (e.g., deleteByChannelIdAndCreatedAtBefore) to remove expired entries.

linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/PropertiesApiQuotaPolicyAdapter.kt (1)
12-17: ⚡ Quick win

Add validation for the quota configuration values (negative values in particular).

Negative limits currently pass through unchanged. A misconfigured environment variable in production could instantly block every external API, so it is safer to fail fast at binding time with a `>= 0` check (or an exception rejecting negatives).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/PropertiesApiQuotaPolicyAdapter.kt` around lines 12 - 17, In PropertiesApiQuotaPolicyAdapter, add a fail-fast non-negative validation for the bound quota values (properties.gemini, properties.youtubeData, properties.googlePlaces) so negative limits are rejected: perform checks (>= 0) during adapter construction/initialization and throw a clear IllegalArgumentException (or similar) naming the offending field, or if you prefer keep runtime-safe behavior, update override fun dailyLimit(apiType: ApiType): Long? to return null or throw when the selected value is < 0; reference the class PropertiesApiQuotaPolicyAdapter and the method dailyLimit and the fields gemini, youtubeData, googlePlaces when implementing the fix.

sql/ddl/trip_plan_request.sql (1)
20-22: ⚡ Quick win

Including `deleted` in the batch-lookup index would be safer.

If, per the comment, the query pattern is "unprocessed + not deleted", then `(video_analysis_task_id, processed, deleted)` produces a more consistent execution plan.

♻️ Suggested change

```diff
- KEY `idx_trip_plan_request_task_processed` (`video_analysis_task_id`, `processed`)
+ KEY `idx_trip_plan_request_task_processed` (`video_analysis_task_id`, `processed`, `deleted`)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sql/ddl/trip_plan_request.sql` around lines 20 - 22, The current composite index KEY `idx_trip_plan_request_task_processed` (`video_analysis_task_id`, `processed`) should include the `deleted` column to match the "unprocessed + not deleted" query pattern; update the index definition on TripPlanRequest (the KEY named idx_trip_plan_request_task_processed) to be (`video_analysis_task_id`, `processed`, `deleted`) so the query planner can use a single covering index for both processed and deleted predicates.

linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiCallCount.kt (1)
11-30: ⚡ Quick win

Adding domain-invariant validation would improve safety.

Enforcing at least `callCount >= 1` and `id.isNotBlank()` in `init` blocks invalid state from entering the domain early.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiCallCount.kt` around lines 11 - 30, Add defensive invariant checks inside ApiCallCount by implementing an init block that verifies id.isNotBlank() and callCount >= 1 (and optionally validate apiType and callDate if needed); update the companion object factory create(...) to continue using IdGenerator.generate() and set callCount = 1L, but ensure any alternative constructors or deserialization paths will fail fast via the init assertions so invalid instances cannot be created (refer to class ApiCallCount, companion object create, properties id and callCount).

linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiDailyCostAlert.kt (1)
11-27: ⚡ Quick win

Guard against a negative `lastSentThresholdKrw` in the domain.

Only non-negative alert thresholds are valid, so adding an `init { require(lastSentThresholdKrw >= 0) }` check is the safer option.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiDailyCostAlert.kt` around lines 11 - 27, Add a domain-level non-negative guard for lastSentThresholdKrw in ApiDailyCostAlert by adding an init block that checks require(lastSentThresholdKrw >= 0) and fails fast with a clear message; the companion object factory method create (ApiDailyCostAlert.create) remains unchanged but will now benefit from the validation in the ApiDailyCostAlert init to prevent negative threshold values.
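The invariant suggestions for both domain classes follow the same init-block pattern; a minimal sketch with illustrative field names mirroring the review comments (note that data class `copy()` also runs `init`, so derived copies stay guarded):

```kotlin
// Sketch of the proposed init-block invariants; field names follow the
// review comments (id/callCount from ApiCallCount, the KRW threshold from
// ApiDailyCostAlert).
data class SketchApiCallCount(
    val id: String,
    val apiType: String,
    val callCount: Long,
) {
    init {
        require(id.isNotBlank()) { "id must not be blank" }
        require(callCount >= 1) { "callCount must be >= 1 but was $callCount" }
    }

    // copy() re-invokes the primary constructor, so this stays guarded too.
    fun incremented(): SketchApiCallCount = copy(callCount = callCount + 1)
}

data class SketchDailyCostAlert(
    val lastSentThresholdKrw: Long,
) {
    init {
        require(lastSentThresholdKrw >= 0) {
            "lastSentThresholdKrw must be >= 0 but was $lastSentThresholdKrw"
        }
    }
}
```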
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiCallCounterService.kt`:
- Around line 48-60: There is a race where multiple threads can read lastSent
via alertPort.findLastSentThresholdKrw and all proceed to send the same alert
before alertPort.upsert runs; change the flow so only the caller that atomically
claims the threshold at DB level sends the alert: implement and call a new
DB-backed method (e.g., alertPort.tryAdvanceThreshold(date, newThreshold) or
upsert-if-greater/CAS) that returns true only if it stored/advanced the saved
threshold, and only when it returns true do you call
notificationPort.sendCostAlert(CostAlertEvent(...)) and avoid calling
sendCostAlert when tryAdvanceThreshold returns false; keep existing
ApiDailyCostAlert.create logic but move persistence into the atomic claim
operation in the alertPort implementation.
In
`@linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiQuotaGuardService.kt`:
- Around line 24-25: The quota guard currently treats a missing dailyLimit as
"allowed" (val limit = policyPort.dailyLimit(apiType) ?: return false), which
silently disables protection; change this to fail-closed by returning that the
quota is exceeded (replace ?: return false with ?: return true) so missing
config blocks the call, and additionally add a startup-time validation in
ApiQuotaGuardService (e.g., in the constructor or an init block) that iterates
all ApiType values and calls policyPort.dailyLimit(apiType) and throws an
IllegalStateException if any required limit is null to catch misconfiguration
early.
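The fail-closed behavior described above amounts to treating a missing limit as exhausted; a minimal sketch with hypothetical names (a missing limit blocks the call, and null configuration fails at construction):

```kotlin
// Sketch: an unconfigured (null) daily limit blocks calls instead of
// silently disabling the guard, and misconfiguration is surfaced at
// construction time rather than at the first quota decision.
class SketchQuotaGuard(
    private val limits: Map<String, Long?>,
    private val usedToday: (String) -> Long,
) {
    init {
        // startup-time validation: surface null limits immediately
        val missing = limits.filterValues { it == null }.keys
        check(missing.isEmpty()) { "daily limit not configured for: $missing" }
    }

    fun isExceeded(apiType: String): Boolean {
        val limit = limits[apiType] ?: return true // fail closed on missing config
        return usedToday(apiType) >= limit
    }
}
```

The `?: return true` is the one-character-level change the comment asks for: an unknown or unconfigured API type blocks the call instead of allowing unlimited traffic.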
In
`@linktrip-application/src/main/kotlin/com/linktrip/application/port/output/quota/ApiCostPolicyPort.kt`:
- Around line 6-10: The doc comment and the signature disagree: perCallKrw(apiType:
ApiType): Long on the ApiCostPolicyPort interface returns a non-null value, so
remove "null = cost tracking disabled" from the KDoc and document a policy that
does not use null, e.g. "0 = cost tracking disabled"; or, if nullability is
intended, change perCallKrw's return type to Long? and clearly document the
nullable behavior to match (e.g., null disables tracking).
In
`@linktrip-application/src/main/kotlin/com/linktrip/application/port/output/quota/ApiDailyCostAlertPersistencePort.kt`:
- Around line 10-14: The current ApiDailyCostAlertPersistencePort (methods
findLastSentThresholdKrw and upsert) allows a race in the
find→decide→send→upsert flow; add an atomic operation to the port such as
tryAdvanceThreshold(date: LocalDate, candidateThreshold: Long): Boolean that
will check the stored last threshold and, only if the candidate is strictly
greater, persist the new threshold and return true (false otherwise). Update
callers to call ApiDailyCostAlertPersistencePort.tryAdvanceThreshold(...) and
only send notifications when it returns true; implementers of the port (DB/repo
classes backing ApiDailyCostAlertPersistencePort) must perform the
check-and-update atomically (transactional update, SQL compare-and-set, or
optimistic locking) instead of separate findLastSentThresholdKrw and upsert
calls to avoid concurrent duplicate sends.
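The tryAdvanceThreshold contract can be illustrated with an in-memory analogue (a real port implementation would use a compare-and-set UPDATE or optimistic locking, as the comment notes; `compute` is atomic per key on ConcurrentHashMap):

```kotlin
import java.time.LocalDate
import java.util.concurrent.ConcurrentHashMap

// Sketch: only the caller that actually advances the stored threshold
// observes `true` and may send the alert; concurrent duplicates and
// lower candidates observe `false`.
class SketchCostAlertStore {
    private val lastSent = ConcurrentHashMap<LocalDate, Long>()

    fun tryAdvanceThreshold(date: LocalDate, candidate: Long): Boolean {
        var advanced = false
        lastSent.compute(date) { _, current ->
            if (current == null || candidate > current) {
                advanced = true
                candidate // store the advanced threshold
            } else {
                current // keep the existing, equal-or-higher threshold
            }
        }
        return advanced
    }
}
```

Because the check and the store happen inside one atomic `compute`, the find → decide → send → upsert race described above cannot produce two senders for the same threshold.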
In `@linktrip-bootstrap/src/main/resources/application.yml`:
- Around line 51-56: The comment contradicts the actual behavior, so fix the
"null/unset = unlimited (guard disabled)" wording: update the daily-quota
description in application.yml, mentioning daily-quota, gemini, youtube-data,
google-places and the environment variables
API_DAILY_QUOTA_GEMINI/API_DAILY_QUOTA_YOUTUBE_DATA/API_DAILY_QUOTA_GOOGLE_PLACES,
to state clearly that when an environment variable is absent the configured
default numeric value for that entry (e.g., 500/9000/3000) is used, not
unlimited.
In
`@linktrip-output-http/src/main/kotlin/com/linktrip/output/http/properties/ApiCostProperties.kt`:
- Around line 9-13: Add validation to the ApiCostProperties data class (gemini,
youtubeData, googlePlaces) to block binding of negative values: annotate the
class with Spring's `@Validated` and apply a non-negative constraint
(`@Min(0)` or an equivalent validation annotation) to each constructor
property, or, if validation annotations cannot be used, throw
IllegalArgumentException from the constructor/init block of ApiCostProperties
whenever a field value is < 0 so negatives are rejected at binding time.
In
`@linktrip-output-http/src/main/kotlin/com/linktrip/output/http/properties/ApiQuotaProperties.kt`:
- Around line 9-13: The ApiQuotaProperties data class allows negative quota
values to be bound, causing guard logic to misbehave; update the binding to
validate values are non-negative by annotating the class with validation support
(e.g., `@Validated`) and add a non-negative constraint (e.g., `@Min`(0) or
`@PositiveOrZero`) to each property (gemini, youtubeData, googlePlaces) so any
provided value < 0 is rejected during binding; ensure the validation annotation
package is available on the classpath so Spring will enforce it at startup.
In
`@linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/ApiCallCountPersistenceAdapter.kt`:
- Around line 27-30: In ApiCallCountPersistenceAdapter (the block that calls
querydslRepository.incrementCallCount(apiCallCount.apiType,
apiCallCount.callDate) and then
jpaRepository.save(ApiCallCountEntity.from(apiCallCount)) when updated == 0L),
wrap the save call in a try/catch that detects a
unique-constraint/DataIntegrityViolation/ConstraintViolation exception; on
catching that specific exception, call
querydslRepository.incrementCallCount(apiCallCount.apiType,
apiCallCount.callDate) again to apply the missed +1 (and swallow or rethrow
other exceptions). This ensures concurrent INSERT unique-violation paths retry
the UPDATE +1 instead of losing the count.
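The update-then-insert-with-retry flow can be sketched with an in-memory stand-in for the table (hypothetical `DuplicateKeyException`; the real adapter would catch Spring's DataIntegrityViolationException, and the map operations stand in for the SQL UPDATE/INSERT):

```kotlin
import java.util.concurrent.ConcurrentHashMap

class DuplicateKeyException(key: String) : RuntimeException("duplicate key: $key")

// In-memory stand-in: putIfAbsent models INSERT against a unique key and
// replace(k, old, new) models the conditional UPDATE ... count = count + 1.
class SketchCallCountTable {
    private val rows = ConcurrentHashMap<String, Long>()

    fun incrementCallCount(key: String): Long {
        while (true) {
            val current = rows[key] ?: return 0L // UPDATE matched 0 rows
            if (rows.replace(key, current, current + 1)) return 1L
        }
    }

    fun insert(key: String) {
        if (rows.putIfAbsent(key, 1L) != null) throw DuplicateKeyException(key)
    }

    fun count(key: String): Long = rows[key] ?: 0L
}

// The adapter flow from the comment: UPDATE first, INSERT on a miss, and on
// a duplicate-key race retry the UPDATE so the +1 is not lost.
fun record(table: SketchCallCountTable, key: String) {
    if (table.incrementCallCount(key) == 0L) {
        try {
            table.insert(key)
        } catch (e: DuplicateKeyException) {
            table.incrementCallCount(key) // another writer inserted first
        }
    }
}
```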
In
`@linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/ApiDailyCostAlertPersistenceAdapter.kt`:
- Around line 26-32: The upsert implementation in
ApiDailyCostAlertPersistenceAdapter (method upsert) is vulnerable to race
conditions because it does findByAlertDate then save; change it to an atomic
upsert: either execute an INSERT ... ON DUPLICATE KEY UPDATE via a custom
repository method or implement update-first-then-insert-with-duplicate-fallback:
call a repository update method that sets lastSentThresholdKrw by alertDate (or
use jpaRepository.save if update count==0), then attempt to insert
ApiDailyCostAlertEntity.from(alert) and catch the
duplicate-key/DataIntegrityViolationException to retry the update; update
references: upsert, jpaRepository.findByAlertDate, jpaRepository.save,
ApiDailyCostAlertEntity.from, and ApiDailyCostAlertPersistenceAdapter.
- Line 23: This is a ktlint formatting violation that breaks CI; change the
function findLastSentThresholdKrw in ApiDailyCostAlertPersistenceAdapter from a
single-line expression body to a block body with line breaks, e.g. override fun
findLastSentThresholdKrw(date: LocalDate): Long? { return
jpaRepository.findByAlertDate(date)?.lastSentThresholdKrw } with the return on
a new line so that ktlint passes.
In `@README.md`:
- Line 120: README.md contains multiple fenced code blocks missing language
identifiers triggering markdownlint MD040; add appropriate language tags (e.g.,
```text, ```yaml, ```json) to each offending fenced block (the ``` markers
referenced by the MD040 warnings) so that each code fence has a language
specifier and the lint warnings are resolved.
- Around line 229-231: The README's POST /api/video/analyze documentation is
inconsistent: the response table shows `INVALID` -> 200 while the API summary
states "COMPLETED 외 202" (non-COMPLETED are 202); pick one canonical rule and
make both places match—either change the table row for `INVALID` to 202 or
change the API summary to state that `INVALID` returns 200; update all
occurrences (including the `COMPLETED`, `INVALID`, `PENDING`/`PROCESSING`
entries and the API list summary) so the single chosen rule is applied
consistently throughout the README.
In `@sql/ddl/trip_plan_item.sql`:
- Around line 6-21: The trip_plan_item table lacks DB-level referential and
uniqueness constraints: add foreign key constraints from
trip_plan_item.trip_plan_id -> trip_plan(id) and
trip_plan_item.travel_itinerary_item_id -> travel_itinerary_item(id) (choose ON
DELETE CASCADE or RESTRICT per business rules) and add a unique index to prevent
duplicate mappings such as UNIQUE(trip_plan_id, travel_itinerary_item_id) (or
UNIQUE(trip_plan_id, travel_itinerary_item_id, deleted) if you want to allow
duplicates only when marked deleted); update or replace the existing KEY
`idx_trip_plan_item_plan_deleted_day_order` as needed to keep the query index
while adding the FK/UNIQUE declarations so TripPlanItemQuerydslRepository /
TripPlanQuerydslRepository invariants are enforced at DB level.
In `@sql/ddl/trip_plan_request.sql`:
- Around line 6-23: Add a DB-level foreign key on
trip_plan_request.video_analysis_task_id to enforce referential integrity with
the video_analysis_task table: alter the CREATE TABLE (or add an ALTER TABLE) to
include a CONSTRAINT (e.g., fk_trip_plan_request_video_analysis_task_id)
REFERENCES video_analysis_task(id) and choose appropriate cascade behavior
(e.g., ON DELETE CASCADE or ON DELETE RESTRICT) so orphaned trip_plan_request
rows cannot remain when video_analysis_task rows are removed.
In `@sql/ddl/trip_plan.sql`:
- Around line 16-18: The UNIQUE KEY uk_trip_plan_member_task (member_id,
video_analysis_task_id) conflicts with the soft-delete column deleted: if you
want to allow recreation after soft-delete, modify the unique constraint to
include the deleted column (e.g., make it on (member_id, video_analysis_task_id,
deleted)); alternatively, if recreation must be disallowed, update application
checks that use deleted=false to ignore deleted rows (or change queries that
check existence) so they align with the DB constraint—locate
uk_trip_plan_member_task, member_id, video_analysis_task_id and the deleted
column in the trip_plan table and update either the constraint or the app
existence logic accordingly.
In `@sql/ddl/video_analysis_task_hashtag.sql`:
- Around line 5-18: The mapping table video_analysis_task_hashtag lacks foreign
key constraints so video_analysis_task_id and hashtag_id can point to
non-existent parents; add FK constraints on
video_analysis_task_hashtag(video_analysis_task_id) -> video_analysis_task(id)
and video_analysis_task_hashtag(hashtag_id) -> hashtag(id) (ensure referenced
column types/lengths match), and choose appropriate ON DELETE behavior (e.g., ON
DELETE CASCADE to remove mappings when a parent is deleted or ON DELETE RESTRICT
if you want to prevent parent deletion); apply the constraints as part of the
CREATE TABLE (or ALTER TABLE) for video_analysis_task_hashtag to enforce
referential integrity.
In `@sql/ddl/video_timeline.sql`:
- Around line 6-20: Add a foreign key constraint on
video_timeline.video_analysis_task_id referencing video_analysis_task.id to
enforce referential integrity; modify the CREATE TABLE for video_timeline to
include a CONSTRAINT (e.g., fk_video_timeline_task) FOREIGN KEY
(`video_analysis_task_id`) REFERENCES `video_analysis_task`(`id`) and choose an
appropriate ON DELETE behavior (recommend ON DELETE CASCADE or ON DELETE
RESTRICT based on domain rules) while keeping the existing index
`idx_video_timeline_task_deleted_timestamp`.
In `@sql/ddl/youtube_recent_video.sql`:
- Around line 6-12: The table youtube_recent_video currently allows duplicate
rows for the same channel_id and video_id; add a composite UNIQUE constraint on
(channel_id, video_id) in the CREATE TABLE statement for youtube_recent_video to
prevent duplicates, and update the corresponding JPA entity to include
`@Table(uniqueConstraints = @UniqueConstraint(columnNames =
{"channel_id","video_id"}))` (or equivalent) so the schema and entity are
consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7ecf5739-9421-4578-bd62-abe030e3e0e5
📒 Files selected for processing (46)
- README.md
- linktrip-application/src/main/kotlin/com/linktrip/application/domain/notification/CostAlertEvent.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiCallCount.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiCallCounterService.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiCostBreakdown.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiDailyCostAlert.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiQuotaGuardService.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiType.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/domain/video/VideoAnalysisQueueConsumer.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/port/output/notification/NotificationPort.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/port/output/quota/ApiCallCountPersistencePort.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/port/output/quota/ApiCostPolicyPort.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/port/output/quota/ApiDailyCostAlertPersistencePort.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/port/output/quota/ApiQuotaPolicyPort.kt
- linktrip-bootstrap/src/main/resources/application.yml
- linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/DiscordNotificationAdapter.kt
- linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/GooglePlacesAdapter.kt
- linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/PropertiesApiCostPolicyAdapter.kt
- linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/PropertiesApiQuotaPolicyAdapter.kt
- linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/VideoAnalyzeAdapter.kt
- linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/YouTubeAdapter.kt
- linktrip-output-http/src/main/kotlin/com/linktrip/output/http/config/ApiQuotaPropertiesConfig.kt
- linktrip-output-http/src/main/kotlin/com/linktrip/output/http/properties/ApiCostProperties.kt
- linktrip-output-http/src/main/kotlin/com/linktrip/output/http/properties/ApiQuotaProperties.kt
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/ApiCallCountPersistenceAdapter.kt
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/ApiDailyCostAlertPersistenceAdapter.kt
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/entity/ApiCallCountEntity.kt
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/entity/ApiDailyCostAlertEntity.kt
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/repository/ApiCallCountJpaRepository.kt
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/repository/ApiCallCountQuerydslRepository.kt
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/repository/ApiDailyCostAlertJpaRepository.kt
- sql/ddl/api_call_count.sql
- sql/ddl/api_daily_cost_alert.sql
- sql/ddl/hashtag.sql
- sql/ddl/member.sql
- sql/ddl/place.sql
- sql/ddl/travel_itinerary_item.sql
- sql/ddl/trip_plan.sql
- sql/ddl/trip_plan_item.sql
- sql/ddl/trip_plan_request.sql
- sql/ddl/video_analysis_task.sql
- sql/ddl/video_analysis_task_hashtag.sql
- sql/ddl/video_timeline.sql
- sql/ddl/youtube_channel.sql
- sql/ddl/youtube_recent_video.sql
- sql/ddl/youtube_video.sql
📜 Review details
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2026-03-18T01:08:05.661Z
Learnt from: toychip
Repo: Link-Trip/BackEnd PR: 24
File: linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt:16-28
Timestamp: 2026-03-18T01:08:05.661Z
Learning: In `linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt`, the concurrent-write safety (duplicate videoId in batch, race condition on uk_youtube_video_video_id) is intentionally deferred. The maintainer (toychip) confirmed the system is currently single-server, so this is not a concern yet. When replication is introduced in the future, ShedLock will be used for distributed locking to address this. At that point, in-batch videoId deduplication should also be applied.
Applied to files:
- sql/ddl/youtube_recent_video.sql
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/ApiDailyCostAlertPersistenceAdapter.kt
- sql/ddl/youtube_video.sql
- linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/YouTubeAdapter.kt
📚 Learning: 2026-03-18T01:07:53.575Z
Learnt from: toychip
Repo: Link-Trip/BackEnd PR: 24
File: linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt:16-28
Timestamp: 2026-03-18T01:07:53.575Z
Learning: Apply the same deferred concurrent-write policy to all Kotlin persistence adapters in the MySQL adapter package: ensure writes are coordinated on a single server with ShedLock considered for the replication phase. For YouTubeVideoPersistenceAdapter.kt (and other adapters in this directory), verify that concurrent writes are serialized or properly guarded, document the policy in code comments, and ensure CI checks or deployment gating will catch any regression where multiple instances might attempt a conflicting write during replication.
Applied to files:
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/ApiCallCountPersistenceAdapter.kt
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/ApiDailyCostAlertPersistenceAdapter.kt
📚 Learning: 2026-03-18T01:08:05.661Z
Learnt from: toychip
Repo: Link-Trip/BackEnd PR: 24
File: linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt:16-28
Timestamp: 2026-03-18T01:08:05.661Z
Learning: Similarly, `linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeChannelPersistenceAdapter.kt` likely has the same deferred concurrent-write policy (single server, ShedLock planned for replication phase).
Applied to files:
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/entity/ApiDailyCostAlertEntity.kt
- linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/YouTubeAdapter.kt
- README.md
🪛 GitHub Actions: CI - Pull request
linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/ApiDailyCostAlertPersistenceAdapter.kt
[error] 23-23: ktlintMainSourceSetCheck failed: Exceeded max line length (120) (cannot be auto-corrected).
[error] 23-23: ktlint: Parameter should start on a newline.
[error] 23-23: ktlint: Missing newline before ')'.
[error] 23-23: ktlint: Newline expected before expression body.
[error] 23-23: ktlint: Argument should be on a separate line (unless all arguments can fit a single line).
[error] 23-23: ktlint: Missing newline before ')'.
🪛 markdownlint-cli2 (0.22.1)
README.md
[warning] 120-120: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 182-182: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 237-237: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 303-303: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 324-324: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 342-342: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 378-378: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 392-392: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 424-424: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (20)
sql/ddl/youtube_recent_video.sql (1)
1-19: Mismatch with the PR summary: the youtube_recent_video table is missing from the PR description. The PR summary describes the api_call_count and api_daily_cost_alert tables and the API limit guard system, but this youtube_recent_video cache table is not mentioned at all.

If this table is part of the YouTube API call optimization (reducing API calls through caching), it should be explicitly included in the PR description and the pre-deployment DDL checklist.

sql/ddl/place.sql (1)

6-22: The table constraints and key design fit the purpose well. The combination of `PRIMARY KEY(id)` and `UNIQUE(google_place_id)` makes the dedup intent clear, and the audit column layout is consistent.

sql/ddl/youtube_video.sql (1)

27-37: The unique key / lookup index layout matches the actual usage patterns well. The `video_id` unique constraint and the country/region/theme indexes precisely support the query paths.

sql/ddl/youtube_channel.sql (1)

17-24: The constraints satisfy the channel identification/sorting requirements. The `channel_id` unique constraint and the `subscriber_count` index reflect their purpose.

linktrip-application/src/main/kotlin/com/linktrip/application/port/output/quota/ApiQuotaPolicyPort.kt (1)

5-11: The port contract is clear. The nullable return type on Line [10] matches the documentation on Line [7], so the policy interpretation is unambiguous.

linktrip-output-http/src/main/kotlin/com/linktrip/output/http/config/ApiQuotaPropertiesConfig.kt (1)

8-10: The configuration binding registration is handled cleanly. The new quota/cost property classes are clearly activated in a single place.

linktrip-application/src/main/kotlin/com/linktrip/application/port/output/notification/NotificationPort.kt (1)

3-13: The notification port extension fits its purpose well. Separating the cost alert event out at the port level is a clear addition.

linktrip-application/src/main/kotlin/com/linktrip/application/domain/notification/CostAlertEvent.kt (1)

9-13: The domain event modeling is clear. It carries only the core data the alert needs (date/threshold/breakdown), so its usage and responsibility are obvious.

linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiType.kt (1)

7-16: The enum boundary matches the current policy scope well. It is well shaped for branching in the call-limit/cost policy adapters.

linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/repository/ApiDailyCostAlertJpaRepository.kt (1)

7-9: The repository interface is appropriately concise. The date-based lookup method signature matches the port requirements well.

sql/ddl/hashtag.sql (1)

6-17: The hashtag master table definition is clean and fit for purpose. The dedup key and basic column layout make the lookup/registration boundary clear.

sql/ddl/member.sql (1)

5-16: The table key design is clear. The combination of `PRIMARY KEY(id)` and `UNIQUE(serial_number)` suits account identification and dedup well.

linktrip-application/src/main/kotlin/com/linktrip/application/domain/video/VideoAnalysisQueueConsumer.kt (1)

43-47: Applying the guard plus backoff before dequeue is the right approach. Delaying the dequeue itself when the limit is exceeded matches the goal of preventing runaway cost, and the fixed backoff makes the implementation intent clear.

Also applies to: 220-221
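As a framework-free illustration of the guard-before-dequeue pattern praised above — this is a sketch under assumed names: `ApiQuotaGuard`, `QueueConsumerSketch`, and the string task queue are hypothetical stand-ins for the real consumer's collaborators, not the PR's actual types:

```kotlin
import java.util.ArrayDeque
import java.util.Queue

// Hypothetical stand-in for ApiQuotaGuardService.
fun interface ApiQuotaGuard {
    fun isAnyApiExceeded(): Boolean
}

class QueueConsumerSketch(
    private val guard: ApiQuotaGuard,
    private val queue: Queue<String>,
) {
    // Returns the task to process, or null so the caller backs off for a fixed delay.
    fun pollOnce(): String? {
        // Check the quota BEFORE dequeuing, so a blocked task is never lost.
        if (guard.isAnyApiExceeded()) return null
        return queue.poll()
    }
}

fun main() {
    val queue: Queue<String> = ArrayDeque(listOf("task-1"))

    val blocked = QueueConsumerSketch({ true }, queue)
    check(blocked.pollOnce() == null) // quota exceeded: nothing dequeued
    check(queue.size == 1)            // task stays in the queue for later

    val open = QueueConsumerSketch({ false }, queue)
    check(open.pollOnce() == "task-1") // quota ok: task is handed to processing
    println("ok")
}
```

The point of guarding before `poll()` is that a task blocked by quota is never removed from the queue; it is simply retried after the backoff.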
linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/repository/ApiCallCountJpaRepository.kt (1)

8-15: The repository lookup methods match their call sites well. The `findByApiTypeAndCallDate` / `findAllByCallDate` signatures are consistent with the adapter read paths.

linktrip-output-http/src/main/kotlin/com/linktrip/output/http/adapter/PropertiesApiCostPolicyAdapter.kt (1)

12-17: The ApiType-to-cost mapping is clear and safe. The policy value mapping is implemented intuitively with no missing enum branches.

linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiCostBreakdown.kt (1)

7-19: Good separation of responsibilities in the domain model. Keeping the total calculation (`totalKrw`) and the display filter (`nonZero`) inside the model helps simplify the call sites.

sql/ddl/api_daily_cost_alert.sql (1)
16-19: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win — A UNIQUE constraint alone does not resolve the concurrency conflict.

The Line 18 constraint is necessary, but on the current `select -> insert` path a transaction can still fail with a duplicate key exception under concurrent requests. Harden the save path with a single-query upsert (`ON DUPLICATE KEY UPDATE`) or duplicate-key retry handling.

⛔ Skipped due to learnings
Learnt from: toychip Repo: Link-Trip/BackEnd PR: 24 File: linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt:16-28 Timestamp: 2026-03-18T01:08:05.661Z Learning: In `linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt`, the concurrent-write safety (duplicate videoId in batch, race condition on uk_youtube_video_video_id) is intentionally deferred. The maintainer (toychip) confirmed the system is currently single-server, so this is not a concern yet. When replication is introduced in the future, ShedLock will be used for distributed locking to address this. At that point, in-batch videoId deduplication should also be applied.

linktrip-application/src/main/kotlin/com/linktrip/application/port/output/quota/ApiCallCountPersistencePort.kt (1)
11-20: The port boundary is nice and clear. Separating `increment`, single-row lookup, and by-date lookup makes this an interface the domain service can use exactly as intended.

linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/entity/ApiDailyCostAlertEntity.kt (1)

15-48: The entity mapping and domain conversion are clean. The table constraint intent (unique per date) is well reflected in code, and the `toDomain()` / `from()` paths are clear.

linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/entity/ApiCallCountEntity.kt (1)

15-55: The entity mapping and domain conversion are clean. The unique constraint, column definitions, and `toDomain()` / `from()` mappings are consistent, making this an appropriate model for call-count aggregation.
```kotlin
val lastSent = alertPort.findLastSentThresholdKrw(today) ?: 0L

if (breakdown.totalKrw < lastSent + THRESHOLD_KRW) return

val newThreshold = (breakdown.totalKrw / THRESHOLD_KRW) * THRESHOLD_KRW
notificationPort.sendCostAlert(
    CostAlertEvent(
        date = today,
        thresholdKrw = newThreshold,
        breakdown = breakdown,
    ),
)
alertPort.upsert(ApiDailyCostAlert.create(today, newThreshold))
```
There is a race condition that can send duplicate threshold alerts.

The current order is lastSent lookup → sendCostAlert → upsert, so under concurrent calls multiple threads can pass the same threshold at once and send duplicate alerts.

To guarantee deduplication, only the request that claims the "right to send" through a DB-level atomic operation (e.g., a `tryAdvanceThreshold(date, newThreshold)` CAS / upsert-if-greater) should send the alert.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiCallCounterService.kt`
around lines 48 - 60, There is a race where multiple threads can read lastSent
via alertPort.findLastSentThresholdKrw and all proceed to send the same alert
before alertPort.upsert runs; change the flow so only the caller that atomically
claims the threshold at DB level sends the alert: implement and call a new
DB-backed method (e.g., alertPort.tryAdvanceThreshold(date, newThreshold) or
upsert-if-greater/CAS) that returns true only if it stored/advanced the saved
threshold, and only when it returns true do you call
notificationPort.sendCostAlert(CostAlertEvent(...)) and avoid calling
sendCostAlert when tryAdvanceThreshold returns false; keep existing
ApiDailyCostAlert.create logic but move persistence into the atomic claim
operation in the alertPort implementation.
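As a worked example of the threshold math in the excerpt above — a sketch only; the `THRESHOLD_KRW = 10_000` value is an assumption for illustration, not taken from the PR:

```kotlin
// Assumed alert step; the real THRESHOLD_KRW lives in ApiCallCounterService.
const val THRESHOLD_KRW = 10_000L

// Floor the running total down to the last whole threshold bucket.
fun bucketOf(totalKrw: Long): Long = (totalKrw / THRESHOLD_KRW) * THRESHOLD_KRW

// Alert only when the total has crossed at least one new bucket since lastSent.
fun shouldAlert(totalKrw: Long, lastSentKrw: Long): Boolean =
    totalKrw >= lastSentKrw + THRESHOLD_KRW

fun main() {
    check(bucketOf(9_999L) == 0L)
    check(bucketOf(23_500L) == 20_000L) // two buckets crossed -> one alert at 20,000
    check(!shouldAlert(totalKrw = 19_000L, lastSentKrw = 10_000L))
    check(shouldAlert(totalKrw = 23_500L, lastSentKrw = 10_000L))
    println("ok")
}
```

Note that integer division makes `bucketOf` skip intermediate buckets in one step (23,500 jumps straight to the 20,000 bucket), which is why only one alert fires per crossing, regardless of how many buckets were passed.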
```kotlin
interface ApiDailyCostAlertPersistencePort {
    fun findLastSentThresholdKrw(date: LocalDate): Long?

    fun upsert(alert: ApiDailyCostAlert)
}
```
The current port contract makes it hard to prevent duplicate threshold alert sends.

The find → decide → send → upsert flow can send the same threshold twice under concurrent requests. It is safer for the port to provide a method that advances the threshold atomically and returns success/failure.

Contract change example

```diff
 interface ApiDailyCostAlertPersistencePort {
-    fun findLastSentThresholdKrw(date: LocalDate): Long?
-
-    fun upsert(alert: ApiDailyCostAlert)
+    fun tryAdvanceThreshold(date: LocalDate, newThresholdKrw: Long): Boolean
 }
```

```kotlin
// Conceptual caller example
if (alertPort.tryAdvanceThreshold(today, newThreshold)) {
    notificationPort.sendCostAlert(event)
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@linktrip-application/src/main/kotlin/com/linktrip/application/port/output/quota/ApiDailyCostAlertPersistencePort.kt`
around lines 10 - 14, The current ApiDailyCostAlertPersistencePort (methods
findLastSentThresholdKrw and upsert) allows a race in the
find→decide→send→upsert flow; add an atomic operation to the port such as
tryAdvanceThreshold(date: LocalDate, candidateThreshold: Long): Boolean that
will check the stored last threshold and, only if the candidate is strictly
greater, persist the new threshold and return true (false otherwise). Update
callers to call ApiDailyCostAlertPersistencePort.tryAdvanceThreshold(...) and
only send notifications when it returns true; implementers of the port (DB/repo
classes backing ApiDailyCostAlertPersistencePort) must perform the
check-and-update atomically (transactional update, SQL compare-and-set, or
optimistic locking) instead of separate findLastSentThresholdKrw and upsert
calls to avoid concurrent duplicate sends.
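To make the suggested contract concrete, here is a framework-free sketch of the upsert-if-greater semantics; the in-memory `ConcurrentHashMap` stands in for the DB row, which the real adapter would advance with an equivalent atomic compare-and-set query:

```kotlin
import java.time.LocalDate
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicInteger

// In-memory stand-in for the per-date alert row; a real implementation would
// run an atomic SQL update (compare-and-set on the stored threshold) instead.
class InMemoryAlertStore {
    private val sent = ConcurrentHashMap<LocalDate, Long>()

    // Advances the stored threshold only if newThresholdKrw is strictly greater.
    // Returns true for exactly one caller per threshold value.
    fun tryAdvanceThreshold(date: LocalDate, newThresholdKrw: Long): Boolean {
        var claimed = false
        // ConcurrentHashMap.compute runs its remapping function atomically per key.
        sent.compute(date) { _, current ->
            if (current == null || newThresholdKrw > current) {
                claimed = true
                newThresholdKrw
            } else {
                current
            }
        }
        return claimed
    }
}

fun main() {
    val store = InMemoryAlertStore()
    val today = LocalDate.of(2026, 3, 18)
    val sends = AtomicInteger(0)

    // Simulate concurrent callers racing on the same threshold.
    val threads = (1..8).map {
        Thread {
            if (store.tryAdvanceThreshold(today, 20_000L)) sends.incrementAndGet()
        }
    }
    threads.forEach(Thread::start)
    threads.forEach(Thread::join)

    check(sends.get() == 1) // only one caller wins the right to send
    println("ok")
}
```

The design choice: the notification side-effect is gated on the atomic claim, so even if two requests compute the same `newThreshold`, only the one that actually advanced the stored value sends the alert.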
```sql
CREATE TABLE IF NOT EXISTS `trip_plan_request` (
    `id` VARCHAR(36) NOT NULL,
    `member_id` VARCHAR(36) NOT NULL,
    `video_analysis_task_id` VARCHAR(36) NOT NULL,
    `processed` BIT(1) NOT NULL,
    `deleted` BIT(1) NOT NULL,
    `created_at` DATETIME(6) DEFAULT NULL,
    `updated_at` DATETIME(6) DEFAULT NULL,

    PRIMARY KEY (`id`),

    -- Blocks duplicate requests by the same member for the same task,
    -- and serves as the lookup key for per-member requests.
    UNIQUE KEY `uk_trip_plan_request_member_task` (`member_id`, `video_analysis_task_id`),

    -- Batch lookup of unprocessed requests for tasks whose analysis is complete.
    -- Used by the follow-up processing logic in TripPlanRequestQuerydslRepository.
    KEY `idx_trip_plan_request_task_processed` (`video_analysis_task_id`, `processed`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
```
`trip_plan_request.video_analysis_task_id` has no FK, so orphaned requests can remain.

If the relationship to `video_analysis_task` relies only on the application level, integrity leaks are easy to introduce. Add a DB-level FK to guarantee integrity.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sql/ddl/trip_plan_request.sql` around lines 6 - 23, Add a DB-level foreign
key on trip_plan_request.video_analysis_task_id to enforce referential integrity
with the video_analysis_task table: alter the CREATE TABLE (or add an ALTER
TABLE) to include a CONSTRAINT (e.g.,
fk_trip_plan_request_video_analysis_task_id) REFERENCES video_analysis_task(id)
and choose appropriate cascade behavior (e.g., ON DELETE CASCADE or ON DELETE
RESTRICT) so orphaned trip_plan_request rows cannot remain when
video_analysis_task rows are removed.
```sql
-- Blocks the same member from creating a trip plan twice for the same video.
-- Also used as the lookup key for whether a member's plan exists per task.
UNIQUE KEY `uk_trip_plan_member_task` (`member_id`, `video_analysis_task_id`)
```
The soft-delete policy conflicts with the unique key.

The unique key on Line 18 does not include `deleted`, so if a soft-deleted row already exists, the DB insert fails with a unique violation even after the application's `deleted=false` existence check passes. If the policy is "allow recreation after deletion", include `deleted` in the unique key; if recreation is forbidden instead, align the app's existence check to ignore `deleted` as well.
🔧 Consistency fix example (recreation-allowed policy)

```diff
- UNIQUE KEY `uk_trip_plan_member_task` (`member_id`, `video_analysis_task_id`)
+ UNIQUE KEY `uk_trip_plan_member_task` (`member_id`, `video_analysis_task_id`, `deleted`)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```sql
-- Blocks the same member from creating a trip plan twice for the same video.
-- Also used as the lookup key for whether a member's plan exists per task.
UNIQUE KEY `uk_trip_plan_member_task` (`member_id`, `video_analysis_task_id`, `deleted`)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sql/ddl/trip_plan.sql` around lines 16 - 18, The UNIQUE KEY
uk_trip_plan_member_task (member_id, video_analysis_task_id) conflicts with the
soft-delete column deleted: if you want to allow recreation after soft-delete,
modify the unique constraint to include the deleted column (e.g., make it on
(member_id, video_analysis_task_id, deleted)); alternatively, if recreation must
be disallowed, update application checks that use deleted=false to ignore
deleted rows (or change queries that check existence) so they align with the DB
constraint—locate uk_trip_plan_member_task, member_id, video_analysis_task_id
and the deleted column in the trip_plan table and update either the constraint
or the app existence logic accordingly.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
README.md (2)
595-605: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win — The `POST /api/video/analyze` API summary wording conflicts with INVALID(200).

The status mapping table currently has `INVALID -> 200`, but the API list summary ("if COMPLETED, result inline (200) / otherwise return status (202)") can be read as returning 202 for INVALID as well. Align the summary wording with the same rule as the table.

🛠️ Suggested fix

```diff
-| POST | `/api/video/analyze` | 영상 분석 요청. COMPLETED면 결과 인라인(200) / 그 외 status 반환(202) |
+| POST | `/api/video/analyze` | 영상 분석 요청. COMPLETED(200, 결과 인라인) / INVALID(200) / PENDING·PROCESSING(202, 클라이언트 폴링) |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 595 - 605, The API summary for POST /api/video/analyze conflicts with the status mapping table: the table maps INVALID -> 200 but the summary implies INVALID may be treated as 202; update the summary under "POST /api/video/analyze" so it matches the status mapping table (i.e., indicate that COMPLETED and INVALID return result inline (200) and all other statuses return 202), and ensure the wording explicitly references the same status names (COMPLETED, INVALID) to avoid ambiguity.
539-585: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win — Last MD040 batch: the fenced code blocks at lines 539/553/585 are missing a language specifier.

According to static analysis, there are additional MD040 warnings on lines 539, 553, and 585. Adding just a language tag such as ```text to those fenced blocks will most likely clear the lint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 539 - 585, The markdown fenced code blocks that start the diagrams for YouTubeCollectScheduler, VideoAnalysisBackfillScheduler, and PlaceEnrichRetryJob are missing a language tag (triggering MD040); update each triple-backtick fence (the blocks containing "YouTubeCollectScheduler (매시간 cron)...", "VideoAnalysisBackfillScheduler (10분 cron, 환경별 활성화)...", and "PlaceEnrichRetryJob") to include a language identifier (e.g., ```text) so the linter no longer flags MD040.
🧹 Nitpick comments (1)
linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiQuotaGuardService.kt (1)
29-29: ⚡ Quick win — Recommend injecting a `Clock` instead of calling `LocalDate.now()` directly.

The current approach makes midnight-boundary/timezone cases tricky to test. Injecting a `Clock` separates the time dependency, which makes verification and maintenance easier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiQuotaGuardService.kt` at line 29, The code calls LocalDate.now() directly in ApiQuotaGuardService (specifically in the expression countPort.findByApiTypeAndDate(apiType, LocalDate.now())?.callCount), which makes time-dependent behavior hard to test; inject a java.time.Clock into ApiQuotaGuardService (add a constructor parameter or field with a default of Clock.systemDefaultZone()), replace LocalDate.now() with LocalDate.now(clock) in the countPort.findByApiTypeAndDate call, and update any callers/tests to supply a fixed Clock (Clock.fixed(...)) for deterministic testing.
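A minimal sketch of the suggested `Clock` injection — class and method names here are illustrative, not the real service's shape:

```kotlin
import java.time.Clock
import java.time.Instant
import java.time.LocalDate
import java.time.ZoneId

// Illustrative guard: the injected Clock replaces direct LocalDate.now() calls,
// defaulting to the system clock in production wiring.
class QuotaGuardSketch(
    private val clock: Clock = Clock.systemDefaultZone(),
) {
    fun today(): LocalDate = LocalDate.now(clock)
}

fun main() {
    // A fixed clock makes midnight-boundary tests deterministic.
    val beforeMidnight = Clock.fixed(Instant.parse("2026-03-18T23:59:59Z"), ZoneId.of("UTC"))
    check(QuotaGuardSketch(beforeMidnight).today() == LocalDate.of(2026, 3, 18))

    val afterMidnight = Clock.fixed(Instant.parse("2026-03-19T00:00:00Z"), ZoneId.of("UTC"))
    check(QuotaGuardSketch(afterMidnight).today() == LocalDate.of(2026, 3, 19))
    println("ok")
}
```

With `Clock.fixed(...)` a test can pin the quota date to either side of midnight without sleeping or mocking statics.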
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Around line 120-182: The README contains fenced code blocks (the ASCII flow
diagrams between the triple backticks shown in the diff) without a language
specifier causing markdownlint MD040; update those backtick-openings to include
a language such as text (e.g., change ``` to ```text) for the blocks that
contain the "[ ① 요청 처리 — 트랜잭션 안 ]" diagram and the following diagram region so
markdownlint no longer flags MD040; ensure all similar fenced blocks in the same
section use a language specifier.
- Around line 356-503: Several fenced code blocks in the README lack language
specifiers; update the blocks under headings like "프록시 라운드로빈 (webshare)", the
mermaid sequence/flowchart blocks in "모호 실패 분류 (sentinel ping)", the "프록시 풀
우선순위" and priority explanation blocks, and the diagrams in "큐 우선순위" / "외부 API 비용
보호" to include appropriate language tags (e.g., use ```mermaid for diagrams,
```text for plain explanatory blocks, and ```yaml or ```text for config/list
blocks) so static analysis MD040 is satisfied.
---
Outside diff comments:
In `@README.md`:
- Around line 595-605: The API summary for POST /api/video/analyze conflicts
with the status mapping table: the table maps INVALID -> 200 but the summary
implies INVALID may be treated as 202; update the summary under "POST
/api/video/analyze" so it matches the status mapping table (i.e., indicate that
COMPLETED and INVALID return result inline (200) and all other statuses return
202), and ensure the wording explicitly references the same status names
(COMPLETED, INVALID) to avoid ambiguity.
- Around line 539-585: The markdown fenced code blocks that start the diagrams
for YouTubeCollectScheduler, VideoAnalysisBackfillScheduler, and
PlaceEnrichRetryJob are missing a language tag (triggering MD040); update each
triple-backtick fence (the blocks containing "YouTubeCollectScheduler (매시간
cron)...", "VideoAnalysisBackfillScheduler (10분 cron, 환경별 활성화)...", and
"PlaceEnrichRetryJob") to include a language identifier (e.g., ```text) so the
linter no longer flags MD040.
---
Nitpick comments:
In
`@linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiQuotaGuardService.kt`:
- Line 29: The code calls LocalDate.now() directly in ApiQuotaGuardService
(specifically in the expression countPort.findByApiTypeAndDate(apiType,
LocalDate.now())?.callCount), which makes time-dependent behavior hard to test;
inject a java.time.Clock into ApiQuotaGuardService (add a constructor parameter
or field with a default of Clock.systemDefaultZone()), replace LocalDate.now()
with LocalDate.now(clock) in the countPort.findByApiTypeAndDate call, and update
any callers/tests to supply a fixed Clock (Clock.fixed(...)) for deterministic
testing.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 068b8cce-fe42-489a-aeb8-7db742722f09
📒 Files selected for processing (6)
- README.md
- linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiQuotaGuardService.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/port/output/quota/ApiCostPolicyPort.kt
- linktrip-bootstrap/src/main/resources/application.yml
- linktrip-common/src/main/kotlin/com/linktrip/common/exception/ExceptionCode.kt
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/ApiDailyCostAlertPersistenceAdapter.kt
✅ Files skipped from review due to trivial changes (2)
- linktrip-common/src/main/kotlin/com/linktrip/common/exception/ExceptionCode.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/port/output/quota/ApiCostPolicyPort.kt
🚧 Files skipped from review as they are similar to previous changes (2)
- linktrip-bootstrap/src/main/resources/application.yml
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/ApiDailyCostAlertPersistenceAdapter.kt
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: 자동 검증 (ktlint + test)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-03-18T01:08:05.661Z
Learnt from: toychip
Repo: Link-Trip/BackEnd PR: 24
File: linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt:16-28
Timestamp: 2026-03-18T01:08:05.661Z
Learning: In `linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt`, the concurrent-write safety (duplicate videoId in batch, race condition on uk_youtube_video_video_id) is intentionally deferred. The maintainer (toychip) confirmed the system is currently single-server, so this is not a concern yet. When replication is introduced in the future, ShedLock will be used for distributed locking to address this. At that point, in-batch videoId deduplication should also be applied.
Applied to files:
README.md
📚 Learning: 2026-03-18T01:08:05.661Z
Learnt from: toychip
Repo: Link-Trip/BackEnd PR: 24
File: linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt:16-28
Timestamp: 2026-03-18T01:08:05.661Z
Learning: Similarly, `linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeChannelPersistenceAdapter.kt` likely has the same deferred concurrent-write policy (single server, ShedLock planned for replication phase).
Applied to files:
README.md
🪛 markdownlint-cli2 (0.22.1)
README.md
[warning] 120-120: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 182-182: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 356-356: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 395-395: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 431-431: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 464-464: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 485-485: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 503-503: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 539-539: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 553-553: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
[warning] 585-585: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (1)
linktrip-application/src/main/kotlin/com/linktrip/application/domain/quota/ApiQuotaGuardService.kt (1)
25-34: The fail-closed guard handling is appropriate. Raising an `INTERNAL_QUOTA_POLICY_NOT_CONFIGURED` exception immediately when the policy is missing ensures the protection logic is never silently disabled — a good choice.
…yzeLimit injection) (#68)
Actionable comments posted: 2
🧹 Nitpick comments (5)
linktrip-application/src/test/kotlin/com/linktrip/application/domain/video/VideoAnalysisQueueConsumerTest.kt (1)
54-56: ⚡ Quick win — Please add quota guard branch tests (normal/exceeded) to this class.

The current change only reflects the mock injection; there is no stubbing/verification of the `apiQuotaGuardService` interaction. To prevent regressions in this PR's core feature (the daily limit guard), it is safer to add tests that explicitly verify that dequeue/processing stops (or waits) when the limit is exceeded, and that processing proceeds in the normal case.

Also applies to: 68-68
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@linktrip-application/src/test/kotlin/com/linktrip/application/domain/video/VideoAnalysisQueueConsumerTest.kt` around lines 54 - 56, Add two tests in VideoAnalysisQueueConsumerTest that stub and verify interactions with the mocked ApiQuotaGuardService: 1) "quota exceeded" test should stub apiQuotaGuardService.isQuotaExceeded(...) (or the real guard method used by VideoAnalysisQueueConsumer) to return true, invoke the consumer handler (e.g., VideoAnalysisQueueConsumer.consume or the test helper that processes a queued message), and verify that downstream processing components (e.g., VideoProcessor.process / repository.save / dequeue/ack methods used in the test) are NOT called and that the message is left unprocessed; 2) "quota normal" test should stub the same method to return false, invoke the consumer, and verify that downstream processing is called exactly once and the message is acknowledged/removed. Use Mockito stubbing (when(...).thenReturn(...)) and verify(...) / verifyNoInteractions(...) against apiQuotaGuardService and the processing collaborators to make the expected branching explicit.

linktrip-application/src/test/kotlin/com/linktrip/application/domain/video/KeywordAnalyzeServiceTest.kt (1)
47-51: ⚡ Quick win — In the empty-search-result case, it is safer to also pin the analyze call count to zero.

Currently only the search call count is verified. In this case, `analyzeVideo` must never be called, which would immediately catch cost/side-effect regressions.

Suggested diff

```diff
 val result = service.analyzeByKeywords(region = null, country = null, maxResults = 5)

 assertEquals(totalKeywords, result.keywordCount)
+assertEquals(0, result.tasks.size)

 verify(youTubePort, times(totalKeywords)).searchVideos(any(), any())
+verify(videoAnalyzeUseCase, never()).analyzeVideo(any(), any())
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@linktrip-application/src/test/kotlin/com/linktrip/application/domain/video/KeywordAnalyzeServiceTest.kt` around lines 47 - 51, The test currently only verifies youTubePort.searchVideos call count but must also assert that analyzeVideo is never invoked for the empty-search-results case; update KeywordAnalyzeServiceTest (where service.analyzeByKeywords is called) to add a mock verification that analyzeVideo (the method responsible for per-video analysis) is called 0 times—e.g., verify(videoAnalyzer, times(0)).analyzeVideo(any()) or verify(service, times(0)).analyzeVideo(any()) depending on which collaborator is mocked—so the test fails if analysis is accidentally triggered.

linktrip-application/src/test/kotlin/com/linktrip/application/domain/quota/ApiQuotaGuardServiceTest.kt (1)
80-102: ⚡ Quick win — Adding a test for the missing-policy (fail-closed) path of `isAnyApiExceeded()` would further guard against regressions.

The null-policy exception of `isExceeded()` is well covered, but it would be safer to also have a case confirming that the same exception propagates from `isAnyApiExceeded()`.

Additional test example

```diff
+    @Test
+    fun `isAnyApiExceeded 호출 중 정책 누락 ApiType 이 있으면_INTERNAL_QUOTA_POLICY_NOT_CONFIGURED 가 발생한다`() {
+        ApiType.entries.forEach { apiType ->
+            whenever(policyPort.dailyLimit(apiType))
+                .thenReturn(if (apiType == ApiType.GEMINI) null else 500L)
+        }
+
+        val exception =
+            assertThrows<LinktripException> {
+                service.isAnyApiExceeded()
+            }
+        assertEquals(ExceptionCode.INTERNAL_QUOTA_POLICY_NOT_CONFIGURED, exception.exceptionCode)
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@linktrip-application/src/test/kotlin/com/linktrip/application/domain/quota/ApiQuotaGuardServiceTest.kt` around lines 80 - 102, Add a test in ApiQuotaGuardServiceTest that verifies the fail-closed path of isAnyApiExceeded(): stub policyPort.dailyLimit(apiType) to return null for at least one ApiType (while other types return normal limits) and assert that calling service.isAnyApiExceeded() throws an exception (use assertThrows to verify it propagates), mirroring the existing null-policy test for isExceeded(); reference service.isAnyApiExceeded() and policyPort.dailyLimit(apiType) when adding the test.

linktrip-application/src/test/kotlin/com/linktrip/application/domain/quota/ApiCallCounterServiceTest.kt (1)
40-165: ⚡ Quick win — Repeated use of `LocalDate.now()` can make tests flaky around the midnight boundary.

Capturing the reference date once as `today` and reusing it in each test (or in `stubBreakdown`) improves stubbing/verify consistency.

Improvement example

```diff
+private val today: LocalDate = LocalDate.now()
 ...
-assertEquals(LocalDate.now(), captor.firstValue.callDate)
+assertEquals(today, captor.firstValue.callDate)
 ...
-whenever(alertPort.findLastSentThresholdKrw(LocalDate.now())).thenReturn(null)
+whenever(alertPort.findLastSentThresholdKrw(today)).thenReturn(null)
 ...
-whenever(countPort.findAllByDate(LocalDate.now())).thenReturn(
+whenever(countPort.findAllByDate(today)).thenReturn(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@linktrip-application/src/test/kotlin/com/linktrip/application/domain/quota/ApiCallCounterServiceTest.kt` around lines 40 - 165, tests call LocalDate.now() multiple times causing flakiness at midnight; capture the date once and reuse it. In each test (or in the shared setup) set val today = LocalDate.now() and use today when stubbing/verifying (pass today into stubBreakdown or add a date parameter to stubBreakdown), replace all LocalDate.now() uses in calls like countPort.findAllByDate(...), alertPort.findLastSentThresholdKrw(...), ApiCallCount(... callDate = ...), and verify(...) to use the captured today so all assertions and mocks reference the same fixed date.

linktrip-application/src/main/kotlin/com/linktrip/application/domain/trip/TripPlanService.kt (1)
29-30: ⚡ Quick win

Please add a lower-bound check for the daily-limit setting.

If a misconfigured production environment variable supplies 0 or a negative value, every request is blocked immediately or the service misbehaves. Validating > 0 at initialization time is safer.

Simple defensive-code example:

```diff
 class TripPlanService(
 @@
     private val dailyVideoAnalyzeLimit: Long,
 ) : TripPlanUseCase {
+    init {
+        require(dailyVideoAnalyzeLimit > 0) {
+            "member.daily-video-analyze-limit must be greater than 0"
+        }
+    }
+
     @Transactional
     override fun registerRequest(
```

🤖 Prompt for AI Agents
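A standalone sketch of the same fail-fast idea (DailyLimitGuard is a hypothetical name, not the project's class):

```kotlin
// Fail-fast validation of an injected limit: a zero or negative value is
// rejected at construction time instead of silently blocking all requests.
class DailyLimitGuard(private val dailyVideoAnalyzeLimit: Long) {
    init {
        require(dailyVideoAnalyzeLimit > 0) {
            "daily-video-analyze-limit must be greater than 0, got $dailyVideoAnalyzeLimit"
        }
    }

    // Inclusive comparison: reaching the limit counts as exceeded.
    fun isExceeded(usedToday: Long): Boolean = usedToday >= dailyVideoAnalyzeLimit
}

fun main() {
    // A misconfigured limit fails immediately at startup, not at request time.
    val rejected = runCatching { DailyLimitGuard(0) }.isFailure
    println(rejected)
    println(DailyLimitGuard(3).isExceeded(3)) // boundary value is treated as exceeded
}
```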
Verify each finding against the current code and only fix it if needed. In `@linktrip-application/src/main/kotlin/com/linktrip/application/domain/trip/TripPlanService.kt` around lines 29 - 30, The injected property dailyVideoAnalyzeLimit in TripPlanService must be validated to ensure it's > 0 to prevent misconfiguration; add a check during initialization (e.g., in TripPlanService's init block or a `@PostConstruct` method) that verifies dailyVideoAnalyzeLimit > 0 and throw an IllegalArgumentException (or fail fast with a clear message) if not, so any 0 or negative values are rejected at startup and the service doesn't run with invalid limits.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@linktrip-application/src/test/kotlin/com/linktrip/application/domain/quota/ApiCallCounterServiceTest.kt`:
- Around line 158-159: KtLint is failing due to long single-line constructor
calls for ApiCallCount; reformat each ApiCallCount(...) into multiline
constructor argument style so each parameter sits on its own line (e.g.,
ApiCallCount(id = "y", apiType = ApiType.YOUTUBE_DATA, callDate =
LocalDate.now(), callCount = youtubeDataCount) -> split across lines), and do
the same for the ApiType.GOOGLE_PLACES instance; update the calls referencing
ApiCallCount, apiType, callDate, and callCount to match the project's KtLint
argument-wrapping rules.
In
`@linktrip-application/src/test/kotlin/com/linktrip/application/domain/quota/ApiQuotaGuardServiceTest.kt`:
- Around line 46-47: The test uses exact LocalDate.now() when stubbing
countPort.findByApiTypeAndDate which makes the mock flaky at midnight; update
all stubs in ApiQuotaGuardServiceTest (calls to countPort.findByApiTypeAndDate
with ApiType.GEMINI or other ApiType values) to match the date argument with
any() (or eq(any()) as your mocking library requires) instead of
LocalDate.now(), e.g. whenever(countPort.findByApiTypeAndDate(ApiType.GEMINI,
any())).thenReturn(apiCallCount(...)), and apply the same change to every
occurrence (including the other ApiType stubs and all listed occurrences).
---
Nitpick comments:
In
`@linktrip-application/src/main/kotlin/com/linktrip/application/domain/trip/TripPlanService.kt`:
- Around line 29-30: The injected property dailyVideoAnalyzeLimit in
TripPlanService must be validated to ensure it's > 0 to prevent
misconfiguration; add a check during initialization (e.g., in TripPlanService's
init block or a `@PostConstruct` method) that verifies dailyVideoAnalyzeLimit > 0
and throw an IllegalArgumentException (or fail fast with a clear message) if
not, so any 0 or negative values are rejected at startup and the service doesn't
run with invalid limits.
In
`@linktrip-application/src/test/kotlin/com/linktrip/application/domain/quota/ApiCallCounterServiceTest.kt`:
- Around line 40-165: Tests call LocalDate.now() multiple times causing
flakiness at midnight; capture the date once and reuse it. In each test (or in
the shared setup) set val today = LocalDate.now() and use today when
stubbing/verifying (pass today into stubBreakdown or add a date parameter to
stubBreakdown), replace all LocalDate.now() uses in calls like
countPort.findAllByDate(...), alertPort.findLastSentThresholdKrw(...),
ApiCallCount(... callDate = ...), and verify(...) to use the captured today so
all assertions and mocks reference the same fixed date.
In
`@linktrip-application/src/test/kotlin/com/linktrip/application/domain/quota/ApiQuotaGuardServiceTest.kt`:
- Around line 80-102: Add a test in ApiQuotaGuardServiceTest that verifies the
fail-closed path of isAnyApiExceeded(): stub policyPort.dailyLimit(apiType) to
return null for at least one ApiType (while other types return normal limits)
and assert that calling service.isAnyApiExceeded() throws an exception (use
assertThrows to verify it propagates), mirroring the existing null-policy test
for isExceeded(); reference service.isAnyApiExceeded() and
policyPort.dailyLimit(apiType) when adding the test.
In
`@linktrip-application/src/test/kotlin/com/linktrip/application/domain/video/KeywordAnalyzeServiceTest.kt`:
- Around line 47-51: The test currently only verifies youTubePort.searchVideos
call count but must also assert that analyzeVideo is never invoked for the
empty-search-results case; update KeywordAnalyzeServiceTest (where
service.analyzeByKeywords is called) to add a mock verification that
analyzeVideo (the method responsible for per-video analysis) is called 0
times—e.g., verify(videoAnalyzer, times(0)).analyzeVideo(any()) or
verify(service, times(0)).analyzeVideo(any()) depending on which collaborator is
mocked—so the test fails if analysis is accidentally triggered.
In
`@linktrip-application/src/test/kotlin/com/linktrip/application/domain/video/VideoAnalysisQueueConsumerTest.kt`:
- Around line 54-56: Add two tests in VideoAnalysisQueueConsumerTest that stub
and verify interactions with the mocked ApiQuotaGuardService: 1) "quota
exceeded" test should stub apiQuotaGuardService.isQuotaExceeded(...) (or the
real guard method used by VideoAnalysisQueueConsumer) to return true, invoke the
consumer handler (e.g., VideoAnalysisQueueConsumer.consume or the test helper
that processes a queued message), and verify that downstream processing
components (e.g., VideoProcessor.process / repository.save / dequeue/ack methods
used in the test) are NOT called and that the message is left unprocessed; 2)
"quota normal" test should stub the same method to return false, invoke the
consumer, and verify that downstream processing is called exactly once and the
message is acknowledged/removed. Use Mockito stubbing
(when(...).thenReturn(...)) and verify(...) / verifyNoInteractions(...) against
apiQuotaGuardService and the processing collaborators to make the expected
branching explicit.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7b93b9ec-74d7-40bc-a267-c02a5daba36d
📒 Files selected for processing (13)
- linktrip-application/src/main/kotlin/com/linktrip/application/domain/trip/TripPlanService.kt
- linktrip-application/src/main/kotlin/com/linktrip/application/port/output/persistence/TripPlanRequestPersistencePort.kt
- linktrip-application/src/test/kotlin/com/linktrip/application/domain/quota/ApiCallCounterServiceTest.kt
- linktrip-application/src/test/kotlin/com/linktrip/application/domain/quota/ApiQuotaGuardServiceTest.kt
- linktrip-application/src/test/kotlin/com/linktrip/application/domain/trip/TripPlanServiceTest.kt
- linktrip-application/src/test/kotlin/com/linktrip/application/domain/video/KeywordAnalyzeServiceTest.kt
- linktrip-application/src/test/kotlin/com/linktrip/application/domain/video/VideoAnalysisQueueConsumerTest.kt
- linktrip-bootstrap/src/main/resources/application.yml
- linktrip-common/src/main/kotlin/com/linktrip/common/exception/ExceptionCode.kt
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/TripPlanRequestPersistenceAdapter.kt
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/entity/TripPlanRequestEntity.kt
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/repository/TripPlanRequestQuerydslRepository.kt
- sql/ddl/trip_plan_request.sql
✅ Files skipped from review due to trivial changes (1)
- linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/entity/TripPlanRequestEntity.kt
🚧 Files skipped from review as they are similar to previous changes (3)
- linktrip-bootstrap/src/main/resources/application.yml
- linktrip-common/src/main/kotlin/com/linktrip/common/exception/ExceptionCode.kt
- sql/ddl/trip_plan_request.sql
📜 Review details
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-03-18T01:07:53.575Z
Learnt from: toychip
Repo: Link-Trip/BackEnd PR: 24
File: linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt:16-28
Timestamp: 2026-03-18T01:07:53.575Z
Learning: Apply the same deferred concurrent-write policy to all Kotlin persistence adapters in the MySQL adapter package: ensure writes are coordinated on a single server with ShedLock considered for the replication phase. For YouTubeVideoPersistenceAdapter.kt (and other adapters in this directory), verify that concurrent writes are serialized or properly guarded, document the policy in code comments, and ensure CI checks or deployment gating will catch any regression where multiple instances might attempt a conflicting write during replication.
Applied to files:
linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/TripPlanRequestPersistenceAdapter.kt
📚 Learning: 2026-03-18T01:08:05.661Z
Learnt from: toychip
Repo: Link-Trip/BackEnd PR: 24
File: linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt:16-28
Timestamp: 2026-03-18T01:08:05.661Z
Learning: In `linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt`, the concurrent-write safety (duplicate videoId in batch, race condition on uk_youtube_video_video_id) is intentionally deferred. The maintainer (toychip) confirmed the system is currently single-server, so this is not a concern yet. When replication is introduced in the future, ShedLock will be used for distributed locking to address this. At that point, in-batch videoId deduplication should also be applied.
Applied to files:
linktrip-application/src/test/kotlin/com/linktrip/application/domain/trip/TripPlanServiceTest.kt
🪛 GitHub Actions: CI - Pull request
linktrip-application/src/test/kotlin/com/linktrip/application/domain/quota/ApiCallCounterServiceTest.kt
[error] 158-158: KtLint (ktlintTestSourceSetCheck) failed: Exceeded max line length (120) (cannot be auto-corrected).
[error] 158-158: KtLint: Argument should be on a separate line (unless all arguments can fit a single line).
[error] 158-158: KtLint: Argument should be on a separate line (unless all arguments can fit a single line).
[error] 158-158: KtLint: Argument should be on a separate line (unless all arguments can fit a single line).
[error] 158-158: KtLint: Argument should be on a separate line (unless all arguments can fit a single line).
[error] 158-158: KtLint: Missing newline before ")".
[error] 159-159: KtLint (ktlintTestSourceSetCheck) failed: Exceeded max line length (120) (cannot be auto-corrected).
[error] 159-159: KtLint: Argument should be on a separate line (unless all arguments can fit a single line).
[error] 159-159: KtLint: Argument should be on a separate line (unless all arguments can fit a single line).
[error] 159-159: KtLint: Argument should be on a separate line (unless all arguments can fit a single line).
[error] 159-159: KtLint: Argument should be on a separate line (unless all arguments can fit a single line).
[error] 159-159: KtLint: Missing newline before ")".
🔇 Additional comments (9)
linktrip-application/src/test/kotlin/com/linktrip/application/domain/video/KeywordAnalyzeServiceTest.kt (2)
32-40: This test verifies the cost-guard intent precisely. Checking that both external search and analysis calls are made zero times for a non-existent region aligns well with the PR goal (preventing cost blow-ups).
53-99: Good verification of the best-effort resilience scenario. Separating per-keyword and per-video exception isolation makes the structure clear, and asserting on call counts as well gives strong regression protection.
linktrip-application/src/test/kotlin/com/linktrip/application/domain/quota/ApiQuotaGuardServiceTest.kt (1)
53-60: The boundary-value (inclusive) check is clear. Splitting the exactly-at-limit case into its own test pins down the policy intent precisely.
linktrip-application/src/test/kotlin/com/linktrip/application/domain/quota/ApiCallCounterServiceTest.kt (1)
135-144: The N+1-avoidance verification point is excellent. Verifying the single findAllByDate lookup together with the absence of findByApiTypeAndDate calls pins down the service's intent precisely.

linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/repository/TripPlanRequestQuerydslRepository.kt (1)
35-51: The date-range count query implementation is clear and safe. The [startOfDay, startOfNextDay) boundary and the soft-delete exclusion condition are well reflected.

linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/TripPlanRequestPersistenceAdapter.kt (1)
34-37: The port-to-adapter delegation addition is clean. The new port methods are wired to the Querydsl repository consistently.
linktrip-application/src/main/kotlin/com/linktrip/application/port/output/persistence/TripPlanRequestPersistencePort.kt (1)
20-23: The port extension for the daily-limit policy is appropriate. The lookup responsibility needed by the application layer is clearly expressed.
linktrip-application/src/test/kotlin/com/linktrip/application/domain/trip/TripPlanServiceTest.kt (1)
90-114: The limit-exceeded / duplicate-request scenario tests cover the critical paths well. The exception-code assertions and the verification that countByMemberIdAndDate is skipped pin down the policy intent precisely.

linktrip-application/src/main/kotlin/com/linktrip/application/domain/trip/TripPlanService.kt (1)
37-39: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

The daily-limit check and the save are not atomic, so concurrent requests can persist more than the daily limit.

In the Lines 37-39 and Lines 44-45 flow, the count check and the save are separate steps, so under concurrency two requests can both pass and exceed the daily limit. This path directly conflicts with the PR's goal (abuse/cost guarding).

Suggested fix direction (consolidate into an atomic save path):

```diff
-        ensureDailyVideoAnalyzeLimitNotExceeded(memberId)
-        requestPort.save(TripPlanRequest.create(memberId, videoAnalysisTaskId))
+        val saved =
+            requestPort.saveIfUnderDailyLimit(
+                memberId = memberId,
+                videoAnalysisTaskId = videoAnalysisTaskId,
+                date = LocalDate.now(),
+                limit = dailyVideoAnalyzeLimit,
+            )
+        if (!saved) {
+            throw LinktripException(ExceptionCode.TOO_MANY_REQUESTS_VIDEO_ANALYZE_DAILY)
+        }
```

```kotlin
// Port example
fun saveIfUnderDailyLimit(
    memberId: String,
    videoAnalysisTaskId: String,
    date: LocalDate,
    limit: Long,
): Boolean
```

Bypass is only truly prevented by a DB-level per-member/day lock, or by an atomic increment-and-check on a counter row.
Also applies to: 42-50
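The race and its atomic fix can be illustrated in memory. The sketch below is only an analogy for the suggested DB-level behavior: DailyRequestCounter and tryAcquire are hypothetical names, and ConcurrentHashMap.compute stands in for a locked or counter-row based upsert.

```kotlin
import java.util.concurrent.ConcurrentHashMap

// In-memory illustration of the race: a separate check-then-save lets two
// concurrent requests both pass; an atomic check-and-increment does not.
class DailyRequestCounter(private val limit: Long) {
    private val counts = ConcurrentHashMap<String, Long>()

    // Returns true only if the increment stayed within the limit; the check
    // and the increment happen as one atomic step per key.
    fun tryAcquire(memberAndDate: String): Boolean {
        var acquired = false
        counts.compute(memberAndDate) { _, current ->
            val used = current ?: 0L
            if (used < limit) {
                acquired = true
                used + 1
            } else {
                used
            }
        }
        return acquired
    }
}

fun main() {
    val counter = DailyRequestCounter(limit = 2)
    val results = (1..3).map { counter.tryAcquire("member-1:2026-03-18") }
    println(results) // the third request is rejected atomically
}
```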
⛔ Skipped due to learnings
Learnt from: toychip
Repo: Link-Trip/BackEnd PR: 24
File: linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt:16-28
Timestamp: 2026-03-18T01:08:05.661Z
Learning: In `linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt`, the concurrent-write safety (duplicate videoId in batch, race condition on uk_youtube_video_video_id) is intentionally deferred. The maintainer (toychip) confirmed the system is currently single-server, so this is not a concern yet. When replication is introduced in the future, ShedLock will be used for distributed locking to address this. At that point, in-batch videoId deduplication should also be applied.

Learnt from: toychip
Repo: Link-Trip/BackEnd PR: 24
File: linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeVideoPersistenceAdapter.kt:16-28
Timestamp: 2026-03-18T01:08:05.661Z
Learning: Similarly, `linktrip-output-persistence/mysql/src/main/kotlin/com/linktrip/output/persistence/mysql/adapter/YouTubeChannelPersistenceAdapter.kt` likely has the same deferred concurrent-write policy (single server, ShedLock planned for replication phase).
whenever(countPort.findByApiTypeAndDate(ApiType.GEMINI, LocalDate.now()))
    .thenReturn(apiCallCount(ApiType.GEMINI, callCount = 100L))
Matching LocalDate.now() directly can make the test flaky at the midnight boundary.

Using LocalDate.now() directly as a stub argument (e.g. Line 46) risks a mock mismatch the moment the date rolls over between stubbing and the service call. Matching the date argument with any() removes this instability.

Suggested fix
import org.mockito.junit.jupiter.MockitoExtension
+import org.mockito.kotlin.any
import org.mockito.kotlin.whenever
- whenever(countPort.findByApiTypeAndDate(ApiType.GEMINI, LocalDate.now()))
+ whenever(countPort.findByApiTypeAndDate(ApiType.GEMINI, any()))
.thenReturn(apiCallCount(ApiType.GEMINI, callCount = 100L))
- whenever(countPort.findByApiTypeAndDate(ApiType.GEMINI, LocalDate.now()))
+ whenever(countPort.findByApiTypeAndDate(ApiType.GEMINI, any()))
.thenReturn(apiCallCount(ApiType.GEMINI, callCount = 500L))
- whenever(countPort.findByApiTypeAndDate(ApiType.GEMINI, LocalDate.now()))
+ whenever(countPort.findByApiTypeAndDate(ApiType.GEMINI, any()))
.thenReturn(apiCallCount(ApiType.GEMINI, callCount = 999L))
- whenever(countPort.findByApiTypeAndDate(ApiType.GEMINI, LocalDate.now())).thenReturn(null)
+ whenever(countPort.findByApiTypeAndDate(ApiType.GEMINI, any())).thenReturn(null)
- whenever(countPort.findByApiTypeAndDate(apiType, LocalDate.now()))
+ whenever(countPort.findByApiTypeAndDate(apiType, any()))
.thenReturn(apiCallCount(apiType, callCount = 100L))

Also applies to: 56-57, 65-66, 75-75, 84-85, 97-98
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@linktrip-application/src/test/kotlin/com/linktrip/application/domain/quota/ApiQuotaGuardServiceTest.kt`
around lines 46 - 47, The test uses exact LocalDate.now() when stubbing
countPort.findByApiTypeAndDate which makes the mock flaky at midnight; update
all stubs in ApiQuotaGuardServiceTest (calls to countPort.findByApiTypeAndDate
with ApiType.GEMINI or other ApiType values) to match the date argument with
any() (or eq(any()) as your mocking library requires) instead of
LocalDate.now(), e.g. whenever(countPort.findByApiTypeAndDate(ApiType.GEMINI,
any())).thenReturn(apiCallCount(...)), and apply the same change to every
occurrence (including the other ApiType stubs and all listed occurrences).
Related issue
Changes
Introduced a daily call-limit guard plus cumulative cost alerting to preemptively block cost blow-ups for the external APIs (Gemini / YouTube Data / Google Places).
Policy
- Externalized as api.daily-quota.* / api.cost-per-call-krw.* in application.yml (API_DAILY_QUOTA_*, API_COST_*_KRW)

Guard (double blocking)

- Throws BAD_GATEWAY_* when a limit is exceeded
- VideoAnalysisQueueConsumer checks the limit before pulling a task from the queue; if exceeded, it waits 60 seconds. Blocks at the earliest possible point

Count accumulation

- Daily call volume accumulated in a single row per (api_type, call_date)
- Lost updates blocked via SET call_count = call_count + 1 (Querydsl)

Cost alerts
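The lost-update concern behind SET call_count = call_count + 1 can be illustrated with an in-memory analogue. This sketch is not the project's Querydsl code; ConcurrentHashMap.merge plays the role of the store-side atomic addition:

```kotlin
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.CountDownLatch
import kotlin.concurrent.thread

// Read-modify-write ("load count, add 1, save") loses updates under concurrency;
// delegating the addition to the store (SQL: SET call_count = call_count + 1,
// here: ConcurrentHashMap.merge) does not.
fun main() {
    val counts = ConcurrentHashMap<String, Long>()
    val start = CountDownLatch(1)
    val workers = (1..8).map {
        thread {
            start.await()
            repeat(1_000) { counts.merge("GEMINI:2026-03-18", 1L, Long::plus) }
        }
    }
    start.countDown()          // release all workers at once to maximize contention
    workers.forEach { it.join() }
    println(counts["GEMINI:2026-03-18"]) // 8 threads x 1000 increments, none lost
}
```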
- Alert content: @user mention + per-API breakdown
- Duplicate alerts for the same threshold prevented via api_daily_cost_alert

Hexagonal structure

- domain/quota (model + service), domain/notification/CostAlertEvent
- 4 ports under port/output/quota/* (call-count persistence / quota policy / cost policy / alert-tracking persistence)

New tables
Manual DDL execution required before deployment (to avoid conflicts with prod validate mode):
- api_call_count: unique on (api_type, call_date)
- api_daily_cost_alert: unique on alert_date

Checklist
- Verify production environment values for the limits / cost-per-call rates in application.yml

Summary by CodeRabbit
New features

Documentation