📊 TIFA: Accurate and Interpretable Text-to-Image Faithfulness Evaluation with Question Answering

🧠 What is TIFA?


  • The key question: How can we evaluate the performance of Text-to-Image generation models and Composed Image Retrieval (CIR) models?
  • One of the recent solutions is TIFA (Text-to-Image Faithfulness evaluation with question Answering), proposed by Hu et al. at ICCV 2023.
  • The core idea is to automatically evaluate how faithfully the text condition (prompt) is reflected in the generated image.

🕰️ Image Evaluation Metrics before TIFA

Before TIFA (Hu et al., 2023), image generation evaluation developed along two main dimensions:
(1) Image Quality Evaluation and (2) Image-Text Alignment Evaluation.


1. 👩‍⚖️ Human Evaluation

  • Method: Show humans two images generated by different models and perform pairwise comparison to decide which is better.
  • Advantage: Most reliable and intuitive.
  • Limitation: Expensive, slow, and impractical for large-scale evaluation.

2. 🖼️ Image Quality Evaluation

An older family of metrics: they measure distributional image quality, not prompt faithfulness, so they are less informative than text-image alignment evaluation.

  • Inception Score (IS, Salimans et al. 2016)
    • Definition: Feeds generated images to an Inception-V3 classifier and analyzes the predicted label distributions.
    • Advantage: Captures both the diversity and the quality of generated images.
    • Limitation: Needs no ground-truth images, but it does not reflect semantic faithfulness to the prompt.
  • FID (Fréchet Inception Distance, Heusel et al. 2017)
    • Definition: Compares the Inception-V3 feature distributions of generated images and real images.
    • Advantage: Evaluates both fidelity (realism) and diversity.
    • Limitation: Requires real (ground-truth) images and struggles on complex datasets.
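Concretely, FID is the Fréchet distance between two Gaussians fitted to the feature sets. A minimal NumPy/SciPy sketch (the function name is mine; it assumes you already extracted the (N, D) Inception-V3 feature matrices):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two (N, D) feature sets:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * (S1 S2)^(1/2))."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```

Identical feature sets give a distance near zero; shifting the generated distribution away from the real one makes it grow, which is why FID needs real images as a reference.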

3. 🔗 Image-Text Alignment Evaluation

These metrics emerged with the development of detection models and vision-language models (VLMs).

  • CLIP-based
    • CLIPScore (Hessel et al. 2021), CLIP-R
    • Compute cosine similarity between image and text embeddings from CLIP.
    • Simple and fully automatic, but lacks granularity in fine-grained attributes (color, material, spatial relations, etc.).
  • Captioning-based
    • Use an image captioning model to describe the generated image, then compare the caption with the original prompt.
    • Use NLP metrics such as CIDEr, SPICE.
    • Limitation: Highly dependent on the performance of the captioning model.
  • Object Detection-based (SOA, DALL-Eval, etc.)
    • Use object detectors to check whether objects/attributes/relations from the prompt exist in the image.
    • Advantage: Can directly verify attributes like existence, color, count, and spatial relations.
    • Limitation: Evaluates only limited axes; cannot capture material, shape, activity, or context.
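For reference, CLIPScore reduces to a rescaled cosine similarity between the two embeddings. A minimal sketch, assuming the image and text vectors were already produced by a CLIP encoder (w = 2.5 is the rescaling constant from Hessel et al. 2021):

```python
import numpy as np

def clip_score(image_emb: np.ndarray, text_emb: np.ndarray, w: float = 2.5) -> float:
    """CLIPScore: w * max(cos(image, text), 0) over CLIP embeddings."""
    i = image_emb / np.linalg.norm(image_emb)
    t = text_emb / np.linalg.norm(text_emb)
    return w * max(float(i @ t), 0.0)
```

Because everything collapses into one global similarity, a wrong color or count barely moves the score; that is exactly the granularity problem noted above.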

4. 💡 Motivation for TIFA

  • Existing metrics only partially capture either image quality or image-text alignment.
  • Detection-based evaluations are restricted to limited attributes.
  • TIFA introduces QA-based evaluation:
    • Prompt → (Object/Attribute/Relation parsing) → Question generation → VQA-based checking → Scoring.
    • This allows evaluation of broad attributes including material, shape, activity, and context.

👉 Summary:

  • IS, FID → Focus on image quality (fidelity, diversity).
  • CLIPScore, CIDEr, SPICE → Focus on image-text alignment.
  • SOA, DALL-Eval → Focus on limited attributes.
  • TIFA → QA-based, interpretable, and highly correlated with human judgments ✅

📌 TIFA Evaluation Pipeline


TIFA does not merely check whether the correct image is retrieved; it evaluates faithfulness to the text conditions in a fine-grained manner.

  1. Prompt Input
    • Example: “A brown dog playing on the beach.”
  2. Parsing with LLM
    • Extract Objects, Attributes, Relations.
    • Example: dog, brown, beach.
  3. Question Generation
    • “Is there a dog in the image?”
    • “Is the dog brown?”
    • “Is the background a beach?”
  • Steps 2 and 3 are done in a single step using the official TIFA prompt:

    Given an image description, generate multiple-choice questions that verify if the image description is correct.
    First extract elements from the image description.
    Then classify each element into a category (object, human, animal, food, activity, attribute, counting, color, material, spatial, location, shape, other).
    Finally, generate questions for each element.
  4. Answer Verification with VQA Model
    • Input the image and generated questions → the model provides answers.
    • Check whether the answers align with the conditions in the prompt.
  5. Scoring (Faithfulness Score Calculation)
    • Aggregate the results to compute a faithfulness score based on condition satisfaction.
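The scoring step boils down to accuracy over the generated questions. A minimal sketch of the aggregation (the dictionary layout and field names here are illustrative, not the official TIFA code):

```python
def tifa_score(qa_results: list) -> float:
    """Faithfulness = fraction of prompt-derived questions the VQA model
    answers in agreement with the expected answer."""
    correct = sum(r["vqa_answer"] == r["expected"] for r in qa_results)
    return correct / len(qa_results)

# Illustrative results for the beach-dog prompt above
results = [
    {"question": "Is there a dog in the image?",  "expected": "yes", "vqa_answer": "yes"},
    {"question": "Is the dog brown?",             "expected": "yes", "vqa_answer": "no"},
    {"question": "Is the background a beach?",    "expected": "yes", "vqa_answer": "yes"},
]
score = tifa_score(results)  # 2 of 3 conditions satisfied -> 0.666...
```

Because each question maps to one condition, a failed question (here, the dog's color) directly names the unmet condition, which is where the interpretability comes from.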

✅ Advantages of TIFA

  • Faithfulness to Text Conditions: Goes beyond simply checking if the correct image is in the Top-K; directly measures how well conditions are satisfied.
  • Interpretability: Each question corresponds to a condition, allowing easy inspection of which conditions were satisfied or failed.
  • High Correlation with Human Evaluation: Strongly aligns with human judgment results.

⚠️ Limitations

Generating multiple questions and then answering each with a VQA model is computationally slow!

  • VQA Model Dependency: Performance depends heavily on the strength of the VQA model.
  • Extra Computational Cost: Unlike Recall, it requires both LLM parsing and VQA answering.
  • Not Yet a CIR Standard: Adoption rate is still low compared to Recall@K and mAP.

👉 Summary:
TIFA is a powerful tool for automatically measuring “text-to-image faithfulness”.
It complements traditional metrics such as Recall and mAP while providing human-friendly interpretability, making it a strong candidate to become a core metric for CIR and image generation evaluation in the future.

  • The authors also used TIFA to evaluate existing image generation models and created a leaderboard.

This post is licensed under CC BY 4.0 by the author.