Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
Columns: title (string) | abstract (string) | field_of_study (sequence) | publication_date (string) | license (string) | reference_paperhash (sequence) | paperhash (string) | full_text (string) | figures (images list) | figures_metadata (list) | decision (bool) | detailed_decision (int64) | reviews (list) | comments (list) | score (sequence) | confidence (sequence) | correctness (sequence) | clarity (sequence) | impact (sequence) | ethics (sequence)
Customized Procedure Planning in Instructional Videos | Generating customized procedures for task planning in instructional videos poses a unique challenge for vision-language models. In this paper, we introduce Customized Procedure Planning in Instructional Videos, a novel task that focuses on generating a sequence of detailed action steps for task completion based on user requirements and the task's initial visual state. Existing methods often neglect customization and user directions, limiting their real-world applicability. The absence of instructional video datasets with step-level state and video-specific action plan annotations has hindered progress in this domain. To address these challenges, we introduce the Customized Procedure Planner (CPP) framework, a causal, open-vocabulary model that leverages a LlaVA-based approach to predict procedural plans based on a task's initial visual state and user directions. To overcome the data limitation, we employ a weakly-supervised approach, using the strong vision-language model GEMINI and the large language model (LLM) GPT-4 to create detailed video-specific action plans from the benchmark instructional video datasets (COIN, CrossTask), producing pseudo-labels for training. Discussing the limitations of the existing procedure planning evaluation metrics in an open-vocabulary setting, we propose novel automatic LLM-based metrics with few-shot in-context learning to evaluate the customization and planning capabilities of our model, setting a strong baseline. Additionally, we implement an LLM-based objective function to enhance model training for improved customization. Extensive experiments, including human evaluations, demonstrate the effectiveness of our approach, establishing a strong baseline for future research in customized procedure planning. | [
"Customized Procedure Planning",
"multi-modal models",
"vision-language models",
"applications to robotics, autonomy, planning"
] | 2024-10-04 | CC BY 4.0 | [
"|pour_in_milk",
"|whisk_milk_and_eggs_together",
"|dip_homemade_ciabatta_bread_in_the_egg_mixture",
"|melt_butter_in_the_pan",
"|place_the_soaked_ciabatta_bread_in_the_pan",
"|flip_the_bread_to_cook_evenly",
"|glaze_the_top_with_cream_cheese._task:_make_french_toast_task:_make_bread_and_butter_pickles_keywords:_overnight_brine",
"|cut_cucumbers_thick_and_slice_onions_thin",
"|sprinkle_salt,_mix_in_ice,_and_refrigerate_overnight",
"|add_vinegar,_sugar,_dill,_mustard_seed,_cayenne,_and_cold_water_to_create_the_brine",
"|combine_cucumbers_and_onions_in_the_brine;_pack_tightly_into_jars_and_seal",
"|process_jars_in_a_water_bath_canner;_confirm_sealing",
"bi|procedure_planning_in_instructional_videos_via_contextual_modeling_and_model-based_policy_learning",
"bi|procedure_planning_in_instructional_videos_via_contextual_modeling_and_model-based_policy_learning",
"bird|natural_language_processing_with_python:_analyzing_text_with_the_natural_language_toolkit",
"chang|procedure_planning_in_instructional_videos",
"huang|on_the_limitations_of_fine-tuned_judge_models_for_llm_evaluation",
"lakhmi|recurrent_neural_networks:_design_and_applications",
"li|skip-plan:_procedure_planning_in_instructional_videos_via_condensed_action_space_learning",
"liang|holistic_evaluation_of_language_models",
"liu|visual_instruction_tuning",
"liu|improved_baselines_with_visual_instruction_tuning",
"liu|llava-next:_improved_reasoning,_ocr,_and_world_knowledge",
"ravindu|why_not_use_your_textbook?_knowledgeenhanced_procedure_planning_of_instructional_videos",
"niu|schema:_state_changes_matter_for_procedure_planning_in_instructional_videos",
"openai|an_ai_language_model",
"sun|plate:_visuallygrounded_planning_with_transformers_in_procedural_tasks",
"tang|coin:_a_large-scale_dataset_for_comprehensive_instructional_video_analysis",
"team|gemini:_a_family_of_highly_capable_multimodal_models",
"vaswani|attention_is_all_you_need._advances_in_neural_information_processing_systems",
"wang|event-guided_procedure_planning_from_instructional_videos_with_text_supervision",
"wang|pdpp:_projected_diffusion_for_procedure_planning_in_instructional_videos",
"wang|pandalm:_an_automatic_evaluation_benchmark_for_llm_instruction_tuning_optimization",
"wu|open-event_procedure_planning_in_instructional_videos",
"zare|rap:_retrieval-augmented_planner_for_adaptive_procedure_planning_in_instructional_videos",
"zhang|bertscore:_evaluating_text_generation_with_bert",
"zhao|p3iv:_probabilistic_procedure_planning_from_instructional_videos_with_weak_supervision",
"zhu|judgelm:_fine-tuned_large_language_models_are_scalable_judges",
"zhukov|cross-task_weakly_supervised_learning_from_instructional_videos"
] | zare|customized_procedure_planning_in_instructional_videos|ICLR_cc_2025_Conference |
introduction
Procedure planning in instructional videos (PPIV) involves generating a sequence of action steps to transform an initial visual observation of a task into its completion (Chang et al., 2020; Bi et al., 2021a; Sun et al., 2022; Zhao et al., 2022; Wang et al., 2023a; b; Li et al., 2023; Niu et al., 2024; Zare et al., 2024; Nagasinghe et al., 2024). Autonomous agents capable of performing this task can assist humans in efficiently completing complex, goal-oriented tasks and procedures in daily life. While humans intuitively understand the steps and reasoning needed to accomplish such tasks, machines face considerable challenges in replicating this ability. To close this gap, an autonomous agent requires a deep understanding of instructional procedures, their unique characteristics, the related objects, the various states involved, and the transformations brought about by actions. This understanding is essential for generating a plausible, executable plan that leads to successful task completion.

Despite considerable progress in recent studies, several obstacles still restrict the practical application of PPIV in the real world. Recent works on procedure planning in instructional videos have largely overlooked the importance of customization and user-specific directions. Most existing approaches rely on initial and final visual observations of a task, resulting in a non-causal formulation (Chang et al., 2020; Bi et al., 2021a; Sun et al., 2022; Zhao et al., 2022; Wang et al., 2023a; b; Li et al., 2023; Niu et al., 2024; Zare et al., 2024), which limits their applicability in real-life scenarios. This reliance on visual information alone introduces a semantic gap, particularly in representing intermediate action steps that may depend on user-specific conditions not captured by the visual inputs. Consequently, the generated action plans often lack informativeness, producing generic sequences blind to user-specific needs. This issue is illustrated in Fig. 1a, where the initial and final visual states do not distinguish between a generic and a more detailed plan, resulting in ambiguity and reinforcing the semantic gap.

Some models, such as that of Wang et al. (2023a), incorporate textual inputs and have made progress in bridging this semantic gap between visual observations and intermediate steps. However, they still fall short by conditioning planning solely on task-related textual information inferred from the observed states, without generating action steps tailored to the user-specific directions or conditions necessary to complete a task from its current state.

A model that fully addresses this limitation must go beyond simple visual inputs. It should be capable of processing both the current visual state of the task and user-specific requirements provided in textual form. This would allow the model to generate a more tailored plan, transforming the task toward completion in a way that aligns with both the visual state and the user's directions. Fig. 1b highlights the contrast between the two approaches: a model that incorporates user-specified needs, such as keyword conditions, can produce a more customized, detailed, and informative instructional plan with customized steps.
This stands in contrast to a model that relies solely on task objectives, showcasing the practicality and relevance of customized procedure planning.

This need cannot be adequately captured within the conventional closed-vocabulary setting under which this problem has been studied (Chang et al., 2020; Bi et al., 2021a; Sun et al., 2022; Zhao et al., 2022; Wang et al., 2023a; b; Li et al., 2023; Niu et al., 2024; Zare et al., 2024), as it restricts plan prediction to predefined action labels. While recent works, such as Wu et al. (2024), have made progress in expanding PPIV to an open-vocabulary setting, the challenge of generating detailed, user-specific action plans remains unresolved.

A key challenge in extending PPIV to address user-specific needs has been the lack of suitable datasets for training. Training such a model requires a large dataset of instructional videos along with their corresponding detailed instructional plans, annotated with time-stamped procedural states. These detailed plans must be tailored to the specific characteristics of each video, which distinguish an instructional video from more generic ones and address unique user requirements. However, obtaining such annotations is both expensive and time-consuming. Existing benchmark datasets for this task, such as CrossTask and COIN (Zhukov et al., 2019; Tang et al., 2019), provide step-level annotations of procedural states and generic plans, but they lack the detailed instructional plans and video-specific characteristics that make each instructional plan informative and unique in terms of user demands.

We tackle these challenges by introducing the setting of Customized Procedure Planning in Instructional Videos (CPPIV) and proposing the Customized Procedure Planner (CPP) framework as a solution to this problem. We implement CPP as a LLaVA-based (Liu et al., 2023; 2024a; b) model, fine-tuned to generate detailed, open-vocabulary instructional plans for task completion, starting from an initial visual state and customized based on user-specified keywords.

To overcome dataset limitations in training CPP, we adopt a weakly supervised approach. First, we leverage the powerful vision-language model GEMINI (Team et al., 2023) to extract video-specific, task-related keywords and to generate descriptions that explain how these keywords relate to the video's action plan. This is applied to the CrossTask and COIN datasets. Using this customized information, the key elements that differentiate the instructional content of each video from generic task plans, we conditionally generate a customized, video-specific instructional plan. To achieve this, we employ the strong LLM GPT-4o (OpenAI, 2023) to adapt the generic ground-truth instructional plan for each video based on the extracted keywords. These customized plans serve as pseudo-labels for training the CPP model. Additionally, during training, GPT-4o is integrated into the objective function to further enhance the model's ability to produce customized instructional plans.

Extending procedure planning in instructional videos to an open-vocabulary setting presents challenges for traditional evaluation metrics, which rely on predefined, closed-vocabulary action step labels and fail to generalize effectively.
To overcome this, we draw on recent works (Liang et al., 2023; Zhu et al., 2023; Wang et al., 2024; Huang et al., 2024) and introduce a novel LLM-based approach, referred to as automatic metrics, to assess the quality of both planning and customization in detailed, varied, open-vocabulary plans. We evaluate our model on two widely used instructional video datasets, CrossTask and COIN. Additionally, we validate our model's performance by testing it on human-annotated customized plans from both datasets. Our model outperforms the state-of-the-art (SoA) and establishes a strong baseline for the setting of customized procedure planning.

Our main contributions are:
- We emphasize the need for a more practical formulation of procedure planning in instructional videos that considers user directions and specific requirements, and we introduce the novel setting of customized procedure planning in instructional videos, aimed at generating instructional plans that cater to user- and task-specific needs rather than relying solely on generic task completion.
- We propose the Customized Procedure Planner framework, which generates open-vocabulary instructional plans tailored to user-specified condition keywords, facilitating the transformation of initial visual states into task completion.
- We propose a weakly supervised training approach that addresses the lack of customization annotations for CPPIV model training, allowing customized planning to be learned from unannotated videos.
- We extend conventional procedure planning metrics to encompass open-vocabulary, varied, and detailed instructional plans, enabling a comprehensive assessment of planning and customization performance for predicted plans.
related works
2.1 procedure planning
Procedure planning from instructional videos involves generating effective task-completion plans. Earlier works employed a two-branch architecture, sequentially predicting actions and states with recursive models (Jain & Medsker, 1999; Vaswani et al., 2017) to capture state transitions. More recent methods, such as Zhao et al. (2022) and Wang et al. (2023b), generate plans using a single-branch architecture that directly decodes actions, minimizing prediction error propagation. However, these approaches rely solely on visual observations of the initial and final states, resulting in a non-causal formulation that lacks adaptability to user-specific tasks and needs. Our work introduces CPP, a novel one-branch prediction framework that generates a detailed sequence of actions based on both the initial visual state and user-defined conditions, addressing this limitation in the existing literature.

The problem of customized procedure planning can be framed as conditional vision-language sequence generation: the model generates an output sequence by conditioning on both visual input (i.e., the current visual state) and textual input (i.e., task and user requirements). To address the CPPIV challenge, we adopt this conditional vision-language modeling framework, leveraging models such as LLaVA (Liu et al., 2024a; 2023; 2024b) and GPT-4o (OpenAI, 2023).

[Figure 2 caption: The pipeline extracts task-specific keywords from the PPIV datasets using a vision-language model (VLM); with the aid of a large language model (LLM), these keywords are then combined with the human-annotated generic plan to create pseudo-labels for training. Refer to Prompts 1, 2, and 3 for the complete text.]

In this section, we introduce our proposed framework, the Customized Procedure Planner, designed for customized procedure planning in instructional videos. We also describe the weakly supervised learning approach employed to train CPP in the absence of datasets containing customization annotations.
setting: customized procedure planning
We define the novel setting of customized procedure planning in instructional videos as follows. Given an initial visual observation o_s, a task objective Task, and a sequence of user-specified customization keywords Keywords = {k_1, k_2, ..., k_K}, the model generates a plan p = {a_1, a_2, ..., a_T}, where T is the plan's length (i.e., the action horizon) and a_i (for 1 ≤ i ≤ T) is the detailed customized text for the i-th action step. This plan should effectively transform o_s into the task objective while satisfying the customization conditions specified by Keywords (see the bottom scenario in Fig. 1b).

To implement the Customized Procedure Planner (CPP), we employ a vision-language model built on LLaVA. We experiment with LLaVA-1.5 (Liu et al., 2024a) and LLaVA-NeXT (Liu et al., 2024b) as the backbone of our framework, fine-tuning these models with pseudo-customized labels, as described in section 3.3. The operation of CPP is illustrated in Fig. 2a. The model takes as input o_s, a prompt containing the task objective Task, and user-defined conditions Keywords, and generates a sequence of customized action steps p. The zero-shot input prompt structure is shown in Prompt 1.

Prompt 1: 'Objective: Compose a detailed sequence of action steps, in order, to complete the task "{Task}" depicted in the image, starting from its current state. Conditions: {Keywords}. Instructions: Ensure that the steps align with the specified conditions and lead to successful task completion.'

Customizing instructional datasets. Customized procedure planning suffers from a lack of sufficient datasets for training. We overcome this limitation by leveraging recent advancements in vision-language models (VLMs) and the capabilities of large language models (LLMs). As shown in Fig. 2b, our novel pipeline collects customizations from the PPIV datasets to build customized instructional video datasets, whose plans serve as pseudo-labels for training. First, we employ the off-the-shelf vision-language model GEMINI-1.5-Flash to extract customization terms for each video sample in the datasets. These keywords are designed to be task-specific and tailored to the video's unique characteristics, as outlined in Prompt 2, which we implement along with a one-shot example response.

Prompt 2: 'You will be provided with an instructional video that demonstrates a task through a series of ordered action steps (i.e., an instructional plan). Your response should identify up to 3 keywords for the video that are directly related to both the task and the action steps. These keywords should emphasize what distinguishes the video's instructional plan from a generic plan on the same task. For each term, provide a brief explanation of its relevance to the video, the task, and the action steps in one sentence.'

Next, we process the extracted keywords and their descriptions of how they relate to the video's instructional plan, alongside the corresponding human-annotated generic plan for the video. Using the GPT-4o LLM, guided by Prompt 3 and a one-shot example response, we generate a customized plan for each video (i.e., a pseudo-label for training).

Prompt 3: 'Compose a customized plan for an instructional video, based on the task and the video characteristics. The video includes a sequence of action steps, action-plan, in order. Format the response in one line. Your response should map each action step from the action-plan to a corresponding tailored customized step, maintaining the sequence order, in the format "'action step': tailored step", separated by commas. If you need to include an additional step, use the term "added step".'
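To make the pipeline concrete, the following minimal sketch assembles the two prompting stages. It is illustrative only: query_vlm and query_llm are hypothetical wrappers around GEMINI-1.5-Flash and GPT-4o (their client APIs are not specified here), and the prompt strings abbreviate Prompts 2 and 3.

```python
# Illustrative sketch of the pseudo-label pipeline (Fig. 2b). The helpers
# query_vlm / query_llm are hypothetical wrappers around GEMINI-1.5-Flash
# and GPT-4o; they are not APIs defined in the paper.
from typing import Callable, Dict, List

def build_pseudo_label(
    video_path: str,
    task: str,
    generic_plan: List[str],
    query_vlm: Callable[[str, str], str],   # (video, prompt) -> keywords + rationale
    query_llm: Callable[[str], str],        # (prompt) -> customized plan text
) -> Dict[str, str]:
    # Step 1: extract up to three video-specific customization keywords (abbreviated Prompt 2).
    keyword_prompt = (
        "Identify up to 3 keywords that distinguish this video's instructional "
        "plan from a generic plan for the task, with a one-sentence explanation each."
    )
    keywords = query_vlm(video_path, keyword_prompt)

    # Step 2: rewrite the generic ground-truth plan into a customized plan (abbreviated Prompt 3).
    plan_prompt = (
        f"Task: {task}\nGeneric action plan: {'; '.join(generic_plan)}\n"
        f"Keywords: {keywords}\n"
        "Map each action step to a tailored customized step, keeping the order; "
        "use 'added step' for any extra step. Format the response in one line."
    )
    customized_plan = query_llm(plan_prompt)

    # The (keywords, customized_plan) pair is the pseudo-label for this video.
    return {"keywords": keywords, "customized_plan": customized_plan}
```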
Weak supervision. With the generated pseudo-labels, we train the Customized Procedure Planner (CPP) using a cross-entropy loss function (Liu et al., 2024a; b). To improve the model's customization, we further incorporate the large language model (LLM) GPT-4o during training. GPT-4o is tasked with selecting, from two plans a and b, the one best related to the Keywords; one of the two is the model's prediction and the other is the pseudo-label, with the positions of a and b randomized. The resulting selections yield an error rate that is used to modify the overall batch loss.

The error rate is computed over the entire batch. For each sample in the batch, if GPT-4o's selection matches the pseudo-label, the sample accuracy is 1; otherwise, the accuracy is 0. The batch accuracy is the average accuracy across all samples in the batch:

Acc_batch = (1/N) · Σ_{i=1}^{N} 1(ŷ_i = y_i),    (1)

where N is the batch size, ŷ_i is the plan selected by the LLM judge for the i-th sample, y_i is the corresponding pseudo-label for the i-th sample, and 1(ŷ_i = y_i) is an indicator function that equals 1 if ŷ_i = y_i and 0 otherwise.

The batch error rate is the complement of this accuracy:

Error_batch = 1 − Acc_batch.    (2)

This error rate is scaled by a fixed positive factor λ and added to the cross-entropy loss to adjust the training process:

L_batch = L_CE + λ · Error_batch,    (3)

where L_CE is the cross-entropy loss between the predictions and the pseudo-labels, and λ is a scaling factor that controls the impact of the error rate on the batch loss.
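For concreteness, a minimal sketch of the combined objective in eqs. (1)-(3) is given below. judge_selects_pseudo_label is a hypothetical stand-in for the GPT-4o comparison described above (not an API used in the paper), and λ is treated as a fixed hyperparameter.

```python
# Sketch of the training objective in eq. (3): L_batch = L_CE + lambda * Error_batch.
# judge_selects_pseudo_label is a hypothetical callable that asks the LLM judge which
# of (prediction, pseudo-label) is better related to the keywords (plans shown in
# random order) and returns True when its selection matches the pseudo-label.
from typing import Callable, List
import torch

def batch_loss(
    ce_loss: torch.Tensor,                      # cross-entropy loss over the batch (L_CE)
    predictions: List[str],                     # decoded plans, one per sample
    pseudo_labels: List[str],                   # customized pseudo-label plans
    keywords: List[str],                        # customization keywords per sample
    judge_selects_pseudo_label: Callable[[str, str, str], bool],
    lam: float = 5e-3,                          # fixed scaling factor from eq. (3)
) -> torch.Tensor:
    n = len(predictions)
    # Eq. (1): sample accuracy is 1 when the judge's selection matches the pseudo-label.
    correct = sum(
        judge_selects_pseudo_label(pred, label, kw)
        for pred, label, kw in zip(predictions, pseudo_labels, keywords)
    )
    # Eq. (2): the batch error rate is the complement of the batch accuracy.
    error_rate = 1.0 - correct / n
    # Eq. (3): the (non-differentiable) error rate scales into the total batch loss.
    return ce_loss + lam * error_rate
```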
experiments
We conduct experiments on two benchmark datasets, using novel evaluation metrics to validate the effectiveness of our proposed model, and further support our results through human evaluation.

Datasets. We evaluate our methodology using two instructional video datasets: CrossTask (Zhukov et al., 2019) and COIN (Tang et al., 2019). The CrossTask dataset includes videos across 18 topics, such as "Make French Toast," with an average of 7.6 actions per video; these topics are split into 18 primary and 65 related events. In our study, we focus on the primary subset, which provides precise timestamps for each action, enabling a clear sequence of instructional steps, and encompasses 2,750 videos. The COIN dataset contains 11,827 videos covering 778 distinct actions, with an average of 3.6 actions per video. Following recent works (Wang et al., 2023b; Bi et al., 2021b; Chang et al., 2020; Zhao et al., 2022), we create training and testing splits with a 70/30 ratio. To further enrich the datasets, we apply a moving-window approach to organize videos into plans with varying action horizons: starting from the i-th action, the window extends until the plan is complete (i.e., T = |p| − i).

Next, we apply the pseudo-label generation pipeline, as detailed in section 3.3 and Fig. 2b, to obtain customized plans for each dataset. This process leads to a more diverse set of action plans across the datasets. Fig. 3 illustrates the expansion of vocabulary in the action plans through word clouds, comparing the generic plans with the added vocabulary for four sample tasks; this emphasizes the open-vocabulary setting and the degree of customization achieved. Stop-words are excluded from the visualization (Bird et al., 2009).

Metrics. The performance of PPIV models is typically assessed using three standard metrics (Chang et al., 2020; Zhao et al., 2022; Sun et al., 2022; Bi et al., 2021a; Wang et al., 2023b): 1) Mean Intersection over Union (mIoU) evaluates the overlap between predicted and ground-truth action sequences, defined as |a_t ∩ â_t| / |a_t ∪ â_t|; it indicates whether the model identifies the correct steps but does not account for action order or repetitions. 2) Mean Accuracy (mAcc) measures the alignment of actions at each step, taking into account the order and repetitions of actions. 3) Success Rate (SR), the strictest metric, considers a plan successful only if it precisely matches the ground truth.

However, all these metrics rely on action labels in both predicted and ground-truth sequences, restricting the PPIV setting to a closed-vocabulary framework. This limitation impedes the evaluation of more practical open-vocabulary and varied plan sequences. In this study, we introduce four novel evaluation metrics that retain the essence of the conventional metrics while accommodating this new setting.

Automatic metrics. We quantify the performance of predicted plans along two dimensions: planning quality and customization quality. As mentioned, the open-vocabulary framework necessitates a novel approach to the standard planning metrics found in previous literature. To this end, we combine few-shot in-context learning and LLMs to create automatic metrics that robustly score plans along these two dimensions.

For planning quality, we use few-shot in-context learning combined with GPT-4o to create two types of sequence mappings from the predicted sequence to the closed-vocabulary generic ground-truth sequence. The first is order mapping, a sequential process that iterates over the ground-truth sequence: for each step s_n in the ground-truth sequence, it tries to find a corresponding step p_m in the predicted sequence; if no valid corresponding step exists, s_n is marked as missing. The mapping then proceeds with the next ground-truth step s_{n+1}, which may only map to predicted steps p_{m+1}, ..., p_M. This approach preserves the order of the sequence and can be used to calculate mean accuracy (mAcc) and success rate (SR) by aligning open-vocabulary customized plans with their closed-vocabulary counterparts and labels. The second is overlap mapping, which is identical to order mapping except that, if s_n maps to p_m, the following step s_{n+1} may be mapped to any step p_1, ..., p_M as long as s_n and s_{n+1} are distinct; for identical steps, s_{n+1} cannot map to p_m. This mapping records which steps of the ground-truth sequence are present in the predicted sequence regardless of order, so mIoU can be calculated from it.
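In our implementation these mappings are produced by GPT-4o with few-shot examples; purely to illustrate the mapping logic, the sketch below replaces the LLM judgment with a generic matches(gt_step, pred_step) predicate, which is an assumption rather than the actual implementation.

```python
# Illustration of the two sequence mappings behind the automatic metrics.
# `matches` stands in for the LLM's judgment that a closed-vocabulary ground-truth
# step corresponds to an open-vocabulary, customized predicted step.
from typing import Callable, List, Optional

def order_mapping(gt: List[str], pred: List[str],
                  matches: Callable[[str, str], bool]) -> List[Optional[int]]:
    """Map each ground-truth step to a predicted step, preserving order."""
    mapping: List[Optional[int]] = []
    start = 0  # s_{n+1} may only map to predicted steps after s_n's match
    for s in gt:
        hit = next((m for m in range(start, len(pred)) if matches(s, pred[m])), None)
        mapping.append(hit)            # None marks a missing step
        if hit is not None:
            start = hit + 1
    return mapping

def overlap_mapping(gt: List[str], pred: List[str],
                    matches: Callable[[str, str], bool]) -> List[Optional[int]]:
    """Map ground-truth steps to predicted steps regardless of order."""
    mapping: List[Optional[int]] = []
    for i, s in enumerate(gt):
        # Identical ground-truth steps may not reuse an already matched prediction.
        banned = {mapping[j] for j in range(i) if gt[j] == s}
        hit = next((m for m in range(len(pred))
                    if m not in banned and matches(s, pred[m])), None)
        mapping.append(hit)
    return mapping

# a-mAcc and a-SR follow from order_mapping (order-sensitive), while a-mIoU
# follows from overlap_mapping (order-insensitive coverage).
```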
For each mapping type and dataset, between 15 and 20 human-created training examples are provided to GPT-4o as few-shot examples. We refer to the resulting metrics as automatic SR, mAcc, and mIoU (a-SR, a-mAcc, a-mIoU).

To assess the quality of customization, we use few-shot in-context learning with GPT-4o to generate a relevance score that evaluates how well the plan incorporates the input keywords. A rubric, scored from 1 to 5, measures this customization, rewarding plans that meaningfully integrate the keywords and penalizing those that lack customization, regardless of overall planning success:
1: The plan is not relevant to any of the keywords.
2: The plan is somewhat relevant to a few keywords, but lacks depth.
3: The plan demonstrates a good balance of relevance, either highly relevant to one keyword or moderately relevant to all.
4: The plan is relevant to most keywords, demonstrating a strong application.
5: The plan is highly relevant to all keywords, thoroughly integrating them with clear and meaningful content.
Using this rubric, we create 20 examples each for CrossTask and COIN, providing them to the LLM in a few-shot learning setup; we refer to this metric as a-Relevance.

Aligned BERT Score (aBERT-Score). To measure the similarity between the predicted sequences and the generic plan, we further introduce a novel metric called aligned BERT Score. It is based on the BERT similarity score (Zhang et al., 2020) and is computed by applying an optimal alignment algorithm to both sequences, using a similarity matrix M[i][j] that captures the cosine similarity between the embeddings of each action pair from the ground-truth sequence and the customized predicted sequence:

M[i][j] = cos( emb(a^Generic_{i,GT}), emb(a^Customized_{j,Pred}) ),

where a^Generic_{i,GT} denotes the i-th reference action and a^Customized_{j,Pred} the j-th hypothesis action. We then derive the similarity score associated with the trajectory corresponding to the optimal alignment path between the two sequences, which serves as a measure of their similarity. For further details on this metric, please refer to appendix B.
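A minimal sketch of this computation is shown below. The sentence-embedding model and the precise alignment rule are assumptions (the paper defers them to appendix B); the dynamic program here simply maximizes, then averages, the cosine similarity along a monotonic alignment path.

```python
# Sketch of an aligned BERT-score style similarity. The embeddings and the exact
# alignment algorithm are assumptions; this version scores the best monotonic
# alignment path over the cosine similarity matrix and reports its average.
import numpy as np

def abert_score(gt_emb: np.ndarray, pred_emb: np.ndarray) -> float:
    """gt_emb: (I, d) embeddings of generic GT steps; pred_emb: (J, d) of predicted steps."""
    gt_n = gt_emb / np.linalg.norm(gt_emb, axis=1, keepdims=True)
    pr_n = pred_emb / np.linalg.norm(pred_emb, axis=1, keepdims=True)
    M = gt_n @ pr_n.T                      # M[i][j]: cosine similarity of the step pair
    I, J = M.shape

    # D[i, j]: best cumulative similarity of a monotonic path ending at (i, j);
    # C[i, j]: length of that path (used to report an average similarity).
    D = np.full((I, J), -np.inf)
    C = np.zeros((I, J), dtype=int)
    D[0, 0], C[0, 0] = M[0, 0], 1
    for i in range(I):
        for j in range(J):
            if i == 0 and j == 0:
                continue
            prev = []
            if i > 0:
                prev.append((D[i - 1, j], C[i - 1, j]))
            if j > 0:
                prev.append((D[i, j - 1], C[i, j - 1]))
            if i > 0 and j > 0:
                prev.append((D[i - 1, j - 1], C[i - 1, j - 1]))
            best_d, best_c = max(prev, key=lambda t: t[0])
            D[i, j], C[i, j] = best_d + M[i, j], best_c + 1

    # Average similarity along the highest-scoring alignment path.
    return float(D[I - 1, J - 1] / C[I - 1, J - 1])
```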
We implement the customized dataset using GEMINI-1.5-Flash as the VLM and GPT-4o mini as the LLM, which also serves as the judge for assessing the customization loss during training (eq. (2)). To expand the training dataset, we generate pseudo-label customized plans for each sample by leveraging all combinations of k of the sample's Keywords, for 0 < k ≤ K, effectively increasing the dataset size.

We perform LoRA fine-tuning on the 13B-parameter LLaVA-improved model (Liu et al., 2024a) and full fine-tuning on LLaVA-NeXT (Liu et al., 2024b), training them for three and four epochs, respectively, to optimize performance on the validation set. Model evaluation is conducted on a held-out, unseen test set. We use an initial learning rate of 2 × 10^-4 with a cosine learning-rate scheduler and a training batch size of 16. Training uses four NVIDIA A100 GPUs with 40 GB of memory for LLaVA-improved and eight GPUs for LLaVA-NeXT. During training, we set λ in eq. (3) to 5 × 10^-3.

We assess CPP's performance in comparison to existing large models capable of customized procedure planning for instructional videos. Specifically, we use GPT-4o, a widely recognized and powerful vision-language model, as a baseline under zero-shot and few-shot regimes. Similar to CPP, GPT-4o is prompted with an initial visual observation and the corresponding instructions. We compare its performance to CPP models using the LLaVA-improved (i.e., LLaVA-1.5) and LLaVA-NeXT (i.e., LLaVA-1.6) backbones. To distinguish models trained with the customization loss introduced in eq. (2), we label them as "with CL" and "w/o CL" (CL referring to customization loss). The results presented in tables 1 and 2 highlight CPP's superiority across the automatic metrics, including a-SR, a-mAcc, a-mIoU, and aBERT-Score, on the CrossTask and COIN datasets, demonstrating its advantages in planning, customization, and overall similarity to the ground-truth plans.

Notably, CPP with the LLaVA-1.6 backbone outperforms GPT-4o's few-shot performance by 13.08%, 18.32%, and 29.76% in a-SR, a-mAcc, and a-mIoU, respectively, on the CrossTask dataset, and by 14.96%, 20.4%, and 28.87% on the COIN dataset.

In terms of customization, CPP performs competitively against GPT-4o, exceeding its a-Relevance score on CrossTask. GPT-4o's high score on this metric, however, results from over-customization of action steps based on the input keywords: GPT-4o leverages its vast prior knowledge to over-customize plans, adapting them beyond the natural levels found in instructional videos in an attempt to fully satisfy the input prompt.

Fig. 4 presents two sample predictions based on the given conditions and visual state for CPP (with the LLaVA-1.6 backbone and CL). As shown, the model accurately understands the initial task state and generates a plan that successfully meets the keyword conditions through to completion.

The integration of the customization loss into the overall objective function (eq. (3)) significantly enhances CPP's performance, as illustrated in tables 3 and 4. On the CrossTask dataset, the a-Relevance score increases by 14 points, while the COIN dataset sees a rise of 25 points. Furthermore, this loss acts as a regularization mechanism, contributing to an overall improvement in planning scores, with a 1.26% increase in success rate on the COIN dataset. The tables also include the p-values for the improvements in the a-Relevance score, highlighting the significance of these enhancements for each backbone.

conclusion
In this study, we tackled the novel challenge of customized procedure planning in instructional videos by developing the Customized Procedure Planner (CPP) framework.
Unlike previous PPIV approaches, which were limited to using only the initial and final visual observations for procedure induction, CPP generates plans in a causal setting based on the initial observation along with user- and task-specific requirements, and it surpasses the existing state-of-the-art models. A key innovation is the use of weak supervision by customizing existing PPIV datasets, achieved by extracting video-specific customization information from video samples and utilizing advanced LLMs. Our model also incorporates a novel LLM-based objective function during training to further enhance customization. We evaluate CPP using new metrics designed specifically for this setting, demonstrating its superiority in CPPIV. Looking ahead, we see potential for applying CPP to more diverse scenarios and for generating customized plans for unseen tasks. Additionally, developing a high-quality customized dataset will pave the way for more advanced applications in this field.
| [
{
"caption": "Figure 3: Expansion of vocabulary in action plans as the result of customization pipeline. The word clouds compare generic plans (top) with the added vocabulary (bottom) for four sample tasks, showcasing the open-vocabulary setting and customization on the CrossTask dataset. Stop-words are excluded (Bird et al., 2009) in the visualization.",
"figType": "Figure"
},
{
"caption": "Table 2: Comparison of CPP and state-of-the-art models on the COIN dataset. CPP demonstrates superior performance in planning and customization.",
"figType": "Table"
}
] | false | null | [
{
"limitations": null,
"main_review": null,
"paper_summary": "paper_summary: The paper addresses the issue of generating customized procedures for task planning in instructional videos. Existing methods face challenges like overlooking customization and lacking proper datasets. The contributions are significant. It presents a novel setting for customized procedure planning, emphasizing user - specific needs. The Customized Procedure Planner (CPP) framework is proposed, which utilizes LlaVa - based models and is trained with pseudo - labels generated through a weakly - supervised approach. New evaluation metrics are introduced to assess planning and customization quality. Experimental results on CrossTask and COIN datasets show CPP's superiority over baselines like GPT - 4o. The integration of customization loss further enhances performance. Overall, this research lays a strong foundation for future work in customized procedure planning.",
"questions": "questions: See details in 'Weakness' section.",
"review_summary": null,
"strength_weakness": "strength_weakness: 1. The CPP model is trained and evaluated on a specific set of instructional video tasks (mostly related to cooking and DIY activities in the used datasets). It is unclear how well the model would generalize to other types of tasks or domains that have different characteristics and action requirements. \n\n2. The process of creating pseudo - labels using GPT - 4o and GEMINI might introduce some biases or inaccuracies. \n\n3.The interpretation of the \"relevance score\" for customization quality assessment could be more straightforward. The rubric used to measure customization is somewhat subjective, and it might not be clear how different users would rate the relevance of a plan.\n\n4.The human evaluation seems to focus mainly on validating the model's performance rather than exploring potential areas for improvement. A more in - depth qualitative analysis of the human feedback could uncover additional insights into the strengths and weaknesses of the CPP model and guide further refinements.",
"title": null
},
{
"limitations": null,
"main_review": null,
"paper_summary": "paper_summary: The paper introduce a new task called customized procedure planning (CCP) as an extension to the task of procedure planning in instructional videos. This task generates action plans in natural languages conditioned on user-specific requirements and task objectives, utilizing a weakly supervised approach to overcome the lack of detailed customization annotations in existing datasets. The authors propose a training method, leveraging Large Language Models (LLMs) like GPT-4 for generating pseudo-labels and for enhancing customization through a novel objective function. The paper also introduces new LLM-based metrics to evaluate open-vocabulary, user-specific plans.",
"questions": "questions: Could you elaborate on the potential biases introduced by using GPT-4 and GEMINI for pseudo-labeling and how they may affect the quality of the generated plans? How does the model handle cases where the user-specified conditions conflict with one another or with the task objective?",
"review_summary": null,
"strength_weakness": "strength_weakness: 1. While the new metrics are interesting, the reliance on LLM-based evaluation could be perceived as less interpretable and overly dependent on the LLM’s performance and biases.\n\n2. Rather than simply framing this task as catering to user-specific needs, the primary distinction lies in how the goal is represented. Traditional procedure planning approaches are goal-oriented, often defining the goal using a single image. In contrast, this approach defines the goal using an Objective along with specific Conditions, providing a more nuanced and customizable representation.\n\n3. The paper does not thoroughly address the potential limitations and biases introduced by pseudo-labeling, especially given that human-annotated datasets remain scarce.\n\n4. The results on CrossTask and COIN are also somewhat difficult to interpret. Since these datasets lack ground truth action plans expressed in natural language, the evaluation relies on pseudo-labels generated by LLMs. This introduces a challenge: comparing model outputs, which are also generated by LLMs, against pseudo-labels from the same or similar models raises questions about the objectivity and robustness of the evaluation process.",
"title": null
},
{
"limitations": null,
"main_review": null,
"paper_summary": "paper_summary: To address the potential challenges of lacking details in action steps in procedure planning in videos, the authors propose Customized Procedure Planner (CPP) framework to predict detailed action steps. They also used foundation models to create detailed action labels for benchmark dataset COIN and CrossTask. They also propose automated LLM-based metrics to evaluate the proposed models, therefore setting baselines.",
"questions": "questions: -How is user requirements and detailed action steps fundamentally different from previous task instruction and action outputs? Can I view them as merely adding more detail to the data in the same modality?\n\n-What solving this task matter? Can you prove or is there evidence that it might have impact other fields (e.g. in real world robotics?)\n\n-Have you tried other combination of foundation models to solve your task?\n\n-How should you reproduce your experiment results since commercial foundation model outputs are not reproducible?\n\n-Why choose COIN and CrossTask datasets? There are more recent datasets (e.g. Ego4D).",
"review_summary": null,
"strength_weakness": "strength_weakness: -The authors failed to advocate the gravity and scientific significance of the problem (e.g. lack of detailed action steps or user requirements) that they were trying to solve. \n\n-It seems that both are achievable just expanding the input/output spaces of previous tasks. \n\n-The proposed framework lacks novelty. It is a combination of foundation models, designed to solve a very specific task. \n\n-The authors only compare their proposed model with two foundation model baselines. \n\n-Using foundation models to do the evaluation are not robust because the definition of task success is not based on ground-truth.",
"title": null
},
{
"limitations": null,
"main_review": null,
"paper_summary": "paper_summary: This paper investigates a more practical formulation of PPIV that considers user directions, called customized procedure planning in instructional videos.To overcome data limitations, the authors built a novel pipeline to collect customizations from existing PPIV datasets. Finally, a Customized Procedure Planner (CPP) framework (based on Llava) with a customization loss is proposed.",
"questions": "questions: Please refer to weaknesses.",
"review_summary": null,
"strength_weakness": "strength_weakness: 1. This paper builds a pipeline for generating Customized Plans. The authors need to provide more examples and statistical results to demonstrate the effectiveness of the generated plans. Additionally, most of the keywords provided as examples in the manuscript are materials used in the production process, which does not quite align with the notion of customization.\n2. In the data collection pipeline, does the VLM input include a Generic Plan? The prompt in Figure 2 does not seem to contain this information. Does the VLM model have such capabilities or knowledge?\n3. In this task, the ground truth (GT) usually consists of gerund phrases, while the model generates full sentences. Could adding related prompts to constrain the model's output improve performance?\n4. The authors should provide more examples to demonstrate the effectiveness of the proposed model and modules outlined in the text, Instead of just using numerical results.\n5. The impact of the Customization Loss is marginal.",
"title": null
}
] | [
{
"comment": "Thanks for your response. My concerns are partially addressed. I decide to keep my score, as the paper still some distance from the acceptance threshold and needs further refinement.",
"title": "Response to the authors"
},
{
"comment": "Thank you for your response. My concerns are partially addressed.",
"title": null
},
{
"comment": "Dear Reviewer,\n\nThank you for your feedback. We’ve addressed your comments in our response on OpenReview. As the discussion phase ends on November 26, we’d appreciate it if you could confirm if your concerns are resolved and consider updating your scores.\n\nThank you!",
"title": null
},
{
"comment": "Dear Reviewer,\n\nThank you for your feedback. We’ve addressed your comments in our response on OpenReview. As the discussion phase ends on November 26, we’d appreciate it if you could confirm if your concerns are resolved and consider updating your scores.\n\nThank you!",
"title": null
},
{
"comment": "We deeply appreciate the insightful feedback provided by the reviewers. We are pleased that Reviewer wx1o acknowledged the novelty and creativity of our CPP framework, highlighting its well-written problem definition, motivating introduction, and clear technical approach. Reviewer UHFc commended CPPIV for its contribution to extending traditional procedure planning and recognized the innovation of our LLM-based metrics for evaluating open-vocabulary plan customization. Reviewer m1et appreciated our identification of the gap in detailed action steps and the thoroughness of our experiments, while Reviewer Pgdi praised the significance of our weakly supervised training approach in addressing the challenge of lacking customization annotations. We are grateful for the recognition of the impact and importance of our work by all reviewers.\n\nWe have addressed their concerns below, and hope they will update their scores if they find our responses clarifying.",
"title": null
},
{
"comment": "> Response to point 5:\n\nThank you for your comment. While the improvements may appear small, they are statistically significant, as demonstrated by the significance tests in Tables 3 and 4. We believe this shows that the customization loss acts as a regularizer, leading to a meaningful improvement in the model's performance.",
"title": null
},
{
"comment": "> Response to point 4:\n\nThank you for your feedback. We appreciate your suggestion and will include additional examples in the camera-ready version to better demonstrate the effectiveness of our proposed model and the modules outlined in the text, to ensure a more comprehensive understanding of our model's performance alongside the numerical results.",
"title": null
},
{
"comment": "> response to point 3:\n\nThank you for your comment. There exists a trade-off. While constraining the model’s outputs to gerund phrases could improve alignment with ground truth for evaluation, it would also limit the model’s ability to generate detailed, customized plans. \n\nWe intentionally did not impose this restriction, as we aimed to preserve the open-vocabulary quality of our model and allow for diverse, detailed responses tailored to user needs. Our experiments indicated that limiting outputs to phrases improved alignment but resulted in worse customization. Therefore, we prioritized maintaining the flexibility and adaptability of the model to generate more nuanced, user-specific plans.",
"title": null
},
{
"comment": "> Response to point 2:\n\nThank you for your question. Yes, in the data collection pipeline, the VLM (Vision-Language Model) does indeed take the generic plan as part of the input text. For the sake of clarity and space in the figure, we did not include this information in the shortened prompt shown in Figure 2. However, the actual prompt (Prompt 2) does include the generic steps. We will make this point clearer in the camera-ready version for further clarity.",
"title": null
},
{
"comment": "> Response to point 1:\n\nThank you for your feedback. We appreciate your suggestions and will include more example outputs in the camera-ready version to better demonstrate the effectiveness of the generated plans. \n\nRegarding your point on customization keywords, we believe one of the strengths of our work lies in how we collect these keywords. Rather than predefining keywords or constraints, we directly extract customization keywords from a large-scale set of real-life YouTube instructional videos. Some videos are more generic, while others are highly specific and customized, reflecting the diversity of real-world content. This approach avoids introducing customization bias, as it reflects the natural variability in how users express their needs, rather than simulating customized keywords. \n\nBy processing and extracting keywords directly from the dynamic content of real-world videos, our model generates more authentic and user-aligned plans. This method is less imposed and more reflective of actual user behavior, where customization evolves from the content itself, ensuring that the generated plans are truly tailored to the task at hand.",
"title": null
},
{
"comment": "> Response to Question 1:\n\nNo, user requirements and detailed action steps are not merely additional detail within the same modality. While the added detail is a benefit of our framework, the main focus is on defining the task objectives and conditions. The output must not only be more detailed, but also bring about a state transition that aligns with both the conditions and objectives specified by various modalities and user directions. This requires the model to integrate visual input and user-specific requirements, resulting in a more dynamic and customized approach to task execution.\n> Response to Question 2:\n\nThank you for your question. Solving this task is important because it addresses the gap in generating **customized, user-specific procedure plans**—a capability that has far-reaching implications across multiple fields. \n\n1. **Impact on Real-World Robotics**: \n In robotics, tasks often require high levels of customization to adapt to varying user needs, environmental conditions, and specific goals. Our framework's ability to incorporate user-specific directions and handle complex, open-vocabulary objectives is directly applicable to **personalized robotics**, where tasks must be performed based on individual preferences or dynamic environments. For example, in assistive robotics, customizing task instructions (e.g., for elderly care or rehabilitation) based on user needs would significantly improve task effectiveness and user satisfaction.\n\n> Response to Question 3:\n\n2. **Broader Applications**: \n Beyond robotics, our approach can impact fields like **adaptive learning systems**, **human-computer interaction**, and **personalized training**. The ability to generate task plans that are tailored to specific user requirements could enhance systems that require personalized instructions, such as educational platforms, smart assistants, or industrial automation tools.\n\n3. **Evidence of Impact**: \n While this work is foundational, there is growing interest in applying customized procedure planning in areas like **autonomous systems** and **personalized AI assistants**, where context and user specifications are critical. We envision that future studies and applications of this model will demonstrate its utility in real-world scenarios, as seen with the growing adoption of personalized AI in health, education, and assistive technologies.\n\nThus, solving this task addresses a core need in many practical domains, paving the way for more **adaptive, user-centric systems** with significant real-world impact.\n> Response to Question 4:\n\nThank you for your question. We primarily experimented with LLaVA, LLaVA Next, and GPT-4o for our task. The main focus of our work was on the novel setting and supervision method, as well as leveraging the capabilities of these models to handle user-specific customization. While we recognize that other foundation models could be explored, our primary contribution lies in the unique task formulation and the novel approach to weak supervision, which we believe distinguishes this work.\n> Response to Question 5:\n\nThank you for your comment. We specifically chose LLaVA and LLaVA Next for this problem due to their open-source nature, which ensures reproducibility. While we have introduced a systematic framework for maximizing reproducibility with commercial AI models, we acknowledge that exact reproduction of results with commercial foundation models remains an ongoing challenge in the AI field. 
Addressing this issue is critical and continues to be an area of focus for future research in the broader AI community.\n> Response to Question 6:\n\nThank you for your comment. We chose the COIN and CrossTask datasets primarily because they are widely used in the community for evaluating multi-modal procedure planning, offering a well-established benchmark for this specific task. These datasets provide detailed annotations that are essential for evaluating the performance of customized procedure planning models in instructional video contexts. \n\nWhile newer datasets like Ego4D are available and offer valuable insights, it’s important to note that **Ego4D** is focused on egocentric problems, which is a different problem and domain from instructional procedure planning. Ego4D is a controlled dataset designed for egocentric task understanding, and it often involves much longer action sequences that may include irrelevant actions not directly related to the main task. This makes it less suitable for evaluating the specific goals of instructional procedure planning, which requires detailed, task-specific instructions for generating customized plans. \n\nGiven the focus of our work, we selected COIN and CrossTask because they better align with the goals of our research. However, we plan to extend our experiments to newer datasets, including Ego4D, in future work to evaluate the scalability and robustness of our approach across different domains.",
"title": null
},
{
"comment": "> Response to comment 5: \n\nThank you for your comment. We respectfully disagree and would like to clarify that our evaluation metrics are indeed based on ground-truth generic plans, as detailed in Section 4.2, *Automatic Metrics*. This study evaluates performance along two critical dimensions: **planning quality** and **customization quality**. \n\n1. **Ground-Truth Alignment**: \n For planning quality, our approach aligns open-vocabulary customized plans with closed-vocabulary generic ground-truth sequences. This alignment is performed using two robust sequence mappings: \n\n - **Order Mapping**: \n This sequential process maps each ground-truth step to a corresponding predicted step while preserving the sequence order. Steps without valid matches are marked as missing, enabling the calculation of metrics such as **mean accuracy (mAcc)** and **success rate (SR)**. \n\n - **Overlap Mapping**: \n Similar to order mapping but less constrained, this method identifies the presence of ground-truth steps in the predicted sequence regardless of order. This mapping supports the calculation of **mean Intersection over Union (mIoU)**, providing additional insights into sequence coverage. \n\n2. **Few-Shot In-Context Learning**: \n To achieve accurate mappings, we use a a systematic approach and use Few-Shot In-Context Learning with GPT-4o, providing 20 human-created training examples per dataset. These examples serve as benchmarks, ensuring robust and consistent evaluations of the predicted plans against ground-truth sequences. \n\n3. **Novel Metrics for Open-Vocabulary Frameworks**: \n Given the nature of our open-vocabulary framework, traditional closed-vocabulary metrics are insufficient. Our proposed automatic metrics (a-SR, a-mAcc, a-mIoU) extend standard planning metrics to robustly quantify performance in this more complex setting, ensuring both planning quality and alignment with ground truth. \n\nWe hope this explanation clarifies that our evaluation framework is grounded in rigorous alignment with ground-truth plans and combines robust methodologies to address the complexities of open-vocabulary procedure planning.",
"title": null
},
{
"comment": "> Response to comment 4: \n\nThank you for your feedback. We appreciate your concern regarding the choice of baselines. For our evaluations, we intentionally selected the strongest foundation model baselines to assess the task at hand. Specifically, we compared our proposed model with: \n\n1. **GPT-4o**: A state-of-the-art model widely regarded for its performance and robustness in generating text and understanding complex inputs. \n2. **LLaVA 1.5 and LLaVA-Next**: Two state-of-the-art open-source models specifically designed for vision-language tasks, ensuring a strong and relevant comparison for our customized procedure planning framework. \n\nBy evaluating against these leading models, we ensured a rigorous assessment of our approach. We hope this clarification addresses your concern.",
"title": null
},
{
"comment": "> Response to comment 3\n\nThank you for your comment. To clarify, we would like to summarize the main contributions of our paper: \n\n1. **Novel Setting of Customized Procedure Planning**: \n We introduce a more practical formulation of procedure planning for instructional videos, shifting the focus to generating instructional plans that cater to user-specific needs rather than generic task completion. This novel setting emphasizes incorporating user directions and requirements into task planning. \n\n2. **Customized Procedure Planner Framework**: \n Our proposed CPP framework generates open-vocabulary instructional plans tailored to user-specified condition keywords, enabling the transformation of initial visual states into task completion in a way that aligns with user goals. \n\n3. **Weakly Supervised Training Approach**: \n We address the challenge of limited customization annotations by proposing a weakly supervised training approach. This allows the CPP framework to learn customized planning from unannotated videos, overcoming a key barrier in model training for this setting. \n\n4. **Extended Evaluation Metrics**: \n We extend conventional procedure planning metrics to assess open-vocabulary, varied, and detailed instructional plans. This comprehensive evaluation framework measures both planning and customization performance, capturing dimensions previously overlooked in the field. \n\nWe hope this clarification highlights the significance and originality of our work.",
"title": null
},
{
"comment": "> Response to the second comment\n\nThank you for your comment. We respectfully disagree, as expanding the input/output spaces alone captures only one aspect of the solution. Our setting fundamentally differs from prior tasks in several key ways: \n\n1. **Dual Goal Representation**: \n Traditional procedure planning approaches are primarily goal-oriented, typically defining the goal using a single image. In contrast, our approach represents the goal through an *Objective* and specific *Conditions*, creating a more nuanced and customizable representation tailored to user requirements. \n\n2. **Beyond Generic Transitions**: \n Prior tasks focus on decoding generic transitions between two visual states. Our approach significantly expands on this by introducing a setting that addresses a dual goal-oriented problem: satisfying both the task-specific visual objectives and user-defined directions. \n\n3. **Integration of Visual and Textual Inputs**: \n Our framework requires a model to process visual content, understand the current state, identify involved objects, comprehend task objectives, and align these with user-specified directions. This combination results in a more practical and comprehensive approach, addressing real-world applications where customization and adaptability are essential. \n\n4. **New Training and Evaluation Framework**: \n Training and evaluating such models require a novel framework not addressed in prior tasks. The outputs of these models expand beyond classic metrics like SR, ACC, and IoU by incorporating subjectivity and user customization as added dimensions. This introduces unique challenges, as models must perform not only on structural accuracy but also on alignment with user-specific needs and conditions. Our proposed evaluation framework addresses these new dimensions, further emphasizing the distinction of our setting from traditional approaches. \n\nThis fundamental shift highlights the limitations of prior methods and the advancements introduced in our work, providing a more practical and effective solution for customized procedure planning.",
"title": null
},
{
"comment": "> Response to first comment\n\nThank you for your feedback. We respectfully disagree and believe the manuscript adequately addresses the gravity and scientific significance of the problem. \n\n1. **Introduction (Lines 046–079)**: \n We highlight the *semantic gap* in generating detailed and customized plans—a key limitation in prior literature—underscoring the need for user-specific, tailored plans rather than generic task objectives. \n\n2. **Figure 1 Illustration**: \n Figure 1 contrasts our approach with prior methods, which generate generic action labels. Our framework produces detailed, customized plans that incorporate user-specific needs and task conditions. \n\n3. **Lines 086–093**: \n We explain the need for models to integrate both the current visual state and user-specific textual requirements, enabling tailored, user-aligned plans. Figure 1b further emphasizes the practical value of this approach in comparison to traditional models. \n\n4. **Lines 094–099**: \n We discuss how closed-vocabulary settings restrict predictions to predefined labels, leaving the challenge of generating detailed, user-specific plans unresolved. \n\n5. **Lines 099–107**: \n We address the significant gap in datasets for training customized procedure planners. Existing datasets, such as CrossTask and COIN, lack detailed, time-stamped plans tailored to specific videos and user requirements. This limitation is further explored in Section 3.3. \n\n6. **Lines 123–131**: \n We introduce a novel evaluation framework, addressing the inadequacy of traditional metrics for open-vocabulary, customized plans. We propose LLM-based metrics and validate our model’s performance on CrossTask and COIN datasets, establishing a strong baseline that outperforms state-of-the-art approaches. \n\nThese points collectively advocate the significance of the problem and highlight our contributions in addressing these challenges. However, we appreciate your feedback and would welcome specific suggestions on areas you feel were overlooked so we can further improve the manuscript.",
"title": null
},
{
"comment": "Thank you for your thoughtful feedbacks. We provide the following responses to points made:\n> 1\n\nWe agree that automatic metrics, including those based on LLMs, inherently carry some biases. However, we have taken several steps to minimize these biases and maximize accuracy: \n\n1. **Coupling with Objective Metrics**: \n Our LLM-based metrics (aSR, aAcc, aIoU, and aRelevance) are designed to complement each other and objective metrics like aBERT-Score. This balanced framework captures nuanced aspects of customization through LLMs while maintaining interpretability via objective, quantitative measures.\n \n2. **Systematic Design with Rubrics and Examples**: \n To standardize LLM evaluations, we provide a detailed rubric and use few-shot in-context learning examples (4 examples per rubric rule) to anchor the model's outputs. This ensures that evaluations follow a consistent and systematic framework, reducing variability and improving reliability. \n\n3. **Transparent Evaluation Process**: \n To enhance transparency, for each evaluation instance, the LLM is asked to provide reasoning for order mapping, alignment to generic plans, and relevance before scoring based on the rubric. This systematic reasoning ensures logical consistency and provides traceability for the model’s ratings. \n\n4. **Human Validation and Performance Consistency**: \n Validation tests demonstrated high inter-rater agreement (>85%), confirming the reliability and consistency of our evaluation approach. By combining LLM-based evaluations with human validation and objective metrics, we minimize potential biases while maximizing the robustness of our framework. \n\nWe believe this combined approach provides a reliable and interpretable evaluation methodology while addressing the inherent challenges of automatic metrics. \n\n> 2 \n\nThank you for your insightful observation. We agree and believe that this distinction is indeed a strength of our framework. As correctly identified, defining the goal using an Objective along with specific Conditions enables a more nuanced and customizable approach compared to traditional goal-oriented methods.\n\nWe will revise the introduction to bring greater attention to this key contribution.\n\n> 3 & 4\n\nWe agree that potential bias is an inherent Challenge of using models like GPT-4o and GEMINI for generating pseudo-labels and weak-supervision frameworks. However, we have implemented several measures to minimize these biases and ensure the reliability of our pseudo-labels: \n\n1. **Limited Reliance on Models:**\nWhile GPT-4o and GEMINI generate customization details, critical elements like plan actions and timestamps are manually annotated, reducing model bias.\n\n2. **Rigorous Quality Assessment**:\nWe manually evaluated the pseudo-labels on subsets of the COIN and CrossTask datasets (Section A), with high scores (e.g., 4.18/5 for customization on CrossTask), confirming label reliability.\n\n3. **Independent Models**:\nGEMINI extracts customization keywords, and GPT-4o evaluates them, ensuring independence. Both models, trained on diverse datasets, reduce the risk of overlapping biases. We also use objective metrics like aBERT-Score (Table C) for unbiased model evaluation.\n\n4. **Error Correction During Training**:\nGPT-4o is incorporated into the objective function to iteratively refine predictions, reducing the impact of pseudo-label inaccuracies.\n\n5. **Acknowledgment of Limitations**:\nWe recognize that biases are an inherent challenge in weak supervision. 
While pseudo-labels enhance model robustness, we will include a more detailed discussion of these limitations in the paper for transparency.\n\n> **Response to Reviewer Question:** \n\n\n\n1. **Mismatch Between User Directions and Images**: \n In our dataset, there is minimal mismatch between user-specified conditions and images, as both were sourced from aligned YouTube instructional videos. However, pseudo-labels generated by GPT-4o and GEMINI may still be biased toward task objectives, especially when the model prioritizes common patterns or assumptions from the training data.\n\n2. **Task-Irrelevant Keyword Leakage**: \n Despite careful dataset selection, pseudo-labels can be influenced by irrelevant information, such as frequently encountered task-related keywords or sequences. This bias may cause the model to over-prioritize certain objectives or generate labels based on less relevant data. Section C presents failure case studies that illustrate how these biases affect the model, particularly in edge cases or complex user conditions.",
"title": null
},
{
"comment": "> 1\n\nThank you for raising the question of generalizability. CPP’s generalization can be evaluated across three dimensions:\n\n**Open-Vocabulary in Natural Language:**\nCPP operates in an open-vocabulary setting, enabling it to handle diverse natural language inputs and outputs, adapting flexibly to user-defined requirements (Tables [1, 2]).\n\n**Within-Domain Task Generalization:**\nWithin defined domains, we evaluated CPP’s ability to perform on unseen tasks not encountered during training. As shown in the table below, CPP significantly outperforms baseline models on 8 unseen tasks from various domains within the COIN dataset (comprising 180 tasks), demonstrating its strong generalization capability to new tasks within familiar domains.\n\n| Models | a-SR↑ (%) | a-mAcc↑ (%) | a-mIoU↑ (%) | a-Relevance↑ | aBERT-Score↑ |\n|-------------------------------|-----------|-------------|-------------|--------------|--------------|\n| GPT-4o mini (10-shot) | 16.50 | 41.18 | 23.07 | 4.11 | 0.58 |\n| CPP (LlaVa-1.6 backbone) on unseen tasks | 22.11 | 48.91 | 34.62 | 3.84 | 0.64 |\n\n**Cross-Domain Generalization**:\nGeneralizing across different domains (e.g., DIY tasks vs. computer graphics) is challenging due to structural data differences. While not the focus of this study, our experiments suggest that domain-specific adaptation, such as fine-tuning or few-shot learning, may be needed for optimal performance. Addressing these challenges is an exciting direction for future research.\n>2\n\nWe agree that using GPT-4o and GEMINI to generate pseudo-labels may introduce biases. However, we have implemented several measures to minimize these biases and ensure label reliability:\n\n1. **Limited Reliance on Models**: \n While GPT-4o and GEMINI generate customization details, critical elements like plan actions and timestamps are manually annotated, reducing model bias.\n\n2. **Rigorous Quality Assessment**: \n We manually evaluated the pseudo-labels on subsets of the COIN and CrossTask datasets (Section A), with high scores (e.g., 4.18/5 for customization on CrossTask), confirming label reliability.\n\n3. **Independent Models**: \n GEMINI extracts customization keywords, and GPT-4o evaluates them, ensuring independence. Both models, trained on diverse datasets, reduce the risk of overlapping biases. We also use objective metrics like aBERT-Score (Table C) for unbiased model evaluation.\n\n4. **Error Correction During Training**: \n GPT-4o is incorporated into the objective function to iteratively refine predictions, reducing the impact of pseudo-label inaccuracies.\n\n5. **Acknowledgment of Limitations**: \n We recognize that biases are an inherent challenge in weak supervision. While pseudo-labels enhance model robustness, we will include a more detailed discussion of these limitations in the paper for transparency. \n\n>3\n\nThank you for your feedback. We view the subjectivity of the relevance score as a strength, as customization inherently requires human judgment.\n\n1. **Subjectivity by Design**: \n Customization quality depends on how well a plan aligns with user-defined keywords, making subjectivity essential to capture nuanced aspects that automated metrics may miss. We formalized this with a structured rubric to ensure consistency while maintaining interpretive flexibility. \n\n2. **Balanced Evaluation**: \n To complement the relevance score, we use objective metrics like SR, a-mAcc, and aBERT-Score, which measure alignment, order, and similarity. 
This dual framework ensures a comprehensive evaluation, balancing human satisfaction with structural correctness. \n\n3. **Scoring and Validation**: \n The rubric includes clear guidelines and examples for each score level (1–5) and is reinforced with few-shot in-context learning (4 examples per rubric rule). Validation tests showed >85% inter-rater agreement, confirming its reliability. \n\n4. **Dual Metrics Rationale**: \n Combining subjective and objective metrics allows a holistic evaluation of customization quality—capturing both human satisfaction and the model’s alignment and accuracy. \n\n>4\n\nThank you for your suggestion regarding deeper qualitative analysis of human feedback. Our current human evaluation focuses on validating the model's performance by assessing alignment, plan effectiveness, and customization quality using structured rubrics. While this approach demonstrates the model's effectiveness, we agree that a more detailed human evaluation would be valuable for uncovering improvement areas and is an interesting direction for future work.",
"title": null
}
] | [
0.6000000000000001,
0.6000000000000001,
0.4,
0.4
] | [
0.75,
0.75,
1,
0.75
] | [
0.6666666666666666,
0.6666666666666666,
0.3333333333333333,
0.6666666666666666
] | [
0.3333333333333333,
0.3333333333333333,
0.6666666666666666,
0.6666666666666666
] | [
0.3333333333333333,
0.3333333333333333,
0.3333333333333333,
0.3333333333333333
] | [
null,
null,
null,
null
] |
|
A Black Swan Hypothesis: The Role of Human Irrationality in AI Safety | "Black swan events are statistically rare occurrences that carry extremely high risks. A typical vie(...TRUNCATED) | ["AI Safety","Risk","Reinforcement Learning","alignment, fairness, safety, privacy, and societal con(...TRUNCATED) | 2024-10-04 | CC BY 4.0 | ["agarwal|model-based_reinforcement_learning_with_a_generative_model_is_minimax_optimal","agrawal|re(...TRUNCATED) | lee|a_black_swan_hypothesis_the_role_of_human_irrationality_in_ai_safety|ICLR_cc_2025_Conference | "introduction\nTo successfully deploy machine learning (ML) systems in open-ended environments, thes(...TRUNCATED) | [{"src":"https://datasets-server.huggingface.co/assets/nhop/ReviewBench/--/{dataset_git_revision}/--(...TRUNCATED) | [{"caption":"Figure 1: Value distortion function u and probability distortion function w. The gray l(...TRUNCATED) | true | null | [{"limitations":null,"main_review":null,"paper_summary":"paper_summary: This paper challenges the co(...TRUNCATED) | [{"comment":"Dear Reviewer WW6X, \n\nThank you for your thoughtful feedback, especially on how the r(...TRUNCATED) | [
0.6000000000000001,
0.6000000000000001,
0.6000000000000001,
0.4
] | [
0.5,
0.25,
0.25,
0.5
] | [
0.6666666666666666,
0.6666666666666666,
0.6666666666666666,
0.6666666666666666
] | [
1,
0.3333333333333333,
0.6666666666666666,
0.3333333333333333
] | [
0.6666666666666666,
0.3333333333333333,
0.3333333333333333,
0.3333333333333333
] | [
null,
null,
null,
null
] |
Network-based Active Inference and its Application in Robotics | "This paper introduces Network-based Active Inference (NetAIF), a novel robotic framework that enabl(...TRUNCATED) | ["Active Inference (AIF)","Free Energy Principle (FEP)","Robotics","Trajectory generation","Random d(...TRUNCATED) | 2024-10-04 | CC BY 4.0 | ["anonymous|network-based_active_inference_for_adaptive_and_cost-efficient_real-world_applications:_(...TRUNCATED) | yoon|networkbased_active_inference_and_its_application_in_robotics|ICLR_cc_2025_Conference | "introduction 1.overcoming automation challenges with advanced learning methods\nThe World Energy Em(...TRUNCATED) | [{"src":"https://datasets-server.huggingface.co/assets/nhop/ReviewBench/--/{dataset_git_revision}/--(...TRUNCATED) | [
{
"caption": "Table 2: Summary of time taken to generate values by the network",
"figType": "Table"
}
] | false | null | [{"limitations":null,"main_review":null,"paper_summary":"paper_summary: The work at first sight is v(...TRUNCATED) | [{"comment":"Dear Authors,\nThanks so much for your effort to address answer to my questions. While (...TRUNCATED) | [
0.2,
0.6000000000000001,
0.2,
0
] | [
1,
0.75,
0.5,
0.75
] | [
0.3333333333333333,
0.6666666666666666,
0.3333333333333333,
0
] | [
0.6666666666666666,
0.6666666666666666,
0,
0
] | [
0.6666666666666666,
1,
0.3333333333333333,
0
] | [
null,
null,
null,
null
] |
VR-Sampling: Accelerating Flow Generative Model Training with Variance Reduction Sampling | "Recent advancements in text-to-image and text-to-video models, such as Stable Diffusion 3 (SD3), Fl(...TRUNCATED) | [
"Flow Generative Models",
"Training Acceleration",
"Diffusion Models",
"generative models"
] | 2024-10-04 | CC BY 4.0 | ["abramson|accurate_structure_prediction_of_biomolecular_interactions_with_alphafold_3","arjevani|lo(...TRUNCATED) | "pan|vrsampling_accelerating_flow_generative_model_training_with_variance_reduction_sampling|ICLR_cc(...TRUNCATED) | "introduction\nDiffusion models (Song et al., 2021b; Ho et al., 2020; Dhariwal & Nichol, 2021; Song (...TRUNCATED) | [] | false | null | [{"limitations":null,"main_review":null,"paper_summary":"paper_summary: The authors theoretically id(...TRUNCATED) | [{"comment":"Dear Authors,\n\nThank you for your clarification. My concerns are mostly addressed. Th(...TRUNCATED) | [
0.2,
0.8,
0.4,
0.8
] | [
0.75,
0.5,
0.5,
0.5
] | [
0,
0.6666666666666666,
0.6666666666666666,
0.6666666666666666
] | [
0.3333333333333333,
0.3333333333333333,
0.6666666666666666,
0.6666666666666666
] | [
0.3333333333333333,
0.6666666666666666,
0.3333333333333333,
0.6666666666666666
] | [
null,
null,
null,
null
] |
|
What Matters in Hierarchical Search for Combinatorial Reasoning Problems? | "Combinatorial reasoning problems, particularly the notorious NP-hard tasks, remain a significant ch(...TRUNCATED) | ["deep learning","search","subgoals","hierarchical reinforcement learning","imitation learning","rei(...TRUNCATED) | 2024-10-04 | CC BY 4.0 | ["achiam|constrained_policy_optimization","andrychowicz|what_matters_in_on-policy_reinforcement_lear(...TRUNCATED) | "zawalski|what_matters_in_hierarchical_search_for_combinatorial_reasoning_problems|ICLR_cc_2025_Conf(...TRUNCATED) | "introduction\nFigure 1 : Performance comparison of hierarchical methods (AdaSubS, kSubS) and low-le(...TRUNCATED) | [{"src":"https://datasets-server.huggingface.co/assets/nhop/ReviewBench/--/{dataset_git_revision}/--(...TRUNCATED) | [{"caption":"Figure 1: Performance comparison of hierarchical methods (AdaSubS, kSubS) and low-level(...TRUNCATED) | false | null | [{"limitations":null,"main_review":null,"paper_summary":"paper_summary: This paper reports on an emp(...TRUNCATED) | [{"comment":"Thank you for your constructive feedback that helped us strengthen our work. Let us sum(...TRUNCATED) | [
0.4,
0.6000000000000001,
0.4,
0.6000000000000001
] | [
0.75,
1,
0.5,
0.75
] | [
0.6666666666666666,
1,
0.3333333333333333,
0.6666666666666666
] | [
0.3333333333333333,
1,
0.3333333333333333,
0.3333333333333333
] | [
0.3333333333333333,
0.3333333333333333,
0.3333333333333333,
0.3333333333333333
] | [
null,
null,
null,
null
] |
Self-Play Preference Optimization for Language Model Alignment | "Standard reinforcement learning from human feedback (RLHF) approaches relying on parametric models (...TRUNCATED) | ["self play","preference optimization","large language model","RLHF","alignment, fairness, safety, p(...TRUNCATED) | 2024-10-04 | CC BY 4.0 | ["ahmadian|back_to_basics:_revisiting_reinforce-style_optimization_for_learning_from_human_feedback_(...TRUNCATED) | wu|selfplay_preference_optimization_for_language_model_alignment|ICLR_cc_2025_Conference | "introduction\nLarge Language Models (LLMs) (e.g., Ouyang et al., 2022; OpenAI et al., 2023) , have (...TRUNCATED) | [{"src":"https://datasets-server.huggingface.co/assets/nhop/ReviewBench/--/{dataset_git_revision}/--(...TRUNCATED) | [{"caption":"Table 6: Another generation example of our fine-tuned model by SPPO at different iterat(...TRUNCATED) | true | null | [{"limitations":null,"main_review":null,"paper_summary":"paper_summary: The paper proposes a novel s(...TRUNCATED) | [{"comment":"Thanks for providing the detailed explanation. I increase my contribution score to 3. T(...TRUNCATED) | [
0.6000000000000001,
0.6000000000000001,
0.6000000000000001,
0.6000000000000001
] | [
0.75,
0.75,
0.5,
0.75
] | [
0.6666666666666666,
1,
0.6666666666666666,
0.6666666666666666
] | [
0.6666666666666666,
0.6666666666666666,
0.6666666666666666,
0.6666666666666666
] | [
0.6666666666666666,
0.6666666666666666,
0.6666666666666666,
0.6666666666666666
] | [
null,
null,
null,
null
] |
ViTally Consistent: Scaling Biological Representation Learning for Cell Microscopy | "Large-scale cell microscopy screens are used in drug discovery and molecular biology research to st(...TRUNCATED) | ["MAE","microscopy","transformers","SSL","linear probing","biology","high-content screening","founda(...TRUNCATED) | 2024-10-04 | CC BY 4.0 | ["spearman|rxrx3),_b_*_=_12_mae-l/8_(rpi-93m),_b_=_24_mae-l/8_(rpi-93m),_b_*_=_15_mae-l/8_(pp-16m),_(...TRUNCATED) | "kenyondean|vitally_consistent_scaling_biological_representation_learning_for_cell_microscopy|ICLR_c(...TRUNCATED) | "introduction\nLarge-scale cell microscopy assays are used to discover previously unknown biological(...TRUNCATED) | [{"src":"https://datasets-server.huggingface.co/assets/nhop/ReviewBench/--/{dataset_git_revision}/--(...TRUNCATED) | [{"caption":"Table 4: Overview of vision transformer (ViT) encoders used and evaluated in this work.(...TRUNCATED) | false | null | [{"limitations":null,"main_review":null,"paper_summary":"paper_summary: The authors presented a new (...TRUNCATED) | [{"comment":"I've changed my score to 5, however, I still think that the current version is not yet (...TRUNCATED) | [
0.2,
0.4,
0.8,
0.4
] | [
0.75,
1,
0.5,
0.75
] | [
0.6666666666666666,
0.3333333333333333,
0.6666666666666666,
0
] | [
0.6666666666666666,
0.6666666666666666,
1,
1
] | [
0.3333333333333333,
0.3333333333333333,
0.6666666666666666,
0
] | [
null,
null,
null,
null
] |
Synthetic continued pretraining | "Pretraining on large-scale, unstructured internet text enables language models to acquire a signifi(...TRUNCATED) | ["large language model","synthetic data","continued pretraining","foundation or frontier models, inc(...TRUNCATED) | 2024-10-04 | CC BY 4.0 | ["abdin|phi-2:_the_surprising_power_of_small_language_models","akyürek|deductive_closure_training_o(...TRUNCATED) | yang|synthetic_continued_pretraining|ICLR_cc_2025_Conference | "introduction\nLanguage models (LMs) have demonstrated a remarkable ability to acquire knowledge fro(...TRUNCATED) | [{"src":"https://datasets-server.huggingface.co/assets/nhop/ReviewBench/--/{dataset_git_revision}/--(...TRUNCATED) | [{"caption":"Figure 2: Accuracy on the QuALITY question set Qtest (y-axis) as a function of the synt(...TRUNCATED) | true | null | [{"limitations":null,"main_review":null,"paper_summary":"paper_summary: This paper addresses the pro(...TRUNCATED) | [{"comment":"Thanks! Will keep my positive score.","title":null},{"comment":"Thank you for your clar(...TRUNCATED) | [
0.8,
0.8,
0.8,
0.8
] | [
0.75,
0.5,
0.75,
0.5
] | [
1,
1,
0.3333333333333333,
1
] | [
1,
0.6666666666666666,
0.6666666666666666,
0.6666666666666666
] | [
0.6666666666666666,
0.6666666666666666,
0.3333333333333333,
0.6666666666666666
] | [
null,
null,
null,
null
] |
On the Optimization Landscape of Low Rank Adaptation Methods for Large Language Models | "Training Large Language Models (LLMs) poses significant memory challenges, making low-rank adaptati(...TRUNCATED) | [
"large language model",
"LoRA",
"optimization",
"foundation or frontier models, including LLMs"
] | 2024-10-04 | CC BY 4.0 | ["aghajanyan|intrinsic_dimensionality_explains_the_effectiveness_of_language_model_fine-tuning","all(...TRUNCATED) | "liu|on_the_optimization_landscape_of_low_rank_adaptation_methods_for_large_language_models|ICLR_cc_(...TRUNCATED) | "introduction\nLarge Language Models (LLMs) have demonstrated impressive performance across various (...TRUNCATED) | [{"src":"https://datasets-server.huggingface.co/assets/nhop/ReviewBench/--/{dataset_git_revision}/--(...TRUNCATED) | [{"caption":"Table 11: The mean and standard deviation of GaRare on various sizes of LLaMA models on(...TRUNCATED) | true | null | [{"limitations":null,"main_review":null,"paper_summary":"paper_summary: This paper builds on previou(...TRUNCATED) | [{"comment":"Dear Reviewer eja2,\n\nAs the discussion period is nearing its close, we would like to (...TRUNCATED) | [
0.6000000000000001,
0.8,
0.8,
0.4,
0.6000000000000001,
0.4
] | [
0.75,
0.5,
0.25,
0.5,
0.75,
0.75
] | [
0.6666666666666666,
1,
0.6666666666666666,
0.6666666666666666,
0.6666666666666666,
0.6666666666666666
] | [0.6666666666666666,0.6666666666666666,0.6666666666666666,0.3333333333333333,0.6666666666666666,0.66(...TRUNCATED) | [0.6666666666666666,0.6666666666666666,0.6666666666666666,0.3333333333333333,0.3333333333333333,0.66(...TRUNCATED) | [
null,
null,
null,
null,
null,
null
] |
OSCAR: Operating System Control via State-Aware Reasoning and Re-Planning | "Large language models (LLMs) and large multimodal models (LMMs) have shown great potential in autom(...TRUNCATED) | ["Large Language Model","Autonomous Agent","Graphical User Interface","applications to robotics, aut(...TRUNCATED) | 2024-10-04 | CC BY 4.0 | ["achiam|shyamal_anadkat,_et_al._gpt-4_technical_report","chen|a_dataset_for_gui-oriented_multimodal(...TRUNCATED) | "wang|oscar_operating_system_control_via_stateaware_reasoning_and_replanning|ICLR_cc_2025_Conference(...TRUNCATED) | "introduction\nLarge Language Models (LLMs) (Ouyang et al., 2022; Achiam et al., 2023; Dubey et al.,(...TRUNCATED) | [{"src":"https://datasets-server.huggingface.co/assets/nhop/ReviewBench/--/{dataset_git_revision}/--(...TRUNCATED) | [{"caption":"Figure 4: Illustration of task-driven re-planning and code-centric control in OSCAR. Ba(...TRUNCATED) | true | null | [{"limitations":null,"main_review":null,"paper_summary":"paper_summary: - This work presents an LLM+(...TRUNCATED) | [{"comment":"Thanks for addressing my concerns. I've updated my rating.","title":null},{"comment":"T(...TRUNCATED) | [
0.8,
0.6000000000000001,
0.8,
0.6000000000000001
] | [
0.5,
0.75,
0.5,
0.75
] | [
0.6666666666666666,
1,
0.6666666666666666,
0.3333333333333333
] | [
1,
0.6666666666666666,
0.6666666666666666,
0.3333333333333333
] | [
0.6666666666666666,
0.6666666666666666,
0.6666666666666666,
0.3333333333333333
] | [
null,
null,
null,
null
] |
End of preview.
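To work with these rows programmatically rather than through the preview, one option is the Hugging Face `datasets` library. The sketch below is an illustration under stated assumptions: the repository id `nhop/ReviewBench` is inferred from the asset URLs in the figure metadata above, and the `train` split name and column names are taken from the preview rows, so none of them are guaranteed by this page.

```python
# Minimal sketch, assuming the repo id "nhop/ReviewBench" (inferred from the asset
# URLs in the preview), a "train" split, and the column names shown in the rows above.
from datasets import load_dataset

ds = load_dataset("nhop/ReviewBench", split="train")

print(ds.column_names)      # available columns, e.g. title, abstract, reviews, score
row = ds[0]
print(row["title"])         # paper title of the first row
print(len(row["reviews"]))  # number of reviews attached to that paper
```

Each row pairs one submission (title, abstract, full text, figure metadata) with its reviews, author comment threads, the decision flag, and the per-review score arrays shown above.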