Prosecution Insights
Last updated: April 19, 2026
Application No. 18/545,916

METHOD FOR ANNOTATING TRACKS OF INDIVIDUALS IN A SEQUENCE OF IMAGES

Non-Final OA: §102, §103

Filed: Dec 19, 2023
Examiner: RHIM, WOO CHUL
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: COMMISSARIAT À L'ÉNERGIE ATOMIQUE ET AUX ÉNERGIES ALTERNATIVES
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (112 granted / 140 resolved), +18.0% vs TC avg (above average)
Interview Lift: +21.4% on resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 28 applications currently pending
Career History: 168 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 23.2% (-16.8% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)
Comparisons are against Tech Center average estimates; based on career data from 140 resolved cases.

Office Action

Rejections: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 12/19/2023, 07/16/2024, and 07/26/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

Claim 1 is objected to because of the following informalities: the claim recites the pronouns "their" in line 8, "this" in line 14, and "them" in line 15. These pronouns render the scope of the claim unclear. Appropriate correction is required. For prior art purposes, "their signature" has been interpreted as "signature of said tracks"; "this" has been interpreted as "the validated identity"; and "them" has been interpreted as "the annotated individuals."

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-7 and 11-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2013/0136298 to Yu et al. (hereinafter Yu).

For claim 1, Yu as applied discloses a computer-implemented method for annotating tracks acquired from at least one camera (see, e.g., FIGS. 1 and 4), the method comprising the following steps: receiving at least one sequence of images acquired using at least one camera (see, e.g., pars. 15, 19-20 and 23-24 and FIGS. 1-4, which teach receiving live video streams from one or more cameras), extracting, from the at least one sequence of images, at least one track of an unknown individual (see, e.g., pars. 15-16, 20-21 and 31 and FIGS. 1, 4 and 6, which teach extracting appearance samples from the trackers), computing, for each track of individuals, a signature of the individual (see, e.g., pars. 13, 16 and 20-22 and FIGS. 4 and 5, which teach analyzing spatial and temporal properties, e.g., spatial locality and temporal continuity, of the tracking trajectories of the samples; the examiner interprets the spatial and temporal properties as the claimed signature because they represent an appearance signature of each cluster corresponding to the individual), executing multiple iterations of a constrained (CT) partitioning algorithm on said tracks based on their signature (see, e.g., pars. 26-29 and FIGS. 4 and 5, which teach generating clusters via spectral clustering and associating the clusters with the signature models), the algorithm being configured to partition the tracks into at least one group of recommended individuals or into a group of individuals without a recommendation (see, e.g., pars. 29-30 and FIGS. 4-5, which teach determining whether each cluster corresponds to one of the maintained signature models or to a new one), in each new iteration, providing the partitioning algorithm with at least one new constraint (CT) and executing a step of validating the identity of the individuals classified in the group of recommended individuals (see, e.g., pars. 18 and 26-29 and FIGS.
4 and 5, which teach providing the further partitioning process with new pairwise constraints and associating the maintained appearance signature model with new samples), if the identity of the individual is validated, annotating the individual with this identity and transferring them to a group of annotated individuals (see, e.g., pars. 29-32 and FIGS. 5-6, which teach updating the maintained appearance signature model with the new samples, e.g., verifying the associated IDs of the maintained appearance signature models, and including the updated appearance signature model in the identity pool).

For claim 2, Yu as applied discloses: a step of receiving a database (BDA) consisting of tracks of annotated individuals (see, e.g., pars. 16 and 29-30 and FIGS. 4-5, which teach receiving the identity pool consisting of the maintained appearance signature models), the computing of an individual signature and the partitioning algorithm being implemented for the tracks of annotated and unknown individuals (see, e.g., pars. 2, 29-32 and 36 and FIGS. 4-6, which teach determining an appearance signature for new sample data without prior knowledge of the respective person and associating the appearance signature, i.e., clusters, with the appearance signature models from the prior time span), each group of recommended individuals comprising an annotated individual (see, e.g., pars. 16 and 29-30 and FIGS. 4-5, which teach receiving the identity pool consisting of the maintained appearance signature models).

For claim 3, Yu as applied discloses that the validation step consists in validating the identity of the individuals classified in the group of recommended individuals as corresponding to the identity of the annotated individual (see, e.g., pars. 16-18 and 29-32 and FIGS. 5-6, which teach associating the clusters with the maintained appearance signature models, which correspond to the identities/IDs of the individual).
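The validation step mapped above turns on matching an unknown track's signature against an annotated identity's signature. Neither the claims as quoted here nor the cited portions of Yu fix a particular comparison rule, so the following is only a minimal sketch under the assumption that signatures are fixed-length feature vectors; `validate_identity` and the cosine-similarity `threshold` are illustrative names and values, not from the record:

```python
import math

def validate_identity(track_signature, annotated_signature, threshold=0.8):
    """Hypothetical comparison rule: accept the recommended identity
    when the cosine similarity between the unknown track's signature
    and the annotated individual's signature meets the threshold."""
    dot = sum(a * b for a, b in zip(track_signature, annotated_signature))
    norm = (math.sqrt(sum(a * a for a in track_signature))
            * math.sqrt(sum(b * b for b in annotated_signature)))
    return dot / norm >= threshold
```

In a real system the vectors would come from whatever signature extractor is in use and the threshold would be tuned on validation data; a user-interaction path could replace this rule entirely.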
For claim 4, Yu as applied discloses that the step of validating the identity of the individuals classified in the group of recommended individuals is carried out by way of a user interaction or by way of a rule for comparing between the respective signatures of the individual whose identity is to be validated and of the annotated individual (see, e.g., pars. 16-18 and 29-32 and FIGS. 5-6, which teach associating the clusters with the maintained appearance signature models using a support vector machine executing an instruction/rule).

For claim 5, Yu as applied discloses a step of supplementing the database (BDA) with the tracks of the group of annotated individuals (see, e.g., pars. 29-32, which teach supplementing the identity pool with the appearance signature models from the prior time span).

For claim 6, Yu as applied discloses that the at least one sequence of images is acquired by way of multiple cameras with joined fields or with disjoint fields (see, e.g., FIGS. 3 and 6, which show a multi-camera tracking system with multiple views).

For claim 7, Yu as applied discloses that the tracks of individuals are generated in 2D or 3D (see, e.g., pars. 19-20 and FIGS. 2-3, which teach that the trackers are based on a 3D ground plane).

For claim 11, Yu as applied discloses, in each iteration, a step of receiving user constraints associated with the tracks of individuals, relating to the identity of an individual (see, e.g., pars. 16-18 and 22-27, which teach constructing and weighing constraints requiring two samples to belong or not belong to one person), said constraints being provided at input of the constrained partitioning algorithm (see, e.g., pars. 16-18 and 22-27 and FIG. 5, which teach providing the pairwise constraints at input of spectral clustering).
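The pairwise user constraints mapped for claim 11 ("belong or not belong to one person") are what the examiner reads onto Yu's constrained spectral clustering. The sketch below is not Yu's algorithm; it is a minimal, hypothetical illustration of how such constraints can drive a partition: association (must-link) constraints are merged via union-find, and any resulting group that violates a non-association (cannot-link) constraint is routed to the group without a recommendation. All names are illustrative:

```python
def partition_with_constraints(n_tracks, must_link, cannot_link):
    """Partition track indices 0..n_tracks-1 using pairwise constraints.

    Must-link pairs are merged into one group (union-find); any group
    containing a cannot-link pair is flagged as 'no recommendation'
    instead of being recommended.
    """
    parent = list(range(n_tracks))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in must_link:
        parent[find(i)] = find(j)

    groups = {}
    for t in range(n_tracks):
        groups.setdefault(find(t), []).append(t)

    recommended, unrecommended = [], []
    for members in groups.values():
        s = set(members)
        conflict = any(i in s and j in s for i, j in cannot_link)
        (unrecommended if conflict else recommended).append(sorted(members))
    return recommended, unrecommended
```

A spectral-clustering variant would instead fold the constraints into the affinity matrix before clustering; the routing of conflicting groups to "no recommendation" is the part that matches the claim language.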
For claim 12, Yu as applied discloses that the constraints on the partitioning algorithm are taken from among a constraint on association between multiple tracks of individuals or a constraint on non-association between multiple tracks of individuals (see, e.g., pars. 16-18 and 22-27 and FIG. 5, which teach that the constraints include must-link and cannot-link constraints).

For claim 13, Yu as applied discloses that all of the tracks that share a constraint on association are grouped into the same group of recommended individuals (see, e.g., pars. 16-18 and 22-27 and FIG. 5, which teach that the constraints include must-link constraints).

For claim 14, Yu as applied discloses that two tracks that share a constraint on non-association are placed in two different groups of individuals (see, e.g., pars. 16-18 and 22-27 and FIG. 5, which teach that the constraints include cannot-link constraints).

For claim 15, Yu as applied discloses a computer program comprising instructions for executing the method according to claim 1 when the program is executed by a processor (see, e.g., pars. 18 and 34-35 and FIG. 1).

For claim 16, Yu as applied discloses a processor-readable recording medium on which there is recorded a program comprising instructions for executing the method according to claim 1 when the program is executed by a processor (see, e.g., pars. 18 and 34-35 and FIG. 1).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Yu in view of the non-patent literature reference titled "Bag of Tricks and A Strong Baseline for Deep Person Re-identification" by Luo et al. (hereinafter Luo), published in 2019.

For claim 8, Yu teaches that the step of computing a signature for each track of individuals is carried out by way of a model trained to generate similar signatures for identical individuals (see, e.g., pars. 16, 18 and 30-31, which teach using a machine learning based identity recognition system, e.g., a support vector machine (SVM), that identifies a person through an appearance signature using an online learned and continuously updated identity pool). While Yu does not explicitly teach that the machine learning model is a neural network, Luo in the analogous art teaches using a re-identification (ReID) neural network to generate similar identification scores for identical individuals (see, e.g., the abstract, the last full paragraph, including three bullet points, of section 1, Fig. 4, and Tables 2 and 4 of Luo). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Yu to use the ReID neural network as taught by Luo, as doing so would yield the predictable results of improving the model's reidentification performance (see, e.g., the first and last full paragraphs of section 1 of Luo).

For claim 9, Yu in view of Luo teaches a step of retraining the reidentification neural network from the database (BDA) of tracks of individuals obtained at the end of multiple iterations of the annotation method (see, e.g., pars. 30-31 of Yu, which teach updating the SVM when new training data becomes available).
While Yu does not explicitly teach that the machine learning model is a neural network, Luo in the analogous art teaches using a re-identification (ReID) neural network (see, e.g., the abstract, the last full paragraph, including three bullet points, of section 1, Fig. 4, and Tables 2 and 4 of Luo). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Yu to use the ReID neural network as taught by Luo, as doing so would yield the predictable results of improving the model's reidentification performance (see, e.g., the first and last full paragraphs of section 1 of Luo).

Allowable Subject Matter

Claim 10 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. In regard to claim 10, when considered as a whole, the prior art of record fails to disclose or render obvious, alone or in combination: "a step of receiving constraints provided by at least one sensor and associated with the tracks of individuals, including: a constraint on the temporal and/or spatial localization of an individual, a constraint relating to the identity of the individual, a constraint on non-ubiquity of an individual, a constraint on non-teleportation of an individual, said constraints being provided at input of the constrained partitioning algorithm."

Additional Citations

The following table lists several references that are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.

Iyengar et al. (U.S. Pat. Pub. 2025/0118051): Describes systems, methods, and devices that perform object detection on video streams.
In one embodiment, asynchronously updating object detections within a video stream is provided. A first set of objects associated with a first frame includes a first object detected by a first detection model. Object detection is initiated on a second frame by a second detection model. A second set of objects is identified as being associated with a third frame that is subsequent to the first frame in the video stream. The first object is included in the second set based on tracking the first object from the first frame to the third frame. A second object is identified within the second frame based on the second detection model. When the first object corresponds to the second object but has a different attribute, an attribute of the first object is updated. When the first object does not correspond to the second object, the second object is fast-tracked into the third frame.

Low (U.S. Pat. Pub. 2007/0237357): Describes a method and apparatus for visual sensing and tracking of large numbers of moving objects. One embodiment of a method of tracking a plurality of targets can be broadly summarized by the following steps: capturing a plurality of images of a plurality of targets with a plurality of image capture devices; generating a target observation for each target, said target observation including at least a visual signature of the target and a time value; partitioning target observations according to similarities in their visual signatures; and producing primary tracks from the partitioned target observations, wherein each primary track includes ordered sequences of observation events having similarities in their visual signatures.

Krahnstoever et al. (U.S. Pat. Pub. 2010/0245567): Describes a system, method and program product for camera-based discovery of social networks.
In one embodiment, a computer-implemented method identifies individuals and associates tracks with individuals in camera-generated images from one or more face capture cameras and one or more tracking cameras, the method including: receiving images of an individual from the face capture camera(s) on a computer; receiving images of a track(s) of an individual from the tracking camera(s) on a computer; automatically determining with the computer the track(s) from the images from the tracking camera(s); and associating with the computer the track(s) with the individual(s) and a unique identifier.

Table 1

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See Table 1 and Form 892. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WOO RHIM, whose telephone number is (571) 272-6560. The examiner can normally be reached Mon - Fri, 9:30 am - 6:00 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /WOO C RHIM/Examiner, Art Unit 2676
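Stepping back from the rejection, the claimed method as mapped reads as an iterative loop: compute signatures, partition the tracks under the accumulated constraints, validate recommended identities, annotate the validated individuals, and feed each validation back as a new constraint. The following is a minimal, hypothetical driver for such a loop; the signature, partitioning, and validation functions are injected as parameters because the office action does not fix any of them, and all names are illustrative:

```python
def annotate_tracks(tracks, compute_signature, partition, validate):
    """Hypothetical driver for an iterative track-annotation loop.

    Each pass partitions the pending tracks by signature under the
    accumulated constraints, asks for validation of each recommended
    (track, identity) pair, moves validated tracks to the annotated
    group, and converts each validation into a constraint for the
    next pass. Stops when nothing is pending or no validation occurs.
    """
    signatures = {t: compute_signature(t) for t in tracks}
    constraints, annotated, pending = [], {}, set(tracks)
    while pending:
        recommended = partition(signatures, constraints, pending)
        progressed = False
        for track, identity in recommended:
            if validate(track, identity):
                annotated[track] = identity            # annotate the individual
                pending.discard(track)                 # move out of the pending group
                constraints.append((track, identity))  # feed back as a new constraint
                progressed = True
        if not progressed:  # no identity validated this round
            break
    return annotated
```

The early-exit when no recommendation is validated prevents an infinite loop on tracks that never earn a recommendation; those remain in the group without annotation.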

Prosecution Timeline

Dec 19, 2023
Application Filed
Dec 12, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601667: AUTOMATED TURF TESTING APPARATUS AND SYSTEM FOR USING SAME (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596134: DEVICE, MOVEMENT SPEED ESTIMATION SYSTEM, FEEDING CONTROL SYSTEM, MOVEMENT SPEED ESTIMATION METHOD, AND RECORDING MEDIUM IN WHICH MOVEMENT SPEED ESTIMATION PROGRAM IS STORED (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591997: ARRANGEMENT DEVICE AND METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586169: Mass Image Processing Apparatus and Method (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579607: DEMOSAICING METHOD AND APPARATUS FOR MOIRE REDUCTION (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 99% (+21.4%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 140 resolved cases by this examiner. Grant probability derived from career allow rate.
