Prosecution Insights
Last updated: April 19, 2026
Application No. 18/581,644

SYSTEMS AND METHODS FOR SINGLE-OBJECT TRACKING USING MULTIPLE-OBJECT TRACKING

Non-Final OA: §101, §103, §DP

Filed: Feb 20, 2024
Examiner: DIGUGLIELMO, DANIELLA MARIE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Palantir Technologies Inc.
OA Round: 1 (Non-Final)

Grant Probability: 81% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (137 granted / 170 resolved; +18.6% vs TC avg), above average
Interview Lift: +26.4% among resolved cases with an interview (strong)
Avg Prosecution: 2y 9m typical timeline; 25 applications currently pending
Career Total Applications: 195 across all art units
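As a sanity check on the arithmetic behind these figures, the short sketch below reproduces the headline numbers from the raw counts shown above. It is a minimal illustration under stated assumptions: the dashboard does not say how it estimates the Tech Center baseline or defines the interview lift, so the percentage-point interpretation and the `interview_lift` definition below are assumptions, not the tool's actual methodology.

```python
# Minimal sketch of how the headline examiner metrics above can be reproduced
# from the raw counts; the dashboard's exact methodology is not stated here.
granted, resolved = 137, 170

career_allow_rate = granted / resolved         # ~0.806, displayed as 81%
tc_avg_allow_rate = career_allow_rate - 0.186  # implied by "+18.6% vs TC avg"
                                               # (assumes the delta is in percentage points)

def interview_lift(allow_rate_with: float, allow_rate_without: float) -> float:
    """Assumed definition: allowance-rate gap between interviewed and
    non-interviewed resolved cases."""
    return allow_rate_with - allow_rate_without

print(f"career allow rate: {career_allow_rate:.1%}")   # 80.6%
print(f"implied TC average: {tc_avg_allow_rate:.1%}")  # 62.0%
```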

Statute-Specific Performance

§101: 12.9% (-27.1% vs TC avg)
§103: 35.5% (-4.5% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 33.1% (-6.9% vs TC avg)

Tech Center averages are estimates • Based on career data from 170 resolved cases

Office Action

§101 §103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 8/27/25 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character “112A” has been used to designate both image region and frame, and reference character “100” has been used to designate both object tracking system and object detection object. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The abstract of the disclosure is objected to because, in line 7, “the MOT including” should read –the MOT includes–. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Please note that there is at least one common inventor and a potential partnership/affiliation between Palantir Technologies Inc. (instant application) and Microsoft Technology Licensing, LLC (copending application).

Claims 1-19 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 5-16, and 18-20 of copending Application No. 18/515,487 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-19 are generic to all that is recited in the claims of copending Application No. 18/515,487. As an example, the following chart compares independent claims 1 and 11 of the instant application to claims 13 and 1 of copending Application No. 18/515,487, respectively. Claims 2-10 and 12-19 of the instant application incorporate the limitations of the independent claims by dependency.

Instant Application (18/581,644), Claim 1:
A method for object tracking, the method comprising: receiving an image frame in a sequence of image frames; identifying an object of interest in the image frame using a single-object tracker (SOT) based upon one or more templates associated with the object of interest in a template repository; generating a SOT output based on the identified object of interest; detecting one or more objects in the image frame using a multiple-object tracker (MOT), the MOT including a machine-learning model; conducting a matching between the SOT output and each detected object of the one or more detected objects to generate a match result; and generating a tracker output based at least in part on the SOT output, the one or more detected objects, and the match result; wherein the method is performed using one or more processors.

Copending Application (18/515,487), Claim 13:
A method implemented in a data processing system for tracking objects in video content, the method comprising: obtaining video content that includes a target object to be tracked across frames of the video content and an object template providing a representation of the target object; analyzing the frames of the video content and the object template using a single object tracking (SOT) pipeline that analyzes the frames of the video content and the object template with a first machine learning model trained to identify a position of the target object in the frames of the video content, the SOT pipeline outputting a first tracking results comprising a first bounding box associated with the target object; analyzing the frames of the video content using a multiple object tracking (MOT) pipeline that analyzes the frames of the video content using a second machine learning model trained to track positions of multiple objects in the frames of the video content, the multiple objects including the target object and one or more distractor objects, the MOT pipeline outputting second tracking results comprising a second bounding box associated with the target object; comparing the first tracking results and the second tracking results to determine whether the first tracking results are consistent with the second tracking results by: comparing the first bounding box with the second bounding box to determine an overlap between the first bounding box and the second bounding box; and determining that the first tracking results are consistent with the second tracking results responsive to the overlap between the first bounding box and the second bounding box satisfying a similarity threshold; and tracking the target object using the first tracking results responsive to the first tracking results being consistent with the second tracking results.
See “A method implemented in a data processing system” above.

Instant Application (18/581,644), Claim 11:
A system for object tracking, the system comprising: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations, the set of operations comprising: receiving an image frame in a sequence of image frames; identifying an object of interest in the image frame using a single-object tracker (SOT) based upon one or more templates associated with the object of interest in a template repository; generating a SOT output based on the identified object of interest; detecting one or more objects in the image frame using a multiple-object tracker (MOT), the MOT including a machine-learning model; conducting a matching between the SOT output and each detected object of the one or more detected objects to generate a match result; and generating a tracker output based at least in part on the SOT output, the one or more detected objects, and the match result.

Copending Application (18/515,487), Claim 1:
A data processing system comprising: a processor; and a machine-readable medium storing executable instructions that, when executed, cause the processor alone or in combination with other processors to perform operations comprising: obtaining video content that includes a target object to be tracked across frames of the video content and an object template providing a representation of the target object; analyzing the frames of the video content and the object template using a single object tracking (SOT) pipeline that analyzes the frames of the video content and the object template with a first machine learning model trained to identify a position of the target object in the frames of the video content, the SOT pipeline outputting first tracking results comprising a first bounding box associated with the target object; analyzing the frames of the video content using a multiple object tracking (MOT) pipeline that analyzes the frames of the video content using a second machine learning model trained to track positions of multiple objects in the frames of the video content, the multiple objects including the target object and one or more distractor objects, the MOT pipeline outputting second tracking results comprising a second bounding box associated with the target object; comparing the first tracking results and the second tracking results to determine whether the first tracking results are consistent with the second tracking results by: comparing the first bounding box with the second bounding box to determine an overlap between the first bounding box and the second bounding box; and determining that the first tracking results are consistent with the second tracking results responsive to the overlap between the first bounding box and the second bounding box satisfying a similarity threshold; and tracking the target object using the first tracking results responsive to the first tracking results being consistent with the second tracking results.

Note: the other system claims (i.e., claims 6 and 18) could have also been used for claim mapping.

This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim 20 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 13-15 of copending Application No. 18/515,487 in view of “Multi-object tracking based on spatio-temporal cue fusion and optimized cascade matching” by Zhang et al. (hereinafter “Zhang”) and Varadarajan et al. (US 2020/0134837 A1, hereinafter “Varadarajan”).

Regarding claim 20, copending Application No. 18/515,487 teaches, A method for object tracking, the method comprising (Claim 13: “A method implemented in a data processing system for tracking objects in video content, the method comprising”): receiving an image frame in a sequence of image frames (Claim 13: “obtaining video content that includes a target object to be tracked across frames of the video content and an object template providing a representation of the target object”); identifying an object of interest in the image frame using a single-object tracker (SOT) based upon one or more templates associated with the object of interest in a template repository (Claim 13: “analyzing the frames of the video content and the object template using a single object tracking (SOT) pipeline that analyzes the frames of the video content and the object template with a first machine learning model trained to identify a position of the target object in the frames of the video content, the SOT pipeline outputting first tracking results comprising a first bounding box associated with the target object”); generating a SOT output based on the identified object of interest (Claim 13: “analyzing the frames of the video content and the object template using a single object tracking (SOT) pipeline that analyzes the frames of the video content and the object template with a first machine learning model trained to identify a position of the target object in the frames of the video content, the SOT pipeline outputting first tracking results comprising a first bounding box associated with the target object”; Note: the first tracking results are the SOT output); detecting one or more objects in the image frame using a multiple-object tracker (MOT) (Claim 13: “analyzing the frames of the video content using a multiple object tracking (MOT) pipeline that analyzes the frames of the video content using a second machine learning model trained to track positions of multiple objects in the frames of the video content, the multiple objects including the target object and one or more distractor objects, the MOT pipeline outputting second tracking results comprising a second bounding box associated with the target object”); conducting a matching between the SOT output and each detected object of the one or more detected objects to generate a match result (Claim 13: “comparing the first tracking results and the second tracking results to determine whether the first tracking results are consistent with the second tracking results by: comparing the first bounding box with the second bounding box to determine an overlap between the first bounding box and the second bounding box; and determining that the first tracking results are consistent with the second tracking results responsive to the overlap between the first bounding box and the second bounding box satisfying a similarity threshold”; Note: the first tracking results are the SOT output, the second tracking results are that of each detected object, and determining that the first tracking results are consistent with the second tracking results is a match result), and generating a tracker output based at least in part on the SOT output, the one or more detected objects, and the match result (Claim 13: “and tracking the target object using the first tracking results responsive to the first tracking results being consistent with the second tracking results”; Claim 13: “comparing the first tracking results and the second tracking results to determine whether the first tracking results are consistent with the second tracking results by: comparing the first bounding box with the second bounding box to determine an overlap between the first bounding box and the second bounding box; and determining that the first tracking results are consistent with the second tracking results responsive to the overlap between the first bounding box and the second bounding box satisfying a similarity threshold”; Note: the first tracking results are the SOT output, the second tracking results are that of each detected object, and determining that the first tracking results are consistent with the second tracking results is a match result), wherein the method is performed using one or more processors (Claim 13: “A method implemented in a data processing system for tracking objects in video content”).

Copending Application No. 18/515,487 does not expressly disclose the following limitations: wherein the conducting a matching includes: determining an intersection of union (IOU) between the SOT output and each detected object of the one or more detected objects; determining whether the IOU is lower than an IOU threshold; and identifying a detected object of the one or more detected objects with a corresponding IOU lower than the IOU threshold.

However, Zhang teaches, wherein the conducting a matching includes: determining an intersection of union (IOU) between the SOT output and each detected object of the one or more detected objects (Zhang, Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu; Zhang, Pg. 667-668, II. Tracking Framework: IoU matching is discussed in Step 4; Pg. 668-670, III. Approach). It would have been obvious before the effective filing date of the claimed invention, to one of ordinary skill in the art, to combine determining an intersection of union (IOU) between the SOT output and each detected object as taught by Zhang with the method of copending Application No. 18/515,487 in order to achieve robust multi-object tracking of pedestrians in real scenes by rationally fusing long and short-term video cues (Zhang, Pg. 672, V. Discussion and Conclusion). Therefore, one of ordinary skill in the art would be capable to have combined the elements as claimed by known methods and that in combination, each element merely performs the same function as it does separately.

The combination of copending Application No. 18/515,487 and Zhang does not expressly disclose the following limitations: determining whether the IOU is lower than an IOU threshold; and identifying a detected object of the one or more detected objects with a corresponding IOU lower than the IOU threshold.

However, Varadarajan teaches, determining whether the IOU is lower than an IOU threshold (Para. 0018: “If the intersection over union value is below a threshold, the dynamic object tracker 108 determines that there is a new object in the frame”; Para. 0048: “If one or more of the intersection over unions is below the threshold, the threshold comparator 214 determines that there is poor tracking quality”); and identifying a detected object of the one or more detected objects with a corresponding IOU lower than the IOU threshold (Para. 0018: “The intersection over union is a ratio of (A) the intersection between all the blob(s) of the frame with all the tracked object(s) (e.g., bounding box(es) from a previous AI-based object detection) and (B) the union of all the blob(s) of the frame with all the tracked object(s). If the intersection over union value is below a threshold, the dynamic object tracker 108 determines that there is a new object in the frame. For example, if the threshold is 0.7 and the intersection over union value is below the threshold (e.g., 0.4), the dynamic object tracker 108 determines that there is a new object in the frame”). It would have been obvious before the effective filing date of the claimed invention, to one of ordinary skill in the art, to combine an IoU being below a threshold and identifying a detected object with a corresponding IoU lower than the threshold as taught by Varadarajan with the combined method of copending Application No. 18/515,487 and Zhang in order to improve efficiency of object tracking in video frames (Varadarajan, Abstract). Therefore, one of ordinary skill in the art would be capable to have combined the elements as claimed by known methods and that in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned that the Examiner has reached a conclusion of obviousness with respect to claim 20. This is a provisional nonstatutory double patenting rejection.
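Claim 20 adds the specific matching policy that the rejection maps to Zhang's IoU matching stage and Varadarajan's below-threshold test: compute IoU between the SOT output and every detection, then flag detections whose IoU falls below a threshold. The snippet below sketches just that limitation; it assumes IoU values computed as in the earlier sketch, and the 0.7 default echoes the example threshold quoted from Varadarajan, with everything else an illustrative assumption.

```python
from typing import Dict, List

def below_threshold_detections(ious: Dict[int, float], iou_threshold: float = 0.7) -> List[int]:
    """Return indices of detections whose IoU with the SOT output is below the threshold.

    `ious` maps a detection index to its IoU with the SOT output (computed as in the
    earlier sketch). Per the Varadarajan passage quoted above, a below-threshold IoU is
    treated as a signal of a new object or poor tracking quality; the 0.7 default echoes
    the example value in that quotation.
    """
    return [idx for idx, value in ious.items() if value < iou_threshold]

# Example: detection 2 barely overlaps the SOT box, so it is flagged.
print(below_threshold_detections({0: 0.82, 1: 0.74, 2: 0.40}))  # -> [2]
```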
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a system and methods for object tracking. With respect to the analysis of claims 1 and 20 (claim 11 is similar to claim 1):

Step 1: With regard to Step 1, claims 1 and 20 are directed to a method; and therefore, the claims are directed to one of the statutory categories of inventions.

Step 2A, Prong One: With regard to Step 2A, Prong One, the following limitations in claim 1 (and similarly claim 11) as drafted recite an abstract idea: “identifying an object of interest in the image frame; generating a output based on the identified object of interest; detecting one or more objects in the image frame; conducting a matching between the output and each detected object of the one or more detected objects to generate a match result; and generating a tracker output based at least in part on the output, the one or more detected objects, and the match result”. The limitations recite an abstract idea, such as a process that, under its broadest reasonable interpretation, covers performance of the limitation manually or in the mind by a human. That is, a person can identify an object (i.e., object of interest) in an image, select the object (i.e., draw a bounding box around the object as an output), identify additional/other objects in the image, match/compare the selected object with the other objects in the image and determine how similar the objects are, and output whether the selected object being tracked is the same object or a new object. These are concepts that fall under the grouping of abstract idea mental processes, i.e., a concept performed in the human mind, evaluation, judgment, and/or opinion of a human.

With regard to Step 2A, Prong One, the following limitations in claim 20 as drafted recite an abstract idea: “identifying an object of interest in the image frame; generating a output based on the identified object of interest; detecting one or more objects in the image frame; conducting a matching between the output and each detected object of the one or more detected objects to generate a match result, wherein the conducting a matching includes: determining an intersection of union (IOU) between the output and each detected object of the one or more detected objects; determining whether the IOU is lower than an IOU threshold; and identifying a detected object of the one or more detected objects with a corresponding IOU lower than the IOU threshold; and generating a tracker output based at least in part on the output, the one or more detected objects, and the match result”. The limitations recite an abstract idea, such as a process that, under its broadest reasonable interpretation, covers performance of the limitation manually or in the mind by a human. That is, a person can identify an object (i.e., object of interest) in an image, select the object (i.e., draw a bounding box around the object as an output), identify additional/other objects in the image, match/compare the selected object with the other objects in the image and determine how similar the objects are, and output whether the selected object being tracked is the same object or a new object. Determining an intersection of union (IOU) between the output and each detected object of the one or more detected objects, determining whether the IOU is lower than an IOU threshold, and identifying a detected object of the one or more detected objects with a corresponding IOU lower than the IOU threshold are mathematical calculations. These are concepts that fall under the grouping of abstract idea mathematical calculations and mental processes, i.e., a concept performed in the human mind, evaluation, judgment, and/or opinion of a human.

Step 2A, Prong Two: The 2019 PEG defines the phrase “integration into a practical application” to require an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception. In the instant case, there are no additional steps/elements/limitations in the claims, with the exception of the following in the method claims (claims 1 and 20) and system claim (claim 11): “receiving an image frame in a sequence of image frames”, “using a single-object tracker (SOT) based upon one or more templates associated with the object of interest in a template repository”, “SOT output”, “using a multiple-object tracker (MOT), the MOT including a machine-learning model”, and “wherein the method is performed using one or more processors” in claim 1; “at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations, the set of operations comprising: receiving an image frame in a sequence of image frames”, “using a single-object tracker (SOT) based upon one or more templates associated with the object of interest in a template repository”, “SOT output”, and “using a multiple-object tracker (MOT), the MOT including a machine-learning model” in claim 11; and “receiving an image frame in a sequence of image frames”, “using a single-object tracker (SOT) based upon one or more templates associated with the object of interest in a template repository”, “SOT output”, “using a multiple-object tracker (MOT)”, and “wherein the method is performed using one or more processors” in claim 20. The receiving limitation is mere data gathering/data input. The SOT and MOT are not specialized models (i.e., do not recite details of how the model is trained). The SOT output is mere data output by a non-specialized model. The processor and memory are generic computer components. These are regarded as adding routine and conventional elements to perform the judicial exception, and do not apply it into a practical application. Accordingly, the above-mentioned additional elements/limitations do not integrate the abstract idea into a practical application; and therefore, the claims recite an abstract idea.

Step 2B: Because the claims fail under Step 2A, the claims are further evaluated under Step 2B. The claims herein do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements/limitations to perform the steps amount to no more than insignificant routine and conventional elements. Mere instructions to apply an exception using generic components cannot provide an inventive concept. Therefore, claims 1, 11, and 20 are not patent eligible.

Furthermore, with regard to claims 2-10 and 12-19 when viewed individually, these additional steps, under their broadest reasonable interpretation, provide extra-solution activities to cover performance of the limitations as an abstract idea, and do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Accordingly, they are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-3, 7-8, 10-13, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over “Multi-object tracking based on spatio-temporal cue fusion and optimized cascade matching” by Zhang et al. (hereinafter “Zhang”) in view of Sugio (US 2013/0058525 A1). Regarding claim 1, Zhang teaches, A method for object tracking, the method comprising (Abstract; Fig. 1: framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching): receiving an image frame in a sequence of image frames (Fig. 1: current frame t is input into the YOLOV4 detector and SiamRPN++ SOT; Pg. 667, II. Tracking Framework); identifying an object of interest in the image frame using a single-object tracker (SOT) based upon one or more templates associated with the object of interest (Fig. 1: a target template is input into the SiamRPN++ SOT and short-term clues (i.e., tracking bounding box Dtra) are output; Fig. 2; Pg. 667-668, II. Tracking Framework: Steps 1-3; Pg. 668, III. Approach); generating a SOT output based on the identified object of interest (Fig. 1: a target template is input into the SiamRPN++ SOT and short-term clues (i.e., tracking bounding box Dtra) are output. A set of candidate bounding boxes Dcan are output by combining tracking bounding box Dtra with Ddet; Fig. 2; Pg. 667-668, II. Tracking Framework: Steps 1-3; Pg. 668, III. Approach); detecting one or more objects in the image frame using a multiple-object tracker (MOT), the MOT including a machine-learning model (Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching; Pg. 667-668, II. Tracking Framework; Pg. 668-670, III. Approach; Note: the Examiner interprets the ReID network as a machine learning model); conducting a matching between the SOT output and each detected object of the one or more detected objects to generate a match result (Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu; Pg. 667-668, II. Tracking Framework; Pg. 668-670, III. Approach; Note: the Examiner interprets the cascade matching result and IoU matching result in Pg. 668-669 as matching results that are generated); and generating a tracker output based at least in part on the SOT output, the one or more detected objects, and the match result (Abstract: prediction of pedestrian motion is realized; Fig. 
1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu; Pg. 667-668, II. Tracking Framework; Pg. 668-670, III. Approach; Pg. 672: “the proposed method achieves robust multi-object tracking of pedestrians in real scenes by rationally fusing long and short-term video cues”); wherein the method is performed using one or more processors (Abstract; Fig. 1: framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching; Pg. 667, I. Introduction: the model in the MOT system can run on the GPU). Zhang does not expressly disclose the following limitation: in a template repository. However, Sugio teaches, in a template repository (Para. 0010: “a storage unit configured to store a template image of an object to be a track target”; As shown in Fig. 6, an initial object region is set in S101 and the initial template image is stored in S102). It would have been obvious before the effective filing date of the claimed invention, to one of ordinary skill in the art, to combine templates being from a template repository as taught by Sugio with the method of Zhang in order to highly accurately track a target object even though there may be changes to the target object (Sugio, Para. 0009). Therefore, one of ordinary skill in the art would be capable to have combined the elements as claimed by known methods and that in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned that the Examiner has reached a conclusion of obviousness with respect to claim 1. Regarding claim 2, the combination of Zhang and Sugio teaches the limitations as explained above in claim 1. The combination of Zhang and Sugio further teaches, The method of claim 1 (see claim 1 above), wherein the match result indicates a match between the SOT output and one detected object of the one or more detected objects (Zhang, Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan (i.e., combination of tracking bounding boxes Dtra from SiamRPN++ SOT with Ddet) are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching in which a Hungarian matching algorithm is used. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu and a Hungarian matching algorithm is used; Zhang: Pg. 667-668, II. Tracking Framework; Zhang: Pg. 668-670, III. Approach; Note: the Examiner interprets the cascade matching result and IoU matching result in Pg. 668-669 as matching results), wherein the generating a tracker output comprises generating the tracker output as the SOT output (Zhang, Abstract: prediction of pedestrian motion is realized; Zhang, Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. 
The set of candidate bounding boxes Dcan (i.e., combination of tracking bounding boxes Dtra from SiamRPN++ SOT with Ddet) are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching in which a Hungarian matching algorithm is used. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu and a Hungarian matching algorithm is used; Zhang: Pg. 667-668, II. Tracking Framework; Zhang: Pg. 668-670, III. Approach; Zhang, Pg. 672: “the proposed method achieves robust multi-object tracking of pedestrians in real scenes by rationally fusing long and short-term video cues”). Regarding claim 3, the combination of Zhang and Sugio teaches the limitations as explained above in claim 2. The combination of Zhang and Sugio further teaches, The method of claim 2 (see claim 2 above), further comprising: generating a new template based on the SOT output (Zhang: Fig. 1; Zhang: Pg. 667-668, II. Tracking Framework; Zhang, Pg. 668-669, III. Approach: the SOT is updated and “adding part of the Dtra to the matching sequence helps the system confirm whether the current SOT output result maintains a high similarity with the historical appearance, and realizes the suppression of tracking drift through the final trajectory update result”; Zhang, Pg. 668: a final matching pair set is obtained and a new SOT tracker is created and the tracking prediction result of Xc in the current frame is output); and adding the new template to the template repository (Zhang, Fig. 1; Zhang: Pg. 667-668, II. Tracking Framework; Zhang, Pg. 668-669, III. Approach: the SOT is updated and “adding part of the Dtra to the matching sequence helps the system confirm whether the current SOT output result maintains a high similarity with the historical appearance, and realizes the suppression of tracking drift through the final trajectory update result”; Zhang, Pg. 668: a final matching pair set is obtained and a new SOT tracker is created and the tracking prediction result of Xc in the current frame is output; Sugio, Para. 0080: the template image in the template storage unit is updated). The proposed combination as well as the motivation for combining the Zhang and Sugio references presented in the rejection of claim 2 apply to claim 3 and are incorporated herein by reference. Thus, the method recited in claim 3 is met by Zhang and Sugio. Regarding claim 7, the combination of Zhang and Sugio teaches the limitations as explained above in claim 1. The combination of Zhang and Sugio further teaches, The method of claim 1 (see claim 1 above), wherein the match result indicates no match being found (Zhang, Fig. 1: unmatched Xc and Ddet; Zhang, Step 4 in Pg. 668: “For Xu and Xc that did not match successfully, the Dtra output at the latest time and there is no matching relationship in the cascade matching is performed IoU matching with the remaining Ddet to alleviate the interference caused by the object apparent feature mutation or partial occlusion. If the obtained matching pair set is recorded as PI, the final matching pair set is P=Pc ∪ PI”; Zhang, Steps 6-7 in Pg. 668); wherein the method further comprises selecting a detected object from the one or more detected objects based on an association with a previous detected object corresponding to one template of the one or more templates (Zhang, Abstract: prediction of pedestrian motion is realized; Zhang, Fig. 
1: a target template is input into the SiamRPN++ SOT and short-term clues (i.e., tracking bounding box Dtra) are output. The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu; Zhang, Pg. 667-668, II. Tracking Framework: As shown in Step 4, IoU matching is performed with the remaining Ddet and unmatched Xc, and a final matching pair set is determined. As shown in Step 5, the matching relationship in the set is referred to and features of Dcan are saved to the feature set of X that matches it; Zhang: Pg. 668-670, III. Approach; Note: the Examiner interprets matching as an association); wherein a confidence score associated with the selected detected object is higher than a threshold (Zhang, Pg. 668-669: a tracking quality evaluation standard qSOT is calculated. If qSOT is greater than the threshold, then the corresponding Dtra will be added to the matching sequence. The equation for qSOT depends on various parameters, including IoU; Note: the Examiner interprets tracking quality qSOT as a confidence score); wherein the generating a tracker output comprises generating the tracker output as the selected detected object (Zhang, Abstract: prediction of pedestrian motion is realized; Zhang, Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan (i.e., combination of tracking bounding boxes Dtra from SiamRPN++ SOT with Ddet) are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching in which a Hungarian matching algorithm is used. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu and a Hungarian matching algorithm is used; Zhang, Pg. 667-668, II. Tracking Framework: As shown in Step 4, IoU matching is performed with the remaining Ddet and unmatched Xc, and a final matching pair set is determined. As shown in Step 5, the matching relationship in the set is referred to and features of Dcan are saved to the feature set of X that matches it; Zhang: Pg. 668-670, III. Approach; Zhang, Pg. 672: “the proposed method achieves robust multi-object tracking of pedestrians in real scenes by rationally fusing long and short-term video cues”). Regarding claim 8, the combination of Zhang and Sugio teaches the limitations as explained above in claim 7. The combination of Zhang and Sugio further teaches, The method of claim 7 (see claim 7 above), wherein the method further comprises: generating a new template based on the selected detected object (Zhang: Fig. 1; Zhang, Pg. 667-668, II. Tracking Framework: As shown in Step 4, IoU matching is performed with the remaining Ddet and unmatched Xc, and a final matching pair set is determined. As shown in Step 5, the matching relationship in the set is referred to and features of Dcan are saved to the feature set of X that matches it; Zhang, Pg. 668-669, III. Approach: the SOT is updated and “adding part of the Dtra to the matching sequence helps the system confirm whether the current SOT output result maintains a high similarity with the historical appearance, and realizes the suppression of tracking drift through the final trajectory update result”; Zhang, Pg. 
668: a final matching pair set is obtained and a new SOT tracker is created and the tracking prediction result of Xc in the current frame is output); and adding the new template to the template repository (Zhang, Fig. 1; Zhang, Pg. 667-668, II. Tracking Framework: As shown in Step 4, IoU matching is performed with the remaining Ddet and unmatched Xc, and a final matching pair set is determined. As shown in Step 5, the matching relationship in the set is referred to and features of Dcan are saved to the feature set of X that matches it; Zhang, Pg. 668-669, III. Approach: the SOT is updated and “adding part of the Dtra to the matching sequence helps the system confirm whether the current SOT output result maintains a high similarity with the historical appearance, and realizes the suppression of tracking drift through the final trajectory update result”; Zhang, Pg. 668: a final matching pair set is obtained and a new SOT tracker is created and the tracking prediction result of Xc in the current frame is output; Sugio, Para. 0080: the template image in the template storage unit is updated). The proposed combination as well as the motivation for combining the Zhang and Sugio references presented in the rejection of claim 7 apply to claim 8 and are incorporated herein by reference. Thus, the method recited in claim 8 is met by Zhang and Sugio. Regarding claim 10, the combination of Zhang and Sugio teaches the limitations as explained above in claim 1. The combination of Zhang and Sugio further teaches, The method of claim 1 (see claim 1 above), further comprising: receiving a user input associated with an identified image portion on an initial image frame of the sequence of image frames (Zhang, Pg. 667, I. Introduction: “offline trained SOT tracker”; Zhang, Fig. 1: current frame t is input into YOLOV4 and SiamRPN++ SOT. A target template is also input into SiamRPN++ SOT; Zhang, Pg. 667-668: Steps 1 and 2; Zhang: Fig. 2; Zhang, Pg. 668, III. Approach; Sugio, Para. 0039: a user zooms a subject and the image input unit receives the zoomed frame image); generating an initial template based at least in part on the first identified image portion (Zhang, Fig. 1: current frame t is input into YOLOV4 and SiamRPN++ SOT. A target template is also input into SiamRPN++ SOT; Zhang, Pg. 667-668: Steps 1 and 2; Zhang: Fig. 2; Zhang, Pg. 668, III. Approach: the object area and template area size are determined and then short-term clue extraction is performed; Sugio, Para. 0039; Sugio, Para. 0041: “The template input unit 5 receives the template image of the object of a track target”; Sugio, Para. 0042: “The template registration unit 7 stores the template image inputted to the template input unit 5 as an initial template in the template storage unit 8”); and initializing the SOT based at least in part on the initial template (Zhang, Fig. 1: a target template is input into the SiamRPN++ SOT and short-term clues (i.e., tracking bounding box Dtra) are output; Zhang: Fig. 2; Zhang, Pg. 667-668, II. Tracking Framework: Steps 1-3; Zhang: Pg. 668, III. Approach). The proposed combination as well as the motivation for combining the Zhang and Sugio references presented in the rejection of claim 1 apply to claim 10 and are incorporated herein by reference. Thus, the method recited in claim 10 is met by Zhang and Sugio. Regarding claim 11, Zhang teaches, A system for object tracking, the system comprising (Abstract; Fig.
1: framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching): at least one processor (Pg. 667, I. Introduction: the model in the MOT system can run on the GPU); and at least one memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations, the set of operations comprising (Abstract: the paper proposes an efficient online multi-object tracking algorithm; Fig. 1: framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching; Pg. 667, I. Introduction: the SOT model has an impact on memory consumption and the model in the MOT system can run on the GPU; Note: as shown above, the GPU (i.e., processor) carries out the object tracking algorithm/model (i.e., executes instructions)): receiving an image frame in a sequence of image frames (Fig. 1: current frame t is input into the YOLOV4 detector and SiamRPN++ SOT; Pg. 667, II. Tracking Framework); identifying an object of interest in the image frame using a single-object tracker (SOT) based upon one or more templates associated with the object of interest (Fig. 1: a target template is input into the SiamRPN++ SOT and short-term clues (i.e., tracking bounding box Dtra) are output; Fig. 2; Pg. 667-668, II. Tracking Framework: Steps 1-3; Pg. 668, III. Approach); generating a SOT output based on the identified object of interest (Fig. 1: a target template is input into the SiamRPN++ SOT and short-term clues (i.e., tracking bounding box Dtra) are output. A set of candidate bounding boxes Dcan are output by combining tracking bounding box Dtra with Ddet; Fig. 2; Pg. 667-668, II. Tracking Framework: Steps 1-3; Pg. 668, III. Approach); detecting one or more objects in the image frame using a multiple-object tracker (MOT), the MOT including a machine-learning model (Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching; Pg. 667-668, II. Tracking Framework; Pg. 668-670, III. Approach; Note: the Examiner interprets the ReID network as a machine learning model); conducting a matching between the SOT output and each detected object of the one or more detected objects to generate a match result (Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu; Pg. 667-668, II. Tracking Framework; Pg. 668-670, III. Approach; Note: the Examiner interprets the cascade matching result and IoU matching result in Pg. 668-669 as matching results that are generated); and generating a tracker output based at least in part on the SOT output, the one or more detected objects, and the match result (Abstract: prediction of pedestrian motion is realized; Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. 
Image features are then extracted by the ReID for cascade matching. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu; Pg. 667-668, II. Tracking Framework; Pg. 668-670, III. Approach; Pg. 672: “the proposed method achieves robust multi-object tracking of pedestrians in real scenes by rationally fusing long and short-term video cues”). Zhang does not expressly disclose the following limitation: in a template repository. However, Sugio teaches, in a template repository (Para. 0010: “a storage unit configured to store a template image of an object to be a track target”; As shown in Fig. 6, an initial object region is set in S101 and the initial template image is stored in S102). It would have been obvious before the effective filing date of the claimed invention, to one of ordinary skill in the art, to combine templates being from a template repository as taught by Sugio with the method/operations of Zhang in order to highly accurately track a target object even though there may be changes to the target object (Sugio, Para. 0009). Therefore, one of ordinary skill in the art would be capable to have combined the elements as claimed by known methods and that in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned that the Examiner has reached a conclusion of obviousness with respect to claim 11. Regarding claim 12, the combination of Zhang and Sugio teaches the limitations as explained above in claim 11. The combination of Zhang and Sugio further teaches, The system of claim 11 (see claim 11 above), wherein the match result indicates a match between the SOT output and one detected object of the one or more detected objects (Zhang, Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan (i.e., combination of tracking bounding boxes Dtra from SiamRPN++ SOT with Ddet) are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching in which a Hungarian matching algorithm is used. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu and a Hungarian matching algorithm is used; Zhang: Pg. 667-668, II. Tracking Framework; Zhang: Pg. 668-670, III. Approach; Note: the Examiner interprets the cascade matching result and IoU matching result in Pg. 668-669 as matching results), wherein the generating a tracker output comprises generating the tracker output as the SOT output (Zhang, Abstract: prediction of pedestrian motion is realized; Zhang, Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan (i.e., combination of tracking bounding boxes Dtra from SiamRPN++ SOT with Ddet) are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching in which a Hungarian matching algorithm is used. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu and a Hungarian matching algorithm is used; Zhang: Pg. 667-668, II. Tracking Framework; Zhang: Pg. 668-670, III. Approach; Zhang, Pg. 
672: “the proposed method achieves robust multi-object tracking of pedestrians in real scenes by rationally fusing long and short-term video cues”). Regarding claim 13, the combination of Zhang and Sugio teaches the limitations as explained above in claim 12. The combination of Zhang and Sugio further teaches, The system of claim 12 (see claim 12 above), further comprising: generating a new template based on the SOT output (Zhang: Fig. 1; Zhang: Pg. 667-668, II. Tracking Framework; Zhang, Pg. 668-669, III. Approach: the SOT is updated and “adding part of the Dtra to the matching sequence helps the system confirm whether the current SOT output result maintains a high similarity with the historical appearance, and realizes the suppression of tracking drift through the final trajectory update result”; Zhang, Pg. 668: a final matching pair set is obtained and a new SOT tracker is created and the tracking prediction result of Xc in the current frame is output); and adding the new template to the template repository (Zhang: Fig. 1; Zhang: Pg. 667-668, II. Tracking Framework; Zhang, Pg. 668-669, III. Approach: the SOT is updated and “adding part of the Dtra to the matching sequence helps the system confirm whether the current SOT output result maintains a high similarity with the historical appearance, and realizes the suppression of tracking drift through the final trajectory update result”; Zhang, Pg. 668: a final matching pair set is obtained and a new SOT tracker is created and the tracking prediction result of Xc in the current frame is output; Sugio, Para. 0080: the template image in the template storage unit is updated). The proposed combination as well as the motivation for combining the Zhang and Sugio references presented in the rejection of claim 12 apply to claim 13 and are incorporated herein by reference. Thus, the system recited in claim 13 is met by Zhang and Sugio. Regarding claim 17, the combination of Zhang and Sugio teaches the limitations as explained above in claim 11. The combination of Zhang and Sugio further teaches, The system of claim 11 (see claim 11 above), wherein the match result indicates no match being found (Zhang, Fig. 1: unmatched Xc and Ddet; Zhang, Step 4 in Pg. 668: “For Xu and Xc that did not match successfully, the Dtra output at the latest time and there is no matching relationship in the cascade matching is performed IoU matching with the remaining Ddet to alleviate the interference caused by the object apparent feature mutation or partial occlusion. If the obtained matching pair set is recorded as PI, the final matching pair set is P=Pc ∪ PI”; Zhang: Steps 6-7 in Pg. 668); wherein the method further comprises selecting a detected object from the one or more detected objects based on an association with a previous detected object corresponding to one template of the one or more templates (Zhang, Abstract: prediction of pedestrian motion is realized; Zhang, Fig. 1: a target template is input into the SiamRPN++ SOT and short-term clues (i.e., tracking bounding box Dtra) are output. The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu; Zhang, Pg. 667-668, II. Tracking Framework: As shown in Step 4, IoU matching is performed with the remaining Ddet and unmatched Xc, and a final matching pair set is determined. 
As shown in Step 5, the matching relationship in the set is referred to and features of Dcan are saved to the feature set of X that matches it; Zhang: Pg. 668-670, III. Approach; Note: the Examiner interprets matching as an association); wherein a confidence score associated with the selected detected object is higher than a threshold (Zhang, Pg. 668-669: a tracking quality evaluation standard qSOT is calculated. If qSOT is greater than the threshold, then the corresponding Dtra will be added to the matching sequence. The equation for qSOT depends on various parameters, including IoU; Note: the Examiner interprets tracking quality qSOT as a confidence score); wherein the generating a tracker output comprises generating the tracker output as the selected detected object (Zhang, Abstract: prediction of pedestrian motion is realized; Zhang, Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan (i.e., combination of tracking bounding boxes Dtra from SiamRPN++ SOT with Ddet) are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching in which a Hungarian matching algorithm is used. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu and a Hungarian matching algorithm is used; Zhang, Pg. 667-668, II. Tracking Framework: As shown in Step 4, IoU matching is performed with the remaining Ddet and unmatched Xc, and a final matching pair set is determined. As shown in Step 5, the matching relationship in the set is referred to and features of Dcan are saved to the feature set of X that matches it; Zhang: Pg. 668-670, III. Approach; Zhang, Pg. 672: “the proposed method achieves robust multi-object tracking of pedestrians in real scenes by rationally fusing long and short-term video cues”). Regarding claim 18, the combination of Zhang and Sugio teaches the limitations as explained above in claim 17. The combination of Zhang and Sugio further teaches, The system of claim 17 (see claim 17 above), wherein the set of operations further comprises: generating a new template based on the selected detected object (Zhang: Fig. 1; Zhang, Pg. 667-668, II. Tracking Framework: As shown in Step 4, IoU matching is performed with the remaining Ddet and unmatched Xc, and a final matching pair set is determined. As shown in Step 5, the matching relationship in the set is referred to and features of Dcan are saved to the feature set of X that matches it; Zhang, Pg. 668-669, III. Approach: the SOT is updated and “adding part of the Dtra to the matching sequence helps the system confirm whether the current SOT output result maintains a high similarity with the historical appearance, and realizes the suppression of tracking drift through the final trajectory update result”; Zhang, Pg. 668: a final matching pair set is obtained and a new SOT tracker is created and the tracking prediction result of Xc in the current frame is output); and adding the new template to the template repository (Zhang: Fig. 1; Zhang, Pg. 667-668, II. Tracking Framework: As shown in Step 4, IoU matching is performed with the remaining Ddet and unmatched Xc, and a final matching pair set is determined. As shown in Step 5, the matching relationship in the set is referred to and features of Dcan are saved to the feature set of X that matches it; Zhang, Pg. 668-669, III. 
Approach: the SOT is updated and “adding part of the Dtra to the matching sequence helps the system confirm whether the current SOT output result maintains a high similarity with the historical appearance, and realizes the suppression of tracking drift through the final trajectory update result”; Zhang, Pg. 668: a final matching pair set is obtained and a new SOT tracker is created and the tracking prediction result of Xc in the current frame is output; Sugio, Para. 0080: the template image in the template storage unit is updated). The proposed combination as well as the motivation for combining the Zhang and Sugio references presented in the rejection of claim 17 apply to claim 18 and are incorporated herein by reference. Thus, the system recited in claim 18 is met by Zhang and Sugio. Claims 4, 9, 14, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over “Multi-object tracking based on spatio-temporal cue fusion and optimized cascade matching” by Zhang et al. (hereinafter “Zhang”) in view of Sugio (US 2013/0058525 A1) and further in view of Nakajima et al. (US 2001/0048802 A1, hereinafter “Nakajima”). Regarding claim 4, the combination of Zhang and Sugio teaches the limitations as explained above in claim 3. The combination of Zhang and Sugio further teaches, The method of claim 3 (see claim 3 above), wherein the adding the new template to the template repository comprises adding the new template (Zhang: Fig. 1; Zhang, Pg. 667-668, II. Tracking Framework; Zhang, Pg. 668-669, III. Approach: the SOT is updated and “adding part of the Dtra to the matching sequence helps the system confirm whether the current SOT output result maintains a high similarity with the historical appearance, and realizes the suppression of tracking drift through the final trajectory update result”; Zhang, Pg. 668: a final matching pair set is obtained and a new SOT tracker is created and the tracking prediction result of Xc in the current frame is output; Sugio, Para. 0080: the template image in the template storage unit is updated) The proposed combination as well as the motivation for combining the Zhang and Sugio references presented in the rejection of claim 3 apply to claim 4 and are incorporated herein by reference. The combination of Zhang and Sugio does not expressly disclose the following limitation: to a short-term template repository of the template repository. However, Nakajima teaches, to a short-term template repository of the template repository (Para. 0106; Para. 0111: templates are read from the template storing means and are input to the temporary template storing means; Note: the Examiner interprets a temporary template storing means as a short-term template repository). It would have been obvious before the effective filing date of the claimed invention, to one of ordinary skill in the art, to combine templates being input/added to a short-term repository as taught by Nakajima with the combined method/operations of Zhang and Sugio in order to generate composite image data based on the selected template (Nakajima, Para. 0106). Therefore, one of ordinary skill in the art would be capable to have combined the elements as claimed by known methods and that in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned that the Examiner has reached a conclusion of obviousness with respect to claim 4. Regarding claim 9, the combination of Zhang and Sugio teaches the limitations as explained above in claim 8. 
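An aside on the claim 4 limitation just addressed: the Examiner equates the claimed "short-term template repository of the template repository" with Nakajima's temporary template storing means, i.e., a small, frequently refreshed buffer that sits inside the overall template store. A minimal sketch of that two-tier layout follows; the class name, capacities, and promotion step are illustrative assumptions, not code from the application or from Nakajima.

```python
# Illustrative sketch only -- hypothetical names and capacities, not the
# applicant's or Nakajima's implementation. Models a template repository that
# contains a short-term (temporary) store alongside a longer-lived store.
from collections import deque

class TieredTemplateRepository:
    def __init__(self, short_capacity=5, long_capacity=50):
        self.short_term = deque(maxlen=short_capacity)  # recent templates, refreshed often
        self.long_term = deque(maxlen=long_capacity)    # stable templates kept longer

    def add_short_term(self, template):
        # New templates generated from recent tracker output land here first,
        # mirroring "adding the new template to a short-term template repository".
        self.short_term.append(template)

    def promote(self, template):
        # A template that keeps matching the target can graduate to long-term storage.
        self.long_term.append(template)

    def all_templates(self):
        return list(self.short_term) + list(self.long_term)
```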
The combination of Zhang and Sugio further teaches, The method of claim 8 (see claim 8 above), wherein the adding the new template to the template repository comprises adding the new template (Zhang: Fig. 1; Zhang, Pg. 667-668, II. Tracking Framework: As shown in Step 4, IoU matching is performed with the remaining Ddet and unmatched Xc, and a final matching pair set is determined. As shown in Step 5, the matching relationship in the set is referred to and features of Dcan are saved to the feature set of X that matches it; Zhang, Pg. 668-669, III. Approach: the SOT is updated and “adding part of the Dtra to the matching sequence helps the system confirm whether the current SOT output result maintains a high similarity with the historical appearance, and realizes the suppression of tracking drift through the final trajectory update result”; Zhang, Pg. 668: a final matching pair set is obtained and a new SOT tracker is created and the tracking prediction result of Xc in the current frame is output; Sugio, Para. 0080: the template image in the template storage unit is updated) The proposed combination as well as the motivation for combining the Zhang and Sugio references presented in the rejection of claim 8 apply to claim 9 and are incorporated herein by reference. The combination of Zhang and Sugio does not expressly disclose the following limitation: to a short-term template repository of the template repository. However, Nakajima teaches, to a short-term template repository of the template repository (Para. 0106; Para. 0111: templates are read from the template storing means and are input to the temporary template storing means; Note: the Examiner interprets a temporary template storing means as a short-term template repository). It would have been obvious before the effective filing date of the claimed invention, to one of ordinary skill in the art, to combine templates being input/added to a short-term repository as taught by Nakajima with the combined method/operations of Zhang and Sugio in order to generate composite image data based on the selected template (Nakajima, Para. 0106). Therefore, one of ordinary skill in the art would be capable to have combined the elements as claimed by known methods and that in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned that the Examiner has reached a conclusion of obviousness with respect to claim 9. Regarding claim 14, the combination of Zhang and Sugio teaches the limitations as explained above in claim 13. The combination of Zhang and Sugio further teaches, The system of claim 13 (see claim 13 above), wherein the adding the new template to the template repository comprises adding the new template (Zhang: Fig. 1; Zhang, Pg. 667-668, II. Tracking Framework; Zhang, Pg. 668-669, III. Approach: the SOT is updated and “adding part of the Dtra to the matching sequence helps the system confirm whether the current SOT output result maintains a high similarity with the historical appearance, and realizes the suppression of tracking drift through the final trajectory update result”; Zhang, Pg. 668: a final matching pair set is obtained and a new SOT tracker is created and the tracking prediction result of Xc in the current frame is output; Sugio, Para. 
0080: the template image in the template storage unit is updated) The proposed combination as well as the motivation for combining the Zhang and Sugio references presented in the rejection of claim 13 apply to claim 14 and are incorporated herein by reference. The combination of Zhang and Sugio does not expressly disclose the following limitation: to a short-term template repository of the template repository. However, Nakajima teaches, to a short-term template repository of the template repository (Para. 0106; Para. 0111: templates are read from the template storing means and are input to the temporary template storing means; Note: the Examiner interprets a temporary template storing means as a short-term template repository). It would have been obvious before the effective filing date of the claimed invention, to one of ordinary skill in the art, to combine templates being input/added to a short-term repository as taught by Nakajima with the combined method/operations of Zhang and Sugio in order to generate composite image data based on the selected template (Nakajima, Para. 0106). Therefore, one of ordinary skill in the art would be capable to have combined the elements as claimed by known methods and that in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned that the Examiner has reached a conclusion of obviousness with respect to claim 14. Regarding claim 19, the combination of Zhang and Sugio teaches the limitations as explained above in claim 18. The combination of Zhang and Sugio further teaches, The system of claim 18 (see claim 18 above), wherein the adding the new template to the template repository comprises adding the new template (Zhang: Fig. 1; Zhang, Pg. 667-668, II. Tracking Framework: As shown in Step 4, IoU matching is performed with the remaining Ddet and unmatched Xc, and a final matching pair set is determined. As shown in Step 5, the matching relationship in the set is referred to and features of Dcan are saved to the feature set of X that matches it; Zhang, Pg. 668-669, III. Approach: the SOT is updated and “adding part of the Dtra to the matching sequence helps the system confirm whether the current SOT output result maintains a high similarity with the historical appearance, and realizes the suppression of tracking drift through the final trajectory update result”; Zhang, Pg. 668: a final matching pair set is obtained and a new SOT tracker is created and the tracking prediction result of Xc in the current frame is output; Sugio, Para. 0080: the template image in the template storage unit is updated) The proposed combination as well as the motivation for combining the Zhang and Sugio references presented in the rejection of claim 18 apply to claim 19 and are incorporated herein by reference. The combination of Zhang and Sugio does not expressly disclose the following limitation: to a short-term template repository of the template repository. However, Nakajima teaches, to a short-term template repository of the template repository (Para. 0106; Para. 0111: templates are read from the template storing means and are input to the temporary template storing means; Note: the Examiner interprets a temporary template storing means as a short-term template repository). 
It would have been obvious before the effective filing date of the claimed invention, to one of ordinary skill in the art, to combine templates being input/added to a short-term repository as taught by Nakajima with the combined method/operations of Zhang and Sugio in order to generate composite image data based on the selected template (Nakajima, Para. 0106). Therefore, one of ordinary skill in the art would be capable to have combined the elements as claimed by known methods and that in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned that the Examiner has reached a conclusion of obviousness with respect to claim 19. Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over “Multi-object tracking based on spatio-temporal cue fusion and optimized cascade matching” by Zhang et al. (hereinafter “Zhang”) in view of Sugio (US 2013/0058525 A1) and further in view of Nakajima et al. (US 2001/0048802 A1, hereinafter “Nakajima”) and Chen et al. (CN 103473542 B, see provided machine translation; hereinafter “Chen”). Regarding claim 5, the combination of Zhang, Sugio, and Nakajima teaches the limitations as explained above in claim 4. The combination of Zhang, Sugio, and Nakajima further teaches, The method of claim 4 (see claim 4 above), further comprising: assigning a new weight associated with the new template (Zhang, Pg. 668-669, III. Approach: the SOT is updated and “adding part of the Dtra to the matching sequence helps the system confirm whether the current SOT output result maintains a high similarity with the historical appearance, and realizes the suppression of tracking drift through the final trajectory update result”; Zhang, Fig. 1: cascade matching and IoU matching is performed in which a Hungarian matching algorithm is used. The Hungarian algorithm outputs values such as 0.2, 0.6, 0.85; Zhang, Pg. 668: a final matching pair set is obtained and a new SOT tracker is created and the tracking prediction result of Xc in the current frame is output; Note: Hungarian algorithm is an algorithm that solves weighted matching problems. The Examiner interprets the values 0.2, 0.6, 0.85 as weights); The combination of Zhang, Sugio, and Nakajima does not expressly disclose the following limitation: wherein the new weight is higher than a weight associated with one of the one or more templates in the template repository. However, Chen teaches, wherein the new weight is higher than a weight associated with one of the one or more templates in the template repository (Pg. 5 of entire document: “For the sub-area, the template to be tracked is used as a template, and N templates of the same weight are initialized for the sub-area in which the sub-area is located, where N is a positive integer, and the correlation coefficient of the target to be tracked is calculated according to the N templates… If the correlation coefficient is greater than the preset value, the weight of the first template is increased”; Note: the Examiner interprets the first template with increased weight as a higher weight). It would have been obvious before the effective filing date of the claimed invention, to one of ordinary skill in the art, to combine the new weight being higher than that of the template as taught by Chen with the combined method/operations of Zhang, Sugio, and Nakajima in order to improve the robustness of target tracking (Chen Pg. 2 of entire document). 
Therefore, one of ordinary skill in the art would be capable to have combined the elements as claimed by known methods and that in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned that the Examiner has reached a conclusion of obviousness with respect to claim 5. Regarding claim 15, the combination of Zhang, Sugio, and Nakajima teaches the limitations as explained above in claim 14. The combination of Zhang, Sugio, and Nakajima further teaches, The system of claim 14 (see claim 14 above), further comprising: assigning a new weight associated with the new template (Zhang, Pg. 668-669, III. Approach: the SOT is updated and “adding part of the Dtra to the matching sequence helps the system confirm whether the current SOT output result maintains a high similarity with the historical appearance, and realizes the suppression of tracking drift through the final trajectory update result”; Zhang, Fig. 1: cascade matching and IoU matching is performed in which a Hungarian matching algorithm is used. The Hungarian algorithm outputs values such as 0.2, 0.6, 0.85; Zhang, Pg. 668: a final matching pair set is obtained and a new SOT tracker is created and the tracking prediction result of Xc in the current frame is output; Note: Hungarian algorithm is an algorithm that solves weighted matching problems. The Examiner interprets the values 0.2, 0.6, 0.85 as weights); The combination of Zhang, Sugio, and Nakajima does not expressly disclose the following limitation: wherein the new weight is higher than a weight associated with one of the one or more templates in the template repository. However, Chen teaches, wherein the new weight is higher than a weight associated with one of the one or more templates in the template repository (Pg. 5 of entire document: “For the sub-area, the template to be tracked is used as a template, and N templates of the same weight are initialized for the sub-area in which the sub-area is located, where N is a positive integer, and the correlation coefficient of the target to be tracked is calculated according to the N templates… If the correlation coefficient is greater than the preset value, the weight of the first template is increased”; Note: the Examiner interprets the first template with increased weight as a higher weight). It would have been obvious before the effective filing date of the claimed invention, to one of ordinary skill in the art, to combine the new weight being higher than that of the template as taught by Chen with the combined method/operations of Zhang, Sugio, and Nakajima in order to improve the robustness of target tracking (Chen Pg. 2 of entire document). Therefore, one of ordinary skill in the art would be capable to have combined the elements as claimed by known methods and that in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned that the Examiner has reached a conclusion of obviousness with respect to claim 15. Claims 6, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over “Multi-object tracking based on spatio-temporal cue fusion and optimized cascade matching” by Zhang et al. (hereinafter “Zhang”) in view of Sugio (US 2013/0058525 A1) and further in view of Varadarajan et al. (US 2020/0134837 A1, hereinafter “Varadarajan”). Regarding claim 6, the combination of Zhang and Sugio teaches the limitations as explained above in claim 1. 
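An aside on the weighting limitation of claims 5 and 15 addressed just above: the Examiner reads the recited "new weight... higher than a weight associated with one of the one or more templates" onto Chen's step of increasing a template's weight when its correlation with the target exceeds a preset value. A minimal sketch of that idea is below; the function name and margin value are hypothetical and do not come from Chen, Zhang, or the application.

```python
# Minimal sketch with hypothetical names/values -- not code from any cited
# reference. Each stored template carries a weight; a newly added template is
# assigned a weight strictly higher than any weight already in the repository.
def add_weighted_template(templates, new_template, margin=0.1):
    """templates: list of (template, weight) pairs, modified in place."""
    current_max = max((weight for _, weight in templates), default=0.0)
    new_weight = current_max + margin          # higher than every existing weight
    templates.append((new_template, new_weight))
    return new_weight
```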
The combination of Zhang and Sugio further teaches, The method of claim 1 (see claim 1 above), wherein the conducting a matching includes: determining an intersection of union (IOU) between the SOT output and each detected object of the one or more detected objects (Zhang, Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu; Zhang, Pg. 667-668, II. Tracking Framework: IoU matching is discussed in Step 4; Zhang, Pg. 668-670, III. Approach); The combination of Zhang and Sugio does not expressly disclose the following limitations: determining whether the IOU is lower than an IOU threshold; and identifying a detected object of the one or more detected objects with a corresponding IOU lower than the IOU threshold. However, Varadarajan teaches, determining whether the IOU is lower than an IOU threshold (Para. 0018: “If the intersection over union value is below a threshold, the dynamic object tracker 108 determines that there is a new object in the frame”; Para. 0048: “If one or more of the intersection over unions is below the threshold, the threshold comparator 214 determines that there is poor tracking quality”); and identifying a detected object of the one or more detected objects with a corresponding IOU lower than the IOU threshold (Para. 0018: “The intersection over union is a ratio of (A) the intersection between all the blob (s) of the frame with all the tracked object (s) (e.g., bounding box (es) from a previous Al - based object detection) and (B) the union of all the blob (s) of the frame with all the tracked object (s). If the intersection over union value is below a threshold, the dynamic object tracker 108 determines that there is a new object in the frame. For example, if the threshold is 0.7 and the intersection over union value is below the threshold (e.g., 0.4), the dynamic object tracker 108 determines that there is a new object in the frame”). It would have been obvious before the effective filing date of the claimed invention, to one of ordinary skill in the art, to combine an IoU being below a threshold and identifying a detected object with a corresponding IoU lower than the threshold as taught by Varadarajan with the combined method/operations of Zhang and Sugio in order to improve efficiency of object tracking in video frames (Varadarajan, Abstract). Therefore, one of ordinary skill in the art would be capable to have combined the elements as claimed by known methods and that in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned that the Examiner has reached a conclusion of obviousness with respect to claim 6. Regarding claim 16, the combination of Zhang and Sugio teaches the limitations as explained above in claim 11. The combination of Zhang and Sugio further teaches, The system of claim 11 (see claim 11 above), wherein the conducting a matching includes: determining an intersection of union (IOU) between the SOT output and each detected object of the one or more detected objects (Zhang, Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. 
The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu; Zhang, Pg. 667-668, II. Tracking Framework: IoU matching is discussed in Step 4; Zhang: Pg. 668-670, III. Approach); The combination of Zhang and Sugio does not expressly disclose the following limitations: determining whether the IOU is lower than an IOU threshold; and identifying a detected object of the one or more detected objects with a corresponding IOU lower than the IOU threshold. However, Varadarajan teaches, determining whether the IOU is lower than an IOU threshold (Para. 0018: “If the intersection over union value is below a threshold, the dynamic object tracker 108 determines that there is a new object in the frame”; Para. 0048: “If one or more of the intersection over unions is below the threshold, the threshold comparator 214 determines that there is poor tracking quality”); and identifying a detected object of the one or more detected objects with a corresponding IOU lower than the IOU threshold (Para. 0018: “The intersection over union is a ratio of (A) the intersection between all the blob (s) of the frame with all the tracked object (s) (e.g., bounding box (es) from a previous Al - based object detection) and (B) the union of all the blob (s) of the frame with all the tracked object (s). If the intersection over union value is below a threshold, the dynamic object tracker 108 determines that there is a new object in the frame. For example, if the threshold is 0.7 and the intersection over union value is below the threshold (e.g., 0.4), the dynamic object tracker 108 determines that there is a new object in the frame”). It would have been obvious before the effective filing date of the claimed invention, to one of ordinary skill in the art, to combine an IoU being below a threshold and identifying a detected object with a corresponding IoU lower than the threshold as taught by Varadarajan with the combined method/operations of Zhang and Sugio in order to improve efficiency of object tracking in video frames (Varadarajan, Abstract). Therefore, one of ordinary skill in the art would be capable to have combined the elements as claimed by known methods and that in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned that the Examiner has reached a conclusion of obviousness with respect to claim 16. Regarding claim 20, Zhang teaches, A method for object tracking, the method comprising (Abstract; Fig. 1: framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching): receiving an image frame in a sequence of image frames (Fig. 1: current frame t is input into the YOLOV4 detector and SiamRPN++ SOT; Pg. 667, II. Tracking Framework); identifying an object of interest in the image frame using a single-object tracker (SOT) based upon one or more templates associated with the object of interest (Fig. 1: a target template is input into the SiamRPN++ SOT and short-term clues (i.e., tracking bounding box Dtra) are output; Fig. 2; Pg. 667-668, II. Tracking Framework: Steps 1-3; Pg. 668, III. Approach); generating a SOT output based on the identified object of interest (Fig. 
1: a target template is input into the SiamRPN++ SOT and short-term clues (i.e., tracking bounding box Dtra) are output. A set of candidate bounding boxes Dcan are output by combining tracking bounding box Dtra with Ddet; Fig. 2; Pg. 667-668, II. Tracking Framework: Steps 1-3; Pg. 668, III. Approach); detecting one or more objects in the image frame using a multiple-object tracker (MOT) (Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching; Pg. 667-668, II. Tracking Framework; Pg. 668-670, III. Approach; Note: the Examiner interprets the ReID network as a machine learning model); conducting a matching between the SOT output and each detected object of the one or more detected objects to generate a match result (Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu; Pg. 667-668, II. Tracking Framework; Pg. 668-670, III. Approach; Note: the Examiner interprets the cascade matching result and IoU matching result in Pg. 668-669 as matching results that are generated), wherein the conducting a matching includes: determining an intersection of union (IOU) between the SOT output and each detected object of the one or more detected objects (Zhang, Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu; Zhang, Pg. 667-668, II. Tracking Framework: IoU matching is discussed in Step 4; Pg. 668-670, III. Approach); and generating a tracker output based at least in part on the SOT output, the one or more detected objects, and the match result (Abstract: prediction of pedestrian motion is realized; Fig. 1: illustrates a framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching. The set of candidate bounding boxes Dcan are input to an ReID network as well as the historical appearance of target trajectory X. Image features are then extracted by the ReID for cascade matching. If Xc and Ddet are unmatched, they are input into IoU matching along with unconfirmed tracks Xu; Pg. 667-668, II. Tracking Framework; Pg. 668-670, III. Approach; Pg. 672: “the proposed method achieves robust multi-object tracking of pedestrians in real scenes by rationally fusing long and short-term video cues”), wherein the method is performed using one or more processors (Abstract; Fig. 1: framework of multi-object tracking based on spatiotemporal cues fusion and optimized cascade matching; Pg. 667, I. Introduction: the model in the MOT system can run on the GPU). 
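As mapped above, the claim 20 flow amounts to: run the SOT from the stored templates, run the detector/MOT on the same frame, compute the IoU between the SOT output and each detection, flag detections whose IoU falls below a threshold, and emit a tracker output. The sketch below is a minimal illustration of that flow only; the `sot` and `detector` callables are placeholders and the 0.7 threshold is borrowed from Varadarajan's example rather than from the claims or the applicant's implementation.

```python
# Minimal sketch of the recited flow (placeholder callables, illustrative
# threshold) -- not the applicant's implementation and not Zhang's code.
def track_frame(frame, templates, sot, detector, iou_threshold=0.7):
    """sot(frame, templates) -> [x1, y1, x2, y2]; detector(frame) -> list of boxes."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / (union + 1e-9)

    sot_box = sot(frame, templates)                  # single-object tracker output
    detections = detector(frame)                     # detections from the MOT side
    overlaps = [iou(sot_box, d) for d in detections]
    # Detections whose IoU with the SOT output is below the threshold
    # (the step the Examiner maps to Varadarajan's below-threshold test).
    low_iou = [i for i, o in enumerate(overlaps) if o < iou_threshold]
    if detections and max(overlaps) >= iou_threshold:
        tracker_output = sot_box                     # SOT output confirmed by a detection
    elif detections:
        tracker_output = detections[overlaps.index(max(overlaps))]  # best available detection
    else:
        tracker_output = sot_box                     # nothing detected; keep SOT prediction
    return tracker_output, low_iou
```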
Zhang does not expressly disclose the following limitations: in a template repository; determining whether the IOU is lower than an IOU threshold; and identifying a detected object of the one or more detected objects with a corresponding IOU lower than the IOU threshold. However, Sugio teaches, in a template repository (Para. 0010: “a storage unit configured to store a template image of an object to be a track target”; As shown in Fig. 6, an initial object region is set in S101 and the initial template image is stored in S102). It would have been obvious before the effective filing date of the claimed invention, to one of ordinary skill in the art, to combine templates being from a template repository as taught by Sugio with the method of Zhang in order to highly accurately track a target object even though there may be changes to the target object (Sugio, Para. 0009). Therefore, one of ordinary skill in the art would be capable to have combined the elements as claimed by known methods and that in combination, each element merely performs the same function as it does separately. The combination of Zhang and Sugio does not expressly disclose the following limitations: determining whether the IOU is lower than an IOU threshold; and identifying a detected object of the one or more detected objects with a corresponding IOU lower than the IOU threshold. However, Varadarajan teaches, determining whether the IOU is lower than an IOU threshold (Para. 0018: “If the intersection over union value is below a threshold, the dynamic object tracker 108 determines that there is a new object in the frame”; Para. 0048: “If one or more of the intersection over unions is below the threshold, the threshold comparator 214 determines that there is poor tracking quality”); and identifying a detected object of the one or more detected objects with a corresponding IOU lower than the IOU threshold (Para. 0018: “The intersection over union is a ratio of (A) the intersection between all the blob (s) of the frame with all the tracked object (s) (e.g., bounding box (es) from a previous Al - based object detection) and (B) the union of all the blob (s) of the frame with all the tracked object (s). If the intersection over union value is below a threshold, the dynamic object tracker 108 determines that there is a new object in the frame. For example, if the threshold is 0.7 and the intersection over union value is below the threshold (e.g., 0.4), the dynamic object tracker 108 determines that there is a new object in the frame”). It would have been obvious before the effective filing date of the claimed invention, to one of ordinary skill in the art, to combine an IoU being below a threshold and identifying a detected object with a corresponding IoU lower than the threshold as taught by Varadarajan with the combined method/operations of Zhang and Sugio in order to improve efficiency of object tracking in video frames (Varadarajan, Abstract). Therefore, one of ordinary skill in the art would be capable to have combined the elements as claimed by known methods and that in combination, each element merely performs the same function as it does separately. It is for at least the aforementioned that the Examiner has reached a conclusion of obviousness with respect to claim 20. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. “SOT for MOT” by He et al. “Tracking Beyond Detection: Learning a Global Response Map for End-to-End Multi-Object Tracking” by Wan et al. 
“Online Multi-Object Tracking with Instance-Aware Tracker and Dynamic Model Refreshment” by Chu et al. Contact Information Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daniella M. DiGuglielmo whose telephone number is (571)272-0183. The examiner can normally be reached Monday - Friday 8:00 AM - 4:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571)270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Daniella M. DiGuglielmo/Examiner, Art Unit 2666 /EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666
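For context on the Zhang mechanics the rejection leans on throughout: cascade matching pairs each tracked trajectory's appearance features with candidate detections via the Hungarian algorithm, and unmatched tracks and detections are then handed to IoU matching. A minimal sketch of the appearance-matching step is below, assuming cosine distances and an illustrative gating cost; none of the names or values are taken from Zhang or the application.

```python
# Illustrative sketch only: Hungarian (linear assignment) matching of track
# appearance features to detection features, with a gating cost. The cosine
# distance and the 0.7 gate are assumptions, not values from Zhang.
import numpy as np
from scipy.optimize import linear_sum_assignment

def cascade_match(track_features, det_features, max_cost=0.7):
    """Each argument is an (N, D) array of appearance feature vectors."""
    t = track_features / np.linalg.norm(track_features, axis=1, keepdims=True)
    d = det_features / np.linalg.norm(det_features, axis=1, keepdims=True)
    cost = 1.0 - t @ d.T                            # cosine distance matrix
    rows, cols = linear_sum_assignment(cost)        # Hungarian assignment
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
    matched_tracks = {r for r, _ in matches}
    matched_dets = {c for _, c in matches}
    unmatched_tracks = [r for r in range(len(t)) if r not in matched_tracks]
    unmatched_dets = [c for c in range(len(d)) if c not in matched_dets]
    # Unmatched tracks and detections would go to IoU matching next.
    return matches, unmatched_tracks, unmatched_dets
```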

Prosecution Timeline

Feb 20, 2024
Application Filed
Feb 11, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586401
SYSTEMS AND METHODS FOR REPRESENTING AND SEARCHING CHARACTERS
2y 5m to grant Granted Mar 24, 2026
Patent 12567228
IMAGE DATA PROCESSING METHOD, IMAGE DATA PROCESSING APPARATUS, AND COMMERCIAL USE
2y 5m to grant Granted Mar 03, 2026
Patent 12567266
IMAGE RECOGNITION SYSTEM AND IMAGE RECOGNITION METHOD
2y 5m to grant Granted Mar 03, 2026
Patent 12555372
IMAGE SENSOR EVALUATION METHOD USING COMPUTING DEVICE INCLUDING PROCESSOR
2y 5m to grant Granted Feb 17, 2026
Patent 12548147
Systems and Methods Related to Age-Related Macular Degeneration
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+26.4%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 170 resolved cases by this examiner. Grant probability derived from career allow rate.
