Prosecution Insights
Last updated: April 19, 2026
Application No. 17/977,720

Apparatus and Method for Re-Identifying Object

Non-Final OA (§103)
Filed: Oct 31, 2022
Examiner: ROSARIO, DENNIS
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Industry-Academic Cooperation Foundation Yonsei University
OA Round: 5 (Non-Final)
Grant Probability: 69% (Favorable)
OA Rounds: 5-6
To Grant: 3y 8m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 69% (grants above average; 385 granted / 557 resolved; +7.1% vs TC avg)
Interview Lift: +28.6% (strong; allow rate with vs. without interview, over resolved cases with an interview)
Typical Timeline: 3y 8m avg prosecution; 34 applications currently pending
Career History: 591 total applications across all art units

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§103: 40.3% (+0.3% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)
Black line = Tech Center average estimate. Based on career data from 557 resolved cases.
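The headline figures above can be cross-checked with a few lines of arithmetic. This is only a sketch of how the report's numbers fit together; the single per-statute baseline it recovers is implied by the rows, not stated anywhere in the report:

```python
# Career allow rate reported above: 385 granted out of 557 resolved cases.
granted, resolved = 385, 557
print(round(100 * granted / resolved, 1))  # 69.1 -> shown as "69%"

# Each statute row pairs an allowance rate with a delta "vs TC avg",
# so the implied Tech Center average for that statute is rate - delta.
rows = {
    "101": (16.5, -23.5),
    "103": (40.3, +0.3),
    "102": (24.6, -15.4),
    "112": (13.6, -26.4),
}
for statute, (rate, delta) in rows.items():
    print(statute, round(rate - delta, 1))  # each implied average is 40.0
```

Every row implies the same 40.0% baseline, which suggests the deltas were computed against one overall Tech Center average rather than per-statute averages.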

Office Action

§103
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/7/2026 has been entered.

Claims pending: 1, 4-11, and 14-20. Claims canceled: 2, 3, 12, and 13.

[image: media_image1.png (511 x 136, greyscale)]

Claim Interpretation

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation: claim 11, lines 17-20: "limiting, by the processor, a similarity between the object representations and the attribute representations within an embedding space, such that the similarity is not further increased once the similarity between the object representations and the attribute representations exceeds a predetermined threshold") is set out below.

Regarding the interpretation of claim 11, as mapped to MPEP 2111.04 II. CONTINGENT LIMITATIONS and vice versa:

A. Claim 11's non-restrictive (non-limiting) comma phrase ", by a processor,"[1] does not limit claim 11 under the broadest reasonable interpretation because the enclosed comma phrase is not grammatically limiting; and

B. The examiner does "not need to present evidence of the…steps…that are not required to be performed" (MPEP 2111.04 II. CONTINGENT LIMITATIONS, 3rd para., last sentence) of method claim 11, ll. 17-20, under the broadest reasonable interpretation: the "limiting" step is "not required to be performed" since "similarity between the object representations and the attribute representations exceeds a predetermined threshold" is not "happening"[2] in claim 11. Claim 11 as amended reads:

11. (Currently Amended) A method comprising: training, by a processor,[3] an object representation extraction model to train attribute representations; inputting, by the processor, an image obtained by a camera to the trained object representation extraction model; extracting, by the processor, object representations from the image using the trained object representation extraction model; and performing, by the processor, object re-identification based on the object representations, training, by the processor, the object representation extraction model using a loss function so that the object representations are trained to be similar to an attribute prototype corresponding to the object representations, wherein the loss function considers an object ID label of an ith image, ith image object representations, and a probability of accurately predicting the object ID label of the ith image using the ith image object representations, wherein training of the object representation extraction model further includes: limiting, by the processor, a similarity between the object representations and the attribute representations within an embedding space, such that the similarity is not further increased once the similarity between the object representations and the attribute representations exceeds a predetermined threshold in the loss function, to prevent a degradation in re-identification performance caused by collapse or over-convergence of representation vectors of different objects with identical attributes.

Regarding applicant's apparatus claim 1 and method claim 11 (of 1/7/2026) with corresponding contingent limitations, as mapped to MPEP 2111.04 II. CONTINGENT LIMITATIONS and vice versa:

A. Claim 11's non-restrictive (non-limiting) comma phrase ", by a processor," does not limit claim 11 under the broadest reasonable interpretation; and

B. The examiner does "not need to present evidence of the…steps…that are not required to be performed" (3rd para., last sentence) of method claim 11, lines 17-20, under the broadest reasonable interpretation: limiting, by the processor, a similarity between the object representations and the attribute representations within an embedding space, such that the similarity is not further increased once the similarity between the object representations and the attribute representations exceeds a predetermined threshold.

II. CONTINGENT LIMITATIONS

The broadest reasonable interpretation of a method (or process) claim (11) having contingent limitations ("the similarity is not further increased once the similarity between the object representations and the attribute representations exceeds a predetermined threshold", claim 11, ll. 18-20) requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent ("once the similarity between the object representations and the attribute representations exceeds a predetermined threshold", claim 11, ll. 18-20) are not met. For example, assume a method claim requires step A if a first condition happens and step B if a second condition happens. If the claimed invention may be practiced without either the first or second condition happening, then neither step A nor step B is required by the broadest reasonable interpretation of the claim. If the claimed invention requires the first condition to occur, then the broadest reasonable interpretation of the claim requires step A. If the claimed invention requires both the first and second conditions to occur, then the broadest reasonable interpretation of the claim requires both steps A and B.

The broadest reasonable interpretation of a system (or apparatus or product) claim (1: ALLOWED) having structure ("a processor", claim 1, line 2) that performs a function ("limit", claim 1, line 21), which only needs to occur if a condition precedent ("once the similarity exceeds a predetermined threshold in the loss function", claim 1, line 22) is met, requires structure ("a processor") for performing the function ("limit") should the condition occur. The system claim (1: ALLOWED) interpretation differs from the method claim (11: REJECTED under 35 USC 103) interpretation because the claimed structure ("a processor") must be present in the system (apparatus claim 1) regardless of whether the condition is met and the function ("limit") is actually performed.

See Ex parte Schulhauser, Appeal 2013-007847 (PTAB April 28, 2016) for an analysis of contingent claim limitations (applied below to A. apparatus claim 1: ALLOWED and B. method claim 11: REJECTED under 35 USC 103) in the context of both method claims (11) and system claims (1). In Schulhauser, both the method claims and the system claims recited the same contingent step (here, A. apparatus claim 1: "limit a similarity between the object representations and the attribute representations within an embedding space such that the similarity is not further increased once the similarity exceeds a predetermined threshold"; B. method claim 11: "limiting, by the processor, a similarity between the object representations and the attribute representations within an embedding space, such that the similarity is not further increased once the similarity between the object representations and the attribute representations exceeds a predetermined threshold"). When analyzing the claimed method (11) as a whole, the PTAB determined that, giving the claim its broadest reasonable interpretation, "[i]f the condition ["the similarity between the object representations and the attribute representations exceeds a predetermined threshold", claim 11] for performing a contingent step ["limiting…such that the similarity is not further increased", claim 11] is not satisfied (claim 11 does not recite that the "similarity" actually meets or reaches the "predetermined threshold"), the performance recited by the step need not be carried out in order for the claimed method to be performed" (quotation omitted). Schulhauser at 10. When analyzing the claimed system (1) as a whole, the PTAB determined that "[t]he broadest reasonable interpretation of a system claim having structure ["a processor", claim 1, line 2] that performs a function ["limit", claim 1, line 21], which only needs to occur if a condition precedent ["once the similarity exceeds a predetermined threshold", claim 1] is met, still requires structure for performing the function should the condition occur." Schulhauser at 14.
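The MPEP's step-A/step-B example above reduces to simple branching: a contingent step executes only when its condition precedent happens, yet the method completes either way. A minimal sketch (the function and argument names are illustrative, not part of the record):

```python
def perform_method(step_a, step_b, first_condition=False, second_condition=False):
    """Run a claimed method whose steps A and B are contingent:
    each executes only if its condition precedent happens."""
    performed = []
    if first_condition:           # condition precedent for step A
        performed.append(step_a())
    if second_condition:          # condition precedent for step B
        performed.append(step_b())
    return performed

# The method is fully performed even though neither contingent step ran,
# mirroring the broadest reasonable interpretation applied to claim 11:
print(perform_method(lambda: "A", lambda: "B"))  # []
```

Under this reading, a prior-art method that never triggers the condition still practices every required step of the claim.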
Therefore "[t]he Examiner did not need to present evidence of the obviousness of the [ ] method steps ["limiting, by the processor, a similarity between the object representations and the attribute representations within an embedding space, such that the similarity is not further increased", method claim 11] of claim 1 [11] that are not required to be performed under a broadest reasonable interpretation of the claim (e.g., instances in which the electrocardiac signal data is not within the threshold electrocardiac criteria such that the condition precedent for the determining step and the remaining steps of claim 1 has not been met)"; however, to render the claimed system (1) obvious, the prior art must teach the structure ("a processor", claim 1, line 2) that performs the function ("limit", claim 1, line 21) of the contingent step ("limit a similarity between the object representations and the attribute representations within an embedding space such that the similarity is not further increased once the similarity exceeds a predetermined threshold", claim 1) along with the other recited claim limitations. Schulhauser at 9, 14. See also MPEP § 2143.03.

11. (Currently Amended: REJECTED under 35 USC 103) A method comprising: training, by a processor,[4] an object representation extraction model to train attribute representations; inputting,[5] by the processor, an image obtained by a camera to the trained object representation extraction model; extracting, by the processor, object representations from the image using the trained object representation extraction model; and performing, by the processor, object re-identification based on the object representations, training, by the processor, the object representation extraction model using a loss function so that the object representations are trained to be similar to an attribute prototype corresponding to the object representations, wherein the loss function considers an object ID label of an ith image, ith image object representations, and a probability of accurately predicting the object ID label of the ith image using the ith image object representations, wherein training of the object representation extraction model further includes: limiting, by the processor, a similarity between the object representations and the attribute representations within an embedding space, such that the similarity is not further increased once[6] the similarity between the object representations and the attribute representations exceeds a predetermined threshold[7] in the loss function, to prevent a degradation in re-identification performance caused by collapse or over-convergence of representation vectors of different objects with identical attributes.

1. (Currently Amended: ALLOWED) An apparatus[8] comprising: a processor; a non-transitory storage medium coupled to the processor, the storage medium storing instructions that, when executed by the processor, cause the processor to: train an object representation extraction model to learn attribute representations; input an image obtained by a camera to the trained object representation extraction model; extract object representations from the image using the trained object representation extraction model; and perform object re-identification based on the object representations, wherein the storage medium stores further instructions that, when executed by the processor, further cause the processor to: train the object representation extraction model using a loss function so that the object representations are trained to be similar to an attribute prototype corresponding to the object representations, wherein the loss function considers an object ID label of an ith image, ith image object representations, and a probability of accurately predicting the object ID label of the ith image using the ith image object representations, wherein the storage medium stores instructions that, when executed by the processor, further cause the processor to: limit a similarity between the object representations and the attribute representations within an embedding space such that the similarity is not further increased once the similarity exceeds a predetermined threshold in the loss function, to prevent a degradation in re-identification performance caused by collapse or over-convergence of representation vectors of different objects with identical attributes.

Thus claim 11 is rejected under 35 USC 103 while claim 1 is allowed in view of MPEP 2111.04 II. CONTINGENT LIMITATIONS.
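As an engineering aside, the amended claims describe two mechanisms: an ID-classification loss combined with an attraction toward an attribute prototype that is clamped at a threshold, and (in dependent claim 16, reproduced later in this action) a similarity-threshold test at matching time. A rough NumPy sketch of both ideas, assuming cosine similarity and illustrative names (`tau`, `clamped_prototype_loss`, `same_object`) that do not appear in the application:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two representation vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def clamped_prototype_loss(obj_repr, attr_proto, id_logits, id_label, tau=0.8):
    """ID-classification loss plus an attraction toward the attribute
    prototype that stops contributing once similarity exceeds tau, so
    representations of different objects sharing attributes cannot collapse."""
    # Cross-entropy on the object ID label (one reading of the claimed
    # "probability of accurately predicting the object ID label").
    shifted = id_logits - id_logits.max()
    probs = np.exp(shifted) / np.exp(shifted).sum()
    id_loss = -np.log(probs[id_label])
    # Attraction toward the prototype, clamped at the threshold tau:
    # once cosine similarity exceeds tau this term is zero, so the
    # similarity is "not further increased" by optimization.
    attract = max(0.0, tau - cosine(obj_repr, attr_proto))
    return id_loss + attract

def same_object(repr_a, repr_b, threshold=0.5):
    """Matching-time decision of dependent claim 16: same object only when
    the similarity strictly exceeds the predetermined threshold."""
    return cosine(repr_a, repr_b) > threshold
```

Under this sketch, once a representation is within `tau` of its prototype the attraction term (and its gradient) vanishes, which is one plausible reading of the claimed limiting step; it is not presented as the applicant's actual implementation.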
Claims - 35 USC § 101

In view of the above broadest reasonable interpretation in the Claim Interpretation section, claims 1 and 11 are still statutory under 35 USC 101 for the same reasons as in the Office action of 5/29/2025, starting at pages 14-15: "improved" "memory space" via the claimed "input" of claim 1 or "inputting" of claim 11.

Response to Arguments

Prior Art Rejections

Applicant's arguments, see remarks, pages 8-10, filed 1/7/2026, with respect to 35 USC 103 have been fully considered and are persuasive. The 35 USC 103 rejection of system claim 1 has been withdrawn. Applicant's arguments, see remarks, page 9, filed 1/7/2026, with respect to the rejection of method claim 11 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made under 35 USC 103:

Claims 11 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (PROVID: Progressive and Multimodal Vehicle Reidentification for Large-Scale Urban Surveillance) in view of Wang et al. (Inter-Domain Adaptation Label for Data Augmentation in Vehicle Re-Identification), Mirjalili (Semi-Adversarial Networks for Imparting Demographic Privacy to Face Images), and KIM (KR 20220014461 A, with machine translation), wherein KIM teaches a proper training method ("Drawing 6" below) of preventing degradation of classification learning performance on unseen "truck" classes (fig. 4: "cat-truck"; "truck" classes unused during training/learning), and teaches that improperly focusing on the unused ("truck") classes during training causes degradation of the used ("cat-dog") classes, resulting in an 8.1% accuracy improvement (fig. 4: "58.1") compared to other art on the seen "cat"/"dog" attribute labels.

[image: media_image2.png (690 x 972, greyscale)]
[image: media_image3.png (1423 x 1121, greyscale)]

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 11 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (PROVID: Progressive and Multimodal Vehicle Reidentification for Large-Scale Urban Surveillance) in view of Wang et al. (Inter-Domain Adaptation Label for Data Augmentation in Vehicle Re-Identification), Mirjalili (Semi-Adversarial Networks for Imparting Demographic Privacy to Face Images), and KIM (KR 20220014461 A, with machine translation):

[image: media_image4.png (722 x 589, greyscale)]

Re claim 11. (Currently Amended), Liu teaches A method comprising: training, by a processor, an object representation ("for vehicle appearance", pg. 648, B. The Null-Space-Based FACT Model, 1st para., last sentence) extraction model (resulting in a "pre-trained" CNN, pg. 648, lcol, 1st para., penult. sentence: fig. 3: "Deep CNN") to train attribute representations (or the "many detailed attributes" of the "CompCars dataset", pg. 648, lcol, 1st full para., penult. sentence); inputting,[9] by the processor,[10] an image (via the "CompCars dataset", pg. 648, lcol, 1st para., penult. sentence) obtained by a camera to the trained object representation extraction model; extracting (via the "feature extractor" "deep convolutional neural network (CNN)", pg. 648, lcol, penult. sentence: fig. 3: "Deep CNN"), by the processor,[11] object representations from the image using the trained object representation extraction model; and performing, by the processor,[12] object re-identification ("in unconstrained scenes", pg. 650, 2nd para., 1st sentence: fig. 3: "3. Spatiotemporal Property based Re-ranking") based on the object representations, training, by the processor,[13] the object representation extraction model using a loss function (or "cost"[14]-"loss"[15], pg. 649, rcol, 1st full para., last sentence) so that the object representations are trained to be similar ("texture, shape, color, and type", pg. 647, III. Overview of the PROVID Framework) to an attribute prototype corresponding to the object representations, wherein the loss function considers an object ID label (or "VehicleID" "attributes"[16]) of an ith image ("from all training vehicle images", pg. 648, rcol, last sentence), ith image object representations ("from all training vehicle images", pg. 648, rcol, last sentence), and a probability of accurately predicting (via a "model"[17], pg. 647, III. Overview of the PROVID Framework, 3rd sentence, bullet "1)") the object ID label of the ith image using the ith image object representations, wherein training of the object representation extraction model further includes: limiting[18] (via a "max" comprising a limitation that indicates the full extent or degree of something), by the processor, a ("formulated") similarity (via similarity eqn. (6): "max"[19], pg. 650) between the object representations and the attribute representations within an ("latent", pg. 648, rcol, 1st sentence) embedding space, such that the similarity is not further increased once[20] the similarity between the object representations and the attribute representations exceeds a predetermined threshold in the loss function, to prevent a degradation in re-identification performance ("framework", pg. 649, rcol, last sentence) caused[21] by[22] (A) ("environmental factors" (i.e., "different colors"; "similar…color"), pg. 656, lcol, last 4 sentences & rcol, 4th sentence) collapse[23] (via "inter-class distance", pg. 648, rcol, 1st full para., last sentence: fig. 4, or "inter-class differences", pg. 649, lcol, 2nd para., penult. sentence) or (B) over-convergence[24][25] of ("each vehicle", pg. 652, rcol, bullet "2)", last sentence) representation vectors of different objects with identical ("vehicle Re-Id", pg. 653, rcol, 2nd sentence) attributes.

Liu does not teach the differences of claim 11 of: a) loss function…[26]; b) an (attribute)[27] prototype…; c) the loss function…; d) a probability of accurately (predicting)…; e) (limiting)…to prevent[28] a degradation[29]… (caused[30] by[31] (A) collapse[32] or (B) over-convergence[33]).

Wang teaches differences a), c), and d) of claim 11: a) ("The final") loss function (pg. 1034, lcol: "Formula (1)")…; b) an (attribute) prototype…; c) the loss function ("can be formulated as Formula (2)", pg. 1034)…; d) a probability (or a "probability…label distribution", pg. 1032: III. Overview of the Proposed Framework, 2nd para., 4th & 5th sentences) of accurately ("when ε is closer to 1", pg. 1037: C. Parameter Analysis, 2nd para., penult. sentence: figs. 7, 8: accurately seeing the ground-truth label) (predicting)…; e) (limiting)…to prevent a degradation… (caused by (A) collapse or (B) over-convergence).

Since Liu teaches that the CNN is roughly trained and that other data is used to finely train the CNN by providing rich data (Liu: pg. 648, lcol, 1st para.), one of skill in the art of CNNs can make Liu's method be as Wang's (Inter-Domain Adaptation Label for Data Augmentation in Vehicle Re-Identification), recognizing the change of constructing "free and rich data" (Wang: pg. 1032, lcol, bullet "2)") and improving "the robust capability of CNN models" (Wang: pg. 1035, lcol, 2nd sentence), such as improving the robust feature deep learning of Liu (Liu: pg. 653, lcol, last sentence).

[image: media_image5.png (906 x 1021, greyscale)]

Liu of the combination of Liu and Wang does not teach the remaining differences of claim 11: b) an (attribute) prototype…; e) (limiting)…to prevent a degradation… (caused by (A) collapse or (B) over-convergence).

Mirjalili teaches difference b) of claim 11: b) an (attribute) prototype (or "an opposite-attribute prototype", pg. 72, penult. sentence: fig. 4.2: "Decoder Prototype (Same/Opposite attribute)": fig. 4.4: face templates)…; e) (limiting)…to prevent a degradation… (caused by (A) collapse or (B) over-convergence).

Since Wang of the combination of Liu and Wang teaches a problem of GANs (Generative Adversarial Networks) in maintaining or preserving image identification, instead creating noisy images (Wang: pg. 1035, rcol, penult. para., 2nd sentence), one of skill in the art faced with the same problem would have reasonably looked to others for the solution to GANs and thus would make Wang's GANs of the combination of Liu and Wang be as Mirjalili's, recognizing that the change "demonstrates excellent performance in challenging… datasets" (Mirjalili, pg. 42, footnote 3), overcoming the identification "limitations of conventional GANs in preserving the recognition capability" (Mirjalili, pg. 22, 2nd full para., last sentence), as the solution to the problem.

[image: media_image6.png (1629 x 1115, greyscale)]

Liu of the combination of Liu, Wang, and Mirjalili does not teach the last difference of claim 11: e) (limiting)…to prevent a degradation… (caused by (A) collapse or (B) over-convergence).

KIM teaches the last difference of claim 11: e) (limiting) (resulting in "completed[34] learning", pg. 5, 2nd text block)…to prevent a (learning-class) degradation ("of unused class data during learning", pg. 1, last text block)… (caused by (A) collapse or (B) over-convergence).

Since Liu of the combination of Liu, Wang, and Mirjalili teaches classification, one of skill in the art of classification can make Liu's method of the combination be as KIM's, seeing the change "prevent" "class" "performance degradation" (KIM, pg. 1, last text block):

[image: media_image7.png (997 x 1132, greyscale)]

Re 14. (Previously Presented), Liu of the combination of Liu, Wang, Mirjalili, and KIM teaches The method of claim 11, wherein extracting of the object representations includes: extracting, by the processor, a full feature and a partial feature (or "whole and part…attributes", pg. 653, section "4) Semantic feature learned by CNN (GoogLeNet)", 2nd sentence) of an object in the image. Re 15.
(Previously Presented), Liu of the combination of Liu, Wang, Mirjalili, and KIM teaches The method of claim 11, wherein extracting of the object representations includes: extracting, by the processor, first object representations (Liu: fig. 13: rows of cars in said null space) from a first image (Liu: fig. 13(a): "Query 1 Cam 2") using the trained object representation extraction model; and extracting, by the processor, second object representations (fig. 13: rows of cars in said null space) from a second image (fig. 13(a): "Query 264 Cam 14") using the trained object representation extraction model.

Re 16. (Previously Presented), Liu of the combination of Liu, Wang, Mirjalili, and KIM teaches The method of claim 15, wherein extracting of the object representations includes: determining, by the processor, a similarity (via said similarity equation (6)) between the first object representations and the second object representations; determining, by the processor, that a first object in the first image and a second object in the second image are the same object, when the determined similarity is greater than a predetermined threshold (this contingent limitation is not satisfied in method claims 11, 15, and 16 and thus does not limit claim 16); and determining, by the processor, that the first object and the second object are different objects, when the determined similarity is less than or equal to the predetermined threshold (this contingent limitation is not satisfied in method claims 11, 15, and 16 and thus does not limit claim 16).

Re 17. (Original), Liu of the combination of Liu, Wang, Mirjalili, and KIM teaches The method of claim 16, wherein determining of the similarity between the first object representation and the second object representation includes: finally determining, by the processor, a similarity (via said Liu I equation (6) in the rejection of canceled claim 3) between the first object representations and the second object representations by applying a (SNN "neural network", Liu I: pg. 649, rcol, 2nd full para., 1st sentence) weight.

Re 18. (Previously Presented), Liu of the combination of Liu, Wang, Mirjalili, and KIM teaches The method of claim 11, wherein training of the object representation extraction model includes: classifying (resulting in a "labeled" car dataset, pg. 648, lcol, 1st para., penult. sentence) and grouping (resulting in a grouped null space: fig. 4), by the processor, pieces (or classifying "Each vehicle" in a collection, pg. 651, rcol, section "2) Rich attribute labels", 1st sentence) of attribute information of a predefined object depending on a predetermined classification condition (via classification distance equations (2) & (3) on page 648); generating, by the processor, a semantic (via "high-level semantic features", pg. 648, lcol, 5th sentence) ID (or "semantic…Vehicle ID", pg. 653, lcol, section "8) Null space Fusion of Attribute and Color feaTures (NuFACT)", 1st sentence) using a combination of the pieces of attribute information in the grouped group; and returning (via "return…the top-five lists using…Nu-FACT" attributes, pg. 655, rcol, penult. sentence: fig. 3: "Null Space": the top-five listed vehicles boxed in green), by the processor, attribute representations (said top five, the green-boxed cars) corresponding to the semantic ID. Re 19.
(Original), Liu of the combination of Liu, Wang, Mirjalili, and KIM teaches The method of claim 18, further comprising: calculating, by the processor, a similarity (via said equation (6)) between the returned (top-five NuFACT) attribute representations and the (top-five car) object representations (making up the top five); and classifying (or re-classifying the top five, "re-rank the previous results", pg. 647, section III. Overview of the PROVID Framework, last sentence), by the processor, an object attribute based on the similarity between the returned attribute representations and the object representations.

Re 20. (Original), Liu of the combination of Liu, Wang, Mirjalili, and KIM teaches The method of claim 11, wherein the object representations are (referring to figure 3 as delineated below) a same size (left query) as the attribute representations (via fig. 3, zoomed in):

[image: media_image8.png (702 x 825, greyscale)]

Allowable Subject Matter

Claims 1 and 4-10 are allowed:

[image: media_image9.png (577 x 351, greyscale)]

The following is an examiner's statement of reasons for allowance: the claims are allowed for the same reasons regarding claim 1 in applicant's remarks of 1/7/2026, pages 9-10. Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance."

Conclusion

The prior art "nearest to the subject matter defined in the claims" (MPEP 707.05) made of record and not relied upon is considered pertinent to applicant's disclosure. The following table lists several references that are relevant to the subject matter claimed and disclosed in this application. The references are not relied on by the examiner, but are provided to assist the applicant in responding to this Office action:

Citation: Zou et al. (Weakly Supervised Visual Understanding)
Relevance: Zou teaches that a focused "hard pseudo label"[35] (pg. 31: Confidence Regularized Self-Training, 3.1 Introduction, 2nd para., 1st sentence) causes degradation; pg. 36: "As mentioned in Section 3.1, we leverage confidence regularization to prevent the over-minimization of entropy that could lead to degraded performance in self-training." This is the closest to the claimed "prevent a degradation in re-identification performance…caused by collapse"[36].

Citation: Gronberg (Plankton Recognition from Imaging Flowcytometer Data Using Convolutional Neural Networks)
Relevance: Gronberg teaches, pg. 24, 1st para.: "the network's classification… accuracy saturates and then starts to decrease rapidly. This decrease is not due to overfitting. This network accuracy degradation can be prevented by using a residual network architecture and this has enabled the training of very deep networks with hundreds of layers," as the closest to the claimed "prevent a degradation in re-identification performance…caused by collapse"[37].

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS ROSARIO, whose telephone number is (571) 272-7397. The examiner can normally be reached Monday-Friday, 9 AM-5 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DENNIS ROSARIO/Examiner, Art Unit 2676 /Henok Shiferaw/Supervisory Patent Examiner, Art Unit 2676 1 It is not clear that a “processor” is a system (claim), wherein system is defined: Computers. a working combination of hardware, software, and data communications devices. (Dictionary.com) 2 MPEP 2111.04 II. CONTINGENT LIMITATIONS, 1st para, 3rd S: If the claimed invention may be practiced without either the first or second condition happening, then neither step A or B is required by the broadest reasonable interpretation of the claim. 3 comma: the punctuation mark(,) indicating a slight pause in the spoken sentence and used where there is a listing of items or to separate a nonrestrictive clause or phrase from a main clause (Dictionary.com): since the phrase --, by a processor,-- is non-limiting (nonrestrictive phrase), “, by a processor,” is not limiting/not restrictive under the broadest reasonable interpretation of method claim 11. 4 comma: the punctuation mark(,) indicating a slight pause in the spoken sentence and used where there is a listing of items or to separate a nonrestrictive clause or phrase from a main clause (Dictionary.com): since the phrase --, by a processor,-- is non-limiting (nonrestrictive phrase), “a processor” is not required under the broadest reasonable interpretation of method claim 11. 5 “inputting” implies putting data in a computer (RAM); however, “inputting” is not clearly a system (claim) 6 once: if or when at any time; if ever. 
(Dictionary.com) 7 MPEP 2111.04 II. CONTINGENT LIMITATIONS, 1st para 1st S & 3rd para, last S: The broadest reasonable interpretation of a method (or process) claim having contingent limitations (via “once”: if or when at any time; if ever) requires only those steps (“training…inputting…extracting… performing… training”) that must be performed and does not include steps (“limiting…such that the similarity is not further increased”) that are not required to be performed because the condition(s) precedent (“the similarity between the object representations and the attribute representations exceeds a predetermined threshold”) are not met (claim 11 does not positively state that “the similarity” exceeds the “threshold”)… Therefore "[t]he Examiner [Rosario] did not need to present evidence … of the [ ] method steps [“limiting…”] of claim 1 [11] that are not required to be performed under a broadest reasonable interpretation of the claim [11]…” 8 i.e., a system: Computers. a working combination of hardware, software, and data communications devices. (Dictionary.com) 9 comma: the punctuation mark(,) indicating a slight pause in the spoken sentence and used where there is a listing of items or to separate a nonrestrictive clause or phrase (“by the processor”) from a main clause (claim 11) (Dictionary.com) 10 comma: the punctuation mark(,) indicating a slight pause in the spoken sentence and used where there is a listing of items or to separate a nonrestrictive clause or phrase (“by the processor”) from a main clause (claim 11) (Dictionary.com) 11 “, by the processor,” is a Non-Limiting Phrase (NLP) under the broadest reasonable interpretation of claim 11. In contrast see claim 1. 12 NLP 13 NLP 14 cost: a sacrifice, loss, or penalty. (Dictionary.com) 15 loss: Electricity.
a measure of the power lost in a system, as by conversion to heat, expressed as a relation between power input and power output, as the ratio of or difference between the two quantities, wherein relation is defined: Mathematics. A) a property that associates two quantities in a definite order, as equality or inequality. B) a single- or multiple-valued function. (Dictionary.com) 16 attribute: Grammar. a word or phrase that is syntactically subordinate to another and serves to limit, identify, particularize, describe, or supplement the meaning of the form with which it is in construction. In the red house, red is an attribute of house, wherein describe is defined: to pronounce, as by a designating term, phrase, or the like; label (Dictionary.com) 17 model: a simplified representation or description of a system or complex entity, esp one designed to facilitate calculations and predictions (Dictionary.com) 18 -ing (of “limiting”): a suffix of nouns formed from verbs (limit), expressing the action (what is the action of limit of the claimed “limiting”? There is none. The claimed “limiting” is more directed to a result (i.e., the claimed “such that…”) than an action: contrast to claim 1’s “limit a similarity”) of the verb (limit) or its result (the result “such that” is in an un-satisfied contingent limitation), product, material, etc. (Dictionary.com) 19 maximum: Mathematics. a) Also called relative maximum. Also called local maximum. the value of a function at a certain point in its domain, which is greater than or equal to the values at all other points in the immediate vicinity of the point. b) the point in the domain at which a maximum occurs, wherein point is defined: a particular aim, end, or purpose, wherein end is defined: a point, line, or limitation that indicates the full extent, degree, etc., of something; limit; bounds, wherein limitation is defined: something that limits.
(Dictionary.com) 20 “once” indicates a contingent limitation (“the similarity is not further increased once the similarity between the object representations and the attribute representations exceeds a predetermined threshold”) that is not satisfied in method claim 11 and thus not part of the broadest reasonable interpretation of method claim 11. 21 past participle: a participial form of verbs (cause) used to modify a noun (“a degradation”) that is logically the object of a verb (“to prevent”), also used in certain compound tenses and passive forms of the verb in English and other languages (Dictionary.com) 22 Markush element of Markush alternatives follows: [(A) or (B)] 23 BROAD CLAIM LANGUAGE: collapse: a sudden, complete failure; breakdown, wherein breakdown is defined: an analysis or classification of something; division into parts, categories, processes, etc. (Dictionary.com) 24 Regarding the Markush element [(A) or (B)] in view of applicant’s disclosure [0110]: [0110] Hereinabove, although the present disclosure has been described with reference to exemplary embodiments and the accompanying drawings, the present disclosure is not limited thereto, but may be variously modified and altered by those skilled in the art to which the present disclosure pertains without departing from the spirit and scope of the present disclosure claimed (via said Markush element) in the following claims. Therefore, embodiments of the present disclosure are not intended to limit the technical spirit of the present disclosure, but provided only for the illustrative purpose. 
The scope of the present disclosure should be construed on the basis of the accompanying claims, and all the technical ideas within the scope (via said Markush alternative (A): collapse: classification of something: Dictionary.com) equivalent to the claims should be included in the scope (e.g., “The processor may classify”, applicant’s disclosure [0016]) of the present disclosure, wherein scope is defined: Linguistics, Logic. the range of words or elements (e.g., “The processor may classify”, applicant’s disclosure [0016] or any word up for grammatical modification in applicant’s disclosure) of an expression over which a modifier (e.g., a patent examiner) or operator (e.g., me) has control. (Dictionary.com) 25 Since Markush alternative (A) is taught the Markush element [(A) or (B)] (both alternatives have similar structure: junction as a joint) is taught under the broadest reasonable interpretation of claim 11. 26 ellipses (…) represent claim limitations already taught 27 (italics) represent claim limitations already taught 28 THE CLAIMED INVENTION AS A WHOLE: regarding “prevent” in view of applicant’s disclosure: The problem (not apparent in Liu) faced by applicants is: [0084] However, when the cosine similarity between the person representation and the attribute representation is greater than a predefined threshold, the training loss function 510 may allow the cosine similarity not to be increased any longer. This is to prevent a problem of inducing representations of two different persons with the same attribute to be more the same than necessary and causing degradation of re-identification performance, when the similarity between the person representation and the attribute representation is increased to the threshold or more. The solution to this problem is above. 
I don’t see “the training loss function…may allow the cosine similarity not to be increased any longer” in claim 11 under the broadest reasonable interpretation since the contingent limitation of method claim 11 does not limit claim 11. This non-limiting contingent limitation in method claim 11 under the broadest reasonable interpretation is an indication of obviousness. In contrast, see system claim 1’s contingent limitation that limits claim 1. 29 “degradation” further modified by the participial Markush phrase “caused by (A) collapse or (B) over-convergence”. 30 past participle: a participial form of verbs (cause) used to modify a noun (“a degradation”) that is logically the object of a verb (“to prevent”), also used in certain compound tenses and passive forms of the verb in English and other languages (Dictionary.com) 31 Markush element of Markush alternatives follows: [(A) or (B)] 32 BROAD CLAIM LANGUAGE: collapse: a sudden, complete failure; breakdown, wherein breakdown is defined: an analysis or classification of something; division into parts, categories, processes, etc. (Dictionary.com) 33 Since Markush alternative (A) is taught the Markush element [(A) or (B)] is taught under the broadest reasonable interpretation of claim 11. 34 complete: to bring to an end; finish, wherein end is defined: a point, line, or limitation that indicates the full extent, degree, etc., of something; limit; bounds. (Dictionary.com) 35 label: a word or phrase indicating that what follows belongs in a particular category or classification. (Dictionary.com) 36 BROAD CLAIM LANGUAGE: collapse: a sudden, complete failure; breakdown, wherein breakdown is defined: an analysis or classification of something; division into parts, categories, processes, etc. (Dictionary.com) 37 BROAD CLAIM LANGUAGE: collapse: a sudden, complete failure; breakdown, wherein breakdown is defined: an analysis or classification of something; division into parts, categories, processes, etc. (Dictionary.com)
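The threshold-capped similarity that applicant's paragraph [0084] describes (quoted in footnote 28 above) can be illustrated as a clamped cosine-similarity loss: once the similarity between an object representation and an attribute representation exceeds the predetermined threshold, the clamped term is constant, so training no longer pushes the two representations closer. This is only an illustrative sketch, not the applicant's actual training loss; the function names and the 0.8 threshold are assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two representation vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def capped_similarity_loss(obj_repr, attr_repr, threshold=0.8):
    """Loss that rewards similarity only up to a predetermined threshold.

    Once cos(obj, attr) exceeds the threshold, min() clamps the value,
    so the loss stops decreasing and the similarity "is not further
    increased" -- the behavior described in [0084].
    """
    sim = cosine_similarity(obj_repr, attr_repr)
    return 1.0 - min(sim, threshold)
```

With `threshold=0.8`, identical vectors (similarity 1.0) and vectors at similarity 0.8 produce the same loss of 0.2, so nothing is gained by pushing already-similar representations closer together.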

Prosecution Timeline

Oct 31, 2022
Application Filed
Dec 08, 2024
Non-Final Rejection — §103
Feb 17, 2025
Response Filed
Mar 28, 2025
Final Rejection — §103
May 14, 2025
Request for Continued Examination
May 15, 2025
Response after Non-Final Action
May 23, 2025
Non-Final Rejection — §103
Aug 07, 2025
Response Filed
Oct 21, 2025
Final Rejection — §103
Jan 07, 2026
Request for Continued Examination
Jan 23, 2026
Response after Non-Final Action
Feb 19, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586184
METHODS AND APPARATUS FOR ANALYZING PATHOLOGY PATTERNS OF WHOLE-SLIDE IMAGES BASED ON GRAPH DEEP LEARNING
2y 5m to grant Granted Mar 24, 2026
Patent 12585733
SYSTEMS AND METHODS OF SENSOR DATA FUSION
2y 5m to grant Granted Mar 24, 2026
Patent 12536786
IMAGE LOCALIZATION USING A DIGITAL TWIN REPRESENTATION OF AN ENVIRONMENT
2y 5m to grant Granted Jan 27, 2026
Patent 12518519
PREDICTOR CREATION DEVICE AND PREDICTOR CREATION METHOD
2y 5m to grant Granted Jan 06, 2026
Patent 12518404
SYSTEMS AND METHODS FOR MACHINE LEARNING BASED PHYSIOLOGICAL MOTION MEASUREMENT
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
69%
Grant Probability
98%
With Interview (+28.6%)
3y 8m
Median Time to Grant
High
PTA Risk
Based on 557 resolved cases by this examiner. Grant probability derived from career allow rate.
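The projection figures above follow from simple arithmetic on the examiner's career data shown earlier on this page: 385 grants over 557 resolved cases gives the 69% baseline, and adding the +28.6-point interview lift yields the 98% shown. A quick check (treating the lift as additive in percentage points, which is an assumption about this page's methodology):

```python
# Career totals for this examiner, as reported on this page.
granted, resolved = 385, 557
baseline = granted / resolved            # career allow rate
interview_lift = 0.286                   # lift observed in resolved cases with interview
with_interview = baseline + interview_lift

print(round(baseline * 100))             # -> 69
print(round(with_interview * 100))       # -> 98
```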
