Prosecution Insights
Last updated: April 19, 2026
Application No. 17/689,043

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM

Final Rejection (§101, §102)
Filed: Mar 08, 2022
Examiner: SACKALOSKY, COREY MATTHEW
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: NEC Corporation
OA Round: 2 (Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 4y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 16 granted / 25 resolved; +9.0% vs TC avg)
Interview Lift: +49.4% (strong; allowance rate for resolved cases with an interview vs. without)
Avg Prosecution: 4y 2m (typical timeline; 39 currently pending)
Total Applications: 64 (career history, across all art units)

Statute-Specific Performance

§101: 42.0% (+2.0% vs TC avg)
§103: 38.0% (-2.0% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 25 resolved cases
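The headline figures above are simple ratios of the reported counts. As a sanity check, here is a minimal sketch reproducing the career allow rate and TC delta; the 55.0% Tech Center baseline is an assumption inferred from the stated "+9.0% vs TC avg" delta, not a figure from the source:

```python
# Sanity-check the dashboard arithmetic from the reported counts.
granted = 16       # granted cases among resolved (from the stat card)
resolved = 25      # total resolved cases (from the stat card)

allow_rate = 100.0 * granted / resolved   # career allow rate, percent
tc_avg = 55.0                             # ASSUMED baseline implied by "+9.0% vs TC avg"
delta_vs_tc = allow_rate - tc_avg         # signed delta vs Tech Center average

print(f"Career allow rate: {allow_rate:.0f}%")    # prints "Career allow rate: 64%"
print(f"Delta vs TC avg: {delta_vs_tc:+.1f}%")    # prints "Delta vs TC avg: +9.0%"
```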

Office Action

§101 §102
DETAILED ACTION

This Office Action is in response to the amendments filed on 08/01/2025. Claims 1, 7, and 8 are currently amended. Claims 1-8 are pending in this application and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

In reference to Applicant’s arguments on pages 7-10 regarding rejections made under 35 U.S.C. 101: Claims 1-8 were rejected under 35 U.S.C. § 101 because they are directed to an abstract idea without significantly more. Applicant respectfully traverses. Applicant respectfully submits that claim 1 is patent eligible under Prong Two of the revised Step 2A of the Alice test. In Prong Two, examiners evaluate whether the claim recites additional elements that integrate the exception into a practical application of that exception. If the recited exception is integrated into a practical application of the exception, then the claim is eligible at Prong Two of revised Step 2A. Even if it is assumed claim 1 recites a judicial exception, which Applicant does not concede, Applicant respectfully submits that the claim is patent eligible under Prong Two of the revised Step 2A of the Alice test because the claim integrates the alleged judicial exception into a practical application. MPEP 2106.04(II)(A)(2) provides that in "Prong Two, examiners evaluate whether the claim as a whole integrates the exception into a practical application of that exception. If the additional elements in the claim integrate the recited exception into a practical application of the exception, then the claim is not directed to the judicial exception."
Without any admissions and solely in an effort to expedite prosecution of the present application, amended claim 1 recites "generating a correspondence relation between a position of each of the first target objects in each piece of the inference target data and a position of each of the first target objects in the aggregated data; applying the aggregated data to a second learned model to infer the first target object as secondary inference; inferring each of the first target objects in each piece of the inference target data by using each of the first target objects in a result of the secondary inference and the correspondence relation; and displaying each of the first target objects in each piece of the inference target data inferred by using each of the first target objects in a result of the secondary inference and the correspondence relation to a display device." Accordingly, claim 1 integrates any possible judicial exception into a practical application of any alleged exception, and is therefore patent eligible under Prong Two of the revised Step 2A of the Alice test.

Moreover, Applicant respectfully submits that even if it is assumed the claim is directed to an abstract idea, which is not conceded, independent claim 1 recites significantly more than any allegedly abstract idea. In particular, MPEP 2106.05(I)(A)(v) indicates that, in evaluating Step 2B, an additional element or combination of elements that "[adds] a specific limitation other than what is well-understood, routine, conventional activity in the field, or adding unconventional steps that confine the claim to a particular useful application" has been found to qualify as "significantly more" when recited in a claim with a judicial exception. Applicant submits that claim 1, as amended, provides an "inventive concept," and does not simply append well-understood, routine or conventional activities.
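The limitations quoted above describe a two-stage inference pipeline: a first learned model runs on each piece of inference target data, the detections are aggregated into a smaller data set with a recorded correspondence relation, a second learned model runs on the aggregate, and the correspondence relation maps its results back to the original images for display. A minimal sketch of that flow, in which the stub functions stand in for the learned models and every name is illustrative rather than taken from the application:

```python
# Illustrative sketch of the claimed two-stage inference flow.
# The "models" are stubs; real first/second learned models would be
# trained networks. All names here are hypothetical.

def primary_inference(images):
    """Stub first learned model: one detected object position per image."""
    return [(i, (10 * i, 20 * i)) for i, _ in enumerate(images)]

def aggregate(detections):
    """Pack per-image detections into one aggregate, recording a
    correspondence relation: aggregate index -> (image index, position)."""
    aggregated = []
    correspondence = {}
    for agg_idx, (img_idx, pos) in enumerate(detections):
        aggregated.append(pos)
        correspondence[agg_idx] = (img_idx, pos)
    return aggregated, correspondence

def secondary_inference(aggregated):
    """Stub second learned model: refine each aggregated detection."""
    return [{"pos": pos, "label": "object"} for pos in aggregated]

def map_back(results, correspondence):
    """Use the correspondence relation to attribute refined results
    back to their source images for display."""
    per_image = {}
    for agg_idx, result in enumerate(results):
        img_idx, orig_pos = correspondence[agg_idx]
        per_image.setdefault(img_idx, []).append((orig_pos, result["label"]))
    return per_image

images = ["img0", "img1", "img2"]
dets = primary_inference(images)
agg, corr = aggregate(dets)
refined = secondary_inference(agg)
display = map_back(refined, corr)
print(display)
# {0: [((0, 0), 'object')], 1: [((10, 20), 'object')], 2: [((20, 40), 'object')]}
```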
For at least these reasons discussed above, Applicant respectfully submits that claim 1, and similarly claims 7 and 8, are directed to patent eligible subject matter. Claims 2-6 are patentable at least by virtue of their dependencies.

Examiner’s response: Applicant’s arguments have been fully considered but are not found persuasive in light of the amendments made to the claims. Applicant states that the amended claims are eligible under Prong Two of the revised Step 2A of the Alice test; however, Applicant presents only the amended claims and no reasoning as to why the amendments make the claims eligible, other than a recitation of the appropriate MPEP section regarding the Alice test. Without any substantive arguments as to why the amended claims are now eligible, the claims remain rejected. In light of the amendments made to the claims, the rejections made under 35 U.S.C. 101 are maintained and updated below.

In reference to Applicant’s arguments on pages 10-12 regarding rejections made under 35 U.S.C. 102: Claims 1-5, 7, and 8 are rejected under 35 U.S.C. § 102 as allegedly being anticipated by Yao (US 11,244,191). Applicant respectfully traverses the rejection. Yao discloses inferring multiple target objects within a single image, as illustrated in FIGS. 6 and 7 of Yao.
However, Yao does not teach or suggest "generating aggregated data that is data having a smaller quantity than the inference target data by using a plurality of the first target objects inferred in the primary inference; generating a correspondence relation between a position of each of the first target objects in each piece of the inference target data and a position of each of the first target objects in the aggregated data; applying the aggregated data to a second learned model to infer the first target object as secondary inference; inferring each of the first target objects in each piece of the inference target data by using each of the first target objects in a result of the secondary inference and the correspondence relation; and displaying each of the first target objects in each piece of the inference target data inferred by using each of the first target objects in a result of the secondary inference and the correspondence relation to a display device," as recited in claim 1. Accordingly, independent claim 1 is patentable over the cited references because the cited references do not disclose each and every feature of the claim. To the extent independent claims 7 and 8 recite features similar to those discussed above with respect to claim 1, Applicant respectfully submits claims 7 and 8 are patentable over the cited references for similar reasons. Claims 2-6 are patentable at least by virtue of their dependencies.

Examiner’s response: Applicant’s arguments have been fully considered but are moot in light of the amendments made to the claims. In light of the amendments made to the independent claims, the rejections made under 35 U.S.C. 102 are withdrawn and a new ground of rejection is presented below.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 are rejected under 35 U.S.C. 101 because they are directed to an abstract idea without significantly more.

Step 1 analysis: Independent Claim 1 recites, in part, an information processing device, therefore falling into the statutory category of machine. Independent Claim 7 recites, in part, an information processing method, therefore falling into the statutory category of process. Independent Claim 8 recites, in part, a non-transitory computer-readable recording medium embodying a program, therefore falling into the statutory category of machine.

Regarding Claim 1:

Step 2A: Prong 1 analysis: Claim 1 recites in part: “generating aggregated data that is data having a smaller quantity than the inference target data by using a plurality of the first target objects inferred in the primary inference”. As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper. For example, this limitation encompasses generating data that is smaller in quantity than calculated data.

“generating a correspondence relation between a position of each of the first target objects in each piece of the inference target data and a position of each of the first target objects in the aggregated data”. As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper. For example, this limitation encompasses generating corresponding data in two data sets.
“inferring each of the first target objects in each piece of the inference target data by using each of the first target objects in a result of the secondary inference and the correspondence relation”. As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper. For example, this limitation encompasses inferring a target object based on corresponding data.

Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.

Step 2A: Prong 2 analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of:

“a memory”. This additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (a memory) (See MPEP 2106.05(f)).

“at least one processor coupled to the memory”. This additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (a processor) (See MPEP 2106.05(f)).

“applying each piece of the inference target data to a first learned model to infer the first target object as primary inference”. This additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (machine learning model) (See MPEP 2106.05(f)).

“applying the aggregated data to a second learned model to infer the first target object as secondary inference”. This additional element is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component (machine learning model) (See MPEP 2106.05(f)).
“displaying each of the first target objects in each piece of the inference target data inferred by using each of the first target objects in a result of the secondary inference and the correspondence relation to a display device”. This additional element is recited at a high level of generality and amounts to extra-solution activity of outputting/displaying data.

Accordingly, at Step 2A: Prong 2, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the additional elements of “a memory”, “at least one processor coupled to the memory”, “applying each piece of the inference target data to a first learned model to infer the first target object as primary inference”, and “applying the aggregated data to a second learned model to infer the first target object as secondary inference” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components (See MPEP 2106.05(f)). The additional element of “displaying each of the first target objects in each piece of the inference target data inferred by using each of the first target objects in a result of the secondary inference and the correspondence relation to a display device” is recited at a high level of generality and amounts to extra-solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data output (see MPEP 2106.05(g)). The courts have similarly found limitations directed to displaying/outputting a result, recited at a high level of generality, to be well-understood, routine, and conventional.
See MPEP 2106.05(d)(II) ("presenting offers and gathering statistics", "determining an estimated outcome and setting a price"). Accordingly, at Step 2B, the additional elements individually or in combination do not amount to significantly more than the judicial exception.

Regarding Claim 2:

Step 2A: Prong 1 analysis: Claim 2 recites in part: “inferring a second target object having a predetermined positional relationship with the first target object”. As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper. For example, this limitation encompasses guessing which second object is positionally related to a first object.

“generating the aggregated data and the correspondence relation by using the first target object and the second target object”. As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper. For example, this limitation encompasses generating data using two different target objects.

Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.

Step 2A: Prong 2 analysis: The claim does not recite any additional elements that integrate the judicial exception into a practical application.

Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding Claim 3:

Step 2A: Prong 1 analysis: Claim 3 recites in part: “using, among the second target objects included in a result of the primary inference, the second target object in which the first target object in the positional relationship is not included in the result of the primary inference”.
As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper. For example, this limitation encompasses using a second object that is not positionally related to the first.

Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.

Step 2A: Prong 2 analysis: The claim does not recite any additional elements that integrate the judicial exception into a practical application.

Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding Claim 4:

Step 2A: Prong 1 analysis: Claim 4 recites in part: “executing predetermined processing on at least a part of the first target object included in the result of the primary inference before generating the aggregated data.” As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper. For example, this limitation encompasses performing a predetermined data process before any data has been aggregated.

Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.

Step 2A: Prong 2 analysis: The claim does not recite any additional elements that integrate the judicial exception into a practical application.

Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding Claim 5:

Step 2A: Prong 1 analysis: Claim 5 recites in part: “generating a first learning data set used for learning of the first learned model by using a first data set”.
As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper. For example, this limitation encompasses generating a dataset using a given dataset.

“generating a second learning data set used for learning of the second learned model by using at least one of the first data set and the first learning data set”. As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper. For example, this limitation encompasses generating a dataset using a given dataset.

Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.

Step 2A: Prong 2 analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of:

“generating the first learned model by using the first learning data set”. This additional element is recited at a high level of generality such that the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished.

“generating the second learned model by using the second learning data set”. This additional element is recited at a high level of generality such that the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished.

Accordingly, at Step 2A: Prong 2, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the additional elements of “generating the first learned model by using the first learning data set” and “generating the second learned model by using the second learning data set” are recited at a high level of generality such that the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished (See MPEP 2106.05(f)). Accordingly, at Step 2B, the additional elements individually or in combination do not amount to significantly more than the judicial exception.

Regarding Claim 6:

Step 2A: Prong 1 analysis: Claim 6 recites in part: “switching at least one of the primary inference, the secondary inference, and data aggregation based on a predetermined load or throughput in the information processing device”. As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgement, or opinion) or with the aid of pencil and paper. For example, this limitation encompasses switching data based on physical device limitations.

Accordingly, at Step 2A: Prong 1, the claim is directed to an abstract idea.

Step 2A: Prong 2 analysis: The claim does not recite any additional elements that integrate the judicial exception into a practical application.

Step 2B analysis: In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Regarding Claim 7: Due to claim language similar to that of Claim 1, Claim 7 is rejected for the same reasons as presented above in the rejection of Claim 1.

Regarding Claim 8: Due to claim language similar to that of Claims 1 and 7, Claim 8 is rejected for the same reasons as presented above in the rejection of Claims 1 and 7.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-5, 7, and 8 is/are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Yao et al (US 11244191 B2, hereinafter Yao).

Regarding Claim 1:

Yao teaches An information processing device comprising: a memory (Yao [Col 10 lines 49-56]: "computing device 100 may include other components that may or may not be physically and electrically coupled to the board 2. These other components include, but are not limited to, volatile memory (e.g., DRAM) 8, non-volatile memory (e.g., ROM) 9, flash memory (not shown), a graphics processor 12, a digital signal processor (not shown), a crypto processor (not shown)"); at least one processor coupled to the memory, the processor performing operations, the operations comprising (Yao [Col 10 lines 49-56]: "computing device 100 may include other components that may or may not be physically and electrically coupled to the board 2.
These other components include, but are not limited to, volatile memory (e.g., DRAM) 8, non-volatile memory (e.g., ROM) 9, flash memory (not shown), a graphics processor 12, a digital signal processor (not shown), a crypto processor (not shown)"): acquiring a plurality of images as inference target data in which at least part of data includes a first target object from a camera (Yao [Col 6 lines 38-41]: "The process of FIG. 2 starts at 202 with pre-training a deep CNN model, such as the VGGNet model (proposed by the Visual Geometry Group of University of Oxford), using a large scale object classification dataset such as ImageNet."); applying each piece of the inference target data to a first learned model to infer the first target object as primary inference (Yao [Col 7 lines 52-55]: "As described herein, embodiments use a Hyper Feature which combines the feature maps from multiple convolutional layers of a pre-trained CNN model to represent image or region content."); generating aggregated data that is data having a smaller quantity than the inference target data by using a plurality of the first target objects inferred in the primary inference (Yao [Col 9 lines 59-60]: "at 806 multiple different layers of convolution are performed on the image to generate feature maps."; [Col 10 lines 19-21]: "at 810 the reshaped feature maps are grouped together by sequential concatenation to form a single combined feature map."; (EN): it can be seen in Figs 6 and 7 that the region proposals for some objects rely on smaller amounts of data than others; Fig. 8 is a process flow diagram of region generation and object detection ); generating a correspondence relation between a position of each of the first target objects in each piece of the inference target data and a position of each of the first target objects in the aggregated data (Yao [Col 10 lines 21-29]: "at 812 region proposals are generated using the combined feature map by scoring bounding box regions of image. 
the region proposals may be generated by first generating a score for the detection of an object and then generating bounding box regression. the score and regression may then be combined with the combined feature map to generate region proposals. the bounding box regression may include location offsets for objects in the combined feature map."; (EN): it can be seen in Figs 6 and 7 that the number of region proposals created as a result of the first model is decreased and more accurately surround the detected objects); applying the aggregated data to a second learned model to infer the first target object as secondary inference (Yao [Col 10 lines 21-29]: "at 812 region proposals are generated using the combined feature map by scoring bounding box regions of image. the region proposals may be generated by first generating a score for the detection of an object and then generating bounding box regression. the score and regression may then be combined with the combined feature map to generate region proposals. the bounding box regression may include location offsets for objects in the combined feature map."; (EN): it can be seen in Fig 1 that there are 2 models being used, one model for region proposal and one model for detecting objects within the proposed regions); inferring each of the first target objects in each piece of the inference target data by using each of the first target objects in a result of the secondary inference and the correspondence relation (Yao [Col 10 lines 21-29]: "at 812 region proposals are generated using the combined feature map by scoring bounding box regions of image. the region proposals may be generated by first generating a score for the detection of an object and then generating bounding box regression. the score and regression may then be combined with the combined feature map to generate region proposals. 
the bounding box regression may include location offsets for objects in the combined feature map."; (EN): it can be seen in Fig 1 that there are 2 models being used, one model for region proposal and one model for detecting objects within the proposed regions). displaying each of the first target objects in each piece of the inference target data inferred by using each of the first target objects in a result of the secondary inference and the correspondence relation to a display device (Yao [Col 11 lines 26-34]: "The cameras 32 are coupled to an image processing chip 36 to perform format conversion, coding and decoding, region proposal generation and object detection and classification as described herein. The processor 4 is coupled to the image processing chip to drive the processes, set parameters, etc. The display is coupled to the processors to show the proposed regions and detected and classified objects as shown in FIGS. 1, 6 and 7.")

Regarding Claim 2:

Yao teaches The information processing device according to claim 1, wherein the operations further comprise: inferring a second target object having a predetermined positional relationship with the first target object (Yao [Col 10 lines 21-29]: "at 812 region proposals are generated using the combined feature map by scoring bounding box regions of image. the region proposals may be generated by first generating a score for the detection of an object and then generating bounding box regression. the score and regression may then be combined with the combined feature map to generate region proposals.
the bounding box regression may include location offsets for objects in the combined feature map."; (EN): it can be seen in Figs 6 and 7 that the region proposals created as a result of the first model are decreased and more accurately surround the detected objects) generating the aggregated data and the correspondence relation by using the first target object and the second target object (Yao [Col 9 lines 59-60]: "at 806 multiple different layers of convolution are performed on the image to generate feature maps."; [Col 10 lines 19-21]: "at 810 the reshaped feature maps are grouped together by sequential concatenation to form a single combined feature map."; (EN): it can be seen in Figs 6 and 7 that the region proposals for some objects rely on smaller amounts of data than others; Fig. 8 is a process flow diagram of region generation and object detection).

Regarding Claim 3:

Yao teaches The information processing device according to claim 2, wherein the operations further comprise: using, among the second target objects included in a result of the primary inference, the second target object in which the first target object in the positional relationship is not included in the result of the primary inference (Yao [Col 10 lines 21-29]: "at 812 region proposals are generated using the combined feature map by scoring bounding box regions of image. the region proposals may be generated by first generating a score for the detection of an object and then generating bounding box regression. the score and regression may then be combined with the combined feature map to generate region proposals. the bounding box regression may include location offsets for objects in the combined feature map."; (EN): it can be seen in Figs 6 and 7 that not all of the region proposals created as a result of the first model are used in the object detection results).
Regarding Claim 4:

Yao teaches The information processing device according to claim 1, wherein the operations further comprise: executing predetermined processing on at least a part of the first target object included in the result of the primary inference before generating the aggregated data (Yao [Col 9 lines 59-60]: "at 806 multiple different layers of convolution are performed on the image to generate feature maps."; (EN): Fig 8 shows at steps 806 and 808 that some preprocessing is performed on the data before the feature maps are aggregated in step 810)

Regarding Claim 5:

Yao teaches The information processing device according to claim 1, wherein the operations further comprise: generating a first learning data set used for learning of the first learned model by using a first data set (Yao [Col 7 lines 14-24]: "the data for training and testing the region proposal HyperNet are collected as follows: (1) for region proposal scoring, samples having >0.45 Intersection over Union (IoU) with the ground truth are selected as positives, and negative samples have <0.3 IoU; (2) for BBR, positive samples have >0.4 IoU and negative samples also have <0.3 IoU; (3) the ratio of positive samples to negative samples is 1:3; (4) region proposal samples are generated with sliding windows, we use 6 sizes (W, H) ∈ {10, 20, 40, 80, 160, 320}, with 3 aspect ratios r ∈ {½, 1, 2}.
Totally, about 20K region proposals are generated in each input image."); generating a second learning data set used for learning of the second learned model by using at least one of the first data set and the first learning data set (Yao [Col 7 lines 25-28]: "The data for training and testing the object detection HyperNet is directly collected as the top 200 proposals (with descending Objectness scores) obtained from a region proposal HyperNet training model"); generating the first learned model by using the first learning data set (Yao [Col 7 lines 14-24]: "the data for training and testing the region proposal HyperNet are collected as follows: (1) for region proposal scoring, samples having >0.45 Intersection over Union (IoU) with the ground truth are selected as positives, and negative samples have <0.3 IoU; (2) for BBR, positive samples have >0.4 IoU and negative samples also have <0.3 IoU; (3) the ratio of positive samples to negative samples is 1:3; (4) region proposal samples are generated with sliding windows, we use 6 sizes (W, H) ∈ {10, 20, 40, 80, 160, 320}, with 3 aspect ratios r ∈ {½, 1, 2}. Totally, about 20K region proposals are generated in each input image."; (EN): the use of data for "training and testing" is analogous to generating a learned model); generating the second learned model by using the second learning data set (Yao [Col 7 lines 25-28]: "The data for training and testing the object detection HyperNet is directly collected as the top 200 proposals (with descending Objectness scores) obtained from a region proposal HyperNet training model"; (EN): the use of data for "training and testing" is analogous to generating a learned model).

Regarding Claim 7: Due to claim language similar to that of Claim 1, Claim 7 is rejected for the same reasons as presented above in the rejection of Claim 1.
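The Yao passages quoted above select training samples by Intersection over Union (IoU) thresholds (for region proposal scoring, >0.45 IoU positive and <0.3 IoU negative). IoU itself is the standard overlap ratio of two boxes; a minimal sketch of that computation, with the box format and function names chosen here purely for illustration:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle dimensions (zero if the boxes do not overlap).
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union > 0 else 0.0

def scoring_label(proposal, ground_truth):
    """Label a region proposal per the thresholds quoted from Yao for
    region-proposal scoring: >0.45 positive, <0.3 negative."""
    overlap = iou(proposal, ground_truth)
    if overlap > 0.45:
        return "positive"
    if overlap < 0.3:
        return "negative"
    return "ignored"  # between the thresholds: not sampled

print(scoring_label((0, 0, 10, 10), (0, 0, 10, 10)))    # prints "positive" (IoU = 1.0)
print(scoring_label((0, 0, 10, 10), (50, 50, 60, 60)))  # prints "negative" (IoU = 0.0)
```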
Regarding Claim 8: Due to claim language similar to that of Claims 1 and 7, Claim 8 is rejected for the same reasons as presented above in the rejection of Claims 1 and 7.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 20210271912 A1: object detection using multiple neural network configurations
US 11100316 B2: automated activity recognition, in particular an automated driver assistance system
US 20190147372 A1: systems, methods, tangible non-transitory computer-readable media, and devices for object detection, tracking, and motion prediction

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to COREY M SACKALOSKY, whose telephone number is (703) 756-1590. The examiner can normally be reached M-F 7:30am-3:30pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez Rivas, can be reached on (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/COREY M SACKALOSKY/
Examiner, Art Unit 2128

/OMAR F FERNANDEZ RIVAS/
Supervisory Patent Examiner, Art Unit 2128

Prosecution Timeline

Mar 08, 2022: Application Filed
Apr 22, 2025: Non-Final Rejection — §101, §102
Jul 10, 2025: Applicant Interview (Telephonic)
Jul 14, 2025: Examiner Interview Summary
Aug 01, 2025: Response Filed
Oct 24, 2025: Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596932: METHOD AND SYSTEM FOR DEPLOYMENT OF PREDICTION MODELS USING SKETCHES GENERATED THROUGH DISTRIBUTED DATA DISTILLATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591759: PARALLEL AND DISTRIBUTED PROCESSING OF PROPOSITIONAL LOGICAL NEURAL NETWORKS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12572441: FULLY UNSUPERVISED PIPELINE FOR CLUSTERING ANOMALIES DETECTED IN COMPUTERIZED SYSTEMS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12518197: INCREMENTAL LEARNING WITHOUT FORGETTING FOR CLASSIFICATION AND DETECTION MODELS (granted Jan 06, 2026; 2y 5m to grant)
Patent 12487763: METHOD AND APPARATUS WITH MEMORY MANAGEMENT AND NEURAL NETWORK OPERATION (granted Dec 02, 2025; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 64%
With Interview: 99% (+49.4%)
Median Time to Grant: 4y 2m
PTA Risk: Moderate
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
