Prosecution Insights
Last updated: April 19, 2026
Application No. 18/248,391

DNN CONTRACTION DEVICE AND ONBOARD COMPUTATION DEVICE

Status: Final Rejection (§103)
Filed: Apr 10, 2023
Examiner: MAIDO, MAGGIE T
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: Hitachi Astemo, Ltd.
OA Round: 2 (Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 3m
Grant Probability with Interview: 85%

Examiner Intelligence

Career Allow Rate: 64% (grants 23 of 36 resolved cases; +8.9% vs TC avg)
Interview Lift: +20.7% (strong; allow rate among resolved cases with vs. without an interview)
Avg Prosecution: 4y 3m typical timeline; 51 applications currently pending
Total Applications: 87 (career history across all art units)
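The headline numbers on these cards are simple ratios over the examiner's resolved cases. A minimal sketch of how they can be reproduced from raw outcomes (only the 23 granted / 36 resolved career totals come from this page; the with/without-interview split below is hypothetical):

```python
# Reconstructing the examiner stat cards from case outcomes.
# Career totals (23 granted / 36 resolved) are from this page;
# the interview split below is hypothetical.

granted, resolved = 23, 36
allow_rate = granted / resolved  # ~0.639, shown as "64% Career Allow Rate"

# Interview lift: allow rate among resolved cases with an interview,
# minus the allow rate among resolved cases without one.
granted_w, resolved_w = 12, 15    # hypothetical split (with interview)
granted_wo, resolved_wo = 11, 21  # hypothetical split (without)
lift = granted_w / resolved_w - granted_wo / resolved_wo

print(f"Career allow rate: {allow_rate:.1%}")  # Career allow rate: 63.9%
print(f"Interview lift: {lift:+.1%}")
```

The same division explains why the card rounds 63.9% up to "64%"; the lift figure depends entirely on how the resolved cases split by interview, which this page does not disclose.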

Statute-Specific Performance

§101: 25.6% (-14.4% vs TC avg)
§103: 56.1% (+16.1% vs TC avg)
§102: 2.6% (-37.4% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 36 resolved cases.

Office Action

§103
DETAILED ACTION

Response to Amendment

The amendment filed on 4 March 2026 has been entered. Claims 1-9 were pending; claim 2 is cancelled and claims 1 and 3 are amended, so claims 1 and 3-9 remain pending. Applicant's amendments to the claims have overcome each and every objection previously set forth in the Non-Final Office Action mailed 18 December 2025.

Response to Arguments

Applicant's remarks regarding the rejections of claims under 35 U.S.C. 103 have been fully considered. Applicant notes that amended claim 1 recites a deep neural network (DNN) contraction device that "outputs a contracted DNN to a DNN computation unit that performs a DNN computation using an internal memory," and comprises "an output data size measurement unit that measures, in an iteration, an output data size in at least one DNN layer of a plurality of DNN layers from DNN network information; and a data contraction unit that: sets, in the iteration, a contraction number of the at least one DNN layer based on the output data size and a memory size of the internal memory such that the output data size in the at least one DNN layer is equal to or less than the memory size of the internal memory, and contracts, in the iteration, a DNN according to the set contraction number." Applicant submits that the cited references, alone or in any combination, do not teach or suggest the subject matter of the independent claim. These arguments are moot, however, because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1 and 3-7 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (U.S. Pre-Grant Publication No. 2020/0226451, hereinafter "Liu"), in view of Yang et al. (NPL: "NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications", hereinafter "Yang").
Regarding claim 1, Liu teaches A deep neural network (DNN) contraction device that outputs a contracted DNN to a DNN computation unit that performs a DNN computation using an internal memory, the DNN contraction device comprising ([0053] FIG. 2 illustrates layer contraction of a neural network 210, according to one or more embodiments.): a data contraction unit that sets a contraction number of the DNN layer based on the output data size and a memory size of the internal memory ([0079] In operations 404 and 405, for layer contraction through approximation of the inference process in the operation 403, the processor 110 may determine a data contraction unit that sets a contraction number of the DNN layer layer contraction parameters that define an affine transformation relationship between the input layer and the output layer. In other words, the processor 110 may determine a single weight matrix Q denoting a weight W described in Equation 5, a bias vector q denoting a bias b, and a binary mask m. The binary mask m may be a vector for performing activation masking by replacing an operation of the activation function performed in each of the hidden layers.) Liu fails to teach an output data size measurement unit that measures, in an iteration, an output data size in at least one DNN layer of a plurality of DNN layers from DNN network information; and a data contraction unit that: sets, in the iteration, a contraction number of the at least one DNN layer based on the output data size and a memory size of the internal memory such that the output data size in the at least one DNN layer is equal to or less than the memory size of the internal memory, and contracts, in the iteration, a DNN according to the set contraction number. Yang teaches an output data size measurement unit that measures, in an iteration, an output data size in at least one DNN layer of a plurality of DNN layers from DNN network information ([3.4 Fast Resource Consumption Estimation, pg. 
7-8] As mentioned in Sec. 3.3, NetAdapt uses empirical measurements to determine the number of filters to keep in a layer given the resource constraint. We solve this problem by building in an iteration layer-wise look-up tables with pre-measured output data size measurement unit that measures resource consumption of an output data size in at least one DNN layer of a plurality of DNN layers from DNN network information each layer. When executing the algorithm, we look up the table of each layer, and sum up the layer-wise measurements to estimate the network-wise resource consumption, which is illustrated in Fig. 3.); and a data contraction unit that: sets, in the iteration, a contraction number of the at least one DNN layer based on the output data size and a memory size of the internal memory such that the output data size in the at least one DNN layer is equal to or less than the memory size of the internal memory ([3.1 Problem Formulation, pg. 4-5] NetAdapt aims to solve the following non-convex constrained problem: maximize over Net of Acc(Net) subject to Resj(Net) ≤ Budj, j = 1,...,m, (1) where Net is a simplified network from the initial pretrained network, Acc(·) computes the accuracy, Resj(·) evaluates the direct metric for resource consumption of the jth resource, and Budj is the budget of the jth resource and the constraint on the optimization. The resource can be latency, energy, memory footprint, etc., or a combination of these metrics. Based on an idea similar to progressive barrier methods [1], NetAdapt breaks this problem into the following series of easier problems and solves it iteratively: maximize over Neti of Acc(Neti) subject to Resj(Neti) ≤ Resj(Neti−1) − ∆Ri,j, j = 1,...,m, (2) where Neti is the network generated by the ith iteration, and Net0 is the initial pretrained network. 
As the number of iterations increases, the a data contraction unit that: sets, in the iteration, a contraction number of the at least one DNN layer constraints (i.e., the current resource budget Resj(Neti−1) − ∆Ri,j) gradually become tighter. ∆Ri,j, which is larger than zero, indicates how much the constraint tightens for the jth resource in the ith iteration and can vary from iteration to iteration. This is referred to as the “resource reduction schedule”, which is similar to the concept of learning rate schedule. The algorithm terminates when Resj(Neti−1) − ∆Ri,j is based on the output data size and a memory size of the internal memory such that the output data size in the at least one DNN layer is equal to or less than the memory size of the internal memory equal to or smaller than Budj for every resource type. It outputs the final adapted network and can also generate a sequence of simplified networks (i.e., the highest accuracy network from each iteration Net1,...,Neti) to provide the efficient frontier of accuracy and resource consumption trade-offs.), and contracts, in the iteration, a DNN according to the set contraction number ([3.2 Algorithm Overview, pg. 6] Fig. 2. This figure visualizes the according to the set contraction number algorithm flow of NetAdapt. At in the iteration each iteration, NetAdapt decreases the resource consumption by simplifying (i.e., contracts a DNN removing filters from) one layer. In order to maximize accuracy, it tries to simplify each layer individually and picks the simplified network that has the highest accuracy. Once the target budget is met, the chosen network is then fine-tuned again until convergence.). Liu and Yang are considered to be analogous to the claimed invention because they are in the same field of machine learning. 
In view of the teachings of Liu, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Yang to Liu before the effective filing date of the claimed invention in order to automatically and progressively simplify a pre-trained network until the resource budget is met while maximizing the accuracy (cf. Yang, [Abstract, pg. 1] This work proposes an algorithm, called NetAdapt, that automatically adapts a pre-trained deep neural network to a mobile platform given a resource budget. While many existing algorithms simplify networks based on the number of MACs or weights, optimizing those indirect metrics may not necessarily reduce the direct metrics, such as latency and energy consumption. To solve this problem, NetAdapt incorporates direct metrics into its adaptation algorithm. These direct metrics are evaluated using empirical measurements, so that detailed knowledge of the platform and toolchain is not required. NetAdapt automatically and progressively simplifies a pre-trained network until the resource budget is met while maximizing the accuracy. Experiment results show that NetAdapt achieves better accuracy versus latency trade-offs on both mobile CPU and mobile GPU, compared with the state-of-the-art automated network simplification algorithms. For image classification on the ImageNet dataset, NetAdapt achieves up to a 1.7× speedup in measured inference latency with equal or higher accuracy on MobileNets (V1&V2).).

Regarding claim 3, Liu, as modified by Yang, teaches The DNN contraction device of claim 1. Yang teaches wherein the data contraction unit includes: a contraction number setting unit that sets the contraction number of the at least one DNN layer such that the output data size is an integral multiple of the memory size of the internal memory ([3.1 Problem Formulation, pg. 4-5] NetAdapt aims to solve the following non-convex constrained problem: maximize over Net of Acc(Net) subject to Resj(Net) ≤ Budj, j = 1,...,m, (1) where Net is a simplified network from the initial pretrained network, Acc(·) computes the accuracy, Resj(·) evaluates the direct metric for resource consumption of the jth resource, and Budj is the budget of the jth resource and the constraint on the optimization. The resource can be latency, energy, memory footprint, etc., or a combination of these metrics. Based on an idea similar to progressive barrier methods [1], NetAdapt breaks this problem into the following series of easier problems and solves it iteratively: maximize over Neti of Acc(Neti) subject to Resj(Neti) ≤ Resj(Neti−1) − ∆Ri,j, j = 1,...,m, (2) where Neti is the network generated by the ith iteration, and Net0 is the initial pretrained network. As the number of iterations increases, the wherein the data contraction unit includes: a contraction number setting unit that sets the contraction number of the at least one DNN layer constraints (i.e., the current resource budget Resj(Neti−1) − ∆Ri,j) gradually become tighter. ∆Ri,j, which is larger than zero, indicates how much the constraint tightens for the jth resource in the ith iteration and can vary from iteration to iteration. This is referred to as the “resource reduction schedule”, which is similar to the concept of learning rate schedule. The algorithm terminates when Resj(Neti−1) − ∆Ri,j is such that the output data size is an integral multiple of the memory size of the internal memory equal to or smaller than Budj for every resource type. It outputs the final adapted network and can also generate a sequence of simplified networks (i.e., the highest accuracy network from each iteration Net1,...,Neti) to provide the efficient frontier of accuracy and resource consumption trade-offs.); and a contraction execution unit that contracts the DNN according to the set contraction number ([3.2 Algorithm Overview, pg. 6] Fig. 2. 
This figure visualizes the a contraction execution unit that contracts the DNN according to the set contraction number algorithm flow of NetAdapt. At each iteration, NetAdapt decreases the resource consumption by simplifying (i.e., removing filters from) one layer. In order to maximize accuracy, it tries to simplify each layer individually and picks the simplified network that has the highest accuracy. Once the target budget is met, the chosen network is then fine-tuned again until convergence.). Liu and Yang are combinable for the same rationale as set forth above with respect to claim 1. Regarding claim 4, Liu, as modified by Yang, teaches The DNN contraction device of claim 3. Liu teaches comprising a recognition accuracy confirmation unit that causes the contraction number setting unit to reduce the contraction number in a case where recognition accuracy at a time of using the contracted DNN is less than a threshold, and causes the contraction number setting unit to increase the contraction number in a case where the recognition accuracy is larger than a threshold ([0024] The at least one processor may be configured to: determine whether to update the reference sample; and in response to determining to update of the reference sample, update the reference sample to a current input sample, and a recognition accuracy confirmation unit that causes the contraction number setting unit to reduce the contraction number update the layer contraction parameters based on the updated reference sample, wherein the current input sample is an input sample proceeding the reference sample among the sequential input samples.; [0108] Referring to FIG. 6B, according to another example, the processor 110 may determine whether to update the existing reference sample 621, 622, or 623 by comparing mean-square error (MSE) values (e.g., pixel-wise square difference) between a current input sample and the existing reference sample 621, 622, or 623. Unlike the method of FIG. 
6A, in the method of FIG. 6B, which may be referred to as the MSE on an input method, inference using a layer-contracted neural network generated to be optimal to a current input sample may be performed by accurately determining an input sample (frame) where actual transformation is generated. For example, when an causes the contraction number setting unit to increase the contraction number in a case where the recognition accuracy is larger than a threshold MSE between a current input sample and the existing reference sample is greater than or equal to a predetermined threshold value, the processor 110 may determine to update the existing reference sample to be the current input sample. As another example, when the in a case where recognition accuracy at a time of using the contracted DNN is less than a threshold MSE between a current input sample and the existing reference sample is less than the predetermined threshold value, the processor 110 may determine not to update the existing reference sample.). Liu and Yang are combinable for the same rationale as set forth above with respect to claim 1. Regarding claim 5, Liu, as modified by Yang, teaches The DNN contraction device of claim 4. Liu teaches wherein the recognition accuracy confirmation unit is configured to confirm recognition accuracy of the contracted DNN by using test image data and test correct answer data prepared in advance ([0077] In operation 402, the processor 110 may configured to confirm recognition accuracy of the contracted DNN by using test image data and test correct answer data prepared in advance determine a reference sample from among the sequential input samples. For example, when the sequential input samples are individual frames of video data, the processor 110 may determine image data of a first frame of the frames to be a first reference sample. The reference sample may be a sample to be used to obtain an output activation in operation 403, for example.). 
Liu and Yang are combinable for the same rationale as set forth above with respect to claim 1. Regarding claim 6, Liu, as modified by Yang, teaches The DNN contraction device of claim 4. Liu teaches wherein the DNN computation unit is configured to recognize an object from external information sensed by a main sensor ([0050] Referring to FIG. 1, the computing system 1 for processing wherein the DNN computation unit is configured to recognize an object a neural network may perform inference using a neural network on various types of input data such as image data, video data, audio/voice data, from external information sensed by a main sensor sensed data measured by using an external/internal sensor, and/or network data received through a network. In this state, an inference result of a neural network may include various types (e.g., such as an image recognition result, a video surveillance, a voice recognition result, anomaly detection, and/or biosignal monitoring), and thus the computing system 1 may be a system that can be employed in various technical fields (e.g., such as autonomous driving, Internet of Things (IoT), and/or medical monitoring).), and the recognition accuracy confirmation unit is configured to compare information of an object recognized from external information sensed by a sub sensor different from the main sensor with information of the object recognized by the DNN computation unit ([0076] Referring to FIG. 4, in operation 401, the processor 110 may the recognition accuracy confirmation unit is configured to compare information obtain, as input data, sequential input samples to be processed by a neural network including an input layer, one or more hidden layers, and an output layer. In this state, each of the sequential input samples may be one corresponding to each of consecutive frames of video data, but the present disclosure is not limited thereto. 
In other words, the sequential input samples may correspond to voice/audio data samples or may be various other types of data samples such as bio signal data samples. The different from the main sensor with information of the object recognized by the DNN computation unit sequential input samples may be of an object recognized from external information sensed by a sub sensor video/image/voice/audio data received from an external network of the computing apparatus 10 or data measured or acquired using a sensor provided in the computing apparatus 10. In other words, sources of the input data may be various.), and confirm recognition accuracy of the contracted DNN ([0108] Referring to FIG. 6B, according to another example, the processor 110 may determine whether to confirm recognition accuracy of the contracted DNN update the existing reference sample 621, 622, or 623 by comparing mean-square error (MSE) values (e.g., pixel-wise square difference) between a current input sample and the existing reference sample 621, 622, or 623. Unlike the method of FIG. 6A, in the method of FIG. 6B, which may be referred to as the MSE on an input method, inference using a layer-contracted neural network generated to be optimal to a current input sample may be performed by accurately determining an input sample (frame) where actual transformation is generated. For example, when an MSE between a current input sample and the existing reference sample is greater than or equal to a predetermined threshold value, the processor 110 may determine to update the existing reference sample to be the current input sample. As another example, when the MSE between a current input sample and the existing reference sample is less than the predetermined threshold value, the processor 110 may determine not to update the existing reference sample.). Liu and Yang are combinable for the same rationale as set forth above with respect to claim 1. 
Regarding claim 7, Liu, as modified by Yang, teaches The DNN contraction device of claim 6. Liu teaches wherein the sub sensor includes a plurality of sub sensors ([0126] The sensor module 850 may collect information around an electronic apparatus in which the electronic system 800 is mounted. The sensor module 850 may sense or receive signals from the outside of an electronic apparatus, for example, an image signal, a voice signal, a magnetic signal, a bio signal, or a touch signal, and transform the sensed or received signal to data. To this end, the sensor module 850 may include at least one of various types of sensing apparatuses, for example, a microphone, a photographing apparatus, an image sensor, wherein the sub sensor includes a plurality of sub sensors a light detection and ranging (LIDAR) sensor, an ultrasonic sensor, an infrared sensor, a bio sensor, and a touch sensor.), and the recognition accuracy confirmation unit is configured to, in a case where the information of the object recognized by the DNN computation unit is different from the information of the object recognized from the external information sensed by at least one of the sub sensors, the contraction number setting unit reduces the contraction number ([0102] In operation 503 of the algorithm, when it is the recognition accuracy confirmation unit is configured to, in a case where the information of the object recognized by the DNN computation unit is different from the information of the object recognized from the external information sensed by at least one of the sub sensors determined to update a reference sample, the input sample x* corresponding to the reference sample may be reset to a new input sample xt. Inference on an updated reference sample may be performed by the original neural network, and thus the binary mask mk may be updated and an inference result on an updated reference sample may be obtained. 
Then, a new layer-contracted neural network having new layer contraction parameters (Q, q) based on the reference sample may be determined. In other words, the the contraction number setting unit reduces the contraction number layer contraction parameters and the layer-contracted neural network may be updated (changed) together according to the update (change) of the reference sample.). Liu and Yang are combinable for the same rationale as set forth above with respect to claim 1. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Liu, in view of Yang, and further in view of Luciw et al. (U.S. Pre-Grant Publication No. 20180330238, hereinafter 'Luciw'). Regarding claim 8, Liu, as modified by Yang, teaches The DNN contraction device of claim 1. Liu, as modified by Yang, fails to teach the onboard computation device comprising the DNN computation unit that performs a DNN computation using an internal memory. Luciw teaches the onboard computation device comprising the DNN computation unit that performs a DNN computation using an internal memory ([0018] L-DNN techniques can be applied to visual, structured light, LIDAR, SONAR, RADAR, or audio data, among other modalities. For visual or similar data, L-DNN techniques can be applied to visual processing, such as enabling whole-image classification (e.g., scene detection), bounding box-based object recognition, pixel-wise segmentation, and other visual recognition tasks. 
They can also perform non-visual recognition tasks, such as classification of non-visual signal, and other tasks, such as updating Simultaneous Localization and Mapping (SLAM) generated maps by incrementally adding knowledge as the robot, the onboard computation device self-driving car, drone, or other device is navigating the environment.; [0019] Memory consolidation in an comprising the DNN computation unit L-DNN keeps that performs a DNN computation using an internal memory memory requirements under control in Module B as the L-DNN learns more entities or events (in visual terms, ‘objects’ or ‘categories’). Additionally, the L-DNN methodology enables multiple Edge computing devices to merge their knowledge (or ability to classify input data) across Edges.; [0047] FIG. 1 provides an overview of L-DNN architecture where multiple devices, including a master edge/central server and several compute edges (e.g., drones, robots, smartphones, or other IoT devices), running L-DNNs operate in concert. Each device receives sensory input 100 and feeds it to a corresponding L-DNN 106 comprising a slow learning Module A 102 and a fast learning Module B 104. Each Module A 102 is based on a pre-learned (fixed weight) DNN and serves as feature extractor. It receives the input 100, extracts the relevant features into compressed representations of objects and feeds these representations to the corresponding Module B 104.). Liu, Yang, and Luciw are considered to be analogous to the claimed invention because they are in the same field of machine learning. In view of the teachings of Liu and Yang, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Luciw to Liu before the effective filing date of the claimed invention in order to enable real-time learning from continuous data streams, bypassing the need to store input data for multiple iterations of backpropagation learning (cf. 
Luciw, [0016] A Lifelong Deep Neural Network (L-DNN) enables continuous, online, lifelong learning in Artificial Neural Networks (ANN) and Deep Neural Networks (DNN) in a lightweight compute device (Edge) without requiring time consuming, computationally intensive learning. An L-DNN enables real-time learning from continuous data streams, bypassing the need to store input data for multiple iterations of backpropagation learning.). Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Liu, in view of Yang, Luciw, and further in view of Miyagawa et al. (U.S. Pre-Grant Publication No. 20150242694, hereinafter 'Miyagawa'). Regarding claim 9, Liu, as modified by Yang and Luciw, teaches The onboard computation device of claim 8. Liu, as modified by Yang and Luciw, fails to teach comprising a route generation unit that generates a route of a vehicle using information of an object recognized by the DNN computation unit. Miyagawa teaches comprising a route generation unit that generates a route of a vehicle using information of an object recognized by the DNN computation unit ([0028] FIG. 1 is a block diagram showing a configuration of an using information of an object recognized by the DNN computation unit object identifying apparatus 10 according to an embodiment of the present invention. FIG. 2 is a schematic perspective view of a vehicle 12 in which the object identifying apparatus 10 shown in FIG. 1 is incorporated.; [0047] As shown in FIG. 4A, it is assumed that the ECU 22 acquires a captured image Im in one frame at a given time from the camera 14. The captured image Im represents an original image area 60 (hereinafter referred to as an “image area 60”) having a horizontally elongate rectangular shape made up of horizontal rows of 1200 pixels and vertical columns of 600 pixels, for example. 
The comprising a route generation unit that generates a route of a vehicle image area 60 includes a road region 62 (hereinafter referred to simply as a “road 62”) along which the vehicle 12 travels, a plurality of utility pole regions 64 (hereinafter referred to simply as “utility poles 64”), which are installed at substantially regular intervals along the road 62, and a pedestrian region 66 (hereinafter referred to simply as a “pedestrian 66”) on the road 62.). Liu, Yang, Luciw, and Miyagawa are considered to be analogous to the claimed invention because they are in the same field of machine learning. In view of the teachings of Liu, Yang, and Luciw, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Miyagawa to Liu before the effective filing date of the claimed invention in order to maintain the accuracy with which an object can be identified while reducing the amount of data that makes up integral images (cf. Miyagawa, [0005] The present invention has been made with the aim of solving the aforementioned problems. An object of the present invention is to provide an object identifying apparatus, which is capable of maintaining the accuracy with which an object can be identified while greatly reducing the amount of data that makes up the integral images.). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAGGIE MAIDO whose telephone number is (703) 756-1953. The examiner can normally be reached M-Th: 6am - 4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley can be reached on (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/MM/Examiner, Art Unit 2129 /MICHAEL J HUNTLEY/Supervisory Patent Examiner, Art Unit 2129
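The independent claim at issue recites an iterative loop: measure each layer's output data size, then set that layer's contraction number so the output fits the accelerator's internal memory. A minimal sketch of such a loop (layer shapes, byte width, and the memory size are hypothetical, not taken from the application or the cited references):

```python
# Illustrative sketch of the claimed iterative contraction loop:
# shrink each layer's channel count until its output data size
# fits the internal memory. All sizes below are hypothetical.

def output_size_bytes(channels, height, width, bytes_per_elem=1):
    """Measure the output data size of one DNN layer."""
    return channels * height * width * bytes_per_elem

def contract(layers, memory_size):
    """Set each layer's kept-channel count (its contraction) so that
    output size <= internal memory size, iterating per layer."""
    contracted = []
    for ch, h, w in layers:
        kept = ch
        # Iteration: tighten the contraction until the layer's
        # output fits the internal memory budget.
        while output_size_bytes(kept, h, w) > memory_size and kept > 1:
            kept -= 1
        contracted.append((kept, h, w))
    return contracted

layers = [(64, 56, 56), (128, 28, 28), (256, 14, 14)]  # (channels, H, W)
mem = 100_000  # internal memory in bytes (hypothetical)
print(contract(layers, mem))  # -> [(31, 56, 56), (127, 28, 28), (256, 14, 14)]
```

The loop only illustrates the claim's per-layer size constraint; NetAdapt's actual algorithm additionally retrains candidates and picks the highest-accuracy simplification at each iteration.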

Prosecution Timeline

Apr 10, 2023
Application Filed
Dec 05, 2025
Non-Final Rejection — §103
Mar 04, 2026
Response Filed
Mar 20, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602603
MULTI-AGENT INFERENCE
2y 5m to grant Granted Apr 14, 2026
Patent 12596933
CONTEXT-AWARE ENTITY LINKING FOR KNOWLEDGE GRAPHS TO SUPPORT DECISION MAKING
2y 5m to grant Granted Apr 07, 2026
Patent 12579463
GENERATIVE REASONING FOR SYMBOLIC DISCOVERY
2y 5m to grant Granted Mar 17, 2026
Patent 12579452
EVALUATION SCORE DETERMINATION MACHINE LEARNING MODELS WITH DIFFERENTIAL PERIODIC TIERS
2y 5m to grant Granted Mar 17, 2026
Patent 12566941
EXTENSION OF EXISTING NEURAL NETWORKS WITHOUT AFFECTING EXISTING OUTPUTS
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 64%
With Interview: 85% (+20.7%)
Median Time to Grant: 4y 3m
PTA Risk: Moderate

Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
