Prosecution Insights
Last updated: April 19, 2026
Application No. 18/168,603

INFERENCE PROCESSING SYSTEM CAPABLE OF REDUCING LOAD WHEN EXECUTING INFERENCE PROCESSING, EDGE DEVICE, METHOD OF CONTROLLING INFERENCE PROCESSING SYSTEM, METHOD OF CONTROLLING EDGE DEVICE, AND STORAGE MEDIUM

Status: Non-Final OA (§102, §103)
Filed: Feb 14, 2023
Examiner: CHOI, DAVID E
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Canon Kabushiki Kaisha
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 75% (above average; 448 granted / 595 resolved; +20.3% vs TC avg)
Interview Lift: +12.4% among resolved cases with interview (moderate lift)
Typical Timeline: 2y 11m average prosecution; 18 applications currently pending
Career History: 613 total applications across all art units

Statute-Specific Performance

§101: 6.6% (-33.4% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§103: 65.9% (+25.9% vs TC avg)
§112: 1.9% (-38.1% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 595 resolved cases.

Office Action

DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This action is responsive to the following communication: original claims filed 2/14/23. This action is made non-final.

3. Claims 1-20 are pending in the case. Claims 1-7, 13 and 15 are being examined in response to the Applicant's elections on 3/6/26.

Claim Rejections - 35 USC § 102

4. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless —
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

5. Claims 1, 2, 13 and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kurauchi (US 20220004817).

Regarding claim 1, Kurauchi discloses an inference processing system that includes a first terminal and a second terminal and performs inference processing using a plurality of neural networks ("a method of transmitting intermediate data obtained by performing inference calculation based on machine learning halfway from the instrument to the device may be used. In this case, the device continuously performs the inference calculation based on the machine learning from the received intermediate data and obtains an analysis result," paragraph 0004); wherein the first terminal executes inference processing by a first neural network using acquired data as an input thereto, and outputs intermediate data to the second terminal (see at least FIG. 4, wherein the input data goes from (12) to a learned neural network to input at (22), which then goes into a second neural network), the intermediate data being obtained by executing processing operations in intermediate layers, up to a predetermined intermediate layer, of the first neural network, which are commonized with a second neural network (the conversion process involves outputting the compression data, which is an output of a predetermined intermediate layer obtained as a result of processing the observation data received via an input layer of a learned neural network prepared in advance, using portions ranging from the input layer to the intermediate layer; the device includes an analysis unit that performs an analysis process of obtaining an analysis result of the observation data from the compression data; the analysis process involves inputting the compression data to an intermediate layer subsequent to the predetermined intermediate layer, and inputting data obtained by decoding the compression data, which is an output of the subsequent intermediate layer, to an output layer configured using a CNN (Convolutional Neural Network) model; paragraph 0008); and wherein the second terminal executes processing operations in intermediate layers, after the predetermined intermediate layer, of the second neural network using the intermediate data as an input thereto (one learned neural network 18 is separated by a predetermined intermediate layer h2; layers ranging from the input layer h1 to the predetermined intermediate layer h2 are included in the learned neural network 18A, and layers ranging from an intermediate layer h3 subsequent to the predetermined intermediate layer h2 to the output layer h4 are included in the learned neural network 18B; paragraph 0037).

Regarding claim 2, Kurauchi discloses wherein the first terminal executes inference processing by the first neural network using the acquired data as the input thereto, and outputs the intermediate data to the second terminal, the intermediate data being obtained by executing processing operations in intermediate layers, up to the predetermined intermediate layer, of the first neural network, which are commonized with the second neural network (paragraph 0037, cited above) and a third neural network ("The analysis unit 34 according to the present embodiment performs a process of obtaining the analysis result of the learning data received from the input unit 32 using a learning neural network 18C. In the learning neural network 18C, a conversion process of converting the learning data to compression data uses portions ranging from the input layer h1 to the predetermined intermediate layer h2. That is, the compression data is obtained as an output of the predetermined intermediate layer h2 of the learning neural network 18C," paragraph 0060); and wherein the second terminal executes processing operations in the intermediate layers, after the intermediate layer, of the second neural network, and processing operations in intermediate layers, after the intermediate layer, of the third neural network, using the intermediate data as an input thereto (paragraphs 0060-0061).

Regarding claim 13, the subject matter of the claim is substantially similar to claim 1, and as such the same rationale of rejection applies.

Regarding claim 15, the subject matter of the claim is substantially similar to claim 1, and as such the same rationale of rejection applies.

Claim Rejections - 35 USC § 103

6. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. Claims 3, 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Kurauchi in view of Fukushi (US 20250068923).

Regarding claim 3, Kurauchi does not disclose wherein, in the second neural network and the third neural network, learning is performed by fixing parameters of the intermediate layers commonized with the first neural network to the same parameters as used in the first neural network. However, Fukushi discloses wherein the machine learning processing unit 15 optimizes model parameters of the first encoding model 151, the second encoding model 152, and the estimation model 153 in such a way that the error between the estimation result of the estimation model 153 and the correct answer data decreases; for example, the machine learning processing unit 15 optimizes the model parameters in such a way that this error is minimized, and this training improves the accuracy rate of the estimation result output from the estimation model 153 (paragraph 0059). The combination of Kurauchi and Fukushi would have resulted in the neural network learning of Kurauchi further utilizing Fukushi's optimization of model parameters. One would have been motivated to combine the references because a user in Kurauchi is already interested in obtaining learned data using CNN models, and the utilization of parameters would have been well known in the art. As such, the combination of references would have resulted in a predictable invention to one of ordinary skill in the art.

Regarding claim 6, Kurauchi does not disclose further comprising a control unit configured to control which neural networks of the plurality of neural networks are to be used by the first terminal and the second terminal, respectively. However, Fukushi discloses, at least in paragraph 0151, a control unit 415 that controls the neural networks and their respective terminals. The same motivation and combination rationale set forth for claim 3 applies.

Regarding claim 7, Kurauchi does not disclose wherein the first terminal is an image capturing apparatus including an image capturing unit, and wherein the first terminal executes inference processing by the first neural network using an image captured by the image capturing unit as the input thereto, and outputs the intermediate data to the second terminal, the intermediate data being obtained by executing the processing operations in the intermediate layers, up to the predetermined intermediate layer, of the first neural network, which are commonized with the second neural network. However, Fukushi discloses that the first encoding model 151 and the second encoding model 152 output time-series data (code) of 10 Hz in response to input of time-series data (raw data) measured at a cycle of 100 hertz (Hz); for example, they output time-series data (code) whose data amount has been reduced by averaging or denoising, and the first encoding model 151 outputs image data (code) of 7×7 pixels in response to input of image data (raw data) of 28×28 pixels. The code only needs to include features of having a smaller data amount than the raw data and enabling estimation of correct answer data corresponding to the raw data; the data capacity, the data format, and the like of the code are not limited. The same motivation and combination rationale set forth for claim 3 applies.

8. Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Kurauchi in view of Nozaki.

Regarding claim 4, Kurauchi does not disclose wherein the second terminal is higher in computational power than the first terminal, and wherein the second neural network is a neural network that performs more detailed cluster classification than the classification performed by the first neural network. However, Nozaki discloses a management device that includes: a time series acquisition unit configured to acquire a time series related to power consumption of a machine for a certain time; a classification unit configured to classify the time series into any one of a plurality of clusters; and a process estimation unit configured to estimate a process executed by the machine, based on relationship information indicating a relationship between the plurality of clusters and the process of the machine, and the cluster into which the time series is classified. The combination of Kurauchi and Nozaki would have resulted in the neural network learning of Kurauchi further utilizing Nozaki's management of resources. One would have been motivated to combine the references because a user in Kurauchi is already interested in obtaining learned data using CNN models, and the utilization of resource management would have made for more efficient models. As such, the combination of references would have resulted in a predictable invention to one of ordinary skill in the art.

Regarding claim 5, Kurauchi does not disclose wherein the first neural network is a neural network that performs simple cluster classification. However, Nozaki discloses the management device described above with respect to claim 4, and the same motivation and combination rationale applies.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID E CHOI, whose telephone number is (571) 270-3780. The examiner can normally be reached M-F: 7-2, 7-10 (PST). If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle T. Bechtold, can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID E CHOI/
Primary Examiner, Art Unit 2148
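The split-inference arrangement at issue in claim 1 (a first terminal runs the commonized layers up to a predetermined intermediate layer and transmits the intermediate data; a second terminal runs the layers after that point) can be sketched roughly as follows. This is a minimal illustration only; the layer shapes, weights, and function names are hypothetical and do not come from the application or the cited references.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Layers up to the predetermined intermediate layer (commonized between
# the networks); these run on the first terminal (the edge device).
W_shared = rng.standard_normal((8, 4))

# Layers after the predetermined intermediate layer of the second
# network; these run on the second terminal.
W_head = rng.standard_normal((4, 3))

def first_terminal(data: np.ndarray) -> np.ndarray:
    """Execute the commonized layers and emit the intermediate data."""
    return relu(data @ W_shared)

def second_terminal(intermediate: np.ndarray) -> int:
    """Execute the remaining layers using the intermediate data as input."""
    logits = intermediate @ W_head
    return int(np.argmax(logits))

x = rng.standard_normal(8)              # acquired data (e.g. a flattened image)
intermediate = first_terminal(x)        # transmitted from the edge device
result = second_terminal(intermediate)  # final inference on the second terminal
```

Because only the compact intermediate activation crosses the network, the edge device's compute load and the transmitted data volume are both reduced relative to sending raw input or running the full network locally, which is the load-reduction idea the application's title describes.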

Prosecution Timeline

Feb 14, 2023
Application Filed
Mar 21, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602396: TRANSFORMING MODEL DATA (granted Apr 14, 2026; 2y 5m to grant)
Patent 12585995: Capturing Data Properties to Recommend Machine Learning Models for Datasets (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585957: SYSTEM AND METHOD FOR EFFICIENT ESTIMATION OF CUMULATIVE DISTRIBUTION FUNCTION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12580878: METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR PRESENTING SESSION MESSAGE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572836: INTELLIGENT PROVISIONING OF QUANTUM PROGRAMS TO QUANTUM HARDWARE (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 88% (+12.4%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 595 resolved cases by this examiner. Grant probability derived from career allow rate.
