Prosecution Insights
Last updated: April 19, 2026
Application No. 17/856,569

METHOD AND SYSTEM FOR SEARCHING DEEP NEURAL NETWORK ARCHITECTURE

Final Rejection §103
Filed: Jul 01, 2022
Examiner: NGUYEN, CHAU T
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 68% (Favorable)
OA Rounds: 3-4
To Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (372 granted / 549 resolved), above average: +12.8% vs TC avg
Interview Lift: +31.8% across resolved cases with interview (strong)
Typical Timeline: 4y 0m avg prosecution; 31 currently pending
Career History: 580 total applications across all art units
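The headline figures above follow directly from the raw case counts; assuming the dashboard rounds to the nearest percent, a quick sanity check:

```python
# Sanity check of the examiner stats shown above (counts from the dashboard).
granted = 372
resolved = 549
pending = 31

allow_rate = granted / resolved
print(f"{allow_rate:.1%}")   # 67.8%, displayed as 68%
print(resolved - granted)    # 177 resolved without a grant
print(resolved + pending)    # 580 total applications on the docket
```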

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 549 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The amendment filed on 10/24/2025 has been entered. Claims 1-20 are pending. Claims 1, 9, and 17 are currently amended.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pronovost et al. (Pronovost), US Patent No. US 11,704,572 B1, and further in view of Gao et al. (Gao), NPL "A DNN task of multiple mobile device of splitting and unloading method," Document ID CN-110764885-A, published 2020-02-07, 8 pages.

As to independent claim 1, Pronovost discloses a method for searching deep neural network architecture for computation offloading in a computing environment in which a computation is performed using a first device and a second device (col. 1, lines 46-67: techniques for selectively offloading data that is computed by a first processing unit (first device) during training of an artificial neural network onto memory associated with a second processing unit (second device); col. 9, line 60 – col. 10, line 29: the neural network can be a deep learning algorithm), the method comprising:

configuring a target deep network including a plurality of computation cells, each computation cell including a plurality of nodes, a weight between each node of the plurality of nodes, and an operation selector that selects a candidate operation between each node of the plurality of nodes (col. 5, lines 21-35 and Figure 1: the neural network may include a plurality of layers (cells), and each layer (cell) may include one or more nodes; for example, the neural network includes four layers and a layer 112 includes six nodes, or any number of layers and/or nodes may be implemented; col. 2, lines 1-17: each layer may be associated with one or more operations and each operation may be associated with a weight; col. 5, line 54 – col. 6, line 3: a system, application, or other entity (an operation selector) selects a node or layer to be associated with a checkpoint);

partitioning the plurality of computation cells into a first portion in which the computation is performed on the first device and a second portion in which the computation is performed on the second device, the first portion including a transmission cell, and the transmission cell including a resource selector that determines whether each computation inside the transmission cell is processed by the first device or the second device, and a channel selector which determines a channel through which a computation result processed by the first device is transmitted to the second device (col. 6, lines 5-25: during forward propagation of the neural network, the training component may compute an activation(s) (transmission cells) for operations associated with the neural network; for example, after computation, the activation(s) may be stored in the memory 112 of the first processing unit (first device), and the training component may then cause one or more of the activation(s) to be transferred to the memory 108 of the second processing unit (second device); col. 15, line 63 – col. 16, line 62: one or more of the activations associated with the checkpoints may be offloaded to the memory 108 associated with the second processing unit; for example, in data offloading for a portion of a neural network, a forward graph 302 for the portion of the neural network includes nodes from input to output with respect to forward propagation; a stop gradient operation (resource selector) may act as an identity operation for forward propagation and indicate that such activations should be copied to a particular memory (of a first or second processing unit), wherein an identity operation may receive a value (channel selector) as input and provide the value as output); and

updating the weight, the operation selector, the resource selector, and the channel selector (col. 2, lines 1-17: during backwards propagation, an error representing a difference between the output and a desired output may be propagated backwards through the layers (cells) of the artificial neural network to adjust the weights using gradient descent, wherein the backwards propagation may include executing one or more gradient operations associated with the one or more operations of the forward propagation to generate one or more gradients; col. 2, lines 48-60: during backwards propagation, recompute an activation (transmission cell), which is used to compute a gradient).
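For readers less familiar with this technology area, the claim-1 limitations quoted above describe a DARTS-style differentiable architecture search extended with device and channel choices. The following toy sketch illustrates only that structure; every name in it (`alpha`, `beta`, `gamma`, the candidate operations, the channel list) is hypothetical and is not the applicant's or Pronovost's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Candidate operations on one edge of a computation cell; a learnable
# "operation selector" alpha mixes them, DARTS-style.
CANDIDATE_OPS = [
    lambda x: x,            # identity
    lambda x: np.tanh(x),   # nonlinearity
    lambda x: 0.5 * x,      # scaling stand-in for a conv/pool op
]
alpha = rng.normal(size=len(CANDIDATE_OPS))  # operation selector logits

# "Resource selector" beta scores whether this computation runs on the
# first device (e.g. mobile) or the second device (e.g. edge server);
# "channel selector" gamma scores which channel carries the result across.
beta = rng.normal(size=2)    # [device 1, device 2]
gamma = rng.normal(size=3)   # hypothetical channels, e.g. [Wi-Fi, LTE, 5G]

def mixed_edge(x):
    """Weighted mixture of all candidate ops (relaxed architecture)."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, CANDIDATE_OPS))

x = np.ones(4)
y = mixed_edge(x)
device = ["device 1", "device 2"][softmax(beta).argmax()]
channel = int(softmax(gamma).argmax())
print(y.shape, device, channel)
```

In a real search, `alpha`, `beta`, and `gamma` would all be updated by backpropagation alongside the weights, which is the "updating" step the claim recites.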
Pronovost, however, does not disclose partitioning the plurality of computation cells into a first portion in which neural network computations between nodes in computation cells of the first portion are performed on the first device and a second portion in which neural network computations between nodes in computation cells of the second portion are performed on the second device.

In the same field of endeavor, Gao discloses a method of splitting and offloading DNN tasks across multiple mobile devices comprising: constructing a splitting-and-offloading model according to the number of mobile devices, the number of layers of each DNN task, and the division of each DNN task, wherein each mobile device has a DNN task and each DNN task is divided into two parts: the front part is processed locally on the mobile device and the output data obtained after processing is transmitted to the server, while the latter part is offloaded to the server for processing; and wherein the splitting-and-offloading model is an N×M matrix X, where N is the number of mobile devices, M is the number of layers of the DNN task, and each layer is a sub-task (Abstract and pages 2-3).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teaching of Gao with Pronovost to include partitioning the plurality of computation cells into a first portion in which neural network computations between nodes in computation cells of the first portion are performed on the first device and a second portion in which neural network computations between nodes in computation cells of the second portion are performed on the second device, for the purpose of effectively reducing the time delay of DNN task processing, as Gao disclosed.

As to dependent claim 2, Pronovost discloses wherein updating of the weight, the operation selector, the resource selector, and the channel selector includes initializing the weight, the operation selector, the resource selector, and the channel selector (col. 2, lines 1-60); inputting a finite length input arrangement to the target deep network to perform a feedforward propagation (col. 2, lines 1-60); performing the computation on the first portion and the computation on the second portion to calculate a loss based on the computation on the first portion and the computation on the second portion (col. 2, lines 1-60); and updating the weight, the operation selector, the resource selector, and the channel selector through a backward propagation based on the calculated loss (col. 2, lines 1-60).

As to dependent claim 3, Pronovost discloses wherein calculating of the loss includes calculating an offloading loss, calculating a prediction loss (col. 4, lines 15-49), and calculating a final loss through a weighted sum of the offloading loss and the prediction loss (col. 4, lines 15-49).

As to dependent claim 4, Pronovost discloses wherein the transmission cell is a computation cell included in the first portion adjacent to a partitioning point between the first portion and the second portion (col. 6, lines 5-25).

As to dependent claim 5, Pronovost discloses wherein the second portion includes a receiving cell, and the receiving cell has one input node (col. 2, lines 48-60).

As to dependent claim 6, Pronovost discloses wherein the first device and the second device are connected by wired communication or wireless communication (col. 11, lines 18-28).

As to dependent claim 7, Pronovost discloses wherein the first device includes a mobile device, and the second device includes an edge server (col. 1, lines 46-67 and col. 7, lines 1-12).

As to dependent claim 8, Pronovost discloses wherein the plurality of computation cells include a normal cell (col. 5, lines 21-35), and a reduced cell which reduces a spatial resolution of a feature map of the normal cell in half (col. 3, lines 20-42).

Claims 9-16 are system claims that contain similar limitations of claims 1-8, respectively.
Therefore, claims 9-16 are rejected under the same rationale. Claims 17-20 are medium claims that contain similar limitations of claims 1-3, 6, and 7, respectively. Therefore, claims 17-20 are rejected under the same rationale.

Response to Arguments

Applicant's arguments and amendments filed on 10/24/2025 have been fully considered but are not deemed fully persuasive. Applicant's arguments with respect to claims 1-20 have been considered but are moot in view of the new ground(s) of rejection explained above, necessitated by Applicant's substantial amendment to the claims (i.e., partitioning the plurality of computation cells into a first portion in which neural network computations between nodes in computation cells of the first portion are performed on the first device and a second portion in which neural network computations between nodes in computation cells of the second portion are performed on the second device), which significantly affected the scope thereof. Please see the rejection above with the additionally cited prior art, Gao.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAU T NGUYEN, whose telephone number is (571) 272-4092. The examiner can normally be reached Monday-Friday, 8am to 5pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/CHAU T NGUYEN/
Primary Examiner, Art Unit 2145
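The weighted-sum loss recited in dependent claim 3 (a final loss combining an offloading loss and a prediction loss) can be sketched in a couple of lines. The weight name `lam` and the toy values below are hypothetical illustrations, not values from the record:

```python
def final_loss(prediction_loss: float, offloading_loss: float, lam: float = 0.1) -> float:
    """Weighted sum with the structure recited in claim 3: the prediction
    term drives accuracy, while the offloading term penalizes the
    transmission/computation cost of the chosen partition."""
    return prediction_loss + lam * offloading_loss

# Toy numbers: prediction loss 0.85, offloading loss 2.0, weight 0.1.
print(final_loss(0.85, 2.0))  # 0.85 + 0.1 * 2.0
```

Raising `lam` pushes the search toward partitions that are cheap to offload, at some cost in prediction accuracy; lowering it does the opposite.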

Prosecution Timeline

Jul 01, 2022
Application Filed
Jul 22, 2025
Non-Final Rejection — §103
Aug 12, 2025
Interview Requested
Aug 18, 2025
Applicant Interview (Telephonic)
Aug 18, 2025
Examiner Interview Summary
Oct 24, 2025
Response Filed
Feb 07, 2026
Final Rejection — §103
Mar 04, 2026
Interview Requested
Mar 13, 2026
Applicant Interview (Telephonic)
Mar 13, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596765
GENERATION AND USE OF CONTENT BRIEFS FOR NETWORK CONTENT AUTHORING
2y 5m to grant; granted Apr 07, 2026

Patent 12591795
METHOD FOR PROVIDING EXPLAINABLE ARTIFICIAL INTELLIGENCE
2y 5m to grant; granted Mar 31, 2026

Patent 12585722
IMAGE GENERATION SYSTEM, COMMUNICATION APPARATUS, METHODS OF OPERATING IMAGE GENERATION SYSTEM AND COMMUNICATION APPARATUS, AND STORAGE MEDIUM
2y 5m to grant; granted Mar 24, 2026

Patent 12579356
MATHEMATICAL CALCULATIONS WITH NUMERICAL INDICATORS
2y 5m to grant; granted Mar 17, 2026

Patent 12547825
WHITELISTING REDACTION SYSTEMS AND METHODS
2y 5m to grant; granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 99% (+31.8%)
Median Time to Grant: 4y 0m
PTA Risk: Moderate
Based on 549 resolved cases by this examiner. Grant probability derived from career allow rate.
