Prosecution Insights
Last updated: April 19, 2026
Application No. 18/316,152

PROCESS FOR TRANSFORMING A TRAINED ARTIFICIAL NEURON NETWORK

Non-Final OA (§102, §103)
Filed: May 11, 2023
Examiner: CHEEMA, NOOR FATIMA
Art Unit: 2142
Tech Center: 2100 (Computer Architecture & Software)
Assignee: STMicroelectronics
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs Tech Center average)
Interview Lift: +0.0% (minimal lift in resolved cases with interview)
Average Prosecution: 3y 3m (typical timeline)
Total Applications: 3 (3 currently pending, across all art units)

Statute-Specific Performance

§101: 15.4% (-24.6% vs TC avg)
§103: 61.5% (+21.5% vs TC avg)
§102: 15.4% (-24.6% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)
Baseline: Tech Center average estimate. Based on career data from 0 resolved cases.

Office Action

Rejections under §102 and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to the application filed on May 11, 2023. Claims 1-20 are pending and have been examined. Claims 1-20 are rejected.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. The present application claims foreign priority based on French Patent Application No. FR2205831, filed June 15, 2022. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

Acknowledgment is made of the information disclosure statements filed May 11, 2023, which comply with 37 CFR 1.97. As such, the information disclosure statements have been placed in the application file and the information referred to therein has been considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 8, and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gao et al. (U.S. Patent Application Publication No. US 20190279072 A1). Gao was filed on March 7, 2019, which is before the foreign priority effective filing date of this application, i.e., June 15, 2022. Therefore, Gao constitutes prior art under 35 U.S.C. 102(a)(1).

With respect to Claims 1, 8, and 15: Gao teaches: “having a trained artificial neural network comprising a binary convolution layer, a pooling layer, and a batch normalization layer,” (Paragraph 0023 discloses a trained artificial neural network, “Referring to Fig. 1, a binary convolutional neural network model trained and generated using known techniques is illustrated. When the binary convolutional neural network model shown in Fig. 1 is trained and generated.” Paragraph 0049 mentions the type of layers comprising the trained artificial neural network, “There may be other layers between the convolution layer and the quantization layer, such as a batch normalization layer, a pooling layer, a scaling layer and the like.”)

“wherein the pooling layer is arranged between the binary convolution layer and the batch normalization layer in the trained artificial neural network;” (Paragraph 0068 teaches this arrangement of layers, “wherein the head layer of the first sub-structure is a binary convolution layer 2, the middle layers are a pooling layer 2 and a batch normalization layer 2.” Paragraph 0074 also mentions this arrangement, “a pooling layer is added between the convolution layer and the batch normalization layer in the sub-structure shown in Fig. 5B.”)

“and converting the trained artificial neural network to a transformed artificial neural network,” (Paragraph 0005 discusses a conventional neural network (trained artificial neural network) being optimized (transformed), “In view of the technical problems existing in the above conventional multilayer neural network model, the present disclosure intends to provide a scheme for optimizing the conventional multilayer neural network model to reduce processor resources necessary to operate a multilayer neural network model.” Paragraph 0048 further elaborates on the changing of the physical structure of the trained artificial neural network (transformation) by dividing out layers, “In step S202, the processing for optimizing the multilayer neural network model is started. The details are: dividing out at least one sub-structure from the multilayer neural network model.”)

“wherein the batch normalization layer is arranged between the binary convolution layer and the pooling layer in the transformed artificial neural network.” (Paragraph 0051 mentions merging to denote a transformation in the state of the artificial neural network, “Note that the “transferring” described in the embodiments of the present disclosure is essentially a merge operation.” Paragraph 0104 also mentions joint quantization processing, which further depicts a transformation in the state of the artificial neural network, “Since the five sub-structures in the network model shown in FIG. 2D have undergone the joint quantization processing based on data transferring in step S203 and the sub-structure simplifying processing in step S204, and the operation parameters in the binary convolution layer 2 to the binary convolution layer 7 may be 1.” Paragraph 0121 discloses that convolution processing is being done with the information gained from the i-th layer input by the storage unit 402. A similar utilization of information processing is occurring in the pooling/activation unit 404. This information is being retrieved from the pooling/activation layer as depicted in Fig. 2D, where the binary convolution layer 7 is followed by the batch normalization layer 7, followed by the activation/pooling layer 7. This arrangement and ordering of layers is consistent with that of the transformed artificial neural network.)

Therefore, Claims 1, 8, and 15 are rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Gao et al. (U.S. Patent Application Publication No. US 20190279072 A1, filed on March 7, 2019, hereinafter “Gao”), in view of Sledevic (Adaptation of Convolution and Batch Normalization Layer for CNN Implementation on FPGA, published June 2019).

With respect to Claims 2, 9, and 16: Gao does not appear to explicitly disclose: “merging the batch normalization layer with the binary convolution layer in the transformed artificial neural network.” However, Sledevic teaches: “merging the batch normalization layer with the binary convolution layer in the transformed artificial neural network.” (Page 1, paragraphs 4 and 5 disclose that a joining of equivalent parameters entails a combining of the binary convolution and batch normalization layers of the artificial neural network, “However, in this work we adapt these two layers and move the computations to single core by joining equivalent parameters. The aim of this investigation is to adapt convolution and batch normalization layer for further implementation in FPGA. The next section describes the binarization of convolutional layer and integration with batch normalization layer.”)

Gao and Sledevic are analogous art and in the same field of invention because both references pertain to enhancing the physical structure of an artificial neural network to perform more efficiently, reduce computational resources, and increase model transparency. Where Gao teaches rearranging the order of layers in a transformed artificial neural network but not merging certain ones together, Sledevic teaches merging the batch normalization layer with the binary convolution layer. It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to combine the base reference of Gao (transforming and rearranging the order of layers) with the teachings of Sledevic (merging layers) in order to optimize the utilization of memory resources in a transformed artificial neural network to have less loss in precision while still maintaining operational accuracy.
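[Editor's note, not part of the Office Action record: the batch-normalization/binary-convolution merge that Sledevic describes is commonly implemented by folding the four normalization parameters into a per-channel threshold on the convolution accumulator, so the merged layer needs no floating-point normalization arithmetic at inference. A minimal NumPy sketch; all parameter values and names are invented for illustration.]

```python
import numpy as np

def fold_bn_into_threshold(gamma, beta, mu, var, eps=1e-5):
    """Fold BatchNorm(gamma, beta, mu, var) followed by sign() into a
    per-channel threshold t: for gamma > 0, sign(BN(x)) = +1 iff x >= t."""
    # BN(x) = gamma * (x - mu) / sqrt(var + eps) + beta >= 0
    #   <=>  x >= mu - beta * sqrt(var + eps) / gamma     (when gamma > 0)
    return mu - beta * np.sqrt(var + eps) / gamma

# Illustrative per-channel BN parameters (assumed values).
gamma = np.array([1.5, 0.8])
beta  = np.array([0.2, -0.4])
mu    = np.array([3.0, -1.0])
var   = np.array([4.0, 1.0])
eps   = 1e-5

t = fold_bn_into_threshold(gamma, beta, mu, var, eps)

# Raw binary-convolution accumulators, shape (samples, channels).
x = np.array([[2.5, 3.5],
              [-0.2, -2.0]])

merged = np.where(x >= t, 1, -1)                       # merged conv+BN layer
bn     = gamma * (x - mu) / np.sqrt(var + eps) + beta  # unmerged reference
ref    = np.where(bn >= 0, 1, -1)
assert np.array_equal(merged, ref)                     # identical activations
```

The same folding underlies the non-binary case, where the normalization scale and shift are absorbed into the convolution weights and bias rather than a threshold.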
One of ordinary skill in the art would be motivated to do so because, by integrating Sledevic's framework into the methods of Gao, one would be “reducing FPGA resource utilization rate. Using XNOR weight networks the memory can be saved 32 times and computations speeded up 58 times” (page 1, paragraphs 2 and 3 of Sledevic). Therefore, Claims 2, 9, and 16 are rejected.

Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Gao et al. (U.S. Patent Application Publication No. US 20190279072 A1, filed on March 7, 2019, hereinafter “Gao”), in view of Bankman et al. (An Always-On 3.8 μJ/86% CIFAR-10 Mixed-Signal Binary CNN Processor with All Memory on Chip in 28-nm CMOS, published October 2018, hereinafter “Bankman”).

With respect to Claims 3, 10, and 17: Gao does not appear to explicitly disclose: “converting the pooling layer into a binary pooling layer.” However, Bankman teaches: “converting the pooling layer into a binary pooling layer.” (Page 3, Section A, Paragraph 5 discloses a rearrangement of the placement of the max-pooling layers to focus on binarized functionalities, “The original BinaryNet topology placed max-pooling layers immediately following convolutions, before batch normalization and binarization…Alternatively, the binary CNN of this work places max-pooling layers after binarization, such that max-pooling operates over binary activations.” This is a modification of the structure of the neural network which accounts for a conversion whereby the characteristics of the max-pooling layers are now binary. Page 3, Section II, Paragraph 3 further teaches this conversion, “In this work, we make several modifications to the original BinaryNet topology in order to simplify the logic and interconnect at the interface between memory and compute, resulting in a binary CNN topology.”)

Gao and Bankman are analogous art and in the same field of invention because both references pertain to heightening the robustness of the network model while reducing spatial dimensions and preserving key-feature representation. Where Gao teaches transforming a trained artificial neural network but not converting a pooling layer into a binary pooling layer, Bankman teaches converting the state of a pooling layer to be binary post-transformation. It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to combine the base reference of Gao (transforming and rearranging the order of layers) with the teachings of Bankman (binarizing pooling layers) in order to reduce the burden on memory storage whilst improving processing speed. One of ordinary skill in the art would be motivated to do so because, by integrating Bankman's framework into the methods of Gao, one would be “minimizing energy while permitting the sacrifice of some programmability and accuracy. Restricting the filter size to 2×2 and the number of filters and channels to 256 minimizes the path loading between memory and compute. Using only a single FC layer at the network output reduces the required weight memory capacity by 5.1×. The CMOS-inspired topology achieves 86.05% accuracy on CIFAR-10” (Page 3, Section A, Paragraph 6 of Bankman). Therefore, Claims 3, 10, and 17 are rejected.

Claims 4, 5, 11, 12, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Gao et al. (U.S. Patent Application Publication No. US 20190279072 A1, filed on March 7, 2019, hereinafter “Gao”), in view of Bankman et al. (An Always-On 3.8 μJ/86% CIFAR-10 Mixed-Signal Binary CNN Processor with All Memory on Chip in 28-nm CMOS, published October 2018, hereinafter “Bankman”), as applied to claims 3, 10, and 17 above, and further in view of Helwegen et al. (U.S. Patent Application Publication No. US 20220405576 A1, filed on June 21, 2021, hereinafter “Helwegen”).

With respect to Claims 4, 11, and 18: The combination of Gao and Bankman does not appear to explicitly disclose: “wherein the pooling layer of the trained artificial neural network is a maximum pooling layer, the method further comprising converting the pooling layer of the trained artificial neural network into a binary maximum pooling layer in the transformed artificial neural network.” However, Helwegen teaches: “wherein the pooling layer of the trained artificial neural network is a maximum pooling layer, the method further comprising converting the pooling layer of the trained artificial neural network into a binary maximum pooling layer in the transformed artificial neural network.” (Paragraph 0078 mentions the relevance of the low-precision weights to the network, “For ultra-low precision neural network layers having weights, the computation of the function may be performed using ultra-low precision weights and activations...The function may be computed using specialized hardware that is configured to perform multiplication and sum operations using ultra-low precision values.” Paragraph 0166 depicts the initial binarization conversion of the pooling layer, “During the forward pass, the latent weights are converted to ultra-low precision weights (for example by binarizing), and the ultra-low precision weights are used to generate the neural network output.” Paragraph 0209 clarifies the state and structure of the pooling layers, “The binarized intermediate output is received by a binarized max pooling layer 646-1, which is an ultra-low precision layer. The binarized max pooling layer is a local max pooling layer.”)

The Examiner notes that the specification of this application does not give “binary maximum pooling layer” any specific definition. Paragraph 0037 of this application’s specification gives an example of how a “binary maximum pooling layer” can be implemented with an AND logic operation which increases the max pooling layer data. Under BRI, the Examiner will interpret “binary maximum pooling layer” as a logic function which increases values of the maximum pooling layer. Paragraph 0209 of the Helwegen reference discloses that the local precision layer is also a maximum pooling layer, and paragraph 0086 further maintains that AND logic operations/functions can be applied to the layers. Gao, Bankman, and Helwegen are analogous art and in the same field of invention because all three references disclose refining the physical state of artificial neural networks to obtain optimized performance for deployment in low-power or memory-constrained environments. Where the Gao-Bankman combination teaches transforming a trained artificial neural network and converting a pooling layer into a binary pooling layer but not converting the maximum pooling layer into a binary maximum pooling layer, Helwegen teaches converting a maximum pooling layer into a binary maximum pooling layer. It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to combine the base reference of Gao (transforming and rearranging the order of layers) with the teachings of Bankman (binarizing pooling layers) and further with the teachings of Helwegen (converting pooling layers to binary maximum pooling layers) in order to limit the overuse of resources.
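[Editor's note, not part of the Office Action record: once activations are binary, max pooling collapses to a bitwise reduction over the pooling window. With +1 encoded as bit 1, max pooling is an OR reduction and min pooling an AND; under the inverted encoding the gates swap, which may explain descriptions of a binary maximum pooling layer built from AND logic. A NumPy sketch under the +1-as-1 encoding; the function names and data are invented.]

```python
import numpy as np

def binary_max_pool(bits, window):
    """Max-pool {0,1} activations (1 encodes +1): reduces to bitwise OR."""
    usable = len(bits) - len(bits) % window
    return np.bitwise_or.reduce(bits[:usable].reshape(-1, window), axis=1)

def binary_min_pool(bits, window):
    """Min-pool {0,1} activations: reduces to bitwise AND."""
    usable = len(bits) - len(bits) % window
    return np.bitwise_and.reduce(bits[:usable].reshape(-1, window), axis=1)

acts = np.array([0, 1, 0, 0, 1, 1, 1, 1], dtype=np.uint8)

maxed  = binary_max_pool(acts, window=4)    # OR over each window
minned = binary_min_pool(acts, window=4)    # AND over each window

# The logic reductions agree with ordinary max/min pooling on the same data.
assert maxed.tolist() == acts.reshape(-1, 4).max(axis=1).tolist()
assert minned.tolist() == acts.reshape(-1, 4).min(axis=1).tolist()
```

In hardware, the same reductions are single OR/AND gates across the window, which is why binary pooling is essentially free compared with integer comparison trees.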
One of ordinary skill in the art would be motivated to do so because, by integrating Helwegen's framework into the methods of Gao and Bankman, one could state, “better use of storage capacity can be achieved. This may facilitate the use of the neural network architecture on resource-constrained devices, where the memory requirements of skip connections may otherwise be prohibitive” (paragraph 0039 of Helwegen). Therefore, Claims 4, 11, and 18 are rejected.

With respect to Claims 5, 12, and 19: The combination of Gao and Bankman does not appear to explicitly disclose: “wherein the pooling layer of the trained artificial neural network is a minimum pooling layer, the method further comprising converting the pooling layer of the trained artificial neural network into a binary minimum pooling layer in the transformed artificial neural network.” However, Helwegen teaches: “wherein the pooling layer of the trained artificial neural network is a minimum pooling layer, the method further comprising converting the pooling layer of the trained artificial neural network into a binary minimum pooling layer in the transformed artificial neural network.” (Paragraph 0078 mentions the relevance of the low-precision weights to the network, “For ultra-low precision neural network layers having weights, the computation of the function may be performed using ultra-low precision weights and activations...The function may be computed using specialized hardware that is configured to perform multiplication and sum operations using ultra-low precision values.” Paragraph 0166 depicts the initial binarization conversion of the pooling layer, “During the forward pass, the latent weights are converted to ultra-low precision weights (for example by binarizing), and the ultra-low precision weights are used to generate the neural network output.” Paragraph 0072 notes the kind of layers that result from the conversion, “Examples of ultra-low precision layers are binarized neural network layers.” Paragraph 0068 discloses the parallel type of pooling that these layers encompass, “Examples of local pooling include local average pooling, local maximum pooling, and local minimum pooling...In local minimum pooling, the value derived for each area is the minimum of the values in that area.”)

The Examiner notes that the specification of this application does not give “binary minimum pooling layer” any specific definition. Paragraph 0037 of this application’s specification gives an example of how a “binary minimum pooling layer” can be implemented with an OR logic operation which decreases the minimum pooling layer data. Under BRI, the Examiner will interpret “binary minimum pooling layer” as a logic function which decreases values of the minimum pooling layer. In paragraph 0092 of the Helwegen reference, examples of ultra-low precision layers include local pooling layers, which can also encompass local minimum pooling [0068] in addition to being binary [0072]. Paragraph 0086 discloses that, among operations, XNOR logic functions can be applied to these layers. Here, local pooling can be interchanged with binary pooling to arrive at the same state of pooling layers, namely binary minimum pooling layers. Gao, Bankman, and Helwegen are analogous art and in the same field of invention because all three references disclose refining the physical state of artificial neural networks to obtain optimized performance for deployment in low-power or memory-constrained environments. Where the Gao-Bankman combination teaches transforming a trained artificial neural network and converting a pooling layer into a binary pooling layer but not converting the minimum pooling layer into a binary minimum pooling layer, Helwegen teaches converting a minimum pooling layer into a binary minimum pooling layer.
It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to combine the base reference of Gao (transforming and rearranging the order of layers) with the teachings of Bankman (binarizing pooling layers) and further with the teachings of Helwegen (converting pooling layers to binary minimum pooling layers) in order to limit the overuse of resources. One of ordinary skill in the art would be motivated to do so because, by integrating Helwegen's framework into the methods of Gao and Bankman, one could state, “better use of storage capacity can be achieved. This may facilitate the use of the neural network architecture on resource-constrained devices, where the memory requirements of skip connections may otherwise be prohibitive” (paragraph 0039 of Helwegen). Therefore, Claims 5, 12, and 19 are rejected.

Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gao et al. (U.S. Patent Application Publication No. US 20190279072 A1, filed on March 7, 2019, hereinafter “Gao”), in view of Luo et al. (U.S. Patent Application Publication No. US 20210312289 A1, filed on June 18, 2021, hereinafter “Luo”), and Hosseini et al. (Binary Precision Neural Network Manycore Accelerator, published April 2021, hereinafter “Hosseini”).

With respect to Claims 6, 13, and 20: Gao does not appear to explicitly disclose: “further comprising performing a binary conversion and a bit-packing by the batch normalization layer.” However, Luo teaches: “further comprising performing a binary conversion and a bit-packing by the batch normalization layer.” (Paragraph 0123 discloses the type of functional operation that occurs within the normalization layer, “it can be defined that in the data processing method of the present disclosure, the normalization processing mode is: formula (2).” [Image of formula (2) omitted.] Paragraph 0127 mentions that in an application of formula (2) binary conversion can also be done, “In a possible implementation, the first transformation parameter U, the second transformation parameter V, the third transformation parameter U′, and the fourth transformation parameter V′ may be binarization matrices, where the value of each element in the binarization matrices is 0 or 1. That is, V′, V ∈ {0,1}^(C×C) and U′, U ∈ {0,1}^(N×N) are four learnable binarization matrices, respectively, each element therein being 0 or 1. Therefore, U∘V and U′∘V′ are normalization parameters in the data processing method of the present disclosure.” Paragraph 0145 further details the normalization layer’s ability to perform binary conversion, “an independent normalization operation mode may be autonomously learned for each layer of feature data of the neural network model. When the feature data is subjected to normalization processing according to formula (2), there are four binarization diagonal block matrices to be learned in the normalization operation mode of each layer.”)

Gao and Luo are analogous art and in the same field of invention because both references pertain to optimizing calculation speed and memory efficiency.
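[Editor's note, not part of the Office Action record: the "binary conversion and bit-packing by the batch normalization layer" limitation can be pictured as the normalization stage emitting, instead of floats, a packed bitmap of sign bits, eight activations per byte. A NumPy sketch; the function and parameter names and all values are invented for illustration.]

```python
import numpy as np

def bn_binarize_pack(x, gamma, beta, mu, var, eps=1e-5):
    """Batch-normalize, binarize (keep sign bit), and bit-pack in one stage."""
    bn = gamma * (x - mu) / np.sqrt(var + eps) + beta
    bits = (bn >= 0).astype(np.uint8)   # binary conversion: sign bit only
    return np.packbits(bits)            # bit-packing: 8 activations per byte

# Eight activations (assumed values) collapse into a single packed byte.
x = np.array([0.5, -2.0, 3.0, -0.1, 1.2, -3.3, 0.0, 2.2])
packed = bn_binarize_pack(x, gamma=1.0, beta=0.0, mu=0.0, var=1.0)

assert packed.tolist() == [0b10101011]  # sign bits 1,0,1,0,1,0,1,1
```

Packing at the normalization stage means downstream binary layers read whole machine words, which is the usual motivation for placing the conversion there.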
Where Gao teaches transforming a trained artificial neural network but not performing binary conversion in the batch normalization layer, Luo teaches performing binary conversion in the batch normalization layer. It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to combine the base reference of Gao (transforming and rearranging the order of layers) with the teachings of Luo (performing binary conversion by the batch normalization layer) in order to reduce the burden on memory storage whilst stabilizing information processing. One of ordinary skill in the art would be motivated to do so because, by integrating Luo's framework into the methods of Gao, one could “further reduce the amounts of calculations and parameters in the data processing method of the present disclosure, and to change a parameter optimization process into a differentiable end-to-end mode, multiple sub-matrices may be used for an inner product operation to construct the binarization diagonal block matrices” (paragraph 0145 of Luo).

Gao and Luo do not appear to explicitly disclose the bit-packing aspect of: “further comprising performing a binary conversion and a bit-packing by the batch normalization layer.” However, Hosseini teaches: “further comprising performing a binary conversion and a bit-packing by the batch normalization layer.” (Section 5.4.3 discloses the presence of bit-packing, “PCH Instruction. Patch-select or PCH is a new invention to the ISA of BiNMAC that facilitates the bit packing and bit manipulation.” Section 5.5, paragraph 3 elaborates on the operational performance of bit-packing by the batch normalization layer, “result is accumulated in the special register PACT. If a Batch Normalization layer follows this convolution, then the quantized ζb is loaded and added to the accumulating PACT register, and if a sign AF follows the Batch Normalization layer, then only the sign bit of the final accumulated result is extracted. By using the PCH instruction then, the output fmap bits get packed within the DMEM on an along-rows-packed basis and get ready to be BCAST to their designated addresses.”)

Gao and Hosseini are analogous art and in the same field of invention because both references pertain to optimizing calculation speed and memory efficiency. Where Gao teaches transforming a trained artificial neural network but not performing bit-packing in the batch normalization layer, Hosseini teaches performing bit-packing in the batch normalization layer. It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to combine the base reference of Gao (transforming and rearranging the order of layers) with the teachings of Hosseini (performing bit-packing by the batch normalization layer) in order to improve cache utilization and data transfer speed. One of ordinary skill in the art would be motivated to do so because, by integrating Hosseini's framework into the methods of Gao, one could note that “BiNMAC outperforms the TX2 by 11.3× in performance and by 4.5× in energy efficiency. Two other case studies for physiological datasets including physical activity monitoring and stress detection were implemented on the BiNMAC with minimum power configuration and also on the CPU, to compare the power and efficiency of the two: BiNMAC on average outperforms the ARM Cortex-A57 CPU by 2.8× in power and 13.5× in energy efficiency” (Section 8 of Hosseini). Therefore, Claims 6, 13, and 20 are rejected.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Gao et al. (U.S. Patent Application Publication No. US 20190279072 A1, filed on March 7, 2019, hereinafter “Gao”), in view of Ferguson (U.S. Patent Application Publication No. US 20200387789 A1, filed on June 5, 2020).
With respect to Claims 7 and 14: Gao teaches: “and wherein the transformed artificial neural network is used during an execution of data based on the training.” (Paragraph 0009 discloses using the transformed artificial neural network to execute results that are directly correlated to the training that has occurred, “After the multilayer neural network model is optimized by the optimization apparatus of the second embodiment, the optimized network model may be operated by using an apparatus for applying the multilayer neural network model. The application apparatus may be a known apparatus for operating a network model and includes an inputting module for inputting a data set corresponding to a task requirement that is executable by the network model to the optimized multilayer neural network model; and an operating module for operating the data set in each of layers from top to bottom in the optimized multilayer neural network model and output results.”)

Gao does not appear to explicitly disclose: “wherein the trained artificial neural network is used during a training of a corresponding artificial neural network,” However, Ferguson teaches: “wherein the trained artificial neural network is used during a training of a corresponding artificial neural network,” (Paragraph 0024 discloses that the trained artificial neural network is being used to train another neural network, “If the trained neural network 114 is discarded due to lack of fidelity to the expected or accepted output, then another neural network 106 may be trained, as discussed above, using a second set of training data 112 to obtain a second trained neural network 114… That is, the second neural network 106 is trained with former test data repurposed as training data. The resulting second trained neural network 114 may be evaluated based on further generated test data 116.”)

Gao and Ferguson are analogous art and in the same field of invention because both references pertain to optimizing and refining the structure of artificial neural networks before and after training. Where Gao teaches the transformed artificial neural network being used during an execution of data but not to train another corresponding neural network, Ferguson teaches the trained artificial neural network being used to train a corresponding artificial neural network. It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to combine the base reference of Gao (transforming and rearranging the order of layers) with the teachings of Ferguson (training a corresponding artificial neural network) in order to improve the overall accuracy, performance, and efficiency of the neural network. One of ordinary skill in the art would be motivated to do so because, by integrating Ferguson's framework into the methods of Gao, one could note that “it should be apparent that a neural network may be trained in an efficient and accurate manner using low-discrepancy data, iteratively adjusted weightings based on error, and recycling of test data into training data. The time and processing resources required in training and deploying a neural network may be reduced” (paragraph 0058 of Ferguson). Therefore, Claims 7 and 14 are rejected.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NOOR F CHEEMA, whose telephone number is (571) 272-9642. The examiner can normally be reached Monday-Friday, 7:30 am-5:00 pm, with alternate Fridays off. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NOOR F CHEEMA/
Examiner, Art Unit 2142
/N.F.C./
/Mariela Reyes/
Supervisory Patent Examiner, Art Unit 2142
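[Editor's note on the claimed transformation itself, offered as a sketch rather than a statement of the application's actual method: moving batch normalization ahead of max pooling, as claim 1 recites, is numerically exact whenever the normalization scale is positive, because max pooling commutes with any monotonically increasing per-channel map. All values below are invented.]

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)            # one channel of convolution outputs
gamma, beta, mu, var, eps = 2.0, -0.3, 0.1, 1.5, 1e-5   # assumed BN params

def bn(v):
    return gamma * (v - mu) / np.sqrt(var + eps) + beta

def max_pool(v, window=4):
    return v.reshape(-1, window).max(axis=1)

trained     = bn(max_pool(x))      # trained order: pooling, then batch norm
transformed = max_pool(bn(x))      # transformed order: batch norm, then pooling

# With gamma > 0, max(BN(x)) == BN(max(x)), so the reorder changes nothing.
assert np.allclose(trained, transformed)
```

The practical payoff of the reorder is that, once normalization precedes binarization and pooling, the pooling stage sees only binary values and can be reduced to the logic operations discussed in the §103 rejections above.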

Prosecution Timeline

May 11, 2023
Application Filed
Mar 09, 2026
Non-Final Rejection — §102, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 0 resolved cases by this examiner; grant probability derived from career allow rate.
