Prosecution Insights
Last updated: April 19, 2026
Application No. 18/082,459

METHODS AND SYSTEMS FOR PERFORMING STOCHASTIC COMPUTING USING NEURAL NETWORKS ON HARDWARE DEVICES

Non-Final OA: §102, §103, §112
Filed: Dec 15, 2022
Examiner: WONG, WILLIAM
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Secutopia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 30% (At Risk)
Expected OA Rounds: 1-2
Median Time to Grant: 4y 11m
Grant Probability with Interview: 57%

Examiner Intelligence

Career Allow Rate: 30% (120 granted / 397 resolved; -24.8% vs TC avg)
Interview Lift: +26.9% (strong), based on resolved cases with an interview
Avg Prosecution: 4y 11m
Total Applications: 430 across all art units (33 currently pending)

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§112: 23.5% (-16.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 397 resolved cases.
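The "vs TC avg" figures above are percentage-point deltas against the Tech Center average estimate. A quick consistency check (plain Python, figures copied from the table; the dashboard's exact methodology is not stated) shows that every row implies the same ~40% Tech Center baseline:

```python
# Implied Tech Center average per statute: examiner's overcome rate
# minus the displayed "vs TC avg" delta (both in percent).
examiner_rates = {"101": 11.4, "102": 14.3, "103": 45.8, "112": 23.5}
deltas_vs_tc = {"101": -28.6, "102": -25.7, "103": +5.8, "112": -16.5}

implied_tc_avg = {
    statute: round(examiner_rates[statute] - deltas_vs_tc[statute], 1)
    for statute in examiner_rates
}
print(implied_tc_avg)  # every statute implies a ~40.0% TC average estimate
```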

Office Action

Rejections under §102, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the application filed on 12/15/2022. Claims 1-15 are pending and have been examined.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Specification

The disclosure is objected to because of the following informalities: the use of a trade name or a mark used in commerce (e.g., WIFI, BLUETOOTH, XILINX, VIVADO, etc.) has been noted in this application. It should be capitalized (each letter) wherever it appears and be accompanied by the generic terminology or, where appropriate, include a proper symbol indicating use in commerce, such as ™, SM, or ® following the word. Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner that might adversely affect their validity as commercial marks. Appropriate correction is required.

Claim Objections

Claims 1, 8, and 15 are objected to because of the following informalities: in claim 1, it appears that "to first activation layer" in line 6 should be replaced with "to a first activation layer". This similarly applies to claims 8 and 15. Appropriate correction is required.

Claim Rejections - 35 U.S.C. § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3 and 10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. In claim 3, there is lack of antecedent basis for "the respective set of inputs" in line 2, for "the activation layer" in line 3, and for "the respective activation layer" in lines 3-4. The above similarly applies to claim 10.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3, 8, 10, and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Huang et al. ("Iterative Normalization: Beyond Standardization towards Efficient Whitening", https://doi.org/10.48550/arXiv.1904.03441, 4/6/2019, pages 1-12).
As per independent claim 1, Huang teaches a method for training a system using stochastic computation, comprising: at an electronic device (e.g., page 4: "IterNorm does not introduce any extra costs in memory or computation during inference", i.e., an electronic device): receiving a first set of inputs (e.g., page 3: "Input: mini-batch inputs X"); training, using the first set of inputs, a stochastic neural network having a series of activation layers and an output layer (e.g., pages 1 and 3: "Batch Normalization (BN) is ubiquitously employed for accelerating neural network training… introduce Stochastic Normalization Disturbance… activations of each intermediate layer… whiten the activations of each layer within a mini-batch, such that the output of each layer has an isometric diagonal covariance matrix… Input: mini-batch inputs X"), including: before passing the first set of inputs to a first activation layer in the series of activation layers, normalizing each input in the first set of inputs (e.g., page 1: "centering, scaling and decorrelating [i.e., normalizing] the input data… extends the operations from the input layer [i.e., before] to centering and scaling activations of each intermediate layer"); and propagating the outputs from the first activation layer as inputs to a second activation layer in the series of activation layers, wherein the inputs to the second activation layer are normalized before being passed to the second activation layer (e.g., pages 1-3: "centering and scaling activations of each intermediate layer… whiten the activations of each layer within a mini-batch, such that the output of each layer has an isometric diagonal covariance matrix… Iterative Normalization (IterNorm) to further enhance BN with more efficient whitening… Whitening the activation ensures that all dimensions along the eigenvectors have equal importance in the subsequent linear layer… Input: mini-batch inputs"; in other words, Huang iteratively performs normalization at each subsequent activation/intermediate layer, i.e., propagates normalized outputs as inputs to the next layers).

As per claim 3, the rejection of claim 1 is incorporated, and Huang further teaches, for each activation layer in the series of activation layers, normalizing each input in the respective set of inputs for the activation layer before passing the respective set of inputs to the respective activation layer (e.g., pages 1-3: "centering and scaling activations of each intermediate layer… whiten the activations of each layer within a mini-batch, such that the output of each layer has an isometric diagonal covariance matrix… Iterative Normalization (IterNorm) to further enhance BN with more efficient whitening… Whitening the activation ensures that all dimensions along the eigenvectors have equal importance in the subsequent linear layer… Input: mini-batch inputs"; in other words, Huang iteratively performs normalization at each subsequent activation/intermediate layer, i.e., propagates normalized outputs as inputs to the next layers).

Claims 8 and 10 are the device claims corresponding to method claims 1 and 3 and are rejected for the same reasons set forth, and Huang further teaches one or more processors and memory storing one or more programs for execution by the one or more processors (e.g., page 4: "IterNorm does not introduce any extra costs in memory or computation during inference", i.e., implying a computer with memory that stores and runs IterNorm).
Claim 15 is the medium claim corresponding to method claim 1 and is rejected for the same reasons set forth, and Huang further teaches a non-transitory computer-readable storage medium storing one or more programs for execution by an electronic device with one or more processors (e.g., page 4: "IterNorm does not introduce any extra costs in memory or computation during inference", i.e., implying a computer with memory that stores and runs IterNorm).

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. ("Iterative Normalization: Beyond Standardization towards Efficient Whitening", https://doi.org/10.48550/arXiv.1904.03441, 4/6/2019, pages 1-12) in view of Baek et al. (US 20210390660 A1).

As per claim 2, the rejection of claim 1 is incorporated, and Huang further teaches wherein normalizing each input in the first set of inputs comprises normalizing each input to have a value (e.g., page 1: "centering, scaling and decorrelating [i.e., normalizing] the input data"), but does not specifically teach within a range of [-1, +1]. However, Baek teaches normalizing within a range of [-1, +1] (e.g., paragraph 65: "normalize values…to values from −1 to 1"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Huang to include the teachings of Baek because one of ordinary skill in the art would have recognized the benefit of accounting for hardware performance.

Claim 9 is the device claim corresponding to method claim 2 and is rejected for the same reasons set forth.

Claims 4-7 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Huang et al. ("Iterative Normalization: Beyond Standardization towards Efficient Whitening", https://doi.org/10.48550/arXiv.1904.03441, 4/6/2019, pages 1-12) in view of Huang et al. ("Centered Weight Normalization in Accelerating Training of Deep Neural Networks", ICCV, 2017, pages 2803-2811, hereinafter "Huang2") and Eaton et al. (US 20100063949 A1).
As per claim 4, the rejection of claim 1 is incorporated, but Huang does not specifically teach wherein training the stochastic neural network further comprises, during forward propagation, converting each weight to one of two possible values. However, Huang2 teaches training comprising, during forward propagation, converting each weight (e.g., pages 2803 and 2805: "during the course of training…to adjust the norm of the input weight… Forward pass of linear mapping with centered weight normalization… calculate normalized weight"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Huang to include the teachings of Huang2 because one of ordinary skill in the art would have recognized the benefit of reducing the difficulty of learning. The combination, however, does not specifically teach converting to one of two possible values. Eaton teaches converting each weight to one of two possible values (e.g., paragraphs 8-9 and 49: "encoding the percept as a bit pattern… storing an encoded percept… converting the weights to an encoded percept matrix (i.e., matrix 435) using the same decoding scheme discussed above (converting each positive weight to a 1 and each negative (or zero) weight to a -1)"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of the combination to include the teachings of Eaton because one of ordinary skill in the art would have recognized the benefit of decreasing memory requirements.

As per claim 5, the rejection of claim 4 is incorporated, and the combination further teaches wherein each weight is converted to +1 or -1 during forward propagation (e.g., Huang2, pages 2803 and 2805: "during the course of training…to adjust the norm of the input weight… Forward pass of linear mapping with centered weight normalization… calculate normalized weight"; Eaton, paragraphs 8-9 and 49: "converting the weights to an encoded percept matrix (i.e., matrix 435) using the same decoding scheme discussed above (converting each positive weight to a 1 and each negative (or zero) weight to a -1)").

As per claim 6, the rejection of claim 4 is incorporated, and the combination further teaches applying the trained stochastic neural network to a hardware system, wherein each weight in the neural network of the hardware system is 1-bit (e.g., Huang, page 4: "IterNorm does not introduce any extra costs in memory or computation during inference"; Huang2, pages 2803 and 2805: "during the course of training…to adjust the norm of the input weight"; Eaton, paragraphs 8-10 and 49: "encoding the percept as a [i.e., 1] bit pattern… storing an encoded percept… a processor and a memory containing a machine learning application which when executed by the processor is configured to perform… converting the weights to an encoded percept matrix (i.e., matrix 435) using the same decoding scheme discussed above (converting each positive weight to a 1 and each negative (or zero) weight to a -1)"; note that two values are representable in one bit).

As per claim 7, the rejection of claim 6 is incorporated, and the combination further teaches wherein the hardware system converts weight values without using a random number generator (e.g., Eaton, paragraphs 8-10 and 49: "a processor and a memory containing a machine learning application which when executed by the processor is configured to perform… converting the weights to an encoded percept matrix (i.e., matrix 435) using the same decoding scheme discussed above (converting each positive weight to a 1 and each negative (or zero) weight to a -1)", i.e., not a random number generator).
Claims 11-14 are the device claims corresponding to method claims 4-7 and are rejected for the same reasons set forth.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For example, Zhang et al. (US 20220121909 A1) teaches "implementing at least a stochastic whitening batch normalization layer (SWBN) layer between a first neural network layer and a second neural network layer in a neural network, wherein the first neural network layer generates first layer outputs comprising a plurality of components… weight normalization, layer normalization" (e.g., paragraphs 26 and 57).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM WONG, whose telephone number is (571) 270-1399. The examiner can normally be reached Monday-Friday, 9am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TAMARA KYLE, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/W.W/ Examiner, Art Unit 2144, 01/08/2026
/TAMARA T KYLE/ Supervisory Patent Examiner, Art Unit 2144
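For readers less familiar with the technique at issue in claims 4-7, the deterministic binarization the Office Action reads onto Eaton (each positive weight converted to +1, each negative or zero weight to -1, with no random number generator) can be sketched as below. The helper name is hypothetical and the code is illustrative only; it is not taken from any cited reference.

```python
# Deterministic sign-based weight binarization: every positive weight
# maps to +1 and every negative (or zero) weight maps to -1, so each
# resulting weight needs only one bit and no randomness is involved.
def binarize_weights(weights):
    return [[1 if w > 0 else -1 for w in row] for row in weights]

weights = [[0.37, -0.12], [0.0, 2.5]]
print(binarize_weights(weights))  # [[1, -1], [-1, 1]]
```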

Prosecution Timeline

Dec 15, 2022
Application Filed
Jan 09, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572252: CONTROLLING A 2D SCREEN INTERFACE APPLICATION IN A MIXED REALITY APPLICATION (2y 5m to grant; granted Mar 10, 2026)
Patent 12530707: CUSTOMER EFFORT EVALUATION IN A CONTACT CENTER SYSTEM (2y 5m to grant; granted Jan 20, 2026)
Patent 12511846: XR DEVICE-BASED TOOL FOR CROSS-PLATFORM CONTENT CREATION AND DISPLAY (2y 5m to grant; granted Dec 30, 2025)
Patent 12504944: METHODS AND USER INTERFACES FOR SHARING AUDIO (2y 5m to grant; granted Dec 23, 2025)
Patent 12423561: METHOD AND APPARATUS FOR KEEPING STATISTICAL INFERENCE ACCURACY WITH 8-BIT WINOGRAD CONVOLUTION (2y 5m to grant; granted Sep 23, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 30%
With Interview: 57% (+26.9%)
Median Time to Grant: 4y 11m
PTA Risk: Low
Based on 397 resolved cases by this examiner. Grant probability derived from career allow rate.
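The headline projections follow from simple arithmetic on the examiner's career data shown earlier: the allow rate is granted over resolved cases, and the interview-adjusted figure adds the +26.9-point interview lift. The dashboard's underlying model is not disclosed, so this sketch is only a consistency check on the displayed figures, not the actual methodology:

```python
# Reproduce the displayed grant-probability figures from the raw counts.
granted, resolved = 120, 397
allow_rate = 100 * granted / resolved   # percent
interview_lift = 26.9                   # percentage points

print(round(allow_rate))                   # 30 -> "Grant Probability: 30%"
print(round(allow_rate + interview_lift))  # 57 -> "With Interview: 57%"
```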
