Prosecution Insights
Last updated: April 19, 2026
Application No. 18/135,657

AUGMENTATION AND SUPPRESSION USING INTENTIONALLY ADDED PREDEFINED BIAS

Non-Final OA §103

Filed: Apr 17, 2023
Examiner: Benouraida, Amina Moreno
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: EGX Acquisition Corp. dba Edgeworx
OA Round: 1 (Non-Final)
Grant Probability: 0% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 2 resolved; -55.0% vs TC avg). Grants only 0% of cases.
Interview Lift: +0.0% (minimal)
Avg Prosecution: 3y 3m (typical timeline)
Career History: 18 total applications across all art units; 16 currently pending

Statute-Specific Performance

§101: 28.1% (-11.9% vs TC avg)
§103: 51.7% (+11.7% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)

Based on career data from 2 resolved cases; TC average is an estimate.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al., Non-Patent Literature (“Learning Not to Learn: Training Deep Neural Networks with Biased Data”), in view of Paul et al. (US11568241B2).
Regarding claim 1 and analogous claims 8 and 14:

Kim teaches: training a neural network using a training dataset having content, wherein at least a subset of the content comprises intentionally added predefined bias (Section 3.1, “The objective of our work is to train a network [training a neural network] that performs robustly with unbiased data during test time, even though the network is trained with biased data [training dataset having content]” … Section 4.2, paragraph 2, “The constructed dogs and cats dataset is shown in Figure 3 (b), with each set containing a color bias. For this dataset, the [subset] bias set B = {dark, bright}” … Section 4.1, “We planted a color bias into the MNIST dataset” (i.e., wherein the subset ‘set B’ is interpreted, under the broadest reasonable interpretation, as containing content with a color bias, and “planted” is interpreted as intentionally added predefined bias)); and wherein the intentionally added predefined bias (Section 4.1, “We planted a color bias [intentionally added predefined bias] into the MNIST dataset [14]. To synthesize the color bias, we selected ten distinct colors”).

Kim does not explicitly teach: memory configured to store program instructions, wherein, when executed by the computation device, the program instructions cause the computer system to perform one or more operations comprising: modulates an output of the neural network.
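For readers less familiar with the cited reference, the colored-MNIST bias-planting procedure Kim describes in the passages quoted above (assigning each digit class a distinct mean color, then sampling each training image's color from a normal distribution around its class's mean color) can be sketched as follows. The function name, the specific color values, and the variance are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def plant_color_bias(images, labels, mean_colors, variance=0.02, rng=None):
    """Colorize grayscale digits so color correlates with class label.

    images: (N, H, W) grayscale intensities in [0, 1]; labels: (N,) ints.
    mean_colors: (10, 3) RGB mean color assigned to each digit class.
    Each image is tinted with a color sampled from a normal distribution
    around its class's mean color -- the "intentionally added predefined
    bias" described in Section 4.1 of Kim.
    """
    rng = rng or np.random.default_rng(0)
    colored = np.zeros(images.shape + (3,))
    for i, (img, y) in enumerate(zip(images, labels)):
        # Sample this image's color near its class mean, clipped to [0, 1].
        color = np.clip(rng.normal(mean_colors[y], np.sqrt(variance)), 0, 1)
        colored[i] = img[..., None] * color  # tint the digit strokes
    return colored

# Ten visually distinct mean colors, one per digit class (illustrative).
mean_colors = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1],
    [0, 1, 1], [1, 0.5, 0], [0.5, 0, 1], [0, 0.5, 0.5], [0.6, 0.3, 0.1],
], dtype=float)
```

Because the sampled color depends only on the label, color and digit class are highly correlated in the resulting training set, which is the bias the network must be trained not to learn.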
Paul teaches: memory configured to store program instructions, wherein, when executed by the computation device, the program instructions cause the computer system to perform one or more operations comprising: (Col 14, lines 20-41, “computer system 700 includes a processor 702, a main memory 704 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 706 (e.g., flash memory, static random access memory (SRAM), etc.), and a secondary memory 718 (e.g., a data storage device), which communicate with each other via a bus 730. Processor 702 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processor 702 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processor 702 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. Processor 702 is configured to execute the processing logic 726 for performing the operations described herein.”); and

modulates an output of the neural network (Col 4, lines 18-29, “Further, in a spiking neural network, input connections may be stimulatory or inhibitory (i.e., wherein activation or suppression of one or more synapses is interpreted as stimulatory ‘activation’, inhibitory ‘suppression’ of input connections ‘synapses’). A node's membrane potential may also be affected by changes in the node's own internal state (“leakage”). The artificial neuron “fires” (e.g., produces an output spike) when its membrane potential crosses a firing threshold. Thus, the effect of inputs on a spiking neural network node operates to increase or decrease its internal membrane potential, making the node more or less likely to fire (i.e., wherein modulated output is interpreted as a spike)”).

Paul and Kim are both related to the same field of endeavor (i.e., machine learning). In view of the teachings of Paul, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Paul to Kim before the effective filing date of the claimed invention in order to improve the efficiency of training a neural network by using content with intentionally added bias (Paul, Col 1, paragraph 4, “Neural networks are configured to implement features of “learning”, which generally is used to adjust the weights of respective connections between the processing elements that provide particular pathways within the neural network and processing outcomes. Existing approaches for implementing learning in neural networks have involved various aspects of unsupervised learning (e.g., techniques to infer a potential solution from unclassified training data, such as through clustering or anomaly detection), supervised learning (e.g., techniques to infer a potential solution from classified training data), and reinforcement learning (e.g., techniques to identify a potential solution based on maximizing a reward). However, each of these learning techniques are complex to implement, and extensive supervision or validation is often required to ensure the accuracy of the changes that are caused in the neural network.”).

Regarding claim 2 and analogous claims 9 and 15:

Kim, as modified by Paul, teaches the system of claim 1.
Kim does not explicitly teach: wherein the modulated output corresponds to activation or suppression of one or more synapses in the neural network.

Paul further teaches: wherein the modulated output corresponds to activation or suppression of one or more synapses in the neural network (Col 4, lines 18-29, “Further, in a spiking neural network, input connections may be stimulatory or inhibitory (i.e., wherein activation or suppression of one or more synapses is interpreted as stimulatory ‘activation’, inhibitory ‘suppression’ of input connections ‘synapses’). A node's membrane potential may also be affected by changes in the node's own internal state (“leakage”). The artificial neuron “fires” (e.g., produces an output spike) when its membrane potential crosses a firing threshold. Thus, the effect of inputs on a spiking neural network node operates to increase or decrease its internal membrane potential, making the node more or less likely to fire (i.e., wherein modulated output is interpreted as a spike)”).

The motivation for claim 2 is the same motivation for claim 1.

Regarding claim 3 and analogous claims 10 and 16:

Kim, as modified by Paul, teaches the system of claim 2. Kim does not explicitly teach: wherein the activation or suppression adjusts weights associated with the one or more synapses for a predefined time interval.

Paul further teaches: wherein the activation or suppression adjusts weights associated with the one or more synapses for a predefined time interval (Col 4, lines 39-51, “A spike train is a temporal sequence of discrete spike events, which provides a set of times specifying at which time a node fires [predefined time interval]. As shown, the spike train xi is produced by the node before the synapse (e.g., node 142), and the spike train xi is evaluated for processing according to the characteristics of a synapse 144. For example, the synapse may apply one or more weights, e.g., weight wjj, which are used in evaluating the data from the spike train xi. Input spikes from the spike train xi enter a synapse such as synapse 144 which has a weight wjj. This weight scales what the impact of the presynaptic spike has on the post-synaptic node (e.g., node 146)” … Col 8, lines 36-38, “nodes of the spiking neural network (i.e., wherein the activation or suppression is interpreted as the spiking) to train the spike neural network, during a first time period, to determine updates to weights [adjusts weights] of respective synapses.”).

The motivation for claim 3 is the same motivation for claim 1.

Regarding claim 4 and analogous claims 11 and 17:

Kim, as modified by Paul, teaches the system of claim 1. Kim further teaches: wherein the intentionally added predefined bias comprises additional content that leverages associated learning with one or more features in at least the subset of the content and that are different from the additional content (Section 4.1, “We planted a color bias [intentionally added predefined bias] into the MNIST dataset [14]. To synthesize the color bias, we selected ten distinct colors (i.e., wherein the selected distinct colors are interpreted as ‘additional content’) and assigned them to each digit category as their mean color. Then, for each training image, we randomly sampled a color from the normal distribution of the corresponding mean color and provided variance, and colorized the digit. Samples from the colored MNIST, where the images in the training set show that the color and digit class are highly correlated [leverages associated learning] (i.e., wherein leverages associated learning with features is interpreted as the highly correlated features ‘color’). In the case of color bias, the bias itself is completely independent [different from] of the categories (i.e., wherein ‘different from’ is interpreted as the color bias being independent of the categories ‘shapes’). In other words, an effort to unlearn the bias is purely beneficial for digit categorization.
Thus, removing color bias from feature embedding improved the performance significantly because the network is able to focus on learning shape features.”).

The motivation for claim 4 is the same motivation for claim 1.

Regarding claim 5 and analogous claims 12 and 18:

Kim, as modified by Paul, teaches the system of claim 1. Kim further teaches: wherein the operations comprise obtaining the content; and wherein obtaining the content comprises: (Section 4.1, “We planted a color bias into the MNIST dataset [14]. To synthesize the color bias, we selected ten distinct colors and assigned them to each digit category as their mean color. Then, for each training image, we randomly sampled a color from the normal distribution of the corresponding mean color and provided variance, and colorized the digit.” (i.e., wherein under the broadest reasonable interpretation (BRI) ‘obtaining content’ is interpreted as the created dataset with the bias)) or generating the content (Section 4.1, “We planted a color bias into the MNIST dataset [14]. To synthesize the color bias, we selected ten distinct colors and assigned them to each digit category as their mean color. Then, for each training image, we randomly sampled a color from the normal distribution of the corresponding mean color and provided variance, and colorized the digit.” (i.e., wherein under the broadest reasonable interpretation (BRI) ‘generating content’ is interpreted as the created dataset with the bias)).

Kim does not explicitly teach: accessing the content in memory; receiving the content from an electronic device.

Paul further teaches: accessing the content in memory; receiving the content from an electronic device (Col 14, lines 20-60, “The exemplary computer system 700 includes a processor 702, a main memory 704 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 706 (e.g., flash memory, static random access memory (SRAM), etc.), and a secondary memory 718 (e.g., a data storage device), which communicate with each other via a bus 730 (i.e., wherein under the broadest reasonable interpretation (BRI) the content is stored in memory and later accessed)” … “The software 722 may also reside, completely or at least partially, within the main memory 704 and/or within the processor 702 during execution thereof by the computer system 700, the main memory [in memory] 704 and the processor 702 also constituting machine-readable storage media. The software 722 may further be transmitted or received over a network 720 via the network interface device 708 [electronic device] (i.e., wherein receiving content from a device)”).

The motivation for claim 5 is the same motivation for claim 1.

Regarding claim 6 and analogous claims 13 and 19:

Kim, as modified by Paul, teaches the system of claim 5. Kim further teaches: wherein generating the content comprises adding the intentionally added predefined bias to at least the subset of the content (Section 4.1, “We planted a color bias [intentionally added predefined bias] into the MNIST dataset [14]. To synthesize the color bias, we selected ten distinct colors and assigned them to each digit category as their mean color. Then, for each training image, we randomly sampled a color from the normal distribution of the corresponding mean color and provided variance, and colorized the digit.” (i.e., wherein under the broadest reasonable interpretation (BRI) ‘generating content’ is interpreted as the created dataset with the bias)).

The motivation for claim 6 is the same motivation for claim 1.

Regarding claim 7 and analogous claim 20:

Kim, as modified by Paul, teaches the system of claim 5. Kim further teaches: wherein generating the content comprises selecting the intentionally added predefined bias based at least in part on at least the subset of the content (Section 4.1, “We planted a color bias [intentionally added predefined bias] into the MNIST dataset [14]. To synthesize the color bias, we selected [selecting] ten distinct colors and assigned them to each digit category as their mean color. Then, for each training image, we randomly sampled a color from the normal distribution of the corresponding mean color and provided variance, and colorized the digit.” (i.e., wherein under the broadest reasonable interpretation (BRI) ‘generating content’ is interpreted as the created dataset with the bias)).

The motivation for claim 7 is the same motivation for claim 1.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMINA BENOURAIDA whose telephone number is (571) 272-4340. The examiner can normally be reached Monday-Friday, 8:30am-5pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J. Huntley, can be reached at (303) 297-4307.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMINA MORENO BENOURAIDA/
Examiner, Art Unit 2129

/MICHAEL J HUNTLEY/
Supervisory Patent Examiner, Art Unit 2129
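The spiking-neuron mechanism Paul describes in the rejection above (a membrane potential that is raised by stimulatory inputs, lowered by inhibitory inputs, decays through leakage, and produces an output spike when it crosses a firing threshold, with each synaptic weight scaling the impact of its presynaptic spike) can be sketched as a minimal leaky integrate-and-fire update. The function name and parameter values are illustrative assumptions, not taken from the patent.

```python
def lif_step(potential, inputs, weights, leak=0.9, threshold=1.0):
    """One update of a leaky integrate-and-fire node.

    inputs: presynaptic spikes (0 or 1); weights: synaptic weights,
    positive = stimulatory, negative = inhibitory.  Each weight scales
    the impact of its presynaptic spike on this node's membrane
    potential.  Returns (new_potential, spike): the node "fires"
    (spike = 1) and resets when its potential crosses the threshold.
    """
    # Leakage decays the stored potential; weighted spikes modulate it.
    potential = leak * potential + sum(w * s for w, s in zip(weights, inputs))
    if potential >= threshold:
        return 0.0, 1  # fire and reset
    return potential, 0
```

In this sketch the "modulated output" the claim recites corresponds to whether the node spikes: stimulatory (positive-weight) inputs make firing more likely, inhibitory (negative-weight) inputs make it less likely.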

Prosecution Timeline

Apr 17, 2023: Application Filed
Dec 13, 2025: Non-Final Rejection — §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
