Prosecution Insights
Last updated: April 19, 2026
Application No. 18/196,412

Hybrid Fixed/Flexible Neural Network Architecture

Non-Final OA: §103
Filed: May 11, 2023
Examiner: TRAN, TAN H
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Polyn Technology Limited
OA Round: 1 (Non-Final)
Grant Probability: 60% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 60% (grants 60% of resolved cases; 184 granted / 307 resolved; +4.9% vs TC avg)
Interview Lift: +31.8% (strong; resolved cases with interview vs without)
Typical Timeline: 3y 6m average prosecution; 60 applications currently pending
Career History: 367 total applications across all art units

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Deltas shown vs Tech Center average estimates • Based on career data from 307 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This action is in response to the original filing on 05/11/2023. Claims 1-17 are pending and have been considered below.

Election/Restrictions

3. Claims 18-23 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to nonelected Groups II and III. Election was made without traverse in the telephone conversation with the Applicant on 01/22/2026.

I. Claims 1-17, drawn to a hardware apparatus (mixed-signal neural hardware and classifier/regression circuit), classified in G06N3/06, G06N3/065, G06N3/063, G06N3/0455.

II. Claims 18-22, drawn to a method of selecting where to split a neural network into fixed and flexible portions by sequentially choosing a candidate split layer, generating embeddings at the candidate layer, training a regression model mapping embeddings to outputs, evaluating accuracy, and repeating with a new set of layers until a predetermined accuracy threshold is met, classified in G06N3/08, G06N20/00.

III. Claim 23, drawn to a method of selecting a splitting layer by defining a set of candidate layers and, for each candidate layer, generating test embeddings, training a classifier, computing a respective aggregate error, and selecting the splitting layer as the candidate layer having the smallest aggregate error, classified in G06N3/04.

Information Disclosure Statement

4. The information disclosure statements (IDSs) submitted on 10/13/2023, 10/30/2024, and 06/26/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections – 35 USC § 103

5. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1-3, 6, 9, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Busch et al. (U.S. Patent Application Pub. No. US 20190034790 A1) in view of Zhang et al. (U.S. Patent Application Pub. No. US 20200053299 A1).

Claim 1: Busch teaches a hardware apparatus (i.e., a neuromorphic integrated circuit; para. [0007]) comprising: an analog circuit corresponding to a portion of a trained neural network (i.e., an analog multiplier array 200; a neural network can be disposed in the analog multiplier array 200 in a memory sector of a neuromorphic IC; since the analog multiplier array 200 is an analog circuit, input and output currents can vary in a continuous range instead of simply on or off, which is useful for storing weights of the neural network; the number of analog layers can be programmed with an initial set of weights as set forth herein for one or more classification problems or one or more regression problems; para. [0049], [0050], [0056]); that is, the trained parameters (weights) are physically programmed into analog layers/multiplier arrays; the analog circuit configured to: obtain one or more analog signals (i.e., word-line analogs are driven by analog input signals; para. [0059]) from one or more sensors (i.e., neuromorphic ICs such as the neuromorphic IC 102 can be deployed in toys, sensors, wearables, augmented reality ("AR") systems or devices, virtual reality ("VR") systems or devices, mobile systems or devices, appliances, Internet-of-things ("IoT") devices, or hearing systems or devices; para. [0048]); and compute an analog output based on the one or more analog signals (i.e., output current is routed as an analog signal to a next layer; the weights are multiplied by input currents to provide output currents that are combined to arrive at a decision of the neural network; para. [0049], [0050]); that is, analog input currents are processed by the analog multiplier array to generate analog output currents, and those outputs are routed forward as analog signals; and a classifier or regression circuit, coupled to the analog circuit (i.e., FIG. 5 illustrates a multi-layered hybrid analog-digital neural network 500 in accordance with some embodiments; as shown, the hybrid neural network 500 includes a number of data inputs, a number of analog layers, a digital layer, and a number of data outputs; para. [0056]), configured to: obtain an input signal based on the analog output (i.e., output current is routed as an analog signal to a next layer; para. [0049]); and apply a machine learning model to the input signal to either (i) classify the input signal according to a plurality of discrete categories (i.e., the decision making by the neural network includes predicting discrete classes for one or more classification problems; the digital layer, through its configuration for programmatically compensating for weight drifts of the synaptic weights of the neural network, is further configured to maintain a correctly projected decision boundary for predicting the discrete classes by the neural network; para. [0011]) or (ii) assign an output on a predefined continuous scale (i.e., the decision making by the neural network includes predicting continuous quantities for one or more regression problems; the digital layer, through its configuration for programmatically compensating for weight drifts of the synaptic weights of the neural network, is further configured to maintain a correctly fitted regression line for predicting the continuous quantities by the neural network; para. [0010]).

Busch does not explicitly teach obtaining the one or more analog signals from one or more sensors. However, Zhang teaches obtaining one or more analog signals from one or more sensors (i.e., the one or more embodiments of the present invention facilitate coupling the image sensor device to feed analog signals, in the form of voltages, directly to the cross-point array; a CMOS sensor can provide analog inputs for analog neural networks; para. [0034], [0035]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Busch to include the feature of Zhang. One would have been motivated to make this modification because it provides sensor-originated analog voltage signals directly to the analog neural network, thereby reducing power and latency.

Claim 2: Busch and Zhang teach the hardware apparatus of claim 1. Busch further teaches wherein the classifier or regression circuit comprises a digital circuit (i.e., FIG. 5 illustrates a multi-layered hybrid analog-digital neural network 500 that includes a number of data inputs, a number of analog layers, a digital layer, and a number of data outputs; para. [0056]). Busch does not explicitly teach an analog-to-digital converter coupled to the analog circuit and configured to receive and convert the analog output to a digital input. However, Zhang further teaches an analog-to-digital converter coupled to the analog circuit and configured to receive and convert the analog output to a digital input (i.e., image sensor 750 provides analog data that is digitalized by an A/D converter, and the resulting digital data is forwarded to the classifier 790; para. [0061], [0076]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Busch to include the feature of Zhang. One would have been motivated to make this modification because it provides a digital input suitable for downstream classifier execution.

Claim 3: Busch and Zhang teach the hardware apparatus of claim 1. Busch further teaches wherein the analog output comprises a set of latent embeddings (i.e., output current is routed as an analog signal to a next layer rather than over bit lines going to a sense-amp/comparator to be converted to a bit; word-line analogs are driven by analog input signals; the weights are multiplied by input currents to provide output currents; para. [0049], [0056]) and the classifier or regression circuit applies the machine learning model to the latent embeddings (i.e., the hybrid neural network 500 includes a number of data inputs, a number of analog layers, a digital layer, and a number of data outputs; the weights are multiplied by input currents to provide output currents that are combined to arrive at a decision of the hybrid neural network 500 by means of one or more of the number of data outputs; decision making for the regression problems; para. [0056]).

Claim 6: Busch and Zhang teach the hardware apparatus of claim 1. Busch further teaches wherein the classifier or regression circuit comprises one or more digital computing units selected from the group consisting of: CPUs, GPUs, RISCs, FPGAs, and ASICs (i.e., examples of such circuitry may include, but are not limited or restricted to, a microprocessor, one or more processor cores, a programmable gate array, a microcontroller, a controller, an application specific integrated circuit, wireless receiver, transmitter and/or transceiver circuitry, semiconductor memory, or combinatorial logic; para. [0007]-[0011], [0039]).

Claim 9: Busch and Zhang teach the hardware apparatus of claim 1. Busch does not explicitly teach a network of memristors. However, Zhang further teaches a network of memristors (i.e., crosspoint devices, in effect, function as the ANN's weighted connections between neurons; nanoscale devices, for example memristors having "ideal" conduction state switching characteristics, are often used as the crosspoint devices in order to emulate synaptic plasticity with high energy efficiency; para. [0032]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Busch to include the feature of Zhang. One would have been motivated to make this modification because it emulates synaptic plasticity with high energy efficiency.

Claim 11: Busch and Zhang teach the hardware apparatus of claim 1. Busch further teaches wherein the classifier or regression circuit is reconfigurable (i.e., the digital layer of the hybrid neural network 500 can be programmed through a partial digital retraining process to correct or compensate for the weight drifts; para. [0057]) to train the machine learning model for a new set of inputs that is different from a set of inputs used to train the trained neural network (i.e., such a scenario can occur after i) partial digital retraining of the digital layer of the hybrid neural network 500 to compensate for the foregoing weight drifts and ii) subsequently testing the hybrid neural network 500 with the second set of test images; para. [0061], [0062]).

Claim 12: Busch and Zhang teach the hardware apparatus of claim 1. Busch does not explicitly teach wherein the one or more sensors include an analog sensor selected from the group consisting of: a microphone, a piezoelectric sensor, a PPG sensor, an IMU sensor, a chemical sensor, a Lidar sensor, a Radar sensor, and a CMOS matrix sensor. However, Zhang further teaches such an analog sensor (i.e., FIG. 8 depicts a block diagram to illustrate a structure of a typical image sensor; the image sensor 750 includes a matrix of pixel sensors; para. [0062]-[0065]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Busch to include the feature of Zhang. One would have been motivated to make this modification because it applies the same known ML pipeline to different analog sensor modalities.

7. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Busch in view of Zhang and further in view of Pernisz (U.S. Patent No. 5422982 A).

Claim 4: Busch and Zhang teach the hardware apparatus of claim 1. Busch does not explicitly teach wherein: the analog circuit comprises a plurality of operational amplifiers and a plurality of resistors; resistance values of the plurality of resistors are based on weights of neurons in the portion of the trained neural network; and the plurality of resistors is configured to connect the plurality of operational amplifiers. However, Pernisz teaches the analog circuit comprises a plurality of operational amplifiers (i.e., the neurons have a non-linear input-output transfer function as shown in FIGS. 8A-C for two different cases; this is realized with an operational amplifier whose output voltage equals the sum of the currents flowing into its input (trans-impedance amplifier); col. 7, lines 47-55) and a plurality of resistors (i.e., FIG. 7 illustrates a matrix 100 of variable resistors 60 used as adaptive weight synapses in a typical feedforward connection between two layers of neuronal threshold units 90; col. 7, lines 35-40); resistance values of the plurality of resistors are based on weights of neurons in the portion of the trained neural network (i.e., the arrangement includes a symbolic representation 90 for the neuron (trans-impedance amplifier); the resistance of each resistor 60 can be described as its weight w_ij with which it allows neuron j to contribute to the output s_i of neuron i in layer k, in accordance with the equation s_i = Σ_j w_ij s_j; col. 7, lines 35-46); and the plurality of resistors is configured to connect the plurality of operational amplifiers (i.e., the output of the neurons is fed back to the input through the synaptic weights in the array 100, which are implemented by the variable resistive elements 60 of the present invention; the network is fully connected, with each output 102 connected to all inputs 104; col. 8, lines 10-20). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busch and Zhang to include the feature of Pernisz. One would have been motivated to make this modification because it provides a compact way to implement continuous-valued weight strengths and weighted summation in analog.

8. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Busch in view of Zhang, Pernisz, and further in view of Eshun (U.S. Patent Application Pub. No. US 20050258513 A1).

Claim 5: Busch, Zhang, and Pernisz teach the hardware apparatus of claim 4. Busch does not explicitly teach sputtered resistors on a back-end-of-line (BEOL). However, Eshun teaches sputtered resistors (i.e., in FIG. 1, layers 50, 55 are formed on upper surface 40 by processes known in the art such as CVD or sputter deposition; layer 50 comprises a thin film of an electrically resistive material such as, for example, TaN; a portion of layer 50 forms the thin-film resistor described hereinafter; para. [0011]) on a back-end-of-line (BEOL) (i.e., referring to FIG. 1, a substrate 10 is provided including front-end-of-line (FEOL) levels 15 and BEOL levels 20 formed thereupon; preferably, substrate 10 comprises a semiconductor material such as, for example, silicon, silicon-on-insulator (SOI), silicon-germanium (SiGe), or gallium arsenide (GaAs); FEOL levels 15 (not shown in detail) include devices such as, for example, transistors (i.e., field effect, bipolar junction), capacitors, resistors, diodes, varactors, and the like, which are connected by interconnects in subsequently formed BEOL levels 20 to form an integrated circuit; para. [0010]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busch, Zhang, and Pernisz to include the feature of Eshun. One would have been motivated to make this modification because it provides compact, manufacturable resistors with predictable resistance values and good matching.

9. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Busch in view of Zhang and further in view of Cao et al. (U.S. Patent Application Pub. No. US 20170060224 A1).

Claim 7: Busch and Zhang teach the hardware apparatus of claim 1. Busch further teaches wherein the classifier or regression circuit comprises a processor that is further configured to perform as a digital controller, providing signals to one or more interfaces (i.e., examples of such circuitry may include, but are not limited or restricted to, a microprocessor, one or more processor cores, a programmable gate array, a microcontroller, a controller, an application specific integrated circuit, wireless receiver, transmitter and/or transceiver circuitry, semiconductor memory, or combinatorial logic; para. [0007]-[0011], [0039]). Busch does not explicitly teach multiplexing power within the hardware apparatus. However, Cao teaches multiplexing power within the hardware apparatus (i.e., each power multiplexer may select for the active-mode MX power rail while the corresponding subsystem operates in an active mode; para. [0008]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busch and Zhang to include the feature of Cao. One would have been motivated to make this modification because it reduces energy consumption in sensor hardware by power-managing subsystems independently.

10. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Busch in view of Zhang and further in view of Dasalukunte et al. (U.S. Patent Application Pub. No. US 20210150328 A1).

Claim 8: Busch and Zhang teach the hardware apparatus of claim 1. Busch does not explicitly teach a compute-in-memory component and one or more programmable memory tiles. Zhang further teaches a compute-in-memory component (i.e., one or more embodiments of the invention provide a programmable resistive crosspoint component, referred to herein as a crosspoint device, which provides local data storage functionality and local data processing functionality; in other words, when performing data processing, the value stored at each crosspoint device is updated in parallel and locally, which eliminates the need to move relevant data in and out of a processor and a separate storage element; para. [0056]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Busch to include the feature of Zhang. One would have been motivated to make this modification because it enables implementations that optimize the speed, efficiency, and power consumption of the ANN. Further, Dasalukunte teaches a compute-in-memory component and one or more programmable memory tiles (i.e., the analog router fuses well with analog compute-in-memory (CiM) tiles and reduces data conversions between the tiles, resulting in lower-power designs relative to conventional solutions; para. [0013]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busch and Zhang to include the feature of Dasalukunte. One would have been motivated to make this modification because it reduces data conversions between the tiles, resulting in lower-power designs relative to conventional solutions.

11. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Busch in view of Zhang and further in view of Alam et al. (U.S. Patent Application Pub. No. US 20220027718 A1).

Claim 10: Busch and Zhang teach the hardware apparatus of claim 1. Busch does not explicitly teach an autoencoder comprising an encoder portion, having a plurality of hidden layers that compute a respective representation of each input vector in a lower-dimensional space than an input space of the respective input vector, and a decoder portion that reconstructs the respective input vector; the analog circuit corresponding to the encoder portion; and the classifier or regression circuit corresponding to the decoder portion. However, Alam teaches wherein the trained neural network is an autoencoder comprising an encoder portion, having a plurality of hidden layers that compute a respective representation of each input vector in a lower-dimensional space than an input space of the respective input vector (i.e., FIG. 4 depicts that the application of the second weighted matrix via the 90 extraction neurons 440(a-n) is applied to the compressed data via the 10 compressed neurons 430(a-n) and is then decompressed from the 10 compressed neurons 430(a-n) to 41 decompressed neurons 450(a-n), such that the 41 decompressed neurons are a decompressed representation of the second weighted matrix applied to the compressed data of the 10 compressed neurons 430(a-n); para. [0072]), and a decoder portion that reconstructs the respective input vector (i.e., the application of the second weighted matrix to the compressed neurons 430(a-n), which includes identical values of the weights 470(a-n) as the weights 460(a-n) included in the first weighted matrix applied to the input neurons 410(a-n), enables the autoencoder neural network configuration 400 to determine whether the output data as output by the decompressed neurons 450(a-n) replicates the input data input into the input neurons 410(a-n) within a threshold; para. [0073], [0074]); the analog circuit corresponds to the encoder portion (i.e., the first memristor crossbar configuration 480 in executing the compression operations; para. [0107]); and the classifier or regression circuit corresponds to the decoder portion (i.e., the subsequent decompression of the compressed neurons 430(a-n) with the application of the weights 470(a-n) included in the second weighted matrix should then result in output data that replicates the input data; para. [0074]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busch and Zhang to include the feature of Alam. One would have been motivated to make this modification because it provides an effective means to decrease the space and the power required by conventional neuromorphic computing.

12. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Busch in view of Zhang and further in view of Srinivasan et al. (U.S. Patent Application Pub. No. US 20140038674 A1).

Claim 13: Busch and Zhang teach the hardware apparatus of claim 1. Busch does not explicitly teach generating embeddings that encode types of human activity, where the analog signal comprises three-axis accelerometer signals. However, Srinivasan teaches wherein the analog circuit is configured to generate embeddings that encode types of human activity (i.e., the feature extraction unit 240 is configured to transform each sampling window into a feature vector; the decision tree classification unit 250 identifies user activity for each sampling window based on features extracted for the sampling window and the decision tree model 260; para. [0038], [0039]), and the analog signal comprises three-axis accelerometer signals (i.e., the sampling unit 210 obtains tri-axial accelerometer sample data from the accelerometer 110; para. [0035]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busch and Zhang to include the feature of Srinivasan. One would have been motivated to make this modification because it reduces power consumption for activity recognition.

13. Claims 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Busch in view of Zhang and further in view of Marinelli (U.S. Patent No. US 11429900 B1).

Claim 14: Busch and Zhang teach the hardware apparatus of claim 1. Busch does not explicitly teach generating compressed data that encodes vibration sensor data based on vibration features from vibration sensors, where the signal comprises three-axis accelerometer signals. However, Marinelli teaches generating compressed data (i.e., generates a compressed data set including a selected portion of the frequency representation; sensor device 100 transmits the compressed data set to data manager 435; data manager 435 generates reconstructed vibration data based on the compressed data set; col. 10, lines 28-36) that encodes vibration sensor data based on vibration features from vibration sensors (i.e., the sensor device detects vibrations of the mechanical machine; col. 1, lines 50-60), where the signal comprises three-axis accelerometer signals (i.e., sensor device 100 may include one or more of the following: a triaxial 6.6 kHz MEMS accelerometer to sample vibration; col. 9, lines 60-65). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busch and Zhang to include the feature of Marinelli. One would have been motivated to make this modification because it reduces bandwidth, power, and storage at the sensor.

Claim 15: Busch, Zhang, and Marinelli teach the hardware apparatus of claim 14. Busch does not explicitly teach wherein the vibration sensors are configured to be placed in machinery, cars, trucks, railway cars, wind turbines, or oil and gas pumps, and the signal is obtained wirelessly from the vibration sensors. Marinelli further teaches wherein the vibration sensors are configured to be so placed (i.e., machine 50 is a piece of industrial machinery, for example, a machine in a manufacturing facility, a power generation facility, a distribution center, etc.; sensor device 100 is attached to machine 50; consequently, when machine 50 vibrates while in operation, sensor device 100 generates vibration data representing the vibrations of machine 50; col. 5, lines 15-22), and the analog signal is obtained wirelessly from the vibration sensors (i.e., each sensor device 100 may transmit vibration data wirelessly to gateway device 420, which transmits the vibration data to data manager 435 via network 405; col. 6, lines 7-10). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busch and Zhang to include the feature of Marinelli. One would have been motivated to make this modification because it reduces bandwidth, power, and storage at the sensor.

14. Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Busch in view of Zhang and further in view of Huang et al. (U.S. Patent Application Pub. No. US 20230335118 A1).

Claim 16: Busch and Zhang teach the hardware apparatus of claim 1. Busch does not explicitly teach generating embeddings that encode a first set of keywords, with the classifier or regression circuit configured to be retrained for a second set of keywords distinct from the first set. However, Huang teaches wherein the circuit is configured to generate embeddings that encode a first set of keywords (i.e., inputting the extracted vector representation to a trained encoding model to generate an embedding representation of the enrollment audio; inputting the extracted one or more vector representations of the one or more portions to the encoding model to generate one or more embedding representations of the input audio; determining whether the input audio comprises the enrolled wake word based on a comparison between the one or more embedding representations of the input audio and the stored embedding representation; and triggering the device based on the determination that the input audio comprises the enrolled wake word; para. [0011], [0014]), and the classifier or regression circuit is configured to be retrained (i.e., the machine learning model is retrained based on the recorded user entry; para. [0008]) for a second set of keywords that is distinct from the first set of keywords (i.e., a single detection module may be pretrained and prestored in the memory, trained to detect multiple wake-up keywords using the single stored module; para. [0070]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busch and Zhang to include the feature of Huang. One would have been motivated to make this modification because it provides personalized keyword spotting without redesigning the whole front end.

15. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Busch in view of Zhang and further in view of Pham et al. (U.S. Patent Application Pub. No. US 20220188636 A1).

Claim 17: Busch and Zhang teach the hardware apparatus of claim 1. Busch does not explicitly teach generating pseudo-labels for unlabeled data for self-supervised representation learning. However, Pham teaches wherein the circuit is configured to generate pseudo-labels for unlabeled data (i.e., generates pseudo-labels on unlabeled inputs; para. [0006]) for self-supervised representation learning (i.e., the teacher generates better pseudo-labels to teach the student; this allows the training of the student neural network to better make use of unlabeled training data; para. [0008]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busch and Zhang to include the feature of Pham. One would have been motivated to make this modification because it allows training to better make use of unlabeled data.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Urban (Pub. No. US 12301243 B2) describes a machine-learning-enabled ADC that includes a parallel array of analog-to-digital circuits and a machine-learning unit. The parallel array can be constructed from circuit elements, like resistors, capacitors, and/or transistors. The machine-learning unit can be constructed from circuit elements or implemented in software. The parallel array accepts an analog signal as input and produces a high-dimensional digital signal in response.

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAN TRAN, whose telephone number is (303) 297-4266. The examiner can normally be reached Monday-Thursday, 8:00 am-5:00 pm MT.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matt Ell, can be reached at 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAN H TRAN/ Primary Examiner, Art Unit 2141
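The withdrawn Group III method (claim 23) amounts to an argmin search over candidate split layers: generate embeddings at each candidate, train a model on them, score it, and keep the layer with the smallest aggregate error. A minimal sketch of that loop, using a toy random network and a linear least-squares readout as a stand-in for the claimed classifier (all names and the model choice are hypothetical, not the Applicant's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: four fixed tanh layers.
layers = [rng.normal(scale=0.5, size=(16, 16)) for _ in range(4)]

def embed(x, split):
    # Output of the first `split` layers (the would-be "fixed" analog portion).
    for w in layers[:split]:
        x = np.tanh(x @ w)
    return x

x_train = rng.normal(size=(200, 16))
x_test = rng.normal(size=(50, 16))
y_train = embed(x_train, len(layers))  # targets: the full network's outputs
y_test = embed(x_test, len(layers))

def aggregate_error(split):
    # Claim 23's loop body: embeddings -> train a readout -> held-out error.
    w, *_ = np.linalg.lstsq(embed(x_train, split), y_train, rcond=None)
    return float(np.mean((embed(x_test, split) @ w - y_test) ** 2))

candidates = [1, 2, 3]
errors = {k: aggregate_error(k) for k in candidates}
split_layer = min(errors, key=errors.get)  # candidate with smallest aggregate error
print(split_layer, errors)
```

The Group II method (claims 18-22) is the sequential variant of the same idea: try one candidate, check against a predetermined accuracy threshold, and repeat with new layers until the threshold is met.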

Prosecution Timeline

May 11, 2023: Application Filed
Feb 06, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594668: BRAIN-LIKE DECISION-MAKING AND MOTION CONTROL SYSTEM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579420: Analog Hardware Realization of Trained Neural Networks (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579421: Analog Hardware Realization of Trained Neural Networks (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572850: METHOD FOR IMPLEMENTING MODEL UPDATE AND DEVICE THEREOF (granted Mar 10, 2026; 2y 5m to grant)
Patent 12572326: DIGITAL ASSISTANT FOR MOVING AND COPYING GRAPHICAL ELEMENTS (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 60%
With Interview: 92% (+31.8%)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 307 resolved cases by this examiner. Grant probability derived from career allow rate.
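The projection figures compose by simple arithmetic, assuming the interview lift is additive in percentage points (which is consistent with how this report reaches 92% from a 60% base):

```python
# Figures from this report; treating the lift as additive percentage points
# is an assumption about the dashboard's methodology.
granted, resolved = 184, 307
allow_rate = granted / resolved              # career allow rate
interview_lift = 0.318                       # +31.8 percentage points

print(round(allow_rate * 100, 1))            # → 59.9 (reported as 60%)
print(round((allow_rate + interview_lift) * 100))  # → 92
```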
