Prosecution Insights
Last updated: April 19, 2026
Application No. 18/342,002

NEURONAL ACTIVITY MODULATION OF ARTIFICIAL NEURAL NETWORKS

Non-Final OA: §102, §103
Filed: Jun 27, 2023
Examiner: ALABI, OLUWATOSIN O
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 8m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 58% (grants 58% of resolved cases; 116 granted / 199 resolved; +3.3% vs TC avg)
Interview Lift: +26.3% in resolved cases with interview (strong lift)
Avg Prosecution: 3y 8m typical timeline (45 currently pending)
Total Applications: 244 across all art units (career history)

Statute-Specific Performance

§101: 21.9% (-18.1% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 23.2% (-16.8% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 199 resolved cases

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings were received on 06/27/2023. These drawings are acceptable.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on the following date(s): 06/27/2023, 09/13/2023 and 02/04/2026 have been considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 6-10, 13-17, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Cleland et al. (US 20220198245, hereinafter ‘Cle’).

Regarding independent claim 1, Cle teaches a system, comprising: a processor, a computer-readable memory, and an artificial neural network stored in the computer-readable memory and executable by the processor, ([0038] The processing platform 102 in the present embodiment further comprises a processor 120, a memory 122 and a network interface 124. The processor 120 is assumed to be operatively coupled to the memory 122 and to the network interface 124 as illustrated by the interconnections shown in the figure.)

wherein the artificial neural network comprises: a set of base neuron populations that collectively generate, during an inferencing phase or a training phase of the artificial neural network, an inferencing task result based on a data candidate; ([0006] In one embodiment, a computer-implemented method of training a neural network [during an inferencing phase or a training phase of the artificial neural network] to recognize sensory patterns [an inferencing task result based on a data candidate] comprises obtaining input data, preprocessing the input data in one or more preprocessors of the neural network, and applying the preprocessed input data to a core portion of the neural network [an inferencing task result based on a data candidate]. The core portion of the neural network comprises a plurality of principal neurons [wherein the artificial neural network comprises: a set of base neuron populations that collectively generate, during an inferencing phase or a training phase of the artificial neural network, an inferencing task result based on a data candidate] and a plurality of interneurons, and is configured to implement a feedback loop from the interneurons to the principal neurons that supports persistent unsupervised differentiation of multiple learned sensory patterns over time.)

and a control neuron population that is independent of the set of base neuron populations, wherein the control neuron population modulates, during the inferencing phase or the training phase, neuronal activity of at least one base neuron population of the set of base neuron populations.
([0117] The interneurons 212 receive synaptic excitation from principal neurons 210. As noted above, each interneuron initially receives input from a randomly selected proportion of principal neurons (e.g., 20%) drawn from across the entire principal neuron population. Interneurons [a control neuron population that is independent of the set of base neuron populations] spike when a sufficient number of their presynaptic principal neurons fire (this number is illustratively the interneuron receptive field order k, and, in some embodiments, will vary among interneurons), and an excitatory synaptic plasticity rule [wherein the control neuron population modulates, during the inferencing phase or the training phase, neuronal activity of at least one base neuron population of the set of base neuron populations] strengthens those inputs from the principal neurons that caused the interneuron to fire and weakens the other inputs. This progressively narrows the field of effective inputs to a small number of principal neurons k, where the order k depends on factors such as the inhibitory neuron's spike threshold and the limit on the maximum excitatory synaptic weight. Hence, individual interneurons learn to become responsive to diagnostic feature combinations of order k [wherein the control neuron population modulates, during the inferencing phase or the training phase, neuronal activity of at least one base neuron population of the set of base neuron populations]. They deliver their activity as inhibition onto principal neurons, where it serves to delay principal neuron spike firing according to the inhibitory synaptic weight.

[0118] The core network in some embodiments is illustratively constructed with neuron populations exhibiting heterogeneous properties (such as thresholds, initial synaptic weights, and maximum synaptic weights) [a control neuron population that is independent of the set of base neuron populations, wherein the control neuron population modulates, during the inferencing phase or the training phase, neuronal activity of at least one base neuron population of the set of base neuron populations]. Heterogeneous properties across the interneuron population ensures that different interneurons will exhibit different values of k, such that some interneurons are responsive to relatively common low-order diagnostic feature combinations and others are responsive only to rarer, higher-order diagnostic feature combinations [a control neuron population that is independent of the set of base neuron populations, wherein the control neuron population modulates, during the inferencing phase or the training phase, neuronal activity of at least one base neuron population of the set of base neuron populations]…

[0171] During training, relatively clean (low-noise) signals are used to train the network. Spiking activity in principal neurons activates interneurons. The interneurons have one or more embedded learning rules, illustratively implemented as one or more spike timing-dependent plasticity (STDP) rules, that cause them to be activated only by sufficient numbers of inputs from different principal neurons…)

Regarding claim 2, the rejection of claim 1 is incorporated and Cle further teaches the system of claim 1, wherein the control neuron population modulates the neuronal activity of the at least one base neuron population by scaling one or more operands internally produced by the at least one base neuron population.
([0171] During training, relatively clean (low-noise) signals are used to train the network. Spiking activity in principal neurons activates interneurons. The interneurons have one or more embedded learning rules [wherein the control neuron population modulates the neuronal activity of the at least one base neuron population by scaling one or more operands internally produced by the at least one base neuron population], illustratively implemented as one or more spike timing-dependent plasticity (STDP) rules, that cause them to be activated only by sufficient numbers of inputs from different principal neurons. When a given interneuron is activated, the one or more STDP rules (over the course of a few gamma cycles) adjust the synaptic weights [wherein the control neuron population modulates the neuronal activity of the at least one base neuron population by scaling one or more operands internally produced by the at least one base neuron population] of the starting inputs so that that interneuron now is only activatable by that specific set of k principal neuron inputs.…)

Regarding claim 3, the rejection of claim 1 is incorporated and Cle further teaches the system of claim 1, wherein the at least one base neuron population receives inputs produced by one or more of the set of base neuron populations, and wherein the control neuron population receives those inputs or a subset of those inputs. (in [0080] FIG. 4 shows a schematic of a heterogeneous duplication preprocessor 400 in one embodiment. In this embodiment, each sensor stream from a given input sensor 402-1 is fanned out to a set 404 of multiple excitatory feed-forward interneurons, each of which projects sparsely and randomly to a number of “sister” principal neurons in a set 406 of principal neurons…

[0098] The core network projects PN activity onto a larger number of interneurons (INs), activating them such that they, in turn, deliver synaptic inhibition back onto the PN array [wherein the at least one base neuron population receives inputs produced by one or more of the set of base neuron populations]. The weight matrix between PNs and INs (FIG. 3) is initially sparse (i.e., only a fraction of the possible connections between PNs and INs actually exist), and becomes sparser and more selective with learning. During sensory activation, this excitatory-inhibitory feedback loop is driven through several recurrent cycles (the gamma cycle) [wherein the at least one base neuron population receives inputs produced by one or more of the set of base neuron populations]. When learning is active, synaptic weights at the excitatory (PN.fwdarw.IN) and inhibitory (IN.fwdarw.PN) synapses are updated over successive gamma cycles according to local learning rules [wherein the at least one base neuron population receives inputs produced by one or more of the set of base neuron populations and wherein the control neuron population receives those inputs or a subset of those inputs during learning feedback cycle]. During testing, successive gamma cycles underlie an attractor network in which these learned synaptic weights shape the attractor state, leading to pattern recognition even under highly noisy conditions.

[0099] An inference network can be included in a Sapinet instantiation, receiving a copy of PN activity and delivering its output onto INs [and wherein the control neuron population receives those inputs or a subset of those inputs] in parallel to direct PN.fwdarw.IN excitation.
When present, the inference network provides additional pattern completion...)

Regarding claim 6, the rejection of claim 1 is incorporated and Cle further teaches the system of claim 1, wherein the at least one base neuron population and the control neuron population exhibit non-uniform types of neuronal dynamics. (in [0116] The principal neurons 210 integrate sensor information following preprocessing, and emit spikes [wherein the at least one base neuron population and the control neuron population exhibit non-uniform types of neuronal dynamics as spikes] (events, pulses) as output. There are multiple specific implementations, but they share the property of spiking earlier within each gamma cycle in proportion to the strength of the sensory input that they are receiving…

[0117] The interneurons 212 receive synaptic excitation from principal neurons 210. As noted above, each interneuron initially receives input from a randomly selected proportion of principal neurons (e.g., 20%) drawn from across the entire principal neuron population. Interneurons spike [wherein the at least one base neuron population and the control neuron population exhibit non-uniform types of neuronal dynamics as spikes] when a sufficient number of their presynaptic principal neurons fire (this number is illustratively the interneuron receptive field order k, and, in some embodiments, will vary among interneurons), and an excitatory synaptic plasticity rule strengthens those inputs from the principal neurons that caused the interneuron to fire and weakens the other inputs…)

Regarding claim 7, the rejection of claim 6 is incorporated and Cle further teaches the system of claim 6, wherein the non-uniform types of neuronal dynamics are selected from the group consisting of perceptron dynamics, spiking neural unit dynamics, long short-term memory dynamics, gated recurrent unit dynamics, and quasi recurrent unit dynamics. (in [0105] 3. Network architectures are not necessarily feed-forward; they can include any number of feedback loops. Feedback-inclusive spiking networks [wherein the non-uniform types of neuronal dynamics are selected from the group consisting of … spiking neural unit dynamics,] are sometimes referred to as recurrent neural networks (RNNs), examples of which include echo state networks and liquid state machines. Recurrent networks may exhibit dynamical systems properties, such as Sapinet's gamma cycle (FIG. 2).)

Regarding claims 8 and 15, the limitations are similar to claim 1 and are rejected under the same rationale. Regarding claims 9-10, the limitations are similar to those in claims 2-3 and are rejected under the same rationale. Regarding claims 13-14, the limitations are similar to those in claims 6-7 and are rejected under the same rationale. Regarding claims 16-17, the limitations are similar to those in claims 2-3 and are rejected under the same rationale. Regarding claim 20, the limitations are similar to those in claim 6 and are rejected under the same rationale.

Claims 1, 8 and 15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Liu et al. (US 20220269879, hereinafter ‘Liu’).

Regarding independent claim 1, Liu teaches a system, comprising: a processor, a computer-readable memory, and an artificial neural network stored in the computer-readable memory and executable by the processor, (in [0004] Systems and techniques are described herein that can be implemented for improved facial expression recognition.
According to at least one example, apparatuses are provided for improved facial expression recognition. An example apparatus can include a memory (or multiple memories) and a processor or multiple processors (e.g., implemented in circuitry) coupled to the memory (or memories)...; And in [0001] The present disclosure is related to facial expression recognition. More specifically, the present disclosure relates to improving facial expression recognition systems based on implementing techniques for facial landmark detection in neural networks trained for facial expression recognition)

wherein the artificial neural network comprises: a set of base neuron populations that collectively generate, during an inferencing phase or a training phase of the artificial neural network, an inferencing task result based on a data candidate; ([0106] As shown in FIG. 9, a convolutional network is a sequence of layers. Every layer of a convolutional neural network transforms one volume of activation data (also referred to as activations) to another volume of activation through a differentiable function. For example, each layer can accepts an input 3D volume and can transforms that input 3D volume to an output 3D volume through a differentiable function. Three types of layers that can be used to build convolutional neural network architectures can include convolutional layers [wherein the artificial neural network comprises: a set of base neuron populations that collectively generate, during an inferencing phase or a training phase of the artificial neural network, an inferencing task result based on a data candidate], pooling layers [wherein the artificial neural network comprises: a set of base neuron populations that collectively generate, during an inferencing phase or a training phase of the artificial neural network, an inferencing task result based on a data candidate], and one or more fully-connected layer. A network also includes an input layer, which can hold raw pixel values of an image. For example, an example image can have a width of 32 pixels, a height of 32 pixels, and three color channels (e.g., R, G, and B color channles). Each node of the convolutional layer is connected to a region of nodes (pixels) of the input image. The region is called a receptive field. In some cases, a convolutional layer can compute the output of nodes (also referred to as neurons) that are connected to local regions in the input [wherein the artificial neural network comprises: a set of base neuron populations that collectively generate, during an inferencing phase or a training phase of the artificial neural network, an inferencing task result based on a data candidate], each node computing a dot product between its weights and a small region they are connected to in the input volume. Such a computation can result in volume [32×32×12] if 12 filters are used. The ReLu layer can apply an elementwise activation function, such as the max(0,x) thresholding at zero, which leaves the size of the volume unchanged at [32×32×12]. The pooling layer can perform a downsampling operation along the spatial dimensions (width, height), resulting in reduced volume of data, such as a volume of data with a size of [16×16×12] [wherein the artificial neural network comprises: a set of base neuron populations that collectively generate, during an inferencing phase or a training phase of the artificial neural network, an inferencing task result based on a data candidate]...
And in [0076] In one example, the convolutional block 402(1) can receive, as input, the image frame 406 and the landmark image frame 408(1). In some cases, the size of the landmark image frame 408(1) can correspond to (e.g., match) the size of the image frame 406. In an illustrative example, the size of each image frame can be 56×64 pixels. Each image frame can correspond to a separate channel, resulting in a total input size of 56×64×2. In one example, the convolutional layers of the convolutional block 402(1) can output activation data (e.g., a feature map) with a size of 56×64×31 (e.g., 31 channels of 56×64 pixels). The pooling layer of the convolutional block 402(1) can downsample (e.g., reduce the size of) this activation data before passing the activation data to the convolutional block 402(2). For instance, the pooling layer can reduce the size of activation data in each channel by half. In some cases, downsampling activation data in a convolutional neural network can enable extraction and/or analysis of various types of features (e.g., coarse features, medium-grain features, and/or fine-grained features). However, downsampling in the neural network 400(B) can result in a loss of landmark feature information passed between convolutional blocks...)

and a control neuron population that is independent of the set of base neuron populations, wherein the control neuron population modulates, during the inferencing phase or the training phase, neuronal activity of at least one base neuron population of the set of base neuron populations. (in [0106] … Three types of layers that can be used to build convolutional neural network architectures can include convolutional layers, pooling layers, and one or more fully-connected layer [a control neuron population that is independent of the set of base neuron populations, wherein the control neuron population modulates, during the inferencing phase or the training phase, neuronal activity of at least one base neuron population of the set of base neuron populations]. A network also includes an input layer, which can hold raw pixel values of an image... The fully-connected layer can compute the class scores, resulting in volume of size [1×1×4], where each of the four (4) numbers correspond to a class score, such as among the four categories of dog, cat, boat, and bird. The CIFAR-10 network is an example of such a network, and has ten categories of objects. Using such a neural network, an original image can be transformed layer by layer from the original pixel values to the final class scores. Some layers contain parameters and others may not. For example, the convolutional and fully-connected layers perform transformations [a control neuron population that is independent of the set of base neuron populations, wherein the control neuron population modulates, during the inferencing phase or the training phase, neuronal activity of at least one base neuron population of the set of base neuron populations] that are a function of the activations in the input volume and also of the parameters (the weights and biases) of the nodes, while the ReLu and pooling layers can implement a fixed function.)

Each component can be associated with a training or inference process, in [0123] As illustrated by the example of FIG. 9, a convolutional neural network can include multiple convolutional layers, with each layer refining the features extracted by a previous layer. Each convolutional layer may be, but need not be, followed by pooling.
The output of a combination of these layers represent high-level features of the input image, such as the presence of certain shapes, colors, textures, gradients, and so on. [0124] To turn these feature maps into a classification, a convolutional neural network can include one or more fully-connected layers. In some cases, a Multi-Layer Perceptron that uses, for example, a softmax activation function can be used after a fully-connected layer. A fully-connected layer can classify the input image into various classes based on training data. For example, the convolutional neural network of FIG. 9 was trained to recognize dogs, cats, boats, and birds, and can classify objects in an input image as including one of these classes.

Regarding claims 8 and 15, the limitations are similar to claim 1 and are rejected under the same rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4-5, 11-12, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US 20220269879, hereinafter ‘Liu’) in view of Yang et al. (US 20200311962, hereinafter ‘Yang’).

Regarding claim 4, the rejection of claim 1 is incorporated and Liu further teaches the system of claim 1, wherein the control neuron population receives, . (As depicted in Fig. 9 and [0106] … Three types of layers that can be used to build convolutional neural network architectures can include convolutional layers, pooling layers, and one or more fully-connected layer [wherein the control neuron population receives, … inputs produced by one or more of the set of base neuron populations that are prior to the at least one base neuron populations]. A network also includes an input layer, which can hold raw pixel values of an image... The fully-connected layer can compute the class scores, resulting in volume of size [1×1×4], where each of the four (4) numbers correspond to a class score, such as among the four categories of dog, cat, boat, and bird. The CIFAR-10 network is an example of such a network, and has ten categories of objects. Using such a neural network, an original image can be transformed layer by layer from the original pixel values to the final class scores. Some layers contain parameters and others may not. For example, the convolutional and fully-connected layers perform transformations [wherein the control neuron population receives, …, inputs produced by one or more of the set of base neuron populations that are prior to the at least one base neuron population] that are a function of the activations in the input volume and also of the parameters (the weights and biases) of the nodes, while the ReLu and pooling layers can implement a fixed function.)

Liu does not expressly teach neuron population receives, via bottom-up skip connections. Yang expressly teaches neuron population receives, via bottom-up skip connections, (in [0032] …
FPN replaces the feature extractor of detectors like Faster R-CNN and generates multiple feature map layers (multi-scale feature maps) with better quality information than the regular feature pyramid for object detection. FPN includes a bottom-up and a top-down pathway [neuron population receives, via bottom-up skip connections]. The bottom-up pathway is the usual convolutional network for feature extraction [neuron population receives, via bottom-up skip connections]. As we go up, the spatial resolution decreases. With more high-level structures detected, the semantic value for each layer increases. FPN provides a top-down pathway to construct higher resolution layers from a semantic rich layer. While the reconstructed layers are semantic strong but the locations of objects are not precise after all the down-sampling and up-sampling. Lateral connections are added between reconstructed layers and the corresponding feature maps to help the detector to predict the location betters. FPN also acts as skip connections [neuron population receives, via bottom-up skip connections] to make training easier.)

Yang and Liu are analogous art because both involve developing information retrieval and processing techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for retrieving and processing information using deep learning detection models as disclosed by Yang with the method of developing information retrieval and object recognition using neural networks as disclosed by Liu. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Yang and Liu noted above; doing so allows for making training easier and improving model predictions (Yang, 0032).

Regarding claim 5, the rejection of claim 1 is incorporated and Liu further teaches the system of claim 1, wherein the control neuron population receives, . (As depicted in Fig. 9 and [0106] … Three types of layers that can be used to build convolutional neural network architectures can include convolutional layers, pooling layers, and one or more fully-connected layer [wherein the control neuron population receives, … inputs produced by one or more of the set of base neuron populations that are prior to the at least one base neuron populations]. A network also includes an input layer, which can hold raw pixel values of an image... The fully-connected layer can compute the class scores, resulting in volume of size [1×1×4], where each of the four (4) numbers correspond to a class score, such as among the four categories of dog, cat, boat, and bird. The CIFAR-10 network is an example of such a network, and has ten categories of objects. Using such a neural network, an original image can be transformed layer by layer from the original pixel values to the final class scores. Some layers contain parameters and others may not. For example, the convolutional and fully-connected layers perform transformations [wherein the control neuron population receives, …, inputs produced by one or more of the set of base neuron populations that are prior to the at least one base neuron population] that are a function of the activations in the input volume and also of the parameters (the weights and biases) of the nodes, while the ReLu and pooling layers can implement a fixed function.)

Liu does not expressly teach neuron population receives, via top-down skip connections.
Yang expressly teaches neuron population receives, via top-down skip connections, (in [0032] … FPN replaces the feature extractor of detectors like Faster R-CNN and generates multiple feature map layers (multi-scale feature maps) with better quality information than the regular feature pyramid for object detection. FPN includes a bottom-up and a top-down pathway [neuron population receives, via top-down skip connections.]. The bottom-up pathway is the usual convolutional network for feature extraction. As we go up, the spatial resolution decreases. With more high-level structures detected, the semantic value for each layer increases. FPN provides a top-down pathway [neuron population receives, via top-down skip connections.] to construct higher resolution layers from a semantic rich layer. While the reconstructed layers are semantic strong but the locations of objects are not precise after all the down-sampling and up-sampling. Lateral connections are added between reconstructed layers and the corresponding feature maps to help the detector to predict the location betters. FPN also acts as skip connections [neuron population receives, via top-down skip connections] to make training easier.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Yang and Liu for the same reasons disclosed above.

Regarding claims 11-12, the limitations are similar to those in claims 4-5 and are rejected under the same rationale. Regarding claims 18-19, the limitations are similar to those in claims 4-5 and are rejected under the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Letunovskiy et al. (US 20240169213): teaches in [0014] In an implementation, the determining of the search space further comprises applying one or more of the following constraints: [0015] a) each architecture comprises a plurality of stages limited by a predefined maximum of stages, each stage comprises one or more of the blocks our of a limited set of blocks, the number of blocks in each stage being limited by a predefined maximum of blocks: [0016] b) each block comprises one or more of convolution layers out of a predefined set of convolution layers with mutually different convolution kernel sizes, each convolution layer is followed by a normalization and/or activation: [0017] c) the activation is a rectified linear unit, ReLU, and the normalization is a batch normalization: [0018] d) output of the block is configurable to include or not to include a skip connection: [0019] e) one or more blocks in each stage increases the number of channels: [0020] f) the first block in a stage has a stride of 2 or more in its first non-identity layer and no skip connection. [0021] This set of constraints proved to be efficient for search space determination. Constraint a) provides a scalable architecture, which may be easily extended by adding blocks. It makes easier search of architecture suitable for target computer vision task…

Huang et al. (US 11868878): teaches in 2:4-34: An artificial neural network (also referred to as “neural network”) may include multiple processing nodes arranged on two or more layers, where processing nodes on one layer may connect to processing nodes on another layer. The processing nodes can be divided into layers including, for example, an input layer, a number of intermediate layers (also known as hidden layers), and an output layer.
Each processing node on a layer (e.g., an input layer, an intermediate layer, etc.) may receive a sequential stream of input data elements, multiply each input data element with a weight, compute a weighted sum of the input data elements, and forward the weighted sum to the next layer. At the last stage of an artificial neural network, such as a convolutional neural network (CNN) or a recurrent neural network (RNN, such as a long short-term memory (LSTM) network), one or more layers in a fully-connected (FC) layer may be used to make the final decision based on a combination of the output data generated by processing nodes on the preceding layer, where each processing node on a fully-connected layer may have connections to all processing nodes on the preceding layers. The size of the last layer of the fully-connected layer can be very large in order to, for example, classify a large number of different objects. As such, the number of connections between the last layer of the fully-connected layer and the preceding layer of the fully-connected layer may be large. In order to implement the fully-connected layer, a large memory space and a high bandwidth bus may be required, which may limit the performance of the FC layer when the memory space or the data transfer bandwidth of the underlying hardware is limited…

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUWATOSIN ALABI whose telephone number is (571)272-0516. The examiner can normally be reached Monday-Friday, 8:00am-5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OLUWATOSIN ALABI/
Primary Examiner, Art Unit 2129
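
For readers skimming the §102 mapping, the Cleland passages quoted above describe an excitatory-inhibitory feedback loop: principal neurons (the examiner's "base" population) drive interneurons (the "control" population), a plasticity rule strengthens the inputs that made an interneuron fire and weakens the rest, and the interneurons feed inhibition back that delays principal-neuron spiking. The sketch below is a minimal, illustrative Python rendering of that loop only; the population sizes, 20% connectivity, threshold, and learning rate are placeholders chosen for readability, not values taken from Cleland or from the application.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and constants (assumptions, not values from the references).
n_pn, n_in = 20, 10                     # principal neurons ("base") and interneurons ("control")
k = 4                                   # receptive-field order: PN spikes needed to fire an IN
conn = rng.random((n_in, n_pn)) < 0.2   # each interneuron samples ~20% of principal neurons
w_exc = conn * 0.5                      # sparse excitatory PN -> IN weights
w_inh = np.full((n_pn, n_in), 0.1)      # inhibitory IN -> PN feedback weights


def gamma_cycle(drive, learn=True):
    """One excitatory-inhibitory feedback cycle, loosely following the quoted [0117]/[0171]."""
    pn_spikes = (drive > rng.random(n_pn)).astype(float)       # stronger input -> more likely to spike
    in_spikes = (w_exc @ pn_spikes >= k * 0.5).astype(float)   # IN fires once ~k presynaptic PNs fire
    if learn:
        for i in np.flatnonzero(in_spikes):                    # plasticity only on interneurons that fired
            # Strengthen inputs from PNs that fired, weaken the others, within the sparse connectivity.
            w_exc[i] = np.clip(w_exc[i] + 0.05 * conn[i] * (pn_spikes - 0.5), 0.0, 1.0)
    inhibition = w_inh @ in_spikes                              # control population feeds inhibition back
    return np.clip(drive - inhibition, 0.0, None)               # inhibition suppresses/delays PN activity


drive = rng.random(n_pn)
for _ in range(5):                                              # several recurrent "gamma" cycles
    drive = gamma_cycle(drive)
print(drive.round(2))
```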
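The Liu passage quoted for claim 1 walks through a standard convolutional pipeline: a 32×32×3 input, a convolutional layer with 12 filters giving 32×32×12, an elementwise ReLU, a pooling layer halving the spatial dimensions to 16×16×12, and a fully-connected layer producing 4 class scores. Below is a minimal sketch of that exact progression; PyTorch is an assumed framework chosen purely for illustration (neither the reference nor the application specifies one).

```python
import torch
from torch import nn

# Mirrors the volumes in Liu's quoted [0106]: 32x32x3 -> 32x32x12 -> 16x16x12 -> 4 class scores.
model = nn.Sequential(
    nn.Conv2d(3, 12, kernel_size=3, padding=1),  # convolutional layer: 12 filters over local receptive fields
    nn.ReLU(),                                    # elementwise max(0, x); volume size unchanged
    nn.MaxPool2d(2),                              # pooling: downsample spatial dimensions by half
    nn.Flatten(),
    nn.Linear(12 * 16 * 16, 4),                   # fully-connected layer computes the 4 class scores
)

scores = model(torch.randn(1, 3, 32, 32))
print(scores.shape)  # torch.Size([1, 4])
```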
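The Yang paragraph cited for the §103 combination describes a feature pyramid network: a bottom-up pathway whose spatial resolution shrinks as semantic value grows, a top-down pathway that rebuilds higher-resolution maps from the semantically rich layers, and lateral (skip) connections that merge the two. The toy module below sketches that wiring under stated assumptions: the three-stage depth and channel counts are arbitrary illustration choices, not taken from Yang.

```python
import torch
from torch import nn
import torch.nn.functional as F


class TinyFPN(nn.Module):
    """Minimal FPN-style pyramid with lateral (skip) connections, loosely following Yang [0032]."""

    def __init__(self, channels=(16, 32, 64), out_channels=32):
        super().__init__()
        # Bottom-up pathway: each stage halves resolution and increases semantic depth.
        self.bottom_up = nn.ModuleList(
            nn.Conv2d(c_in, c_out, 3, stride=2, padding=1)
            for c_in, c_out in zip((3,) + channels[:-1], channels)
        )
        # 1x1 lateral convolutions that project each stage into a common channel width.
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in channels)

    def forward(self, x):
        feats = []
        for conv in self.bottom_up:                 # bottom-up: resolution drops, semantics grow
            x = F.relu(conv(x))
            feats.append(x)
        top = self.lateral[-1](feats[-1])
        outs = [top]
        for feat, lat in zip(feats[-2::-1], list(self.lateral)[-2::-1]):
            # Top-down: upsample the coarser map and add the lateral skip connection.
            top = F.interpolate(top, size=feat.shape[-2:], mode="nearest") + lat(feat)
            outs.append(top)
        return outs[::-1]                           # highest-resolution map first


maps = TinyFPN()(torch.randn(1, 3, 64, 64))
print([tuple(m.shape) for m in maps])
```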

Prosecution Timeline

Jun 27, 2023: Application Filed
Feb 09, 2026: Non-Final Rejection (§102, §103)
Apr 08, 2026: Interview Requested
Apr 16, 2026: Examiner Interview Summary
Apr 16, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579409: IDENTIFYING SENSOR DRIFTS AND DIVERSE VARYING OPERATIONAL CONDITIONS USING VARIATIONAL AUTOENCODERS FOR CONTINUAL TRAINING (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572814: ARTIFICIAL NEURAL NETWORK BASED SEARCH ENGINE CIRCUITRY (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561570: METHODS AND ARRANGEMENTS TO IDENTIFY FEATURE CONTRIBUTIONS TO ERRONEOUS PREDICTIONS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12547890: AUTOREGRESSIVELY GENERATING SEQUENCES OF DATA ELEMENTS DEFINING ACTIONS TO BE PERFORMED BY AN AGENT (granted Feb 10, 2026; 2y 5m to grant)
Patent 12536478: TRAINING DISTILLED MACHINE LEARNING MODELS (granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 58%
With Interview: 85% (+26.3%)
Median Time to Grant: 3y 8m
PTA Risk: Low
Based on 199 resolved cases by this examiner. Grant probability derived from career allow rate.
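
A quick arithmetic check of how these headline figures presumably relate (this derivation is an assumption about the dashboard's method, not something it documents): the 58% career allow rate is 116 granted out of 199 resolved, and adding the +26.3-point interview lift to that base rate lands at roughly 85%.

```python
# Assumed reconstruction of the dashboard figures; not taken from the tool itself.
granted, resolved = 116, 199
allow_rate = granted / resolved                    # 0.583 -> displayed as 58%
with_interview = allow_rate + 0.263                # +26.3 points -> 0.846, displayed as 85%
print(f"{allow_rate:.1%}, {with_interview:.1%}")   # 58.3%, 84.6%
```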
