Prosecution Insights
Last updated: April 19, 2026
Application No. 18/059,710

METHOD AND SYSTEM FOR MUTUAL INFORMATION (MI) BASED SPIKE ENCODING OPTIMIZATION OF MULTIVARIATE DATA

Status: Non-Final OA (§103)
Filed: Nov 29, 2022
Examiner: RUTTEN, JAMES D
Art Unit: 2121
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Tata Consultancy Services Limited
OA Round: 1 (Non-Final)
Grant Probability: 63% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 63% (365 granted / 580 resolved; +7.9% vs TC avg)
Interview Lift: +38.4% (strong; allow rate for resolved cases with vs. without an interview)
Typical Timeline: 4y 1m average prosecution (23 currently pending)
Career History: 603 total applications across all art units
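The headline 63% allow rate above is just the granted/resolved ratio, rounded; a one-line check:

```python
# Career allow rate as reported on this page: granted / resolved cases.
granted, resolved = 365, 580
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 62.9%, displayed as 63%
```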

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 16.7% (-23.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 580 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-15 have been examined.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6-8 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over “Feature Extraction by Non-Parametric Mutual Information Maximization” by Torkkola (hereinafter “Torkkola”) in view of U.S. Patent Application Publication 20140012788 by Piekniewski ("Piekniewski").

In regard to claim 1, Torkkola discloses:

1. A … method, comprising:

See Torkkola, at least Fig. 1 on p. 1419, depicting a flow graph “method.”

collecting, … as input data;

Torkkola p. 1415, section I, e.g. “raw input variable space.”

Torkkola does not expressly disclose:

… processor implemented … via one or more hardware processors, … a time-series data.

This is taught by Piekniewski, Fig. 10 element 1102 depicting a processor.
Also see ¶ 0100, “The input signal in this example is a sequence of images (image frames) received from a CCD or CMOS camera via a receiver apparatus, or downloaded from a file.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Piekniewski’s processors and data in order to process sensory data for solving artificial intelligence problems as suggested by Piekniewski (see ¶¶ 0006 and 0100).

Torkkola also discloses:

generating a ranked set of optimized dimensions in the input data, via the one or more hardware processors;

Torkkola, p. 1416, e.g.:

One well known linear transform for dimensionality reduction is principal component analysis or PCA (Devijver and Kittler, 1982). The transform is derived from eigenvectors corresponding to the largest eigenvalues of the covariance matrix for data of all classes. PCA seeks to optimally represent the data in terms of minimal mean-square-error between the representation and the original data.

Torkkola does not expressly disclose:

encoding the input data to a spike domain, via the one or more hardware processors, by processing the ranked set of optimized dimensions using an encoding scheme;

This is taught by Piekniewski. See Piekniewski ¶¶ 0100-0101, e.g.:

One exemplary apparatus for processing of sensory information (e.g., visual, audio, somatosensory) using a spiking neural network (including one or more of the conditional plasticity mechanisms described herein) is shown in FIG. 9. … The apparatus 1000 may also include an encoder 1024 configured to transform (encode) the input signal so as to form an encoded signal 1026. In one variant, the encoded signal comprises a plurality of pulses (also referred to as a group of pulses) configured to model neuron behavior.
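The combination the examiner asserts (Torkkola's PCA ranking of dimensions feeding a spike encoder like Piekniewski's) can be illustrated with a minimal sketch. Everything here is synthetic and assumed for illustration: the data, the Bernoulli rate-coding scheme, and all variable names; the application itself may use a different encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multivariate time series: T time steps x D dimensions.
T, D = 1000, 5
x = rng.standard_normal((T, D)) @ rng.standard_normal((D, D))

# Rank dimensions via PCA: eigenvectors of the covariance matrix,
# ordered by descending eigenvalue (the formulation Torkkola cites).
cov = np.cov(x, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # ranked set of optimized dimensions
y = x @ eigvecs[:, order]                  # projected, highest-variance first

# Encode each ranked dimension to the spike domain. Rate coding is an
# assumption: normalize to [0, 1] and draw Bernoulli spikes per time step.
p = (y - y.min(axis=0)) / np.ptp(y, axis=0)
spikes = (rng.random(y.shape) < p).astype(np.uint8)

print(spikes.shape)  # (1000, 5)
```

The spike matrix then stands in for the "plurality of spike trains" that the later claims optimize against the MI criterion.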
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Piekniewski’s encoded spike signals with Torkkola’s PCA data in order to model neuron behavior as suggested by Piekniewski.

Torkkola also discloses:

determining a Mutual Information (MI) between the input data and corresponding data from the spike domain, for each of a plurality of dimensions of the input data, via the one or more hardware processors, wherein the MI is a quantitative representation of similarity between the input data and the corresponding data from the spike domain;

Torkkola, p. 1418, section 2.1, e.g. “Mutual information measures dependence between variables, in this case between C and Y.”

determining a weighted sum of the MI of the plurality of dimensions of the input data, via the one or more hardware processors; determining if the weighted sum of the MI at least matches a maximum MI value, via the one or more hardware processors, wherein the input data is optimized to achieve the maximum MI value if the weighted sum of the MI is not matching the maximum MI value; and

See Torkkola p. 1419, section 2.3: “… the objective is to find a transformation g (or its parameter vector w) to y_i ∈ R^d, d < D, such that y_i = g(w, x_i) maximizes I(C,Y), the mutual information (MI) between transformed data Y and class labels C. The procedure is depicted in Figure 1. To achieve this, I(C,Y) needs to be estimated as a function of the data set, I({c_i, y_i}), in a differentiable form. Once that is done, gradient ascent can be performed on I(C,Y) as follows (denoting the learning rate by η):

w_{t+1} = w_t + η ∂I/∂w = w_t + η Σ_{i=1}^{N} (∂I/∂y_i)(∂y_i/∂w)   (4)”

optimizing a plurality of spike trains in the spike data, via the one or more hardware processors, using the input data after achieving the maximum MI value, to generate an optimized set of spike trains.

See Torkkola p. 1419 as cited above, whereby utilization of the parameter vector w provides an optimized transform from input to output (i.e. spike train).

In regard to claim 2, Torkkola also discloses:

2. The method of claim 1, wherein the ranked set of optimized dimensions in the input data are generated by performing a Principle Component Analysis (PCA) on the input data.

Torkkola, p. 1416, e.g.:

One well known linear transform for dimensionality reduction is principal component analysis or PCA (Devijver and Kittler, 1982). The transform is derived from eigenvectors corresponding to the largest eigenvalues of the covariance matrix for data of all classes. PCA seeks to optimally represent the data in terms of minimal mean-square-error between the representation and the original data.

In regard to claim 3, Torkkola also discloses:

3. The method of claim 1, wherein the MI is determined for each of a plurality of dimensions of the input data, based on temporal information contained in an entire spike train of each of a plurality of dimensions of the input data.

See Torkkola p. 1419, section 2.3, as quoted above for claim 1, including equation (4).

In regard to claim 6, Torkkola discloses:

6. A system, comprising:

See Torkkola, top of p. 1418, section 1, e.g. “We also discuss two approaches to make the method applicable to large databases.”

Torkkola does not expressly disclose:

one or more hardware processors; a communication interface; and a memory storing a plurality of instructions, wherein the plurality of instructions when executed, cause the one or more hardware processors to:

This is taught by Piekniewski. See Fig. 10, depicting a system with processor 1102, interface 1114, and memory 1106. Also see ¶ 0110, “The system 1100 may further comprise a nonvolatile storage device 1106, comprising, inter alia, computer readable instructions configured to implement various aspects of spiking neuronal network operation.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Piekniewski’s system elements in order to process sensory data for solving artificial intelligence problems as suggested by Piekniewski (see ¶¶ 0006 and 0100).

All further limitations of claim 6 have been addressed in the above rejection of claim 1.

In regard to claims 7-8, parent claim 6 is addressed above. All further limitations of claims 7-8 have been addressed in the above rejections of claims 2-3, respectively.

In regard to claim 11, Torkkola discloses:

11. One or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause:

See Torkkola, top of p. 1418, section 1, e.g. “We also discuss two approaches to make the method applicable to large databases.”

All further limitations of claim 11 have been addressed in the above rejection of claim 1.

In regard to claims 12-13, parent claim 11 is addressed above. All further limitations of claims 12-13 have been addressed in the above rejections of claims 2-3, respectively.

Claims 4-5, 9-10 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Torkkola in view of Piekniewski as applied above, and further in view of U.S. Patent Application Publication 20200372898 by Olabiyi et al. ("Olabiyi").

In regard to claim 4, Torkkola does not expressly disclose:

4. The method of claim 1, wherein the plurality of spike trains in the spike data are optimized by adding gaussian noise to the input data iteratively till the maximum MI is achieved.

However, this is taught by Olabiyi. See Olabiyi ¶ 0029, “In several embodiments, the training samples used are taken from the top k (where k is an arbitrary number) generator outputs and/or the maximum a posterior probability output with Gaussian noise as additional inputs. This allows the machine classifier to be trained based on plausible trajectories during training, particularly as compared to existing machine classifiers where the discriminator mostly score the generated samples very low.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Olabiyi’s noise with the input data of Torkkola and Piekniewski in order to allow the machine classifier to be trained based on plausible trajectories during training as suggested by Olabiyi.

In regard to claim 5, Torkkola also discloses:

5. The method of claim 4, wherein the maximum MI is updated in one or more of a plurality of iterations of optimization of the spike train, further comprising: comparing value of the maximum MI in a current iteration with the value of the maximum MI in previous iteration; and updating the maximum MI as equal to the value of the maximum MI in the current iteration, if exceeding the value of the maximum MI in the previous iteration.

See Torkkola p. 1419, section 2.3, “Once that is done, gradient ascent can be performed on I(C,Y) …”

In regard to claims 9-10, parent claim 6 is addressed above. All further limitations of claims 9-10 have been addressed in the above rejections of claims 4-5, respectively.
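Claims 4 and 5 together describe an iterative loop: perturb the input with Gaussian noise, re-estimate the MI against the spike-domain data, and keep a running maximum that is updated only when exceeded. A minimal sketch of that loop, under stated assumptions (histogram MI estimator, threshold spike encoder, and all names are illustrative, not the application's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_information(a, b, bins=16):
    """Histogram estimate of MI (in nats) between two 1-D signals."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                       # skip empty cells (0 * log 0 = 0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

x = np.sin(np.linspace(0, 8 * np.pi, 2000))   # one input dimension

def encode(sig):
    """Assumed spike encoder: simple threshold crossing."""
    return (sig > 0).astype(float)

# Claim 4: add Gaussian noise iteratively; claim 5: keep the running max MI,
# updating it only when the current iteration's MI exceeds the previous max.
best_mi = mutual_information(x, encode(x))
best_x = x
for _ in range(50):
    cand = best_x + rng.normal(scale=0.05, size=x.shape)
    mi = mutual_information(cand, encode(cand))
    if mi > best_mi:
        best_mi, best_x = mi, cand

print(f"max MI reached: {best_mi:.3f}")
```

A real implementation would apply the weighted sum of per-dimension MI values recited in claim 1; the single-dimension loop above only shows the control flow.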
In regard to claims 14-15, parent claim 11 is addressed above. All further limitations of claims 14-15 have been addressed in the above rejections of claims 4-5, respectively.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

“Spike-timing Dependent Plasticity and Mutual Information Maximization for a Spiking Neuron Model” by Toyoizumi et al. See Abstract, “We derive an optimal learning rule in the sense of mutual information maximization for a spiking neuron model.”

U.S. Patent 9195934 to Hunt et al. See col. 8, lines 55-58, “An appropriate choice of features may be learned from the input such as by using a principle component analysis, an independent component analysis, and/or another analysis.” Also col. 24, lines 65-67, “At an operation 802, an input may be encoded into a spike signal by one or more of the encoding methodologies described above.”

U.S. Patent 9146546 to Sinyavskiy et al. ("Sinyavskiy"). See col. 43, line 53 et seq., e.g. “In some implementations of unsupervised learning, the mutual information I(x,y) between the input x and output y spike trains of the networks may be used as the cost function F, so that:

F = I(x,y) = ⟨ln(p(y|x)/p(y))⟩_{x,y} ⟹ F(x,y) = h(y) − h(y|x)   (Eqn. 74)

where h(y) is the unconditional per-stimulus entropy (surprisal), described by (Eqn. 13). Learning by the network 700 may be configured to maximize the cost function F of Eqn. 74.”

Any inquiry concerning this communication or earlier communications from the examiner should be directed to James D Rutten whose telephone number is (571)272-3703. The examiner can normally be reached M-F 9:00-5:30 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li B Zhen, can be reached at (571)272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/James D. Rutten/
Primary Examiner, Art Unit 2121
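The Sinyavskiy passage cited in the conclusion rests on the MI identity of its Eqn. 74, F = I(x,y) = h(y) − h(y|x). That identity can be verified numerically on a small joint distribution; the probabilities below are made up purely for illustration.

```python
import numpy as np

# Illustrative joint distribution p(x, y) over two binary variables.
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
px = pxy.sum(axis=1)   # marginal p(x)
py = pxy.sum(axis=0)   # marginal p(y)

# Direct definition: I(x,y) = E[ ln p(y|x)/p(y) ] = E[ ln p(x,y)/(p(x)p(y)) ].
mi_direct = sum(
    pxy[i, j] * np.log(pxy[i, j] / (px[i] * py[j]))
    for i in range(2) for j in range(2)
)

# Eqn. 74 form: I(x,y) = h(y) - h(y|x).
h_y = -sum(py[j] * np.log(py[j]) for j in range(2))
h_y_given_x = -sum(
    pxy[i, j] * np.log(pxy[i, j] / px[i])   # p(y|x) = p(x,y)/p(x)
    for i in range(2) for j in range(2)
)
mi_eqn74 = h_y - h_y_given_x

print(np.isclose(mi_direct, mi_eqn74))  # True
```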

Prosecution Timeline

Nov 29, 2022: Application Filed
Mar 07, 2026: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579423: SYSTEMS AND METHODS FOR PREDICTING BIOLOGICAL RESPONSES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12555004: PATH-SUFFICIENT EXPLANATIONS FOR MODEL UNDERSTANDING (granted Feb 17, 2026; 2y 5m to grant)
Patent 12541707: METHOD AND SYSTEM FOR DEVELOPING A MACHINE LEARNING MODEL (granted Feb 03, 2026; 2y 5m to grant)
Patent 12510888: Model Reduction and Training Efficiency in Computer-Based Reasoning and Artificial Intelligence Systems (granted Dec 30, 2025; 2y 5m to grant)
Patent 12511577: DETERMINING AVAILABILITY OF NETWORK SERVICE (granted Dec 30, 2025; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 63%
With Interview: 99% (+38.4%)
Median Time to Grant: 4y 1m
PTA Risk: Low

Based on 580 resolved cases by this examiner. Grant probability derived from the career allow rate.
