Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicants
This communication is in response to the Application filed on 04/18/2024.
Claims 1-20 are pending.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5 and 9-20 are rejected under 35 U.S.C. 103 as being unpatentable over Munkberg et al. (U.S. Publication No. 2018/0357537) (hereafter, "Munkberg") in view of Cleland et al. (U.S. Publication No. 2022/0198245) (hereafter, "Cleland").
Regarding claim 1, Munkberg teaches a system for training a neural network model ([0005] system are disclosed for training a neural network model; [0054] randomized slices or other subsets of the spectral representation of the volume undergoing scanning, may be fed in as a sequence to train a neural network; [0040]): at least one computer processor ([0023] any processor capable of performing the necessary processing operations); and at least one non-transitory computer-readable storage medium ([0112] computer-readable media) storing instructions that, when executed by the at least one computer processor, cause the at least one computer processor to perform ([0112] Computer programs, or computer control logic algorithms, may be stored in the main memory 604 ... computer programs, when executed, enable the system 600 to perform various functions. The memory 604, the storage 610, and/or any other storage are possible examples of computer-readable media): obtaining a first sequence of sparse input data as training data ([0024] At step 110, an input vector X is selected from a set of training data that includes ... sparse target vectors Ȳ; [0056] At step 165, a sparse input vector X̅ is selected from a set of training data that includes sparse input vectors X̅ and sparse target vectors Ȳ); augmenting the first sequence of sparse input data by zero-filling missing input points ([0045] Setting the components where Ȳ has missing samples to a predetermined value ensures that the gradient is minimized (i.e., becomes zero) for positions where the sparse target vector is missing samples; [0057] At step 168, values are inserted into the sparse input vector for the missing samples ... inserts values for the missing samples according to the bitmask. The values may be predetermined, such as zero); training the recurrent neural network model using the augmented sequence of sparse input data to obtain a trained recurrent neural network model, and ([0039] FIG. 1D illustrates a block diagram of a system 150 for training a neural network 125 using sparse target vectors 145; [0042] When the input vectors 115 includes sparse input vectors, the neural network 125 is trained by minimizing a loss function,
[loss function equation reproduced from Munkberg as image media_image1.png]
where the sparse input vector X̅ is a subset of the dense input vector X; [0044] all components where Ȳ has missing samples are set to a predetermined value, such as zero) applying new data as an input to the trained recurrent neural network model ([0039] Input vectors 115 may be sparse X̅ or dense X; [0040] During training, an input vector X̅ or X is applied to a neural network model 125), wherein the new data comprises a second sequence of sparse input data to obtain a corresponding output data sequence ([0040] During training, an input vector X̅ or X is applied to a neural network model 125 to generate the output f(X̅) or f(X). A sparse parameter adjustment unit 135 receives both the output f(X̅) or f(X) and the sparse target vector Ȳ that is paired with the input vector X̅ or X that was applied to generate the output f(X̅) or f(X)).
Munkberg does not expressly teach that the neural network model is recurrent.
However, Cleland teaches a recurrent neural network ([0027] FIG. 1 shows an information processing system 100 implementing a neuromorphic algorithm using a spiking neural network (SNN); [0105] Feedback-inclusive spiking networks are sometimes referred to as recurrent neural networks (RNNs)).
It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Munkberg to incorporate the step/system of training a recurrent neural network taught by Cleland.
The suggestion/motivation for doing so would have been to restore data quality ([0005] Illustrative embodiments provide neuromorphic algorithms for rapid online learning and signal restoration ... provide spiking neural network (SNN) algorithms, inspired by olfactory brain circuitry, that enable the rapid online learning of sensor array responses and the subsequent identification of source signatures under highly suboptimal conditions; [0173] These will deploy the “correct” inhibition onto principal neurons, affecting their spike timing such that the principal neurons more accurately reflect the odor/signal that is associated with those interneurons). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Munkberg and Cleland to obtain the invention as specified in claim 1.
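By way of illustration only, the zero-filling augmentation mapped above (Munkberg [0045], [0057]) can be sketched in Python as follows; the function name, the toy sequence, and the use of NaN to mark missing samples are assumptions for illustration, not Munkberg's implementation.

    import numpy as np

    def zero_fill(sparse_seq, mask, fill_value=0.0):
        # Keep observed samples; write the predetermined value (zero)
        # wherever the bitmask marks a sample as missing (cf. [0057]).
        return np.where(mask, sparse_seq, fill_value)

    # Toy length-8 sequence with samples missing at t = 2, 5, 6.
    x = np.array([0.9, 1.1, np.nan, 0.7, 0.8, np.nan, np.nan, 1.0])
    mask = ~np.isnan(x)
    x_aug = zero_fill(x, mask)  # [0.9, 1.1, 0.0, 0.7, 0.8, 0.0, 0.0, 1.0]

The augmented sequence can then be fed to the recurrent model as a fully populated input, which is the role the bitmask-driven insertion of predetermined values plays in the mapping above.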
Regarding claim 2, the combination of Munkberg and Cleland teaches all the limitations of claim 1 above. Munkberg teaches wherein training of the recurrent neural network model comprises ([0039] training a neural network 125) updating values of a plurality of parameters of the recurrent neural network model via a selective backpropagation through time process ([0045] the backpropagation process performed by the sparse parameter adjustment unit 135 will only update the parameters, Θ, based on the actual data present in the sparse target vector ... the parameters are weights of the neural network 125; [0040] A sparse parameter adjustment unit 135 receives both the output f(X̅) or f(X) and the sparse target vector Ȳ that is paired with the input vector X̅ or X that was applied to generate the output f(X̅) or f(X), respectively. The bitmask(s) for each training pair may be provided to the parameter adjustment unit 135; [0030] the sparse input data and/or sparse target data for the training dataset is computed on-the-fly rather than storing the entire training dataset ... a bitmask indicates positions associated with the subset of samples that are present in the sparse target data; [0035]).
Regarding claim 3, the combination of Munkberg and Cleland teaches all the limitations of claim 2 above. Munkberg teaches wherein the selective backpropagation through time process comprises ([0045] the backpropagation process) computing a reconstruction loss for observed data points in the first sequence of sparse input data and ([0040] A loss function may be computed by the parameter adjustment unit 135 to measure distances (i.e., differences or gradients) between the sparse target vectors 145 and the output vectors. The parameter adjustment unit 135 adjusts the parameters based on the distances and the target bitmask) bypassing computing the reconstruction loss for missing data points in the first sequence of sparse input data ([0028] samples in the output vector that correspond with samples missing in the sparse target vector may be discarded or need not be generated by the neural network model; [0039] a bitmask associated with each sparse target vector indicates positions of the samples in the subset of the samples. The positions corresponding to samples present in the subset of the samples varies for each sparse target vector in the sparse target vectors 145).
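To make the selective backpropagation through time mapped in claims 2 and 3 concrete, the following is a minimal PyTorch sketch under stated assumptions: a small GRU stands in for the claimed recurrent model, and the loss is accumulated only where the target bitmask marks observed points, so missing points contribute neither error nor gradient (cf. Munkberg [0040], [0045]). All names, shapes, and hyperparameters are illustrative.

    import torch
    import torch.nn as nn

    class SeqModel(nn.Module):
        # A small GRU standing in for the claimed recurrent model.
        def __init__(self, n_features=1, hidden=32):
            super().__init__()
            self.rnn = nn.GRU(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_features)

        def forward(self, x):  # x: (batch, time, features)
            h, _ = self.rnn(x)
            return self.head(h)

    def masked_mse(pred, target, mask):
        # Reconstruction loss over observed points only; masked-out
        # (missing) points contribute zero error and zero gradient.
        err = (pred - target) ** 2 * mask
        return err.sum() / mask.sum().clamp(min=1)

    torch.manual_seed(0)
    model = SeqModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(4, 16, 1)                    # zero-filled sparse inputs
    y = torch.randn(4, 16, 1)                    # sparse targets (zeros where missing)
    mask = (torch.rand(4, 16, 1) > 0.3).float()  # 1 = observed, 0 = missing

    loss = masked_mse(model(x), y, mask)
    opt.zero_grad()
    loss.backward()  # backpropagation through time sees observed terms only
    opt.step()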
Regarding claim 4, the combination of Munkberg and Cleland teaches all the limitations of claim 1 above. Munkberg teaches wherein the instructions further cause the at least one computer processor to generate the output data sequence by reconstructing the output data sequence at observed data points of the second sequence of sparse input data and ([0039] When the input vectors 115 are sparse, a bitmask may be used to indicate the positions of samples that are present in each input vector; [0040] During training, an input vector X̅ or X is applied to a neural network model 125 to generate the output f(X̅) or f(X). A sparse parameter adjustment unit 135 receives both the output f(X̅) or f(X) and the sparse target vector Ȳ that is paired with the input vector X̅ or X that was applied to generate the output f(X̅) or f(X)) interpolating the output data sequence at unobserved data points of the second sequence of sparse input data ([0028] At step 120, the input vector is processed by a neural network model to produce output data for the samples within the output vector; [0063] Steps 110 and 120 are performed ... when sparse input vectors are used, values are inserted into the sparse input vector for the missing samples).
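One plausible reading of the generate-and-interpolate step in claim 4 is sketched below, reusing SeqModel from the sketch above: the trained model is run once over the zero-filled new sequence, its output serves as the interpolant at unobserved points, and the observed samples are passed through unchanged. Whether observed samples are passed through or replaced by the reconstruction is an assumption here, not something the cited paragraphs settle.

    import torch

    @torch.no_grad()
    def generate_output(model, sparse_seq, mask):
        # sparse_seq, mask: (time, features); model as sketched above.
        x = torch.where(mask.bool(), sparse_seq, torch.zeros_like(sparse_seq))
        out = model(x.unsqueeze(0)).squeeze(0)  # reconstruction everywhere
        # Keep observed samples; take the model's interpolated estimates
        # at the unobserved positions.
        return torch.where(mask.bool(), sparse_seq, out)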
Regarding claim 5, the combination of Munkberg and Cleland teaches all the limitations of claim 1 above. Munkberg teaches wherein the instructions further cause the at least one computer processor to pretrain the recurrent neural network model with input data that is not missing input points ([0024] At step 110, an input vector X is selected from a set of training data that includes dense input vectors X; [0041]).
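A hypothetical two-stage schedule matching this pretraining limitation might look as follows, reusing SeqModel and masked_mse from the sketch above; the stage ordering and toy data are assumptions for illustration.

    import torch

    torch.manual_seed(1)
    model = SeqModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_dense = torch.randn(4, 16, 1)

    # Stage 1: pretrain on data with no missing points; the mask is all
    # ones (cf. Munkberg's dense input vectors X, [0024]).
    loss = masked_mse(model(x_dense), x_dense, torch.ones_like(x_dense))
    opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: continue training on zero-filled sparse versions.
    keep = (torch.rand_like(x_dense) > 0.3).float()
    loss = masked_mse(model(x_dense * keep), x_dense * keep, keep)
    opt.zero_grad(); loss.backward(); opt.step()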
Regarding claim 9, the combination of Munkberg and Cleland teaches all the limitations of claim 1 above. Munkberg teaches wherein the first sequence of sparse input data comprises ([0056] a sparse input vector X̅ is selected from a set of training data that includes sparse input vectors X̅ and sparse target vectors Ȳ) data from a scanning or temporally multiplexing sampling process ([0054] Functional Magnetic Resonance (MRI) images captured using different, randomized slices or other subsets of the spectral representation of the volume undergoing scanning, may be fed in as a sequence to train a neural network to reconstruct high-quality volumetric images based only on the limited amount of information that corresponds to a short pulse sequences).
With respect to claim 10, arguments analogous to those presented for claim 1 are applicable.
With respect to claim 11, arguments analogous to those presented for claim 2 are applicable.
With respect to claim 12, arguments analogous to those presented for claim 3 are applicable.
With respect to claim 13, arguments analogous to those presented for claim 4 are applicable.
With respect to claim 14, arguments analogous to those presented for claim 5 are applicable.
With respect to claim 15, arguments analogous to those presented for claim 9 are applicable.
With respect to claim 16, arguments analogous to those presented for claim 1 are applicable.
With respect to claim 17, arguments analogous to those presented for claim 2 are applicable.
With respect to claim 18, arguments analogous to those presented for claim 3 are applicable.
With respect to claim 19, arguments analogous to those presented for claim 4 are applicable.
With respect to claim 20, arguments analogous to those presented for claim 5 are applicable.
Claims 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Munkberg et al. (U.S. Publication No. 2018/0357537) (hereafter, "Munkberg") in view of Cleland et al. (U.S. Publication No. 2022/0198245) (hereafter, "Cleland") and further in view of Weingärtner et al. (U.S. Publication No. 2020/0289011) (hereafter, "Weingärtner").
Regarding claim 6, the combination of Munkberg and Cleland teaches all the limitations of claim 1 above. Munkberg teaches wherein the first sequence of sparse input data ([0056] a sparse input vector X̅ is selected from a set of training data that includes sparse input vectors X̅ and sparse target vectors Ȳ) and the second sequence of sparse input data comprise ([0039] Input vectors 115 may be sparse X̅ or dense X; [0040] During training, an input vector X̅ or X is applied to a neural network model 125).
Munkberg does not expressly teach staggered samplings of data.
However, Weingärtner teaches staggered samplings of data ([0051] A has a “stacked”-convolutional structure, where each column represents a time-shifted copy of the multi-channel waveform template. This gives a Toeplitz-structure for a single neuron and a single channel. The final matrix A is obtained by interleaving the rows of these Toeplitz matrices for different channels and then interleaving the columns of the resulting matrices for different neurons. This finally yields a matrix formed of time shifted copies of the block of multi-channel waveforms).
It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Munkberg to incorporate the step/system of using a matrix formed of time-shifted copies for neuronal signal recordings taught by Weingärtner.
The suggestion/motivation for doing so would have been to improve the efficiency of processing for online spike recovery in high-density electrode arrays that analyze sparse neuronal data ([0006] the concept of effective bandwidth is used to derive improved bounds to minimize the buffer time and limit the computational effort required to analyze the recordings. In several embodiments, a process is utilized to determine bounds and achieve an efficient method suitable for online processing; [0041] accurate signal recovery is feasible with finite buffers in an online setting by utilizing processes that derive improved bounds for the buffer size ... sparse signal recovery processes in accordance with various embodiments of the invention that utilize limited buffer sizes enable accurate online spike detection). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Munkberg, Cleland, and Weingärtner to obtain the invention as specified in claim 6.
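To make the cited time-shifted-copies structure concrete, the sketch below builds, for a single neuron and a single channel, a matrix whose columns are time-shifted copies of a waveform template, which is the Toeplitz structure Weingärtner describes at [0051]; the full matrix would then interleave rows across channels and columns across neurons. The template values and sizes are assumptions for illustration.

    import numpy as np

    def shifted_copies(template, n_times):
        # Columns are time-shifted copies of one waveform template,
        # giving a Toeplitz structure for a single neuron and channel.
        k = len(template)
        A = np.zeros((n_times, n_times - k + 1))
        for j in range(n_times - k + 1):
            A[j:j + k, j] = template
        return A

    w = np.array([0.2, 1.0, -0.5])    # toy spike waveform template
    A = shifted_copies(w, n_times=8)  # 8 x 6, constant along each diagonal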
Regarding claim 8, the combination of Munkberg and Cleland teaches all the limitations of claim 1 above. Munkberg teaches wherein the first sequence of sparse input data comprises ([0056] a sparse input vector X̅ is selected from a set of training data that includes sparse input vectors X̅ and sparse target vectors Ȳ).
Munkberg does not expressly teach electrophysiological recording data.
However, Weingärtner teaches electrophysiological recording data ([0007] continuously obtaining multi-channel electrophysiological recordings using a multi-channel electrode; [0020] a probe capable of capturing multi-channel electrophysiological recordings).
It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Munkberg to incorporate the step/system of using multi-channel electrophysiological recordings for online spike recovery taught by Weingärtner.
Motivation for this combination has been stated in the rejection of claim 6 above.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Munkberg et al. (U.S. Publication No. 2018/0357537) (hereafter, "Munkberg") in view of Cleland et al. (U.S. Publication No. 2022/0198245) (hereafter, "Cleland") and further in view of Marshel et al. (U.S. Publication No. 2021/0063964) (hereafter, "Marshel").
Regarding claim 7, the combination of Munkberg and Cleland teaches all the limitations of claim 1 above. Munkberg teaches wherein the first sequence of sparse input data comprises ([0056] a sparse input vector X̅ is selected from a set of training data that includes sparse input vectors X̅ and sparse target vectors Ȳ).
Munkberg does not expressly teach 2-photon (2P) calcium imaging data.
However, Marshel teaches 2-photon (2P) calcium imaging data ([0133] minimally exciting neural activity sensors, such as calcium sensitive indicators (e.g., GCaMP) and voltage sensitive indicators expressed in cell-type specific fashion in the brain by engineered viruses or other genetic targeting strategies ... Any and all neurons in the three dimensional field of view accessible with a single two-photon objective (>1000 um3) will be accessible to fire with high precision).
It would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to modify the device and method of Munkberg to incorporate the step/system of using 2-photon calcium imaging data for stimulating and monitoring network activity taught by Marshel.
The suggestion/motivation for doing so would have been to improve the temporal precision of neural activation patterns ([0047] natural patterns of activity can be precisely measured and/or replayed into a neural network, for example to create artificial perceptions or to artificially reinforce learning. In certain instances, the subject methods provide for an improvement in temporal precision of neural activation patterns). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Munkberg, Cleland, and Marshel to obtain the invention as specified in claim 7.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL C. CHANG, whose telephone number is (571) 270-1277. The examiner can normally be reached Monday through Thursday and alternate Fridays, 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan S. Park, can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL C CHANG/Examiner, Art Unit 2669
/CHAN S PARK/Supervisory Patent Examiner, Art Unit 2669