Prosecution Insights
Last updated: April 19, 2026
Application No. 17/663,031

METHOD FOR NEURAL SIGNALS STABILIZATION

Final Rejection under §103
Filed: May 12, 2022
Examiner: BRACERO, ANDREW ANGEL
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: Teledyne Scientific & Imaging LLC
OA Round: 2 (Final)
Grant Probability: 100% (Favorable); 99% with interview
OA Rounds: 3-4
To Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 100% (5 granted / 5 resolved), +45.0% vs TC avg — above average
Interview Lift: +0.0% (minimal; resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 31 across all art units; 26 currently pending

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 44.0% (+4.0% vs TC avg)
§102: 9.6% (-30.4% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)

Tech Center averages are estimates; based on career data from 5 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-14 are presented for examination in this application (17/663,031), filed 2022-05-12 and having an effective filing date, via provisional application 63/188,085, of 2021-05-13. The Examiner cites particular sections in the references as applied to the claims below for the convenience of the applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant(s) fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Information Disclosure Statement

Acknowledgement is made of the information disclosure statements filed on 2025-11-25.

Response to Arguments

Applicant’s arguments and remarks filed 2025-11-25 have been fully considered. The arguments and remarks regarding the 35 U.S.C. 101 rejections were found to be persuasive. The arguments and remarks regarding the 35 U.S.C. 103 rejections were not found to be persuasive. The 35 U.S.C. 101 and 35 U.S.C. 112 rejections have been overcome. The 35 U.S.C. 103 rejections have been maintained.

Response to Arguments: 35 U.S.C. 103

Applicant’s response: Applicant asserts “Independent claim 1 recites the training a plurality of models to translate disrupted neural signals to recovered neural signals …ADAN architecture described by Farshchian, which is similar that of Generative Adversarial Network uses a single model to reconstruct the neural signals”.

Examiner’s response: The examiner respectfully disagrees.
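The dispute turns on whether a GAN-style architecture is one model or a plurality of models. The examiner's position that a GAN comprises two distinct networks (a generator and a discriminator) can be sketched structurally as follows. This is an illustrative sketch only, not Farshchian's ADAN implementation; the class names and toy scalar parameters are hypothetical.

```python
import numpy as np

# Structural sketch: a GAN-style setup is built from two distinct
# networks, i.e., a "plurality of models". Illustrative only -- not
# Farshchian's ADAN; all names and parameters are hypothetical.

rng = np.random.default_rng(0)

class Generator:
    """Maps latent noise z to a synthetic sample (here, a scalar)."""
    def __init__(self):
        self.w, self.b = rng.normal(), 0.0

    def forward(self, z):
        return self.w * z + self.b

class Discriminator:
    """Scores a sample; higher means 'looks like real day-0 data'."""
    def __init__(self):
        self.w, self.b = rng.normal(), 0.0

    def forward(self, x):
        # Sigmoid squashes the score into (0, 1).
        return 1.0 / (1.0 + np.exp(-(self.w * x + self.b)))

# The architecture holds two separately parameterized networks.
models = [Generator(), Discriminator()]
print(len(models))  # 2
```

The point of the sketch is only that the generator and discriminator carry separate parameter sets and are trained against each other, so the architecture can reasonably be counted as two models.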
Under broadest reasonable interpretation in light of the specification, one of ordinary skill in the art may take the Generative Adversarial Network, as disclosed by Farshchian, as being composed of a plurality of models, specifically two, as the GAN is composed of two neural networks: a generator and a discriminator. These two neural networks may be seen as models that are being used to reconstruct the neural signals, as shown at least in fig. 1.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Farshchian et al. (“Adversarial Domain Adaptation For Stable Brain-Machine Interfaces”, hereinafter referred to as Farshchian) in view of Kao et al. (“Single-Trial Dynamics of Motor Cortex and their Applications to Brain-Machine Interfaces”, hereinafter referred to as Kao).

Regarding claim 1 (currently amended): Farshchian teaches a method for recalibrating a brain-computer interface (BCI) in communication with a patient comprising (see abstract page 1: “Brain-Machine Interfaces (BMIs) have recently emerged as a clinically viable option to restore voluntary movements after paralysis. … We implement various domain adaptation methods to stabilize the interface over significantly long times.”.): receiving, at the BCI, a first set of neural signals and a second set of neural signals, wherein the first set of neural signals comprises day0 data and wherein the second set of neural signals comprises dayi data (see section 3 ‘Experimental Setup’ page 3: “To record neural activity, we implanted a 96-channel microelectrode array (Blackrock Microsystems, Salt Lake City, Utah) into the hand area of primary motor cortex (M1). Prior to implanting the array, we intraoperatively identified the hand area of M1 through sulcal landmarks, and by stimulating the surface of the cortex to elicit twitches of the wrist and hand muscles. We also implanted electrodes in 14 muscles of the forearm and hand, allowing us to record the electromyograms (EMGs) that quantify the level of activity in each of the muscles. Data was collected in five experimental sessions spanning 16 days.”. Also see fig. 1(C) “C.
The ADAN architecture that aligns the firing rates of day-k to those of day-0, when the BMI was built.”.), transmitting, to a computing device comprising at least one processor coupled to at least one memory unit, the first set of neural signals and the second set of neural signals and storing the signals in a first datastore and a second datastore of the computing device (see fig. 1(C): “C. The ADAN architecture that aligns the firing rates of day-k to those of day-0, when the BMI was built.”.), training, by the computing device, a plurality of neural translation models in an adversarial networks for neural interfaces (ANNI) method, wherein the ANNI method trains a plurality of translation models (see fig. 1 (C)), and wherein the plurality of translation models comprise a first autoencoder model, a second autoencoder model (see fig. 1 (B)), a disrupted recovery model (see fig. 1 (C)), an artificial disruption model (see fig. 1 (C)), a first discriminator model (see fig. 1(C). Also see section 4.2.3 page 5: “To this end, we train an ADAN whose architecture is very similar to that of a Generative Adversarial Network (GAN): it consists of two deep neural networks, a distribution alignment module and a discriminator module (Figure 1C).”), a second discriminator model (see section 4.2.3 paragraph 2 page 5: “The discriminator is an AE … with the same architecture as the one used for the BMI (Figure 1B)”. Also see fig. 1 (B)), shared latent space model (see fig. 1 (C)), a shared signal space model (see fig. 1 (C)) and a penalty drift model with training objectives that achieve a shared latent space which retains signal class information, and penalize signal class swapping (see fig. 1 (B) and fig. 
1 (C)); wherein the training comprises generating, by the computing device, a model loss function for each of the plurality of the neural translation models (see section 5, first paragraph, page 6: “In simultaneous training, the AE is trained using the joint loss function of equation 1 that includes not only the unsupervised neural reconstruction loss but also a supervised regression loss that quantifies the quality of EMG predictions. Therefore, the supervision of the dimensionality reduction step through the integration of relevant movement information leads to a latent representation that better captures neural variability related to movement intent.” ), deriving, by the computing device, a weighting value for each of the plurality of neural translation models, wherein the weighting value corresponds to the loss function for each of the plurality of neural translation models (see section 4.2.3 page 6: “Given discriminator and aligner parameters θD and θA, respectively, the discriminator and aligner loss functions LD and LA to be minimize”), calculating, by the computing device, an internal metric value for each respective epoch (see pg. 5 section 4.2.3: “To train the ADAN, we need to quantify the reconstruction losses. Given input data X, the discriminator outputs ˆ X = ˆX(X,θD), with residuals R(X,θD) = X − ˆX(X,θD) . Consider the scalar reconstruction losses r obtained by taking the L1 norm of each column of R. Let ρ0 and ρk be the distributions of the scalar losses for day-0 and day-k, respectively, and let µ0 and µk be their corresponding means. 
We measure the dissimilarity between these two distributions by a lower bound to the Wasserstein distance (Arjovsky et al., 2017), provided by the absolute value of the difference between the means: W(ρ0,ρk) ≥ |µ0 − µk| (Berthelot et al., 2017).”), determining, by the computing device, whether the internal metric value of the respective epoch meets a predetermined value or meets a predetermined number of epochs (see section 4.2.3, paragraph 4, page 6: “Given discriminator and aligner parameters θD and θA, respectively, the discriminator and aligner loss functions LD and LA to be minimize can be expressed as [equation image not reproduced]”), determining, by the computing device, whether the internal metric value meets a predetermined value or meets a predetermined number of epochs (see figure 4 page 9: “Average improvements in EMG prediction performance for alignment using ADAN as a function of the amount of training data needed for domain adaptation at the beginning of each day, averaged over all days after day-0. Shading represents standard deviation of the mean.”), updating, by the computing device, the weighting values for the plurality of neural translation models when the predetermined internal metric value is not met (see section 4.2.3, page 6: “The aligner module receives as inputs the firing rates Xk of day-k. During training, the gradients through the discriminator bring the output A(Xk) of the aligner closer to X0”), and selecting, by the computing device, a dayi translation model based on the weighting values of the plurality of neural translation models that correspond to a first epoch with a highest internal metric value (see pg. 5 section 4.2.3: “The goal of the discriminator is to maximize the difference between the neural reconstruction losses of day-k and day-0.
The great dissimilarity between the probability distribution of day-0 residuals and that of day-k residuals obtained with the discriminator in its initialized state results in a strong signal that facilitates subsequent discriminator training. The distribution alignment module works as an adversary to the discriminator by minimizing the neural reconstruction losses of day-k (Warde-Farley & Bengio, 2017). It consists of a hidden layer with exponential units and a linear readout layer, each with n fully connected units. The aligner parameters θA, the weights of the n by n connectivity matrices from input to hidden and from hidden to output, are initialized as the corresponding identity matrices. The aligner module receives as inputs the firing rates Xk of day-k.”), translating, by the BCI, the incoming neural signals based on the selected dayi model (Fig 1 reconstructed aligned firing rates day k), and outputting, by the BCI, recovered neural signals according to the selected dayi model (Fig 1 reconstructed aligned firing rates day k). Farshchian does not teach receiving, at the BCI, incoming neural signals wherein the incoming neural signals comprises dayi+1 data. Kao, however, analogously teaches receiving, at the BCI, incoming neural signals wherein the incoming neural signals comprises dayi+1 data (see pg. 2: “These dynamics characterize how the neural population activity modulates itself over time (for example, through recurrent connectivity8,9) so that the neural population activity at time k is informative of the population activity at time k+1.
”) Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Farshchian and Kao before him or her, to modify the method of claim 1 to include attributes of receiving, at the BCI, incoming neural signals wherein the incoming neural signals comprises dayi+1 data as taught by Kao in order to allow a model in which activity at time i is informative of activity at time i + 1 (see page 2 second paragraph: “These dynamics characterize how the neural population activity modulates itself over time (for example, through recurrent connectivity8,9) so that the neural population activity at time k is informative of the population activity at time k+1.”).

Regarding claim 2 (currently amended): Farshchian in view of Kao teaches the method of claim 1. Farshchian further teaches wherein the day0 data comprises data received by the same day that the BCI is calibrated by a clinician (see section 4.1 page 3: “Once the neural AE and the EMG predictor networks have been trained on the data acquired on the first recording session, indicated as day-0, their weights remain fixed.”. Also see section 4.2 page 3: “To stabilize a fixed BMI, we need to align the latent space of later days to that of the first day, when the fixed interface was initially built.”. Also see abstract page 1: “Brain-Machine Interfaces (BMIs) have recently emerged as a clinically viable option to restore voluntary movements after paralysis.”.) and wherein the day0 data corresponds to undisrupted neural signals (see section 4.2 page 3: “To stabilize a fixed BMI, we need to align the latent space of later days to that of the first day, when the fixed interface was initially built. This step is necessary to provide statistically stationary inputs to the EMG predictor.”.).

Regarding claim 10 (currently amended): Farshchian in view of Kao teaches the method of claim 1.
Farshchian further teaches wherein a class drift penalty is applied such that recovered signals are required to resemble their original disrupted version in order to deter the problem of class swap (see section 5 page 6: “In simultaneous training, the AE is trained using the joint loss function of equation 1 that includes not only the unsupervised neural reconstruction loss but also a supervised regression loss that quantifies the quality of EMG prediction”).

Claims 3 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Farshchian et al. (“Adversarial Domain Adaptation For Stable Brain-Machine Interfaces”, hereinafter referred to as Farshchian) in view of Kao et al. (“Single-Trial Dynamics of Motor Cortex and their Applications to Brain-Machine Interfaces”, hereinafter referred to as Kao) in further view of Gandhi et al. (“Denoising Time Series Data Using Asymmetric generative Adversarial Networks”, hereinafter referred to as Gandhi).

Regarding claim 3 (currently amended): Farshchian in view of Kao teaches the method of claim 1. Farshchian in view of Kao does not teach wherein the first set of neural signals in the first datastore is unpaired to the second set of neural signals in the second datastore. Gandhi, however, analogously teaches wherein the first set of neural signals in the first datastore is unpaired to the second set of neural signals in the second datastore (see section 3 page 7: “Thus, we want to learn a mapping from a noisy signal to a clean signal using only a set of unpaired noisy signals and a set of clean signals.”. Also see abstract page 2: “Our model for denoising time series is trained using unpaired training corpora and does not need information about the source of the noise or how it is manifested in the time series.”.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Farshchian, Kao, and Gandhi before him or her, to modify the method of claim 3 to include attributes of wherein the first set of neural signals in the first datastore is unpaired to the second set of neural signals in the second datastore as taught by Gandhi in order to ease the expense of removing noise [(Examiner’s note: a person having ordinary skill in the art, using broadest reasonable interpretation, in light of the specification, could take removing noise and the stabilizing of neural signals to function similarly.)] (see Gandhi at section 3 page 7: “This is because manually removing noise is an expensive task and needs domain expertise. But, it is much easier to collect signals with artifacts and signals without artifacts.”.).

Regarding claim 9 (currently amended): Farshchian in view of Kao teaches the method of claim 1. Farshchian in view of Kao does not teach wherein the cycle-consistency penalty is applied in the latent space as a direct comparison between the clean and disrupted latent representations of the same signal in order to separate the shared latent space by signal class. Gandhi, however, analogously teaches wherein the cycle-consistency penalty is applied in the latent space as a direct comparison between the clean and disrupted latent representations of the same signal in order to separate the shared latent space by signal class (see section 4 page 8: “\(D_a\) and \(D_b\) are two adversarial discriminators. \(D_a\) aims to distinguish between noisy time series A and time series generated by adding noise \(B+G\_N(B)\). \(D_b\) aims to distinguish between clean time series B and denoised time series \(G\_B(A)\). We describe the architecture of the discriminators in Sect. 4.1.
To train this architecture, just like in cycleGAN, four losses are used, two adversarial losses and two cycle consistency loss. The adversarial losses are used for training the two mapping functions. For mapping function \(G\_B : A \rightarrow B\), discriminator \(D_b\) matches the distribution of time series denoised by the generator and the distribution of clean time series. This loss function is given by Eq. 1.”. Also see section 4 page 9: “To enforce this relation between noisy input and clean output by the generator we use a cycle consistency loss.”). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Farshchian, Kao, and Gandhi before him or her, to modify the method of claim 9 to include attributes of the cycle-consistency penalty is applied in the latent space as a direct comparison between the clean and disrupted latent representations of the same signal as taught by Gandhi in order to separate the shared latent space by signal class in order to ease the expense of removing noise [(Examiner’s note: a person having ordinary skill in the art, using broadest reasonable interpretation, in light of the specification, could take removing noise and the stabilizing of neural signals to function similarly.)] (see Gandhi at section 3 page 7: “This is because manually removing noise is an expensive task and needs domain expertise. But, it is much easier to collect signals with artifacts and signals without artifacts.”.).

Claims 5, 6, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Farshchian et al. (“Adversarial Domain Adaptation For Stable Brain-Machine Interfaces”, hereinafter referred to as Farshchian) in view of Kao et al. (“Single-Trial Dynamics of Motor Cortex and their Applications to Brain-Machine Interfaces”, hereinafter referred to as Kao) in further view of Pandarinath et al. (US20220129071A1, hereinafter referred to as Pandarinath).
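The cycleGAN-style training described in the Gandhi passages above combines adversarial losses with cycle-consistency losses: a signal mapped from the noisy domain to the clean domain and back should reproduce the original. A minimal sketch of the cycle-consistency term follows; the linear mapping functions stand in for trained generators and are hypothetical, not Gandhi's implementation.

```python
import numpy as np

# Sketch of a cycleGAN-style cycle-consistency loss (cf. the Gandhi
# passages above): round-tripping a signal noisy -> clean -> noisy
# should reproduce the original. The two mappings below are toy
# stand-ins for trained generator networks.

def g_b(a):
    # Hypothetical noisy-to-clean mapping (stand-in generator G_B).
    return a * 0.5

def g_a(b):
    # Hypothetical clean-to-noisy mapping (stand-in generator G_A).
    return b * 2.0

def cycle_consistency_loss(a):
    """L1 distance between a signal and its round-trip reconstruction."""
    reconstructed = g_a(g_b(a))
    return float(np.mean(np.abs(a - reconstructed)))

noisy = np.array([1.0, -2.0, 3.0])
print(cycle_consistency_loss(noisy))  # 0.0: the toy mappings are exact inverses
```

During real training this term is added to the adversarial losses, penalizing generator pairs whose mappings are not mutually consistent; with the exactly inverse toy mappings above the penalty vanishes.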
Regarding claim 5: Farshchian in view of Kao teaches the method of claim 1. Farshchian in view of Kao does not explicitly teach wherein the BCI and the computing device are integrated into the same device. Pandarinath, however, analogously teaches wherein the BCI and the computing device are integrated into the same device (see [0030]: “In other embodiments, the one or more sensors 120 and/or the target device 130 may incorporate the functionalities discussed and associated with the system 110.”. Also see [0031]: “Although the systems/devices of the environment 100 are shown as being directly connected, the device 110 may be indirectly connected to one or more of the other systems/devices of the environment 100. In some embodiments, the device 110 may be only directly connected to one or more of the other systems/devices of the environment 100.”. Also see [0007]: “The disclosed embodiments may include computer-implemented systems and methods for stabilizing a brain computer interface (BC) so that the a target device may controlled for long periods of time without supervised recalibration.”. Also see [0008]: “In some embodiments, the system may include one or more processors; and one or more hardware storage devices having stored thereon computer-executable instructions. The instructions may be executable by the one or more processors to cause the computing system to perform at least receiving neural data for a period of time from one or more sensors. The one or more processors may be further configured to cause the computing system to perform at least transforming the neural data to generate aligned variables using a trained alignment network.” [(Examiner’s note: i.e., emphasis added.
A person having ordinary skill in the art using broadest reasonable interpretation in light of the specification could see that the system from Pandarinath can be integrated in the same device with recited words of “the computing system” using the embodiment of one sensor and one processor.)]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Farshchian, Kao, and Pandarinath before him or her, to modify the method of claim 5 to include attributes of the BCI and the computing device are integrated into the same device as taught by Pandarinath in order to allow for a greater possibility of variations, alternatives, and modifications (see Pandarinath at [0063]: “The computing system 700 depicted in FIG. 7 is merely an example and is not intended to unduly limit the scope of inventive embodiments recited in the claims. One of ordinary skill in the art would recognize many possible variations, alternatives, and modifications. For example, in some implementations, the computing system 700 may have more or fewer subsystems than those shown in FIG. 7, may combine two or more subsystems, or may have a different configuration or arrangement of sub systems.”.)

Regarding claim 6 (currently amended): Farshchian in view of Kao teaches the method of claim 1. Farshchian in view of Kao does not explicitly teach wherein the BCI and computing device send and receive signals over a wireless communication interface. Pandarinath, however, analogously teaches wherein the BCI and computing device send and receive signals over a wireless communication interface (see [0030]: “In some embodiments, the neural data analysis system 110 may be configured to communicate with the one or more sensors 120, the target device 130, another programming or computing device via a wired or wireless connection using any of a variety of local wireless communication techniques”.)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Farshchian, Kao, and Pandarinath before him or her, to modify the method of claim 6 to include attributes of wherein the BCI and computing device send and receive signals over a wireless communication interface as taught by Pandarinath in order to communicate between the target device and the sensors (see Pandarinath at [0030]: “In some embodiments, the neural data analysis system 110 may be configured to communicate with the one or more sensors 120, the target device 130, another programming or computing device via a wired or wireless connection using any of a variety of local wireless communication techniques”.)

Regarding claim 11 (currently amended): Farshchian teaches receiving a first set of neural signals and a second set of neural signals from a BCI, wherein the first set of neural signals comprises day0 data and wherein the second set of neural signals comprises dayi data (see section 3 ‘Experimental Setup’ page 3: “To record neural activity, we implanted a 96-channel microelectrode array (Blackrock Microsystems, Salt Lake City, Utah) into the hand area of primary motor cortex (M1). Prior to implanting the array, we intraoperatively identified the hand area of M1 through sulcal landmarks, and by stimulating the surface of the cortex to elicit twitches of the wrist and hand muscles. We also implanted electrodes in 14 muscles of the forearm and hand, allowing us to record the electromyograms (EMGs) that quantify the level of activity in each of the muscles. Data was collected in five experimental sessions spanning 16 days.”. Also see fig. 1(C) “C.
The ADAN architecture that aligns the firing rates of day-k to those of day-0, when the BMI was built.”.), transmitting the first set of neural signals and the second set of neural signals and storing the signals in a first datastore and a second datastore of the computing device (see fig. 1(C): “C. The ADAN architecture that aligns the firing rates of day-k to those of day-0, when the BMI was built.”.), training a plurality of neural translation models in an adversarial networks for neural interfaces (ANNI) method (see fig. 1 (C)), and wherein the plurality of translation models comprise a first autoencoder model, a second autoencoder model (see fig. 1 (B)), a disrupted recovery model (see fig. 1 (C)), an artificial disruption model (see fig. 1 (C)), a first discriminator model (see fig. 1(C). Also see section 4.2.3 page 5: “To this end, we train an ADAN whose architecture is very similar to that of a Generative Adversarial Network (GAN): it consists of two deep neural networks, a distribution alignment module and a discriminator module (Figure 1C).”), a second discriminator model (see section 4.2.3 paragraph 2 page 5: “The discriminator is an AE … with the same architecture as the one used for the BMI (Figure 1B)”. Also see fig. 1 (B)), shared latent space model (see fig. 1 (C)), a shared signal space model (see fig. 1 (C)) and a penalty drift model wherein the training comprises (see fig. 1 (B) and fig. 1 (C)), generating a model loss function for each of the plurality of the neural translation models (see section 5, first paragraph, page 6: “In simultaneous training, the AE is trained using the joint loss function of equation 1 that includes not only the unsupervised neural reconstruction loss but also a supervised regression loss that quantifies the quality of EMG predictions. 
Therefore, the supervision of the dimensionality reduction step through the integration of relevant movement information leads to a latent representation that better captures neural variability related to movement intent.” ), deriving a weighting value for each of the plurality of neural translation models, wherein the weighting value corresponds to the loss function for each of the plurality of neural translation models (see section 4.2.3 page 6: “Given discriminator and aligner parameters θD and θA, respectively, the discriminator and aligner loss functions LD and LA to be minimize”), calculating an internal metric value for each respective epoch (see pg. 5 section 4.2.3: “To train the ADAN, we need to quantify the reconstruction losses. Given input data X, the discriminator outputs ˆ X = ˆX(X,θD), with residuals R(X,θD) = X − ˆX(X,θD) . Consider the scalar reconstruction losses r obtained by taking the L1 norm of each column of R. Let ρ0 and ρk be the distributions of the scalar losses for day-0 and day-k, respectively, and let µ0 and µk be their corresponding means. 
We measure the dissimilarity between these two distributions by a lower bound to the Wasserstein distance (Arjovsky et al., 2017), provided by the absolute value of the difference between the means: W(ρ0,ρk) ≥ |µ0 − µk| (Berthelot et al., 2017).”), determining whether the internal metric value of the respective epoch meets a predetermined value or meets a predetermined number of epochs (see section 4.2.3, paragraph 4, page 6: “Given discriminator and aligner parameters θD and θA, respectively, the discriminator and aligner loss functions LD and LA to be minimize can be expressed as [equation image not reproduced]”), determining whether the internal metric value of the respective epoch meets a predetermined value or meets a predetermined number of epochs (see figure 4 page 9: “Average improvements in EMG prediction performance for alignment using ADAN as a function of the amount of training data needed for domain adaptation at the beginning of each day, averaged over all days after day-0. Shading represents standard deviation of the mean.”), updating, by the computing device, the weighting values for the plurality of neural translation models when the predetermined value for the internal metric value is not met (see section 4.2.3, page 6: “The aligner module receives as inputs the firing rates Xk of day-k. During training, the gradients through the discriminator bring the output A(Xk) of the aligner closer to X0”), selecting a dayi translation model based on the weighting values of the plurality of neural translation models that correspond to a first epoch with a highest internal metric value (see pg. 5 section 4.2.3: “The goal of the discriminator is to maximize the difference between the neural reconstruction losses of day-k and day-0.
The great dissimilarity between the probability distribution of day-0 residuals and that of day-k residuals obtained with the discriminator in its initialized state results in a strong signal that facilitates subsequent discriminator training. The distribution alignment module works as an adversary to the discriminator by minimizing the neural reconstruction losses of day-k (Warde-Farley & Bengio, 2017). It consists of a hidden layer with exponential units and a linear readout layer, each with n fully connected units. The aligner parameters θA, the weights of the n by n connectivity matrices from input to hidden and from hidden to output, are initialized as the corresponding identity matrices. The aligner module receives as inputs the firing rates Xk of day-k.”), translating the incoming signals based on the selected dayi model (Fig 1 reconstructed aligned firing rates day k), and outputting recovered neural signals according to the selected dayi model (Fig 1 reconstructed aligned firing rates day k). Farshchian does not teach receiving, at the BCI, incoming neural signals wherein the incoming neural signals comprises dayi+1 data, or a non-transitory computer-readable storage medium. Kao, however, analogously teaches receiving, at the BCI, incoming neural signals wherein the incoming neural signals comprises dayi+1 data (see pg. 2: “These dynamics characterize how the neural population activity modulates itself over time (for example, through recurrent connectivity8,9) so that the neural population activity at time k is informative of the population activity at time k+1.
”) Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Farshchian and Kao before him or her, to modify the non-transitory computer readable storage medium of claim 11 to include attributes of receiving, at the BCI, incoming neural signals wherein the incoming neural signals comprises dayi+1 data, or a non-transitory computer-readable storage medium, as taught by Kao, in order to allow a model in which the data at time i is informative of the data at time i + 1 (see page 2, second paragraph: “These dynamics characterize how the neural population activity modulates itself over time (for example, through recurrent connectivity8,9) so that the neural population activity at time k is informative of the population activity at time k+1.”). Farshchian in view of Kao does not explicitly teach a non-transitory computer readable storage medium comprising instructions stored thereon, when executed by one or more processors coupled to one or more memory units, perform operations. Pandarinath, however, teaches in analogous art a non-transitory computer readable storage medium comprising instructions stored thereon, when executed by one or more processors coupled to one or more memory units, perform operations (see [0082]: “In one or more example embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer readable medium or non-transitory processor-readable medium.”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Farshchian, Kao, and Pandarinath before him or her, to modify the non-transitory computer readable storage medium of claim 11 to include attributes of a non-transitory computer readable storage medium comprising instructions stored thereon, when executed by one or more processors coupled to one or more memory units, perform operations as taught by Pandarinath in order to increase the scalability of the operations disclosed by way of different storage mediums (see [0082]: “The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer.”.). Regarding claim 12 (currently amended): Farshchian in view of Kao in further view of Pandarinath teaches the non-transitory computer readable storage medium of claim 11. Farshchian further teaches wherein the day0 data comprises data received by the same day that the BCI is calibrated by a clinician (see section 4.1 page 3: “Once the neural AE and the EMG predictor networks have been trained on the data acquired on the first recording session, indicated as day-0, their weights remain fixed.”. 
Also see section 4.2 page 3: “To stabilize a fixed BMI, we need to align the latent space of later days to that of the first day, when the fixed interface was initially built.”. Also see abstract page 1: “Brain-Machine Interfaces (BMIs) have recently emerged as a clinically viable option to restore voluntary movements after paralysis.”.) and wherein the day0 data corresponds to undisrupted neural signals (see section 4.2 page 3: “To stabilize a fixed BMI, we need to align the latent space of later days to that of the first day, when the fixed interface was initially built. This step is necessary to provide statistically stationary inputs to the EMG predictor.”.). Claim 7 is rejected under 35 U.S.C 103 as being unpatentable over Farshchian et al. (“Adversarial Domain Adaptation For Stable Brain-Machine Interfaces” hereinafter referred to as Farshchian) in view of Kao et al. (“Single-Trial Dynamics of Motor Cortex and their Applications to Brain-Machine Interfaces” hereinafter referred to as Kao) in further view of Roy et al. (“Deep Learning Based Inter-subject Continuous Decoding of Motor Imagery for Practical Brain-Computer Interfaces” hereinafter referred to as Roy). Regarding claim 7 (currently amended): Farshchian in view of Kao teaches the method of claim 1. Farshchian further teaches wherein the internal metric for determining training completion and model selection is the difference between signal distributions according to the mean and σ of the signals (see section 4.2.3 page 5: “The goal of the discriminator is to maximize the difference between the neural reconstruction losses of day-k and day-0”. Also see section 4.2.3 page 6: “Consider the scalar reconstruction losses r obtained by taking the L1 norm of each column of R. Let ρ0 and ρk be the distributions of the scalar losses for day-0 and day-k, respectively, and let µ0 and µk be their corresponding means.
We measure the dissimilarity between these two distributions by a lower bound to the Wasserstein distance (Arjovsky et al., 2017), provided by the absolute value of the difference between the means.”.). Farshchian does not explicitly teach the difference between the mean and variance. Roy, however, teaches in analogous art the difference between the mean and variance (see section 2.3 page 5: “Adam can be understood as a combination of SGDM with momentum and Root Mean Square Error Propagation (RMSprop).” [Examiner’s note: emphasis added. Root mean square error is directly related to variance as it is the square root of the variance.]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Farshchian, Kao, and Roy before him or her, to modify the method of claim 7 to include attributes of the difference between the mean and variance as taught by Roy in order to aid in evaluating the accuracy of the predictive model (see Roy at section 2.3 page 5: “Adam can be understood as a combination of SGDM with momentum and Root Mean Square Error Propagation (RMSprop).” [Examiner’s note: emphasis added. Root mean square error is directly related to variance as it is the square root of the variance.]). Claim 8 is rejected under 35 U.S.C 103 as being unpatentable over Farshchian et al. (“Adversarial Domain Adaptation For Stable Brain-Machine Interfaces” hereinafter referred to as Farshchian) in view of Kao et al. (“Single-Trial Dynamics of Motor Cortex and their Applications to Brain-Machine Interfaces” hereinafter referred to as Kao) in further view of Chang et al. (US20220301563A1 hereinafter referred to as Chang). Regarding claim 8 (currently amended): Farshchian in view of Kao teaches the method of claim 1.
Farshchian in view of Kao does not explicitly teach wherein the internal metric for determining training completion and model selection is the entropy of decoded class values across translated signals and based on the BCI's original day0 decoder. Chang, however, teaches in analogous art wherein the internal metric for determining training completion and model selection is the entropy of decoded class values across translated signals and based on the BCI's original day0 decoder (see [00394]: “First, it was analyzed how the amount of neural data used during training affects decoder performance. For each participant, utterance classification models were fit with neural data recorded during perception and production of an iteratively increasing number of randomly drawn samples (perception or production trials during training blocks) of each utterance. These models were then evaluated on all test block trials for that participant. It was found that classification accuracy and cross entropy improved over approximately 10-15 training samples (FIG. 9, FIG. 10). After this point, performance began to improve more slowly, although it never completely plateaued (except for the answer classifier for participant 2, where 30 training samples were acquired; FIG. 10). These findings suggest that reliable classification performance can be achieved with only 5 minutes of speech data, but it remains unclear how many training samples would be required before performance no longer improves. A similar analysis was also performed with the detection models to assess speech detection performance as a function of the amount of training data used. It was found that detection performance plateaus with about 25% of the available training data (as little as 4 minutes of data, including silence) for each participant.”. Also see [0026]: “FIG.
15A, Classification accuracy and cross entropy as a function of the amount of training data (mean with standard error) … Each green dot marks the performance on the test block using the hyperparameters that minimized cross entropy on the validation set (the hyperparameter values used in the main results).”.). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Farshchian, Kao, and Chang before him or her, to modify the method of claim 8 to include attributes of wherein the internal metric for determining training completion and model selection is the entropy of decoded class values across translated signals and based on the BCI's original day0 decoder as taught by Chang in order to provide further insight into performance (see Chang at [0390]: “Classification performance was also assessed using cross entropy, a metric that compares the predicted utterance likelihoods and the actual utterance identities for each trial across all test blocks for a participant. Given utterance log likelihoods predicted by a classification model for trials in the test blocks, cross entropy measures the average number of bits required to correctly classify those utterances. These values provide further insight into the performance of the utterance classification and context integration models by considering the predicted probabilities of the utterances (not just which utterance was most likely in each trial).”.) Claim 13 is rejected under 35 U.S.C 103 as being unpatentable over Farshchian et al. (“Adversarial Domain Adaptation For Stable Brain-Machine Interfaces” hereinafter referred to as Farshchian) in view of Kao et al. (“Single-Trial Dynamics of Motor Cortex and their Applications to Brain-Machine Interfaces” hereinafter referred to as Kao) in further view of Gandhi et al.
(“Denoising Time Series Data Using Asymmetric Generative Adversarial Networks” hereinafter referred to as Gandhi) in further view of Pandarinath et al. (US20220129071A1 hereinafter referred to as Pandarinath). Regarding claim 13 (currently amended): Farshchian in view of Kao in further view of Pandarinath teaches the non-transitory computer readable storage medium of claim 11. Farshchian in view of Kao in further view of Pandarinath does not explicitly teach wherein the first set of neural signals in the first datastore is unpaired to the second set of neural signals in the second datastore. Gandhi, however, teaches in analogous art wherein the first set of neural signals in the first datastore is unpaired to the second set of neural signals in the second datastore (see section 3 page 7: “Thus, we want to learn a mapping from a noisy signal to a clean signal using only a set of unpaired noisy signals and a set of clean signals.”. Also see abstract page 2: “Our model for denoising time series is trained using unpaired training corpora and does not need information about the source of the noise or how it is manifested in the time series.”.) Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Farshchian, Kao, Pandarinath, and Gandhi before him or her, to modify the non-transitory computer readable storage medium of claim 13 to include attributes of wherein the first set of neural signals in the first datastore is unpaired to the second set of neural signals in the second datastore as taught by Gandhi in order to ease the expense of removing noise [Examiner’s note: a person having ordinary skill in the art, using broadest reasonable interpretation, in light of the specification, could take removing noise and the stabilizing of neural signals to function similarly.] (see Gandhi at section 3 page 7: “This is because manually removing noise is an expensive task and needs domain expertise.
But, it is much easier to collect signals with artifacts and signals without artifacts.”.). Allowable Subject Matter Claims 4 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, provided the 101 rejections are overcome. Regarding claims 4 and 14, the closest prior art of record to the limitations of the aforementioned claims, Farshchian et al. (“Adversarial Domain Adaptation For Stable Brain-Machine Interfaces”), recites translating neural signals based on the dayi model rather than a dayi+1 model. However, the examiner has found that the distinct features of the applicant’s claimed invention over the prior art are the explicit claiming of the aforementioned limitations specified in claims 4 and 14. When viewed individually or in combination with other prior art of record, the limitations specified in claims 4 and 14 are distinct. Pertinent Prior Art The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure: “Learning Subject-Generalized Topographical EEG Embeddings Using Deep Variational Autoencoders and Domain-Adversarial Regularization” — Hagad et al. — discloses a similar adversarial autoencoder network to the claimed invention that uses data from a brain-computer interface. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew A Bracero whose telephone number is (571)270-0592. The examiner can normally be reached Monday - Thursday 7:30 a.m. - 5:00 p.m. ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi, can be reached Monday - Friday 9:00 a.m. - 5:00 p.m. ET at 571-270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANDREW BRACERO/Examiner, Art Unit 2126 /DAVID YI/Supervisory Patent Examiner, Art Unit 2126
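For readers following the technical dispute, the two internal metrics at issue in claims 7 and 8 (the Wasserstein-distance lower bound |µ0 − µk| quoted from Farshchian, and an entropy over decoded class values per Chang) can be sketched in a few lines of Python. This is a hypothetical illustration reconstructed only from the passages quoted above; the function names, stopping-rule framing, and use of Shannon entropy in place of Chang's cross-entropy evaluation are assumptions, not code from any cited reference.

```python
# Hypothetical sketch (illustrative names) of the two "internal metric"
# candidates discussed in the rejections of claims 7 and 8.
import numpy as np

def wasserstein_lower_bound(residuals_day0, residuals_dayk):
    """Lower bound on W(rho0, rhok) per the quoted Farshchian passage:
    the absolute difference between the means of the scalar reconstruction
    losses, each scalar loss being the L1 norm of a residual column."""
    r0 = np.abs(np.asarray(residuals_day0)).sum(axis=0)  # L1 norm per column
    rk = np.abs(np.asarray(residuals_dayk)).sum(axis=0)
    return float(abs(r0.mean() - rk.mean()))

def decoded_class_entropy(class_probs):
    """Shannon entropy (bits) of a distribution over decoded class values,
    in the spirit of the cross-entropy metric quoted from Chang."""
    p = np.asarray(class_probs, dtype=float)
    p = p[p > 0]                       # ignore zero-probability classes
    return float(-(p * np.log2(p)).sum())

# Identical residual distributions yield a zero lower bound; a uniform
# distribution over four decoded classes carries two bits of entropy.
print(wasserstein_lower_bound(np.zeros((8, 50)), np.zeros((8, 50))))  # 0.0
print(decoded_class_entropy([0.25, 0.25, 0.25, 0.25]))                # 2.0
```

A training loop using either quantity could stop once the metric crosses a threshold or a fixed epoch budget is exhausted, which is the shape of the claimed "meets a predetermined value or meets a predetermined number of epochs" limitation.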

Prosecution Timeline

May 12, 2022
Application Filed
Aug 30, 2025
Non-Final Rejection — §103
Nov 25, 2025
Response Filed
Feb 26, 2026
Final Rejection — §103 (current)

Prosecution Projections

3-4
Expected OA Rounds
100%
Grant Probability
99%
With Interview (+0.0%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
