Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Arguments
Applicant's arguments with respect to claims 1, 8, and 15 have been considered but are moot in view of the new ground(s) of rejection. Applicant’s arguments are directed to the amended subject matter; new prior art is provided below.
Note: The claims are not directed towards patent-ineligible subject matter under 35 U.S.C. 101.
Step 1: IS THE CLAIM DIRECTED TO A PROCESS, MACHINE, MANUFACTURE OR COMPOSITION OF MATTER?
Yes
Step 2A.1: IS THE CLAIM DIRECTED TO A LAW OF NATURE, A NATURAL PHENOMENON (PRODUCT OF NATURE) OR AN ABSTRACT IDEA?
No
Step 2A.2: DOES THE CLAIM RECITE ADDITIONAL ELEMENTS THAT INTEGRATE THE JUDICIAL EXCEPTION INTO A PRACTICAL APPLICATION?
Yes, if the claims are alternatively construed to be abstract in Step 2A.1. The claims seek to improve autoregressive neural network prediction, an improvement supported by the specification and reflected in the claims (see, e.g., Spec. 0054-0056). In other words, the claims enable the invention to reduce error rates while solving the training-inference mismatch, improving quality without losing the speed of teacher forcing.
Supported by the following:
In Finjan Inc. v. Blue Coat Systems, Inc., 879 F.3d 1299, 125 USPQ2d 1282 (Fed. Cir. 2018), the claimed invention was a method of virus scanning that scans an application program, generates a security profile identifying any potentially suspicious code in the program, and links the security profile to the application program. 879 F.3d at 1303-04, 125 USPQ2d at 1285-86. The Federal Circuit noted that the recited virus screening was an abstract idea, and that merely performing virus screening on a computer does not render the claim eligible. 879 F.3d at 1304, 125 USPQ2d at 1286. The court then continued with its analysis under part one of the Alice/Mayo test by reviewing the patent’s specification, which described the claimed security profile as identifying both hostile and potentially hostile operations. The court noted that the security profile thus enables the invention to protect the user against both previously unknown viruses and “obfuscated code,” as compared to traditional virus scanning, which only recognized the presence of previously-identified viruses. The security profile also enables more flexible virus filtering and greater user customization. 879 F.3d at 1304, 125 USPQ2d at 1286. The court identified these benefits as improving computer functionality, and verified that the claims recite additional elements (e.g., specific steps of using the security profile in a particular way) that reflect this improvement. Accordingly, the court held the claims eligible as not being directed to the recited abstract idea. 879 F.3d at 1304-05, 125 USPQ2d at 1286-87. This analysis is equivalent to the Office’s analysis of determining that the additional elements integrate the judicial exception into a practical application at Step 2A Prong Two, and thus that the claims were not directed to the judicial exception (Step 2A: NO).
Examples of claims that improve technology and are not directed to a judicial exception include: Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1339, 118 USPQ2d 1684, 1691-92 (Fed. Cir. 2016) (claims to a self-referential table for a computer database were directed to an improvement in computer capabilities and not directed to an abstract idea); McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1315, 120 USPQ2d 1091, 1102-03 (Fed. Cir. 2016) (claims to automatic lip synchronization and facial expression animation were directed to an improvement in computer-related technology and not directed to an abstract idea); Visual Memory LLC v. NVIDIA Corp., 867 F.3d 1253,1259-60, 123 USPQ2d 1712, 1717 (Fed. Cir. 2017) (claims to an enhanced computer memory system were directed to an improvement in computer capabilities and not an abstract idea); Finjan Inc. v. Blue Coat Systems, Inc., 879 F.3d 1299, 125 USPQ2d 1282 (Fed. Cir. 2018) (claims to virus scanning were found to be an improvement in computer technology and not directed to an abstract idea); SRI Int’l, Inc. v. Cisco Systems, Inc., 930 F.3d 1295, 1303 (Fed. Cir. 2019) (claims to detecting suspicious activity by using network monitors and analyzing network packets were found to be an improvement in computer network technology and not directed to an abstract idea). Additional examples are provided in MPEP § 2106.05(a).
Regarding the December 5, 2025 Memo, in light of the September 26, 2025 Appeals Review Panel Decision in Ex parte Desjardins, Appeal No. 2024-000567 (Application 16/319,040), on deciding whether a recited abstract idea does or does not direct the entire claim, considered as a whole, to an abstract idea:
Paragraph 21 of the Specification, which the Appellant cites, identifies improvements in training the machine learning model itself. Of course, such an assertion in the Specification alone is insufficient to support a patent eligibility determination, absent a subsequent determination that the claim itself reflects the disclosed improvement. See MPEP § 2106.05(a) (citing Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316 (Fed. Cir. 2016)). Here, however, we are persuaded that the claims reflect such an improvement. For example, one improvement identified in the Specification is to "effectively learn new tasks in succession whilst protecting knowledge about previous tasks." Spec. ¶ 21. The Specification also recites that the claimed improvement allows artificial intelligence (AI) systems to "us[e] less of their storage capacity" and enables "reduced system complexity." Id. When evaluating the claim as a whole, we discern at least the following limitation of independent claim 1 that reflects the improvement: "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task." We are persuaded that this constitutes an improvement to how the machine learning model itself operates, and not, for example, the identified mathematical calculation. Under a charitable view, the overbroad reasoning of the original panel below is perhaps understandable given the confusing nature of existing § 101 jurisprudence, but troubling, because this case highlights what is at stake. Categorically excluding AI innovations from patent protection in the United States jeopardizes America's leadership in this critical emerging technology.
Yet, under the panel's reasoning, many AI innovations are potentially unpatentable, even if they are adequately described and nonobvious, because the panel essentially equated any machine learning with an unpatentable "algorithm" and the remaining additional elements with "generic computer components," without adequate explanation. Dec. 24. Examiners and panels should not evaluate claims at such a high level of generality.
Specifically, Ex Parte Desjardins explained the following:
Enfish ranks among the Federal Circuit's leading cases on the eligibility of technological improvements. In particular, Enfish recognized that “[m]uch of the advancement made in computer technology consists of improvements to software that, by their very nature, may not be defined by particular physical features but rather by logical structures and processes.” 822 F.3d at 1339. Moreover, because “[s]oftware can make non-abstract improvements to computer technology, just as hardware improvements can,” the Federal Circuit held that the eligibility determinations should turn on whether “the claims are directed to an improvement to computer functionality versus being directed to an abstract idea.” Id. at 1336. (Desjardins, page 8).
Further in Ex Parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), the claimed invention was a method of training a machine learning model on a series of tasks. The Appeals Review Panel (ARP) overall credited benefits including reduced storage, reduced system complexity and streamlining, and preservation of performance attributes associated with earlier tasks during subsequent computational tasks as technological improvements that were disclosed in the patent application specification. Specifically, the ARP upheld the Step 2A Prong One finding that the claims recited an abstract idea (i.e., a mathematical concept). In Step 2A Prong Two, the ARP then determined that the specification identified improvements as to how the machine learning model itself operates, including training a machine learning model to learn new tasks while protecting knowledge about previous tasks to overcome the problem of "catastrophic forgetting" encountered in continual learning systems. Importantly, the ARP evaluated the claims as a whole in discerning that at least the limitation "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task" reflected the improvement disclosed in the specification. Accordingly, the claims as a whole integrated what would otherwise be a judicial exception into a practical application at Step 2A Prong Two, and therefore the claims were held patent eligible.
The claim itself does not need to explicitly recite the improvement described in the specification (e.g., “thereby increasing the bandwidth of the channel”). See, e.g., Ex Parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), in which the specification identified the improvement to machine learning technology by explaining how the machine learning model is trained to learn new tasks while protecting knowledge about previous tasks to overcome the problem of “catastrophic forgetting,” and that the claims reflected the improvement identified in the specification. Indeed, enumerated improvements identified in the Desjardins specification included disclosures of the effective learning of new tasks in succession in connection with specifically protecting knowledge concerning previously accomplished tasks; allowing the system to reduce use of storage capacity; and the enablement of reduced complexity in the system. Such improvements were tantamount to how the machine learning model itself would function in operation and therefore not subsumed in the identified mathematical calculation.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5, 6, 8-10, 12, 13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over US 20190180732 A1 PING; Wei et al. (hereinafter PING) in view of US 20220254330 A1 Cobo; Rus et al. (hereinafter Cobo) and further in view of US 20170372201 A1 Gupta; Otkrist et al. (hereinafter Gupta).
Re claim 1, Ping teaches
1. A method of training and operating a neural network model, the method performed, by at least one processor of an electronic device and comprising: (0112-0113 ground truth and autoregressive teacher-net neural model)
shifting a ground-truth waveform (using the broadest reasonable interpretation (BRI) of the term, i.e., a reference signal intentionally offset in time (shifted) to mimic real-world distortions, helping train models to handle frame loss; such frame loss is a focus of PING within autoregressive neural network prediction, thus shifting a waveform in time by, for instance, milliseconds, 0085-0086, under the premise of a shift function in an autoregressive concept, 0061, within an explicitly defined teacher-forced model using multi-convolution to predict an output, 0099 with fig. 10; wherein autoregression uses ground-truth alteration of a spectrogram or waveform to reduce noise, e.g., whisper, which produces a new signal, 0112-0114; this is defined as neural network autoregression to predict outputs that are synthesized speech using a waveform or spectrogram model under the guidance of the neural network autoregression, 0039-0041; learning by said model, 0110, in a multi-channel system, supplemented by fig. 9)
in an initial training iteration, training the neural network model in a teacher forcing mode in which an input to an autoregressive channel includes the shifted ground-truth waveform, generating predictions of the neural network model based on the shifted ground-truth waveform in the autoregressive channel (creating predictions based on ground-truth spectrum inputs, e.g., 0112, 0123, 0051 with fig. 9 and fig. 12; utilizing an explicit time shift of a signal, 0085-0086, under the premise of a shift function in an autoregressive concept, 0061, within an explicitly defined teacher-forced model using multi-convolution to predict an output, 0099 with fig. 10; wherein autoregression uses ground-truth alteration of a spectrogram or waveform to reduce noise, e.g., whisper, which produces a new signal, 0112-0114; this is defined as neural network autoregression to predict outputs that are synthesized speech using a waveform or spectrogram model under the guidance of the neural network autoregression, 0039-0041; learning by said model, 0110, in a multi-channel system, supplemented by fig. 9)
However, while PING teaches autoregressive model learning, time shifting of a waveform or spectrogram, teacher-forced conditions, multi-convolution, synthesized speech outputs, channels, ground truth, prediction, and improvements thereof, and although the output of a synthesized waveform is a new version of the input audio or text, it fails to teach iteration or replacement explicitly per se, and, in lieu of official notice, fails to teach:
In at least one additional training iteration, replacing the shifted ground-truth waveform in the autoregressive channel with the predictions of the neural network model obtained in a previous training iteration as the input to the autoregressive model. (Cobo fig. 1a with at least 0007 shows an overview of updating or replacing per se using autoregressive model 108 using with previous data used such as during a difference calculation, during an iterative process using model data to improve output audio fig. 4 with 0007 and claim 18)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PING to incorporate the above claim limitations as taught by Cobo, allowing for the simple substitution of one known element (iterative autoregressive ground-truth neural network learning) for another (autoregressive ground-truth modeling with inferences of such iterative learning based on the premise of autoregressive ground truth) to obtain predictable results, thereby reducing errors and model corruption.
However, while the combination teaches ground truth difference in waveforms in an autoregressive model on iterations until complete for each new input, it fails to teach:
wherein only an output of a final iteration of the at least one additional training iteration is backpropagated. (Gupta in a teacher model context, backpropagation as the final step once no more new frames or data is received 0068 0107 and fig. 6 and fig. 8)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PING in view of Cobo to incorporate the above claim limitations as taught by Gupta, combining prior art elements according to known methods to yield predictable results, such as using teacher models ("teacher forcing") and applying backpropagation on only the last iteration, once no more new data is received. This improves autoregressive models, e.g., WaveNet or similar: by not propagating gradients through a potentially long history (as with full backpropagation through time (BPTT) in NNs, RNNs, or DNNs), issues like vanishing or exploding gradients, which can make training unstable, are avoided, thereby helping to ensure computational efficiency and stability, particularly in very deep or long-sequence models once new data stops being received.
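For context on the mechanism at issue in claim 1, the claimed training flow as characterized above (teacher forcing in the initial iteration, replacement of the ground-truth input with prior predictions in later iterations, and backpropagation of only the final iteration) can be illustrated with a minimal Python sketch. A scalar linear model stands in for the neural network; all names, values, and hyperparameters are hypothetical and do not represent any cited reference's implementation.

```python
# Illustrative sketch only: a scalar linear model y = w * x stands in for the
# neural network; all names and hyperparameters are hypothetical.

def predict(w, inputs):
    """One forward pass of the stand-in model over a sequence of inputs."""
    return [w * x for x in inputs]

def train_step(ground_truth, w=0.5, lr=0.01, extra_iters=3):
    # Initial iteration (teacher forcing): the autoregressive input channel
    # carries the ground-truth waveform shifted by one time step.
    inputs = [0.0] + ground_truth[:-1]
    preds = predict(w, inputs)
    # Additional iterations: the shifted ground truth in the autoregressive
    # channel is replaced with the previous iteration's predictions.
    for _ in range(extra_iters):
        inputs = preds
        preds = predict(w, inputs)
    # Only the final iteration's output is backpropagated: the inputs are
    # treated as constants, so no gradient flows through earlier iterations
    # (unlike full backpropagation through time).
    n = len(preds)
    grad = sum(2.0 * (p - t) * x for p, t, x in zip(preds, ground_truth, inputs)) / n
    return w - lr * grad, preds

new_w, final_preds = train_step([1.0, 1.0, 1.0, 1.0])
# new_w moves toward the ground truth (here, upward from 0.5)
```

The truncated backward pass in the sketch mirrors the rationale recited above: the gradient is computed only at the final iteration rather than through the full iteration history.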
Re claim 8, this claim is rejected as a broader or narrower representation of claim 1 that differs only in its general inclusion of hardware (e.g., processor, memory, instructions), otherwise amounting to a virtually identical scope.
For instance, fig. 14 of Ping teaches the necessary hardware.
Re claim 15, this claim is rejected as a broader or narrower representation of claim 1 that differs only in its general inclusion of hardware (e.g., processor, memory, instructions), otherwise amounting to a virtually identical scope.
For instance, fig. 14 of Ping teaches the necessary hardware.
Re claims 2, 9, and 16, while PING teaches autoregressive model learning, time shifting of a waveform or spectrogram, teacher-forced conditions, multi-convolution, synthesized speech outputs, channels, ground truth, prediction, and improvements thereof, and although the output of a synthesized waveform is a new version of the input audio or text, it fails to teach iteration or replacement explicitly per se, and, in lieu of official notice, fails to teach:
The method of claim 1, wherein the at least one additional training iteration comprises a plurality of training iterations, each training iteration outputting respective predictions of the neural network model to the autoregressive channel for a next iteration of the plurality of training iterations. (Cobo iterative process using model data to improve output audio fig. 4 with 0007 and claim 18)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PING to incorporate the above claim limitations as taught by Cobo, allowing for the simple substitution of one known element (iterative autoregressive ground-truth neural network learning) for another (autoregressive ground-truth modeling with inferences of such iterative learning based on the premise of autoregressive ground truth) to obtain predictable results, thereby reducing errors and model corruption.
Re claims 3, 10, and 17, PING teaches
3. The method of claim 2, wherein the neural network model is configured to perform at least one forward pass, compute a loss, and perform at least one backward pass, and wherein, during training, a number of forward passes performed before computing the loss and performing the at least one backward pass is gradually increased. (0047, 0041, 0030, and 0121: backpropagation, forward propagation, and loss minimization, using an increasing number of forward passes until results are optimal, such as by minimizing loss)
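The gradually increasing number of forward passes recited in this claim can be sketched as a simple schedule. The function name and all step sizes below are hypothetical choices for illustration only, not taken from any cited reference.

```python
# Hypothetical curriculum schedule: the number of forward passes performed
# before computing the loss and backpropagating grows as training progresses,
# up to a cap. All parameter values are illustrative.

def forward_passes_for_epoch(epoch, start=1, step_every=5, max_passes=8):
    """Return how many forward passes to run before the loss at this epoch."""
    return min(start + epoch // step_every, max_passes)

schedule = [forward_passes_for_epoch(e) for e in range(0, 40, 5)]
# schedule grows one pass at a time, then saturates at the cap
```

A linear schedule with a cap is one simple way to realize "gradually increased"; the claim language itself does not dictate a particular schedule shape.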
Re claims 5, 12, and 18, PING teaches
5. The method of claim 1, further comprising performing an inference by the neural network model by: providing, for the neural network model, an additional channel containing at least one prediction of the neural network model outputted during training; and (0112-0113 ground truth and autoregressive teacher-net neural model with mel-spectrum or shifted audio to improve audio in a neural network with learning by said model 0110 in a multi-channel system for teacher-forced learning 0099, thereof supplemented with fig. 9)
performing speech enhancement using the neural network model. (improved quality and performance 0055, with 0112-0113 ground truth and autoregressive teacher-net neural model with mel-spectrum or shifted audio to improve audio in a neural network with learning by said model 0110 in a multi-channel system for teacher-forced learning 0099, thereof supplemented with fig. 9)
Re claims 6, 13, and 19, PING teaches
6. The method of claim 5, wherein the neural network model includes a fully convolutional neural network. (convolutional 0097 with 0112-0113 ground truth and autoregressive teacher-net neural model with mel-spectrum or shifted audio to improve audio in a neural network with learning by said model 0110 in a multi-channel system for teacher-forced learning 0099, thereof supplemented with fig. 9)
Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20190180732 A1 PING; Wei et al. (hereinafter PING) in view of US 20220254330 A1 Cobo; Rus et al. (hereinafter Cobo) and US 20170372201 A1 Gupta; Otkrist et al. (hereinafter Gupta), and further in view of US 20210183401 A1 Narayanaswamy; Vivek Sivaraman et al. (hereinafter Narayanaswamy).
Re claims 7, 14, and 20, while the combination teaches WaveNet and LSTM, it fails to teach the claimed variations of these network concepts, although such variations are analogously applicable, thus failing to teach:
7. The method of claim 6, wherein the fully convolutional neural network includes a WaveUNet architecture augmented at a bottleneck thereof with a long short term memory (LSTM) layer. (Narayanaswamy 0004, 0016-0017, 0024-0026)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of PING in view of Cobo and Gupta to incorporate the above claim limitations as taught by Narayanaswamy, allowing for the simple substitution of one known element (iterative autoregressive ground-truth neural network learning with LSTM and WaveNet) for another (bottleneck path reduction in an LSTM and Wave-U-Net system) to obtain predictable results, allowing the training process to be improved by employing dense connections in a bottleneck and enhancing WaveNet with Wave-U-Net such that deep neural network processing can be applied to sound source separation, improving multi-channel systems.
Conclusion
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/02/2026 has been entered.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 6169981 B1 Werbos; Paul J.
BPTT (backpropagation through time) and truncation
US 20220122582 A1 Elias; Isaac et al.
Ground-truth autoregression
US 20220293083 A1 Shechtman; Vyacheslav
Ground-truth autoregression
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL COLUCCI whose telephone number is (571)270-1847. The examiner can normally be reached on M-F 9 AM - 7 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Flanders can be reached at (571)272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL COLUCCI/Primary Examiner, Art Unit 2655 (571)-270-1847
Examiner FAX: (571)-270-2847
Michael.Colucci@uspto.gov