Prosecution Insights
Last updated: April 19, 2026
Application No. 17/968,025

IMPLEMENTATION OF DISCRETE FOURIER-RELATED TRANSFORMS IN HARDWARE

Non-Final OA: §101, §102, §103
Filed: Oct 18, 2022
Examiner: LAROCQUE, EMILY E
Art Unit: 2182
Tech Center: 2100 — Computer Architecture & Software
Assignee: Imagination Technologies Limited
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 81% (366 granted / 454 resolved; +25.6% vs TC avg), above average
Interview Lift: +12.2% (moderate lift, measured across resolved cases with interview)
Typical Timeline: 2y 8m average prosecution; 41 applications currently pending
Career History: 495 total applications across all art units
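
As a quick check on the headline figure (my arithmetic, assuming the allow rate is simply grants divided by resolved cases): 366 / 454 ≈ 0.806, which rounds to the 81% shown.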

Statute-Specific Performance

§101: 29.3% (-10.7% vs TC avg)
§103: 22.2% (-17.8% vs TC avg)
§102: 12.8% (-27.2% vs TC avg)
§112: 29.4% (-10.6% vs TC avg)

Tech Center average estimate shown for comparison • Based on career data from 454 resolved cases

Office Action

§101 §102 §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to because figure 10 is not secured with solid black lines. See 37 CFR 1.84(a)(1). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claims 1-20 are objected to because of the following informalities. Claim 1, lines 12-13, recites "one or more convolution operations". For antecedent basis reasons, this should recite "the one or more convolution operations". Claims 2-19 inherit the same deficiency as claim 1 based on dependence. Claim 20 recites substantially the same limitation and is objected to for the same reason. Claim 20, line 16, recites "relate". This appears to be a typographical error and should possibly recite "related". Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Regarding treatment of claims, apparatus claim 20 will be addressed first, followed by the method claims.

Regarding claim 20, under the Alice framework Step 1, the claim falls within the four statutory categories of patentable subject matter identified by 35 USC 101: a process, machine, manufacture, or a composition of matter. Under the Alice framework Step 2A prong 1, the claim recites mathematical concepts of mathematical calculations and mathematical relationships for implementing a discrete Fourier-related transform. Specifically, claim 20 recites the following mathematical calculations and mathematical relationships: implementing a discrete Fourier-related transform, wherein the discrete Fourier-related transform comprises at least one multiplication operation; performing one or more convolution operations, wherein the input data contains values to undergo the discrete Fourier-related transform; at least one convolution kernel, wherein each convolution kernel is derived from a weight matrix that represents a multiplicand or multiplier for at least one multiplication operation of the discrete Fourier-related transform; and executing the discrete Fourier-related transform on the input data, wherein at least one multiplication operation of the discrete Fourier-related transform is executed to perform one or more convolution operations using the at least one convolution kernel. See, e.g., [0082-0083] and [0099-0101], which describe the discrete Fourier-related transform in terms of mathematical relationships and mathematical calculations. For these reasons, claim 20 recites mathematical concepts.

Under the Alice framework Step 2A prong 2 analysis, claim 20 recites the following additional elements: a data processing system comprising a hardware accelerator comprising fixed-function circuitry, the fixed-function circuitry comprising at least convolution hardware, and a controller. These elements are recited at a very high level of generality, wherein mathematical calculations and mathematical relationships are merely "applied" in a generically recited apparatus system, or generally linked to a particular technological environment, without specifically limiting the functions performed by the apparatus to specific functions performed in a manner integral to the claim. For these reasons, claim 20 is not integrated into a practical application.

Under the Alice framework Step 2B analysis, claim 20, considered individually and as an ordered combination, does not include additional elements that are sufficient to amount to significantly more than the abstract idea. As stated in the Step 2A prong 2 analysis, the claim does no more than generally link a generically recited apparatus to a particular technological environment. For these reasons, claim 20 does not amount to significantly more than the abstract idea.

Claim 1 is directed to a method that would be practiced by the apparatus as in claim 20. All steps performed by the method as in claim 1 are performed by the apparatus as in claim 20 as configured. The claim 20 analysis applies equally to claim 1. Claims 2-18 are rejected for at least the reasons set forth with respect to claim 1. Claims 2-18 merely further mathematically limit the mathematical concepts of claim 1, and contain no further additional elements beyond those recited in claim 1 that would require further analysis under Step 2A prong 2 and Step 2B. Claim 19 is directed to a non-transitory computer readable storage medium storing instructions that, when executed on a computer system, cause the computer system to perform the method as in claim 1. The claim 1 analysis applies equally to claim 19.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 5, 19, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by T. Lu et al., Nonuniform Fast Fourier Transform on TPUs, 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), April 13-16, 2021 (hereinafter "Lu-Nonuniform").

Regarding claim 1, Lu-Nonuniform teaches the following: a method of implementing a discrete Fourier-related transform using a hardware accelerator comprising fixed-function circuitry including convolution hardware configured to perform one or more convolution operations, wherein the discrete Fourier-related transform comprises at least one matrix multiplication operation (abstract, implementing the nonuniform Fast Fourier Transform on Tensor Processing Units (TPUs), a hardware accelerator for deep learning applications; Introduction, 2nd-3rd paragraphs, the TPU is an application-specific integrated circuit (ASIC) for fixed-function circuitry, the nonuniform Fourier transform is formulated as a discrete Fourier transform (DFT), the interpolation function of the FFT builds on a convolution, the Kaiser-Bessel function is selected as the convolution kernel, and the Fourier transform is formulated as dense matrix multiplications); the method comprising: obtaining input data, wherein the input data contains values to undergo the discrete Fourier-related transform (Introduction, 1st paragraph, image reconstruction methods in magnetic resonance imaging (MRI) make extensive use of the NUFFT when the k-space data are sampled; 3rd paragraph, sampled image; section 2, discrete Fourier transform with unequally sampled data, sampling using the TPU, fig 1); obtaining at least one convolution kernel, wherein each convolution kernel is derived from a weight matrix that represents a multiplicand or multiplier for the at least one matrix multiplication operation of the discrete Fourier-related transform (section 2.1, d(.) represents the inverse Fourier transform of a convolution kernel, using the Kaiser-Bessel function as the kernel, wherein D is the apodization operator representing a multiplicand or multiplier for the at least one matrix multiplication operation of the discrete Fourier-related transform as in eqn 2, eqn 3, fig 2); and executing the discrete Fourier-related transform on the input data using the hardware accelerator (fig 2, abstract), wherein the at least one matrix multiplication operation of the discrete Fourier-related transform is executed by using the convolution hardware to perform one or more convolution operations using the at least one convolution kernel (abstract, the TPU is a hardware accelerator designed for deep learning applications; section 2, matrix multiplication is a convolution operation).
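
For readers tracing the claim 1 mapping, the sketch below illustrates the equivalence the rejection leans on: a DFT is a multiplication by a fixed weight matrix, and that matrix multiplication can be executed as a channel-mixing (1x1) convolution. This is a minimal NumPy illustration of the general technique, not code from Lu-Nonuniform or from the application; the function and variable names are mine, and plain NumPy stands in for the fixed-function convolution hardware.

```python
import numpy as np

N = 8
n = np.arange(N)
# DFT weight matrix W[k, n] = exp(-2*pi*i*k*n/N), split into real/imag parts
W = np.exp(-2j * np.pi * np.outer(n, n) / N)
W_re, W_im = W.real, W.imag

def conv1x1(x, kernel):
    """A 1x1 'convolution': kernel has shape (C_out, C_in), x has shape
    (C_in, L). Each output channel is a weighted sum over input channels,
    which is exactly a matrix multiplication, the mapping the OA cites."""
    return np.einsum('oi,il->ol', kernel, x)

x = np.random.randn(N, 1)        # one length-N real sequence, as N channels
y_re = conv1x1(x, W_re)          # real part of the DFT via 'convolution'
y_im = conv1x1(x, W_im)          # imaginary part

ref = np.fft.fft(x[:, 0])        # reference transform for the check below
assert np.allclose(y_re[:, 0], ref.real) and np.allclose(y_im[:, 0], ref.imag)
```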
Regarding claim 5, in addition to the teachings addressed in the claim 1 analysis, Lu-Nonuniform teaches the following: wherein the discrete Fourier-related transform is a discrete Fourier transform (Introduction, second paragraph, DFT).

Claim 19 is directed to a non-transitory computer readable storage medium having stored thereon computer readable instructions that, when executed at a computer system, cause the computer system to perform the method as in claim 1. The claim 1 analysis applies equally to claim 19.

Regarding claim 20, Lu-Nonuniform teaches the following: a data processing system for implementing a discrete Fourier-related transform, wherein the discrete Fourier-related transform comprises at least one multiplication operation (abstract, implementing the nonuniform Fast Fourier Transform on Tensor Processing Units (TPUs), a hardware accelerator for deep learning applications; Introduction, 2nd-3rd paragraphs, the nonuniform Fourier transform is formulated as a discrete Fourier transform (DFT), the interpolation function of the FFT builds on a convolution, the Kaiser-Bessel function is selected as the convolution kernel, and the Fourier transform is formulated as dense matrix multiplications); the data processing system comprising: a hardware accelerator comprising fixed-function circuitry configured to perform a set of available elementary neural network operations, the fixed-function circuitry comprising at least convolution hardware configured to perform one or more convolution operations (abstract, the TPU is an application-specific integrated circuit (ASIC) hardware accelerator comprising fixed-function circuitry, a hardware accelerator for deep learning applications that performs a set of available elementary neural network operations; Introduction, 2nd-3rd paragraphs, the nonuniform Fourier transform is formulated as a discrete Fourier transform (DFT), the interpolation function of the FFT builds on a convolution, the Kaiser-Bessel function is selected as the convolution kernel); and a controller (abstract, fig 1, TensorFlow) configured to: obtain input data, wherein the input data contains values to undergo the discrete Fourier-related transform (Introduction, 1st paragraph, image reconstruction methods in magnetic resonance imaging (MRI) make extensive use of the NUFFT when the k-space data are sampled; 3rd paragraph, sampled image; section 2, discrete Fourier transform with unequally sampled data, sampling using the TPU, fig 1); obtain at least one convolution kernel, wherein each convolution kernel is derived from a weight matrix that represents a multiplicand or multiplier for the at least one matrix multiplication operation of the discrete Fourier-related transform (section 2.1, d(.) represents the inverse Fourier transform of a convolution kernel, using the Kaiser-Bessel function as the kernel, wherein D is the apodization operator representing a multiplicand or multiplier for the at least one matrix multiplication operation of the discrete Fourier-related transform as in eqn 2, eqn 3, fig 2); and execute the discrete Fourier-related transform on the input data using the hardware accelerator (fig 2, abstract), wherein the at least one multiplication operation of the discrete Fourier-related transform is executed by using the convolution hardware to perform one or more convolution operations using the at least one convolution kernel (abstract, the TPU is a hardware accelerator designed for deep learning applications; section 2, matrix multiplication is a convolution operation).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Lu-Nonuniform in view of T. Lu et al., Large-Scale Discrete Fourier Transform on TPUs, IEEE Access, 7 July 2021 (hereinafter "Lu-Large-Scale").

Regarding claim 2, Lu-Nonuniform discloses the claim 1 limitations. Lu-Nonuniform discloses the convolution kernel generally but does not explicitly disclose wherein each convolution kernel is generated by reshaping and/or permuting the dimensions of a respective weight matrix. However, in the same field of endeavor, and with authors in common with Lu-Nonuniform, Lu-Large-Scale similarly discloses the use of TPUs for executing the DFT (abstract, fig 1, section I, section II). Lu-Large-Scale further discloses: wherein each convolution kernel is generated by reshaping and/or permuting the dimensions of a respective weight matrix (eqn (13), section III.A; eqn (1) is rewritten in the matrix form of eqn (2) and then as the matrix of eqn (13)). It would have been obvious to one of ordinary skill in the art before the effective filing date to rewrite the DFT matrix of Lu-Nonuniform into the matrix form of Lu-Large-Scale, to achieve the benefit that a Kronecker product form can be used to contract the product as matrix multiplications of rank-2 and rank-3.

Regarding claim 3, in addition to the teachings addressed in the claim 2 analysis, Lu-Nonuniform teaches the following: the convolution kernel is generated before the input data is obtained (fig 2, the kernel function values are precomputed on the CPU host).

Regarding claim 4, in addition to the teachings addressed in the claim 1 analysis, Lu-Nonuniform teaches the following: the input data comprises two or more sequences of values, each of which is to be individually transformed using a respective instance of the discrete Fourier-related transform (abstract, MR image reconstruction for input data comprising two or more sequences of values; section I, sampled image; section 2.2, the FFT operates on an image, each of which is individually transformed using a respective instance of the discrete Fourier-related transform); and Lu-Nonuniform discloses the convolution operation generally but does not explicitly disclose that a single convolution operation is used to perform a matrix multiplication operation, of the at least one matrix multiplication operations, of multiple instances of the discrete Fourier-related transform on respective sequences of values. However, in the same field of endeavor, and with authors in common with Lu-Nonuniform, Lu-Large-Scale similarly discloses the use of TPUs for executing the DFT (abstract, fig 1, section I, section II). Lu-Large-Scale further discloses: a single convolution operation is used to perform a matrix multiplication operation, of the at least one matrix multiplication operations, of multiple instances of the discrete Fourier-related transform on respective sequences of values (abstract, fig 3, section II.B, TensorFlow including the convolution operator used for the DFT; fig 4 showing the multiple instances of the discrete Fourier-related transform on respective sequences of values as implemented in the TPU as a matrix multiplication operation). It would have been obvious to one of ordinary skill in the art before the effective filing date to formulate the discrete Fourier-related transform of Lu-Nonuniform into the single convolution operation used by Lu-Large-Scale to perform a matrix multiplication operation, to achieve the benefit of high parallel efficiency on TPUs (section IV.A).
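
The claim 4 mapping turns on batching: one matrix multiplication (hence one pass through the convolution hardware, on the OA's reading of Lu-Large-Scale) can apply the same transform to many sequences at once. Here is a minimal sketch of that idea, under my own assumption that the sequences are stacked as columns; nothing below is taken from the cited references' code.

```python
import numpy as np

N, B = 8, 5                                    # transform length, number of sequences
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # DFT weight matrix

X = np.random.randn(N, B)   # B independent length-N sequences, one per column
Y = W @ X                   # a single matmul = B instances of the DFT

for b in range(B):          # each column matches the per-sequence FFT
    assert np.allclose(Y[:, b], np.fft.fft(X[:, b]))
```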
Claims 6-7, 9-11, and 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Lu-Nonuniform in view of US 2018/0253402 A1 to Redfern et al. (hereinafter "Redfern").

Regarding claim 6, in addition to the teachings addressed in the claim 5 analysis, Lu-Nonuniform discloses wherein the input data comprises a first tensor to undergo the discrete Fourier transform (abstract; section 1, the FFT is formulated as tensor operations implemented in TensorFlow), the discrete Fourier transform comprises a first set of matrix multiplications (section 2, first two paragraphs), and the first set of multiplications is executed by using the convolution hardware (abstract, section 1, TPU, TensorFlow). Lu-Nonuniform does not, however, explicitly disclose wherein the first tensor comprises only the real part of values to undergo the discrete Fourier transform; and wherein the discrete Fourier transform comprises a first set of matrix multiplications comprising: multiplying the first tensor by a first weight matrix to produce a first multiplied tensor; and multiplying the first tensor by a second weight matrix to produce a second multiplied tensor; the at least one convolution kernel comprises a first convolution kernel derived from the first weight matrix and a second convolution kernel derived from the second weight matrix; and the first set of matrix multiplications is executed by using the convolution hardware to perform at least two convolutions using the first and second convolution kernels. However, in the same field of endeavor, Redfern discloses an apparatus similar to Lu-Nonuniform wherein a processor comprising a matrix multiplication accelerator (MMA) is configured for FFTs and convolutions ([0032], figs 2-12). Redfern further discloses: wherein the input data comprises a tensor comprising only the real part of values to undergo the discrete Fourier transform ([0087], [0090], X_re^(M,N) for a first tensor comprising only the real part of the values to undergo the discrete Fourier transform); the discrete Fourier transform comprises a first set of matrix multiplications (Table 7, [0096-0101]) comprising: multiplying the first tensor by a first weight matrix to produce a first multiplied tensor ([0091], [0098], top equation of the four); and multiplying the first tensor by a second weight matrix to produce a second multiplied tensor ([0091], [0098], bottom equation of the four); the at least one convolution kernel comprises a first convolution kernel derived from the first weight matrix and a second convolution kernel derived from the second weight matrix ([0091], top and bottom convolution kernels, using F_re^32 and F_im^32); and the first set of matrix multiplications is executed using the convolution hardware to perform at least two convolutions using the first and second convolution kernels ([0091], [0098], top and bottom equations for at least two convolutions). It would have been obvious to one of ordinary skill in the art before the effective filing date to organize the set of matrix multiplications executed using the convolution hardware (TPUs) of Lu-Nonuniform to perform a first set of matrix multiplications on input data comprising only real values to produce first and second multiplied tensors as disclosed by Redfern, to achieve the benefit of performing smaller operations in batches (Redfern [0091-0092]).

Regarding claim 7, in addition to the teachings addressed in the claim 6 analysis, Lu-Nonuniform discloses wherein the input data comprises a tensor to undergo the discrete Fourier transform (abstract; section 1, the FFT is formulated as tensor operations implemented in TensorFlow), the discrete Fourier transform comprises a set of matrix multiplications (section 2, first two paragraphs), and the set of multiplications is executed by using the convolution hardware (abstract, section 1, TPU, TensorFlow). Lu-Nonuniform does not, however, explicitly disclose wherein the second tensor comprises only the imaginary parts of values to undergo the discrete Fourier transform; and wherein the discrete Fourier transform comprises a second set of matrix multiplications comprising: multiplying the second tensor by the first weight matrix to produce a third multiplied tensor; and multiplying the second tensor by the second weight matrix to produce a fourth multiplied tensor; the second set of matrix multiplications is executed by using the convolution hardware to perform at least two convolutions using the first and second convolution kernels to perform the second set of multiplications. However, in the same field of endeavor, Redfern discloses an apparatus similar to Lu-Nonuniform wherein a processor comprising a matrix multiplication accelerator (MMA) is configured for FFTs and convolutions ([0032], figs 2-12). Redfern further discloses: the input data further comprises a second tensor comprising only the imaginary parts of values to undergo the discrete Fourier transform ([0087], [0090], X_im^(M,N) for a second tensor comprising only the imaginary parts of values to undergo the discrete Fourier transform); the discrete Fourier transform comprises a second set of matrix multiplications (Table 7, [0096-0101]) comprising: multiplying the second tensor by the first weight matrix to produce a third multiplied tensor ([0091], [0098], second from top equation of the four); and multiplying the second tensor by the second weight matrix to produce a fourth multiplied tensor ([0091], [0098], second from bottom equation of the four); and the second set of matrix multiplications is executed by using the convolution hardware to perform at least two convolutions using the first and second convolution kernels to perform the second set of matrix multiplications ([0091], [0098], middle two equations for at least two convolutions; [0091], middle two convolution kernels, using F_im^32 and F_re^32). The motivation to combine set forth with respect to claim 6 applies equally to claim 7.

Regarding claim 9, Lu-Nonuniform in view of Redfern teaches the claim 7 limitations. Redfern further discloses: wherein the first and second sets of matrix multiplications are performed by: performing a first convolution on the first tensor using the first convolution kernel to produce the first multiplied tensor ([0091], top equation); performing a second convolution on the first tensor using the second convolution kernel to produce the second multiplied tensor ([0091], second from top equation); performing a third convolution on the second tensor using the first convolution kernel to produce the third multiplied tensor ([0091], second from bottom equation); and performing a fourth convolution on the second tensor using the second convolution kernel to produce the fourth multiplied tensor ([0091], bottom equation). The motivation to combine set forth with respect to claim 6 applies equally to claim 9.

Regarding claim 10, Lu-Nonuniform in view of Redfern teaches the claim 6 limitations. Redfern further discloses: subtracting the fourth tensor from the first tensor to produce the real part of the output of the discrete Fourier transform ([0090], top equation); and summing the second and third tensors to produce the imaginary part of the output of the discrete Fourier transform ([0090], bottom equation). The motivation to combine set forth with respect to claim 6 applies equally to claim 10.

Regarding claim 11, Lu-Nonuniform in view of Redfern teaches the claim 6 limitations. Redfern further discloses: the input data comprises only real values to undergo the discrete Fourier transform ([0087], [0090], X_re^(M,N)); the first set of matrix multiplications is performed by: performing a first convolution on the first tensor using the first convolution kernel to produce the first multiplied tensor ([0091], [0098], top equation of the four); and performing a second convolution on the first tensor using the second convolution kernel to produce the second multiplied tensor ([0091], [0098], bottom equation of the four). The motivation to combine set forth with respect to claim 6 applies equally to claim 11.
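
The four "multiplied tensors" recited across claims 6-10 follow the standard pattern for computing a complex matrix product on hardware that only multiplies real matrices. Below is a small NumPy sketch of that pattern as I read the mapping; the names (W_re, t1, and so on) are illustrative and are not Redfern's notation.

```python
import numpy as np

N = 8
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # complex DFT weight matrix
W_re, W_im = W.real, W.imag                    # first / second weight matrices

x = np.random.randn(N) + 1j * np.random.randn(N)
x_re, x_im = x.real, x.imag                    # first / second input tensors

t1 = W_re @ x_re            # first multiplied tensor
t2 = W_im @ x_re            # second
t3 = W_re @ x_im            # third
t4 = W_im @ x_im            # fourth

y_re = t1 - t4              # real part (claim 10: first minus fourth)
y_im = t2 + t3              # imaginary part (claim 10: second plus third)
assert np.allclose(y_re + 1j * y_im, np.fft.fft(x))
```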
Regarding claim 13, in addition to the teachings addressed in the claim 5 analysis, Lu-Nonuniform discloses a method of implementing a fast Fourier transform using a hardware accelerator comprising fixed-function circuitry including convolution hardware configured to perform one or more convolution operations (see the claim 1 mapping, upon which claim 5 depends), the method comprising: obtaining input data, wherein the input data contains values to undergo the fast Fourier transform (Introduction, 1st paragraph, image reconstruction methods in magnetic resonance imaging (MRI) make extensive use of the NUFFT when the k-space data are sampled; 3rd paragraph, sampled image). Lu-Nonuniform does not, however, explicitly disclose dividing the input data into two or more parts; performing a discrete Fourier transform on each part of the input data using the method of claim 5 to produce a respective two or more DFT outputs; and combining the DFT outputs using the hardware accelerator to produce an FFT output that contains a fast Fourier transform of the input data. However, in the same field of endeavor, Redfern discloses an apparatus similar to Lu-Nonuniform wherein a processor comprising a matrix multiplication accelerator (MMA) is configured for FFTs and convolutions ([0032], figs 2-12). Redfern further discloses: dividing the input data into two or more parts ([0090], x_re, x_im); performing a discrete Fourier transform on each part of the input data using the method of claim 5 to produce a respective two or more DFT outputs ([0091]); and combining the DFT outputs using the hardware accelerator to produce an FFT output that contains a fast Fourier transform of the input data ([0093], completing the steps of the 1D FFT, then storing the rows of the resulting matrix in contiguous order for combining the DFT outputs). It would have been obvious to one of ordinary skill in the art before the effective filing date to implement the fast Fourier transform using the dividing approach of Redfern on the convolution hardware disclosed by Lu-Nonuniform, to achieve the benefit of performing smaller operations in batches (Redfern [0091-0092]).

Regarding claim 14, Lu-Nonuniform in view of Redfern teaches the claim 13 limitations. Redfern further discloses: wherein the step of dividing the input data into two or more parts comprises processing the input data using two or more convolution kernels, each configured to extract a predetermined part of the input data ([0091], two convolution kernels, F_re and F_im). The motivation to combine set forth with respect to claim 13 applies equally to claim 14.

Regarding claim 15, Lu-Nonuniform in view of Redfern teaches the claim 13 limitations. Redfern further discloses: wherein the step of dividing the input data into two or more parts comprises processing the input data using a deconvolution ([0101], transpose for deconvolution). The motivation to combine set forth with respect to claim 13 applies equally to claim 15.
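
The divide / transform / combine structure of claim 13 is, in its most familiar form, one radix-2 decimation-in-time FFT stage; the even/odd split also previews the sampling kernels of claim 16 discussed next. The sketch below shows that flow under my own choice of split; it illustrates the claimed structure and is not the method of Redfern or of the application.

```python
import numpy as np

N = 16
x = np.random.randn(N)

x_even, x_odd = x[0::2], x[1::2]   # "dividing the input data into two parts"
E = np.fft.fft(x_even)             # per-part DFTs (each half-length)
O = np.fft.fft(x_odd)

k = np.arange(N // 2)
tw = np.exp(-2j * np.pi * k / N)   # twiddle factors used in the combine step
X = np.concatenate([E + tw * O, E - tw * O])   # "combining the DFT outputs"

assert np.allclose(X, np.fft.fft(x))
```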
Regarding claim 16, Lu-Nonuniform in view of Redfern teaches the claim 13 limitations. Redfern further discloses: wherein the input data comprises a first tensor containing real parts of the values to undergo the fast Fourier transform ([0087]), and the step of dividing the input data into two or more parts comprises: processing the first tensor using an odd-sampling convolution kernel to produce an odd tensor containing only the odd-indexed values of the first tensor ([0032]); and processing the first tensor using an even-sampling convolution kernel to produce an even tensor containing only the even-indexed values of the first tensor ([0032]). The motivation to combine set forth with respect to claim 13 applies equally to claim 16.

Claims 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Lu-Nonuniform in view of Redfern.

Regarding claim 17, Lu-Nonuniform teaches the claim 1 limitations. Lu-Nonuniform is silent with respect to a discrete cosine transform. However, in the same field of endeavor, Redfern discloses an apparatus similar to Lu-Nonuniform wherein a processor comprising a matrix multiplication accelerator (MMA) is configured for FFTs and convolutions ([0032], figs 2-12). Redfern further discloses wherein the discrete Fourier-related transform is a discrete cosine transform ([0103]). It would have been obvious to one of ordinary skill in the art before the effective filing date to use the convolution hardware of Lu-Nonuniform to perform a discrete cosine transform, because the discrete cosine transform is similar to the FFT and the DCT can be implemented via matrix-vector multiplication ([0102]). It is obvious to use a known technique to improve similar devices in the same way. See MPEP 2141(III)(A).

Regarding claim 18, Lu-Nonuniform in view of Redfern teaches the claim 17 limitations. Redfern further discloses: the input data comprises a first tensor comprising real values to undergo the discrete cosine transform ([0102], similar to the FFT but the data is real); the discrete cosine transform comprises a DCT multiplication operation comprising multiplying the first tensor by a DCT weight matrix to produce an output of the discrete cosine transform ([0102], DCT matrix for the DCT weight matrix, data for the first tensor multiplication, similar to the FFT); the at least one convolution kernel comprises a DCT convolution kernel derived from the DCT weight matrix ([0102], DCT matrix for the convolution kernel, similar to the FFT); and the DCT multiplication operation is executed by using the convolution hardware to perform a convolution of the first tensor using the DCT convolution kernel ([0102], [0103], similar to the FFT). The motivation to combine provided with respect to claim 17 applies equally to claim 18.
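
For the claims 17-18 mapping, the same weight-matrix pattern carries over to the DCT. A minimal sketch follows, assuming the unnormalised DCT-II convention (my choice, checked against SciPy; this is not Redfern's specific matrix):

```python
import numpy as np
from scipy.fft import dct

N = 8
n = np.arange(N)
k = n[:, None]
# DCT-II weight matrix: C[k, n] = 2 * cos(pi * k * (2n + 1) / (2N))
C = 2.0 * np.cos(np.pi * k * (2 * n + 1) / (2 * N))

x = np.random.randn(N)
y = C @ x                    # DCT as one matrix multiplication

assert np.allclose(y, dct(x, type=2, norm=None))   # matches SciPy's DCT-II
```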
Allowable Subject Matter

Claims 8 and 12 would be allowable if rewritten to overcome the rejections under 35 USC 101, rewritten to overcome the claim objections, and rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter.

Applicant claims methods, an apparatus, and a non-transitory computer readable storage medium for implementing a discrete Fourier-related transform using a hardware accelerator comprising fixed-function circuitry including convolution hardware configured to perform one or more convolution operations, wherein the discrete Fourier-related transform comprises at least one matrix multiplication operation, wherein the method as in claim 1 comprises: obtaining input data, wherein the input data contains values to undergo the discrete Fourier-related transform; obtaining at least one convolution kernel, wherein each convolution kernel is derived from a weight matrix that represents a multiplicand or multiplier for the at least one matrix multiplication operation of the discrete Fourier-related transform; and executing the discrete Fourier-related transform on the input data using the hardware accelerator, wherein the at least one matrix multiplication operation of the discrete Fourier-related transform is executed by using the convolution hardware to perform one or more convolution operations using the at least one convolution kernel.

Claim 8, including intervening claims 5-7, further comprises: wherein the first and second sets of matrix multiplications are performed by: concatenating the first (only real parts of the input data) and second (only imaginary parts of the input data) tensors to produce a concatenated tensor; performing a first convolution on the concatenated tensor using the first convolution kernel to produce a first convolution output containing the first and third multiplied tensors; performing a second convolution on the concatenated tensor using the second convolution kernel to produce a second convolution output containing the second and fourth multiplied tensors; and splitting the first and second convolution outputs to produce the first, second, third and fourth multiplied tensors.

Claim 12, including intervening claims 5-6, further comprises: wherein the first weight matrix is equal to the real part of a complete matrix defined by: [matrix not reproduced].

The primary reasons for indicating allowable subject matter with respect to claim 8 include wherein the real and imaginary parts of the input data are concatenated, first and second convolutions are performed on the concatenated tensor using the first and second convolution kernels respectively to produce first and second convolution outputs, and the first and second convolution outputs are split into four multiplied tensors, as in the above limitations, including the remaining limitations in combination. Lu-Nonuniform, Lu-Large-Scale, and Redfern disclose the claimed invention according to the above claim mappings. Both Lu-Nonuniform and Lu-Large-Scale are silent with respect to real and imaginary parts of the input data. Redfern discloses real and imaginary parts but does not teach or suggest the concatenation of real and imaginary parts of the input data followed by convolutions performed on the concatenated tensor, followed by splitting out multiplied tensors.

As to claim 12, the primary reasons for indication of allowable subject matter include the specific configuration of the weight matrix, specifically wherein the first column and the first row each comprise all ones. Lu-Large-Scale discloses a Vandermonde weight matrix with all ones in the first column (eqn 3) but does not teach or suggest all ones in the first row. Both Lu-Nonuniform and Redfern are silent with respect to the structure of the weight matrix.
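
To see concretely what the examiner found absent from the art in claim 8, here is a sketch of my reading of that limitation: stacking the real and imaginary tensors lets two multiplications (rather than four) produce all four multiplied tensors, which are then split back out and recombined. Illustrative only; the names and shapes are assumptions, not the application's implementation.

```python
import numpy as np

N = 8
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)
W_re, W_im = W.real, W.imag                  # first / second convolution kernels

x = np.random.randn(N) + 1j * np.random.randn(N)
cat = np.stack([x.real, x.imag], axis=1)     # concatenated (N, 2) tensor

out1 = W_re @ cat           # first convolution output: holds t1 and t3
out2 = W_im @ cat           # second convolution output: holds t2 and t4
t1, t3 = out1[:, 0], out1[:, 1]              # split the first output
t2, t4 = out2[:, 0], out2[:, 1]              # split the second output

y = (t1 - t4) + 1j * (t2 + t3)               # recombine as in claims 9-10
assert np.allclose(y, np.fft.fft(x))
```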
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMILY E LAROCQUE, whose telephone number is (469) 295-9289. The examiner can normally be reached 10:00am - 12:00pm and 2:00pm - 8:00pm ET, M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Caldwell, can be reached at 571-272-3701. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EMILY E LAROCQUE/
Examiner, Art Unit 2182

Prosecution Timeline

Oct 18, 2022
Application Filed
Mar 18, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602202
Finite State Machine-Based Bit-Stream Generator for Low-Discrepancy Stochastic Computing
2y 5m to grant • Granted Apr 14, 2026
Patent 12596475
COMPRESSION AND DECOMPRESSION OF MULTI-DIMENSIONAL DATA
2y 5m to grant • Granted Apr 07, 2026
Patent 12579414
ARTIFICIAL NEURON
2y 5m to grant • Granted Mar 17, 2026
Patent 12579214
AUGMENTING MATHEMATICAL OPTIMIZATION MODELS GENERATED FROM HISTORICAL DATA
2y 5m to grant • Granted Mar 17, 2026
Patent 12578923
METHOD AND APPARATUS FOR GENERATING ARCHITECTURE SPECIFIC CONVOLUTION GRADIENT KERNELS
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview: 93% (+12.2%)
Median Time to Grant: 2y 8m
PTA Risk: Low

Based on 454 resolved cases by this examiner. Grant probability derived from career allow rate.
