Prosecution Insights
Last updated: April 19, 2026
Application No. 18/247,408

METHOD AND APPARATUS FOR PERFORMING DECONVOLUTION PROCESSING ON FEATURE DATA BY USING CONVOLUTION HARDWARE

Non-Final Office Action: §101, §103
Filed: Mar 30, 2023
Examiner: MARU, MATIYAS T
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Beijing Horizon Robotics Technology Research And Development Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 1-2
Estimated Time to Grant: 4y 6m
Grant Probability with Interview: 70%

Examiner Intelligence

Career Allow Rate: 58% (23 granted / 40 resolved; +2.5% vs TC avg)
Interview Lift: +12.5% in resolved cases with interview (moderate lift)
Typical Timeline: 4y 6m average prosecution; 39 applications currently pending
Career History: 79 total applications across all art units

Statute-Specific Performance

§101: 35.9% (-4.1% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 1.9% (-38.1% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 40 resolved cases.

Office Action (§101, §103)
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings - Objections

The drawings are objected to under 37 CFR 1.83(a) because several of the drawings, including Figs. 1, 2, and 3, feature small type or grayscale drawings that are currently unreadable and require replacements. Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections

Claim 6 is objected to because of the following informalities: "the method further comprises: ailoring the interleaving synthetic output, to obtain the deconvolutional output corresponding to the feature map and the deconvolution kernel" should read "wherein after the performing interleaving synthesis processing on the plurality of convolutional outputs, to obtain an interleaving synthetic output, the method further comprises: tailoring the interleaving synthetic output, to obtain the deconvolutional output corresponding to the feature map and the deconvolution kernel." Appropriate correction is required.

Claim Rejections - 35 USC § 101 - Signal Per Se

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 10 and 17-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter. Claim 10 recites: "A computer readable storage medium, storing computer program instructions ..." and a review of the applicant's originally filed specification, ¶[00125], states: "The computer readable storage medium may be one readable medium or any combination of a plurality of readable media. The readable medium may be a readable signal medium or a readable storage medium." The cited paragraph states that the readable storage medium may be a readable signal medium, and signals per se are not patent eligible under 35 U.S.C. 101. Dependent claims 17-21 do not resolve the deficiencies noted above and are therefore also rejected.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

In Step 1 of the 101 analysis set forth in MPEP 2106, the Examiner has determined that the claims recite a process that, under the broadest reasonable interpretation, falls within one or more statutory categories (processes).

In Step 2A, Prong 1, of the 101 analysis set forth in MPEP 2106, the Examiner has determined that the following limitations recite a mental process but for the recitation of generic computer components. Regarding claim 1: performing zero-padding processing on the feature map (under the broadest reasonable interpretation, the claim recites an abstract idea, a mental process: it involves extending a data representation by adding placeholder values (zeros) around existing values; see MPEP 2106.04); determining a plurality of convolution kernels based on the deconvolution kernel (a mental process: it involves observing multiple patterns to determine convolution kernels from a given deconvolution kernel; see MPEP 2106.04); removing a row and/or column of each convolution kernel in which all elements are invalid weights to obtain an optimized convolution kernel (a mental process: it involves identifying rows or columns that meet a condition (all invalid) and removing them; see MPEP 2106.04); and
removing a corresponding row and/or column in a zero-padded feature map to obtain an optimized feature map corresponding to each optimized convolution kernel (a mental process: it involves identifying and removing rows or columns in a zero-padded feature map to create an optimized feature map; see MPEP 2106.04).

If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental processes grouping. Accordingly, the claim recites an abstract idea.

In Step 2A, Prong 2, of the 101 analysis set forth in MPEP 2106, the Examiner has determined that the following additional elements do not integrate the judicial exception into a practical application: by using dedicated convolution hardware, the dedicated convolution hardware comprising a multiply-add array and an on-chip memory, and the method comprising (deemed insufficient to transform the judicial exception into a patentable invention because the limitation does not amount to more than a recitation of the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer; see MPEP 2106.05(f)); reading the feature map and a deconvolution kernel into the on-chip memory (deemed insufficient because the limitation is directed to mere data gathering, which is insignificant extra-solution activity; see MPEP 2106.05(g));
performing convolution processing on each optimized convolution kernel and the corresponding optimized feature map by using the multiply-add array, to obtain a plurality of convolutional outputs (deemed insufficient to transform the judicial exception into a patentable invention because the limitation amounts to mere instructions to implement an abstract idea on a computer; see MPEP 2106.05(f)); performing interleaving synthesis processing on the plurality of convolutional outputs, to obtain an interleaving synthetic output (likewise mere instructions to implement an abstract idea on a computer; see MPEP 2106.05(f)); wherein the interleaving synthetic output comprises a deconvolutional output corresponding to the feature map and the deconvolution kernel (the limitation simply links the judicial exception to a field of use and/or technological environment; see MPEP 2106.05(h)).

In Step 2B of the 101 analysis set forth in the 2019 PEG, the Examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Limitations (I), (III), and (IV) recite mere application of the abstract idea, or mere instructions to implement an abstract idea on a computer, and are deemed insufficient to transform the judicial exception into a patentable invention because they generally apply a generic computer and/or process to the judicial exception; see MPEP 2106.05(f).
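For context on the technique the claims describe, the split-deconvolution idea can be illustrated numerically. Below is a minimal 1-D NumPy sketch, an illustration only and not the applicant's claimed 2-D hardware implementation; all function names are hypothetical. A stride-2 deconvolution kernel of length 4 is split into two phase sub-kernels, each is convolved with the feature map at stride 1, and the two outputs are interleaved to reproduce the direct deconvolution result.

```python
import numpy as np

def deconv1d_direct(x, w, s):
    """Reference transposed convolution: scatter-add each input element."""
    n, k = len(x), len(w)
    y = np.zeros(s * (n - 1) + k)
    for i in range(n):
        y[i * s : i * s + k] += x[i] * w
    return y

def deconv1d_split(x, w, s):
    """Split the deconvolution kernel into s phase sub-kernels,
    convolve each with the feature map, then interleave the outputs."""
    n, k = len(x), len(w)
    y = np.zeros(s * (n - 1) + k)
    for r in range(s):
        w_r = w[r::s]              # sub-kernel for output phase r
        y[r::s] = np.convolve(x, w_r)  # 'full' stride-1 convolution
    return y

x = np.array([1.0, 2.0, -1.0])
w = np.array([0.5, 1.0, -0.5, 2.0])
assert np.allclose(deconv1d_direct(x, w, 2), deconv1d_split(x, w, 2))
```

The `np.convolve` call is equivalent to correlating a zero-padded copy of the feature map with the flipped sub-kernel, which is why the claimed method performs zero-padding on the feature map before convolving.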
Limitation (II) is an additional element considered extra/post-solution activity and, as analyzed above, is activity that is well-understood, routine, and conventional; the courts have recognized such computer functions as well-understood, routine, and conventional. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). See MPEP 2106.05(d)(II). Limitation (V) is deemed insufficient to transform the judicial exception into a patentable invention because it generally links the judicial exception to the technological environment; see MPEP 2106.05(h). As analyzed above, the additional elements do not integrate the noted judicial exception into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.

Claim 2 depends from claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. The claim recites: wherein a quantity of multipliers comprised in the multiply-add array is larger than or equal to a quantity of weight values comprised in each optimized convolution kernel.
This additional limitation simply links the judicial exception to a field of use and/or technological environment; see MPEP 2106.05(h). Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Claim 11 recites similar subject matter as claim 2, and is rejected under the same rationale.

Claim 3 depends from claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. The claim recites: determining an upper-side quantity for zero padding and a lower-side quantity for zero padding of the feature map based on a height size of the deconvolution kernel and a stride in a height direction and a zero-padding parameter in the height direction that are used for deconvolution operation (a mental process: it involves observing numerical patterns, evaluating their relationship, and deciding how much padding to apply on each side of the feature map; see MPEP 2106.04); wherein the lower-side quantity for zero padding is one more row than the upper-side quantity for zero padding (this additional limitation simply links the judicial exception to a field of use and/or technological environment; see MPEP 2106.05(h); limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B); and
determining a left-side quantity for zero padding and a right-side quantity for zero padding of the feature map based on a width size of the deconvolution kernel and a stride in a width direction and a zero-padding parameter in the width direction that are used for deconvolution operation (a mental process: it involves observing width-related parameters, evaluating how they affect padding, and making a judgment about the amount of padding to place; see MPEP 2106.04); wherein the right-side quantity for zero padding is one more column than the left-side quantity for zero padding (this additional limitation simply links the judicial exception to a field of use and/or technological environment; see MPEP 2106.05(h)). Claim 12 recites similar subject matter as claim 3, and is rejected under the same rationale.

Claim 4 depends from claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. The claim recites: determining a quantity and sizes of convolution kernels corresponding to the deconvolution kernel (a mental process: it involves observing characteristics of a deconvolution kernel, evaluating how many convolution kernels are needed and what their sizes should be, and making a judgment based on those observations; see MPEP 2106.04);
wherein the quantity of the convolution kernels is equal to a product of a stride in a height direction and a stride in a width direction that are used for deconvolution operation, a height size of the convolution kernel is a function of a height size of the deconvolution kernel and the stride in the height direction and a zero-padding parameter in the height direction that are used for deconvolution operation, and a width size of the convolution kernel is a function of a width size of the deconvolution kernel and the stride in the width direction and a zero-padding parameter in the width direction that are used for deconvolution operation (this additional limitation simply links the judicial exception to a field of use and/or technological environment; see MPEP 2106.05(h)); for each position in each convolution kernel, determining two-dimensional coordinate values of a corresponding position in the deconvolution kernel based on two-dimensional indexes in the height direction and the width direction of the convolution kernel, the height size and the width size of the convolution kernel, two-dimensional coordinate values of the position, and the stride in the height direction, the stride in the width direction, the zero-padding parameter in the height direction, and the zero-padding parameter in the width direction that are used for deconvolution operation, and taking a weight value of the corresponding position as a weight value of the position in the convolution kernel (a mental process: it involves mapping positions between two coordinate systems by observing index values, evaluating parameter relationships, and assigning corresponding weight values; see MPEP 2106.04);
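As background for the recited relationships (kernel count equal to the product of the strides; sub-kernel sizes a function of kernel size and stride), one standard decomposition extracts each sub-kernel by sampling the deconvolution kernel at the stride with a phase offset. The NumPy sketch below assumes a zero-padding parameter of 0 and uses an illustrative function name, not the application's own notation:

```python
import numpy as np

def split_deconv_kernel(w, sh, sw):
    """Extract the sh*sw phase sub-kernels of a 2-D deconvolution kernel.
    Sub-kernel (r, c) holds the weights w[sh*i + r, sw*j + c]."""
    return {(r, c): w[r::sh, c::sw] for r in range(sh) for c in range(sw)}

w = np.arange(16.0).reshape(4, 4)   # 4x4 deconvolution kernel, stride 2
subs = split_deconv_kernel(w, 2, 2)
assert len(subs) == 2 * 2           # quantity = stride_h * stride_w
assert all(k.shape == (2, 2) for k in subs.values())  # size = ceil(4/2)
```

When the kernel size is not a multiple of the stride, some sub-kernels come out smaller; equivalently, the missing positions can be treated as zero-valued invalid weights and the all-invalid rows or columns dropped, which is consistent with the "optimized convolution kernel" limitation.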
wherein when the determined two-dimensional coordinate values of the corresponding position in the deconvolution kernel exceed a range of a position coordinate in the deconvolution kernel, a weight in the position of the convolution kernel is determined as a zero-valued invalid weight (this additional limitation simply links the judicial exception to a field of use and/or technological environment; see MPEP 2106.05(h)). Claims 13 and 18 recite similar subject matter as claim 4, and are rejected under the same rationale.

Claim 5 depends from claim 4 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. The claim recites: padding all elements in each convolutional output into a synthetic matrix by taking the stride in the height direction and the stride in the width direction that are used for deconvolution operation as padding strides, and taking the two-dimensional indexes in the height direction and the width direction of the convolution kernel as padding offsets (a mental process: it involves observing stride and index parameters, evaluating where each output element should be placed within a matrix, and deciding the appropriate offset for padding; see MPEP 2106.04). Claims 14 and 19 recite similar subject matter as claim 5, and are rejected under the same rationale.

Claim 6 depends from claim 1 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception.
The claim recites: the method further comprises: ailoring the interleaving synthetic output, to obtain the deconvolutional output corresponding to the feature map and the deconvolution kernel (a mental process: it involves observing an intermediate output, evaluating how its elements should be rearranged or selected, and making a judgment to produce a final desired result; see MPEP 2106.04). Claims 15 and 20 recite similar subject matter as claim 6, and are rejected under the same rationale.

Claim 7 depends from claim 6 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. The claim recites: wherein the tailoring the interleaving synthetic output comprises: tailoring right and lower sides of the interleaving synthetic output, until a size of the interleaving synthetic output after tailoring corresponds to a size of the deconvolutional output corresponding to the feature map and the deconvolution kernel (a mental process: it involves observing the dimensions of an intermediate output, evaluating whether they match a desired target size, and deciding how much to remove from the right and lower sides until the output size matches; see MPEP 2106.04). Claims 16 and 21 recite similar subject matter as claim 7, and are rejected under the same rationale.
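The interleaving-and-tailoring steps recited in claims 5 through 7 can be pictured as a strided scatter followed by a crop of the right and lower edges. The NumPy sketch below is illustrative only; it assumes equal-shaped sub-outputs and uses hypothetical names:

```python
import numpy as np

def interleave_and_tailor(outs, sh, sw, target):
    """Scatter each sub-output into the synthetic matrix, using the strides
    as padding strides and the kernel indexes (r, c) as padding offsets,
    then tailor (crop) the right and lower sides to the target size."""
    oh, ow = outs[(0, 0)].shape
    synth = np.zeros((oh * sh, ow * sw))
    for (r, c), o in outs.items():
        synth[r::sh, c::sw] = o
    th, tw = target
    return synth[:th, :tw]

outs = {(0, 0): np.array([[1.0]]), (0, 1): np.array([[2.0]]),
        (1, 0): np.array([[3.0]]), (1, 1): np.array([[4.0]])}
assert np.array_equal(interleave_and_tailor(outs, 2, 2, (2, 2)),
                      np.array([[1.0, 2.0], [3.0, 4.0]]))
```

Cropping only the right and lower sides matches the claim-7 recitation that tailoring proceeds until the interleaved output matches the deconvolutional output size.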
Regarding claim 9: the remaining limitations recite similar subject matter as claim 1, and are rejected under similar rationale. The claim recites: An electronic device, comprising: a dedicated convolution hardware, comprising a multiply-add array and an on-chip memory; at least one off-chip memory, storing instructions; and at least one processor, wherein, when the instructions are run by the processor, the electronic device is enabled to implement a method for performing deconvolution processing on a feature map, and the method comprising. This is deemed insufficient to transform the judicial exception into a patentable invention because the limitation is directed to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, and is considered to add the words "apply it" (or an equivalent) to the judicial exception; see MPEP 2106.05(f). Limitations directed to using the computer as a tool for implementing an abstract idea cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Regarding claim 10: the remaining limitations recite similar subject matter as claim 1, and are rejected under similar rationale. The claim recites:
A computer readable storage medium, storing computer program instructions, wherein when the computer program instructions are run by an electronic device, the electronic device is enabled to implement a method for performing deconvolution processing on a feature map and the electronic device further comprises dedicated convolution hardware, the dedicated convolution hardware comprising a multiply-add array and an on-chip memory. This is deemed insufficient to transform the judicial exception into a patentable invention because the limitation is directed to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, and is considered to add the words "apply it" (or an equivalent) to the judicial exception; see MPEP 2106.05(f). Limitations directed to using the computer as a tool for implementing an abstract idea cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

Claim 17 depends from claim 10 and fails to resolve the deficiencies identified above by integrating the judicial exception into a practical application or introducing significantly more than the judicial exception. The claim recites: wherein a quantity of multipliers comprised in the multiply-add array is larger than or equal to a quantity of weight values comprised in each optimized convolution kernel (this additional limitation simply links the judicial exception to a field of use and/or technological environment; see MPEP 2106.05(h); limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B); or,
wherein the performing zero-padding processing on the feature map comprises: determining an upper-side quantity for zero padding and a lower-side quantity for zero padding of the feature map based on a height size of the deconvolution kernel and a stride in a height direction and a zero-padding parameter in the height direction that are used for deconvolution operation (a mental process: it involves observing numerical patterns, evaluating their relationship, and deciding how much padding to apply on each side of the feature map; see MPEP 2106.04); wherein the lower-side quantity for zero padding is one more row than the upper-side quantity for zero padding (this additional limitation simply links the judicial exception to a field of use and/or technological environment; see MPEP 2106.05(h)); and determining a left-side quantity for zero padding and a right-side quantity for zero padding of the feature map based on a width size of the deconvolution kernel and a stride in a width direction and a zero-padding parameter in the width direction that are used for deconvolution operation (a mental process: it involves observing width-related parameters, evaluating how they affect padding, and making a judgment about the amount of padding to place; see MPEP 2106.04); wherein the right-side quantity for zero padding is one more column than the left-side quantity for zero padding (this additional limitation simply links the judicial exception to a field of use and/or technological environment; see MPEP 2106.05(h)).
Limitations directed to a field of use cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 9, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al., "Accelerating generative neural networks on unmodified deep learning processors—A software approach", in view of Yazdanbakhsh et al., "GANAX: A unified MIMD-SIMD acceleration for generative adversarial networks", Dikici et al., Pub. No. US11556613B2, and Delin et al., Pub. No. CN109190758B.

Regarding claim 1, Xu teaches: A method for performing deconvolution processing on a feature map by using dedicated convolution hardware, [] the method comprising: (Xu, page: 11, "NCS2 is a neural network processor produced by Intel, and it includes specialized hardware to support native deconvolution operation [A method for performing deconvolution processing on a feature map by using dedicated convolution hardware].
We evaluated the deconvolutional layers of generative neural networks on it with the deconvolution operations implemented using the NZP approach, the SD approach as well as the native deconvolution.")

reading the feature map and a deconvolution kernel into the on-chip memory, and performing zero-padding processing on the feature map; (Xu, page: 6, "Step 1 and Step 2 basically split the deconvolution filters [and a deconvolution kernel] to multiple small convolution filters. This needs to be done only once and can be reused. Therefore, they can be done off-line with software approach. Unlike the first two steps, Step 3 and 4 are performed on the CNN processors for each input feature map [reading the feature map]. In step 3, the input feature maps also need to be padded with zeros [and performing zero-padding processing on the feature map] to obtain equivalent deconvolution output. Otherwise, the output activations on the edge will be ignored. PI columns/rows of zeros will be added where PI is obtained from Equation (9).")

determining a plurality of convolution kernels based on the deconvolution kernel; (Xu, page: 2, "We proposed a novel filter partitioning and reorganization approach to convert a general deconvolution operation [based on the deconvolution kernel] to multiple standard convolution operations [determining a plurality of convolution kernels] strictly without incurring much computing redundancy such that deconvolution can be implemented efficiently as convolution.")

removing a row and/or column of each convolution kernel in which all elements are invalid weights to obtain an optimized convolution kernel, and (Xu, page: 7, "This neat zero value can be easily compressed on an accelerator with a particular data format. Table 3 lists the number of weight parameters of original neural networks [29], general SD approach and SD with compressed weight parameters.
Though there are induced zeros of weight in some benchmarks (DCGAN, MDE and FST), most of the redundant values have been removed [removing a row and/or column of each convolution kernel in which all elements are invalid weights to obtain an optimized convolution kernel] after the compression. In addition, the split deconvolution may produce only the center area of the original deconvolution output feature maps, and we must add zero paddings to the input feature maps to obtain equivalent deconvolution output feature maps. Thereby, the proposed split deconvolution may add zeros to both the weights and the input activations, and induce more computing depending on the neural network parameters.")

Xu does not teach: removing a corresponding row and/or column in a zero-padded feature map to obtain an optimized feature map corresponding to each optimized convolution kernel; performing convolution processing on each optimized convolution kernel and the corresponding optimized feature map; the dedicated convolution hardware comprising a multiply-add array and an on-chip memory; by using the multiply-add array, to obtain a plurality of convolutional outputs; and performing interleaving synthesis processing on the plurality of convolutional outputs, to obtain an interleaving synthetic output, wherein the interleaving synthetic output comprises a deconvolutional output corresponding to the feature map and the deconvolution kernel.

Yazdanbakhsh teaches: removing a corresponding row and/or column in a zero-padded feature map to obtain an optimized feature map corresponding to each optimized convolution kernel; (Yazdanbakhsh, page: 4, "The first optimization maximizes the data reuse by reorganizing the computation of the output rows in a way that the rows with the same pattern in their computations become adjacent. Figure 5(a) illustrates the flow of data after applying this output row reorganization.
Applying the output row reorganization in this example, make the even-indexed (2nd and 4th output rows) output rows adjacent. Similar adjacency is established for odd-indexed (3rd and 5th output rows) output rows. Although this optimization addresses the data reuse problem, it does not deal with the resource underutilization (i.e., idle compute nodes (white circles) still exist). To mitigate this resource underutilization, we introduce the second optimization that reorganizes the filter rows. As shown in Figure 5(b), applying the filter row reorganization establishes an adjacency for the 1st, 3rd, and 5th filter rows. Similarly, the 2nd and 4th filter rows become adjacent. After applying output and filter row reorganization, as shown in Figure 5(b), the idle compute nodes can be simply eliminated from the dataflow [removing a corresponding row and/or column in a zero-padded feature map] (i.e.: the zero padding (represents as idle compute nodes) is removed from the active data flow). Figure 5(c) illustrates the GANAX flow of data after performing both optimizations [to obtain an optimized feature map corresponding to each optimized convolution kernel], which improves the resource utilization for transposed convolution operation from 50% to 100%.”) performing convolution processing on each optimized convolution kernel and the corresponding optimized feature map (Yazdanbakhsh, page: 4, “The proposed GANAX flow of data also addresses the inefficiency in performing the horizontal accumulation of partial sums. As shown in Figure 4(b), the conventional convolution dataflow requires five cycles to perform the horizontal accumulation for each output row, regardless of their locations. 
However, comparing Figure 4(b) and Figure 5(c), we observe that after applying output and filter row reorganization optimizations, the number of required cycles for performing the horizontal accumulation reduces from five to two for even-indexed output rows and from five to three [performing convolution processing on each optimized convolution kernel and the corresponding optimized feature map] for odd-indexed output rows.”) Yazdanbakhsh and Xu are related to the same field of endeavor (i.e.: neural network optimization). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Yazdanbakhsh with those of Xu to run efficiently across both legacy and specialized accelerators while minimizing redundant operations, improving data reuse, and maximizing performance and energy efficiency (Yazdanbakhsh, Abstract). Xu in view of Yazdanbakhsh does not teach: the dedicated convolution hardware comprising a multiply-add array and an on-chip memory, by using the multiply-add array, to obtain a plurality of convolutional outputs; performing interleaving synthesis processing on the plurality of convolutional outputs, to obtain an interleaving synthetic output, wherein the interleaving synthetic output comprises a deconvolutional output corresponding to the feature map and the deconvolution kernel. Delin teaches: the dedicated convolution hardware comprising a multiply-add array and an on-chip memory, (Delin, “[0026] For a well-designed convolutional neural network accelerator, various hardware specifications or parameters of the accelerator can be known or clearly defined. For example, the configuration of the multiply-accelerators in the multiply-accelerator array (convolution engine) of the accelerator [the dedicated convolution hardware comprising a multiply-add array], the storage capacity of the on-chip memory (e.g., total capacity, single-row capacity, etc.)
[and an on-chip memory] of the accelerator, and the standard stride supported by the convolution engine of the accelerator. Furthermore, based on these hardware specifications or parameters, it can be known or determined what kind of convolutional operations the accelerator can support and/or what kind of convolutional operations have high processing efficiency.”) by using the multiply-add array, to obtain a plurality of convolutional outputs; (Delin, “[0026] For a well-designed convolutional neural network accelerator, various hardware specifications or parameters of the accelerator can be known or clearly defined. For example, the configuration of the multiply-accelerators in the multiply-accelerator array (convolution engine) of the accelerator [by using the multiply-add array, to obtain a plurality of convolutional outputs;], the storage capacity of the on-chip memory (e.g., total capacity, single-row capacity, etc.) of the accelerator, and the standard stride supported by the convolution engine of the accelerator. Furthermore, based on these hardware specifications or parameters, it can be known or determined what kind of convolutional operations the accelerator can support and/or what kind of convolutional operations have high processing efficiency.”) Delin, Xu and Yazdanbakhsh are related to the same field of endeavor (i.e.: neural network optimization). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Delin with those of Xu and Yazdanbakhsh to split deconvolution filters and reorganize computations to align with accelerator hardware constraints to improve utilization and maximize performance (Delin, Abstract).
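The multiply-add-array computation that the Delin passage above attributes to the accelerator can be illustrated with a minimal sketch: each convolutional output element is one accumulation of products, which is the work a MAC array parallelizes in hardware. The input, kernel, and sizes below are hypothetical, and the loop is a plain software stand-in for the hardware array, not Delin's implementation.

```python
import numpy as np

def conv2d_valid(x, w):
    """Direct 2-D valid convolution: each output element is a sum of
    multiply-add operations, the work a MAC array performs in hardware."""
    oh = x.shape[0] - w.shape[0] + 1
    ow = x.shape[1] - w.shape[1] + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # One output element = one accumulation over the kernel window.
            out[i, j] = np.sum(x[i:i + w.shape[0], j:j + w.shape[1]] * w)
    return out

# Hypothetical 4x4 input and 3x3 all-ones kernel.
x = np.arange(16.0).reshape(4, 4)
w = np.ones((3, 3))
out = conv2d_valid(x, w)   # [[45, 54], [81, 90]]
```

A hardware array computes many such accumulations concurrently; the loop here only makes the arithmetic explicit.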
Xu in view of Yazdanbakhsh and Delin does not teach: performing interleaving synthesis processing on the plurality of convolutional outputs, to obtain an interleaving synthetic output, wherein the interleaving synthetic output comprises a deconvolutional output corresponding to the feature map and the deconvolution kernel. Dikici teaches: performing interleaving synthesis processing on the plurality of convolutional outputs, to obtain an interleaving synthetic output, wherein the interleaving synthetic output comprises a deconvolutional output corresponding to the feature map and the deconvolution kernel (Dikici, col. 4, lines 13–24, “Described herein are methods and systems for performing a convolution transpose operation between an input tensor comprising a plurality of input elements and a filter comprising a plurality of filter weights. The method includes: dividing the filter into a plurality of sub-filters; performing, using hardware logic, a convolution operation between the input tensor and each of the plurality of sub-filters to generate a plurality of sub-output tensors [on the plurality of convolutional outputs, to obtain an interleaving synthetic output], each sub-output tensor comprising a plurality of output elements; and interleaving [performing interleaving synthesis processing], using hardware logic, the output elements of the plurality of sub-output tensors to form a final output tensor for the convolution transpose [wherein the interleaving synthetic output comprises a deconvolutional output corresponding to the feature map and the deconvolution kernel].”) Dikici, Xu, Yazdanbakhsh and Delin are related to the same field of endeavor (i.e.: neural network optimization).
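The sub-filter convolution and interleaving described in the Dikici passage above can be sketched in one dimension (a simplification for illustration; Dikici addresses the general tensor case, and the data here are hypothetical). Each kernel tap j contributes only to output phase j mod stride, so the transposed convolution decomposes into ordinary stride-1 convolutions whose outputs are interleaved.

```python
import numpy as np

def transposed_conv1d(x, w, stride):
    """Reference 1-D transposed convolution (deconvolution), no padding."""
    out = np.zeros((len(x) - 1) * stride + len(w))
    for i, xi in enumerate(x):
        for j, wj in enumerate(w):
            out[i * stride + j] += xi * wj
    return out

def split_deconv1d(x, w, stride):
    """Equivalent result via ordinary convolutions plus interleaving:
    kernel tap j only ever contributes to output phase j % stride, so
    each phase is an independent stride-1 convolution with a sub-kernel,
    and the phase outputs are interleaved back together."""
    out = np.zeros((len(x) - 1) * stride + len(w))
    for p in range(stride):
        out[p::stride] = np.convolve(x, w[p::stride])
    return out

x, w = [1.0, 2.0, 3.0], [1.0, 0.0, 2.0, 1.0]
assert np.array_equal(transposed_conv1d(x, w, 2), split_deconv1d(x, w, 2))
```

The final interleaved assignment (`out[p::stride] = ...`) is the software analogue of the output-tensor assembly step the reference performs in hardware logic.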
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Dikici with those of Xu, Yazdanbakhsh and Delin to add explicit convolution transpose (deconvolution) output reconstruction by interleaving of sub-outputs to reduce redundant computation (Dikici, Abstract). Regarding claim 9, Delin teaches: An electronic device, comprising: a dedicated convolution hardware, comprising a multiply-add array and an on-chip memory; (Delin, “[0026] For a well-designed convolutional neural network accelerator, various hardware specifications or parameters of the accelerator can be known or clearly defined. For example, the configuration of the multiply-accelerators in the multiply-accelerator array (convolution engine) of the accelerator [comprising a multiply-add array and], the storage capacity of the on-chip memory (e.g., total capacity, single-row capacity, etc.) [an on-chip memory;] of the accelerator, and the standard stride supported by the convolution engine of the accelerator.
Furthermore, based on these hardware specifications or parameters, it can be known or determined what kind of convolutional operations the accelerator can support and/or what kind of convolutional operations have high processing efficiency.”) at least one off-chip memory, storing instructions; and at least one processor, wherein, when the instructions are run by the processor, the electronic device is enabled to implement a method for performing deconvolution processing on a feature map, and the method comprising: (Delin, “[0078] Additionally, as shown in FIG10, the example device 200 may also include a memory MEM and an I/O interface, and the processor PU may be connected to the memory MEM and the I/O interface via a bus system and/or other forms of connection mechanism [at least one off-chip memory, storing instructions; and at least one processor, wherein, when the instructions are run by the processor, the electronic device is enabled to implement a method for performing deconvolution processing on a feature map].”) The rest of the limitations are analogous to those of claim 1 and are rejected under a similar rationale. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Delin with those of Xu, Yazdanbakhsh and Dikici for the same reasons disclosed for claim 1.
Regarding claim 10, Delin teaches: A computer readable storage medium, storing computer program instructions, wherein when the computer program instructions are run by an electronic device, the electronic device is enabled to implement a method for performing deconvolution processing on a feature map and the electronic device further comprises (Delin, “[0078] Additionally, as shown in FIG10, the example device 200 may also include a memory MEM and an I/O interface, and the processor PU may be connected to the memory MEM and the I/O interface via a bus system and/or other forms of connection mechanism [A computer readable storage medium, storing computer program instructions, wherein when the computer program instructions are run by an electronic device, the electronic device is enabled to implement a method for performing deconvolution processing on a feature map].”) The rest of the limitations are analogous to those of claim 1 and are rejected under a similar rationale. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Delin with those of Xu, Yazdanbakhsh and Dikici for the same reasons disclosed for claim 1. Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Yazdanbakhsh, Dikici, Delin and in further view of CONSTANTIN et al., Pub. No.: EP3674982A1. Xu in view of Yazdanbakhsh, Dikici and Delin teaches the method of claim 1. Xu in view of Yazdanbakhsh, Dikici and Delin does not teach: wherein a quantity of multipliers comprised in the multiply-add array is larger than or equal to a quantity of weight values comprised in each optimized convolution kernel. CONSTANTIN teaches: wherein a quantity of multipliers comprised in the multiply-add array is larger than or equal to a quantity of weight values comprised in each optimized convolution kernel.
(CONSTANTIN, “[0025] If, for instance, 225 filter weights correspond to a set of 3x3 filters belonging to 25 different output channels [is larger than or equal to a quantity of weight values comprised in each optimized convolution kernel], the MAC arrays are configured to have 25 parallel MAC units which calculate 25 different sums in parallel [wherein a quantity of multipliers comprised in the multiply-add array]; but if the 225 filter weights correspond to a set of 5x5 filters belonging to 9 different output channels, the MAC arrays are configured to have 9 parallel MAC units which calculate 9 different sums in parallel. The MAC array supports 3x3 and 5x5 mode with re-use of same hardware.”) CONSTANTIN, Xu, Yazdanbakhsh, Delin and Dikici are related to the same field of endeavor (i.e.: neural network optimization). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of CONSTANTIN with those of Xu, Yazdanbakhsh, Delin and Dikici to exploit hardware parallelism and weight reuse in Multiply Accumulate (MAC) arrays for efficient partial and full sum computation (CONSTANTIN, Abstract). Claim 11 recites limitations analogous to those of claim 2 and is rejected under the same rationale. Claims 6, 15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Yazdanbakhsh, Dikici, Delin and in further view of ZIMMER et al., Pub. No.: US20200250794A1. Xu in view of Yazdanbakhsh, Dikici and Delin teaches the method of claim 1. Xu in view of Yazdanbakhsh, Dikici and Delin does not teach: wherein after the performing interleaving synthesis processing on the plurality of convolutional outputs, to obtain an interleaving synthetic output, the method further comprises: tailoring the interleaving synthetic output, to obtain the deconvolutional output corresponding to the feature map and the deconvolution kernel.
ZIMMER teaches: wherein after the performing interleaving synthesis processing on the plurality of convolutional outputs, to obtain an interleaving synthetic output, the method further comprises: tailoring the interleaving synthetic output, to obtain the deconvolutional output corresponding to the feature map and the deconvolution kernel (ZIMMER, “[0121] Sparse SMLM images SMLM(k) are used as inputs of artificial neural network 300 that may be considered as a generator network that outputs synthetic dense SMLM images denoted ANNA-SMLM [to obtain the deconvolutional output corresponding to the feature map and the deconvolution kernel], while dense SMLM images SMLM(K), corresponding to the desired outputs, are compared with the outputs of artificial neural network 300, to adapt the parameters (typically weights and biases) [tailoring the interleaving synthetic output] of the latter accordingly, for example using a stochastic gradient descent algorithm to minimize iteratively a loss error that measures how well real outputs match desired outputs. Such steps of adapting parameters of artificial neural network 300 are called the learning or training phase.”) ZIMMER, Xu, Yazdanbakhsh, Delin and Dikici are related to the same field of endeavor (i.e.: neural network optimization). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of ZIMMER with those of Xu, Yazdanbakhsh, Delin and Dikici to enable efficient neural network processing for high resolution image reconstruction (ZIMMER, Abstract). Claims 15 and 20 recite limitations analogous to those of claim 6 and are rejected under the same rationale. Claims 7, 16 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Xu in view of Yazdanbakhsh, Dikici, Delin, ZIMMER and in further view of SONG et al., Pub. No.: US20190138898A1. Xu in view of Yazdanbakhsh, Dikici, Delin and ZIMMER teaches the method of claim 6.
Xu in view of Yazdanbakhsh, Dikici, Delin and ZIMMER does not teach: wherein the tailoring the interleaving synthetic output comprises: tailoring right and lower sides of the interleaving synthetic output, until a size of the interleaving synthetic output after tailoring corresponds to a size of the deconvolutional output corresponding to the feature map and the deconvolution kernel. SONG teaches: wherein the tailoring the interleaving synthetic output comprises: tailoring right and lower sides of the interleaving synthetic output, until a size of the interleaving synthetic output after tailoring corresponds to a size of the deconvolutional output corresponding to the feature map and the deconvolution kernel. (SONG, “[0069] The output feature map having the size of 38×11 pixels of the convolution network 31 may be input in the deconvolution network 32. The input feature map having the size of 38×11 pixels, which is input in the deconvolution network 32, may be output as an output feature map 30 b having a size of 1216×352 pixels, which is increased 32 times through a plurality of steps of a deconvolution layer, an unpooling layer, etc. The output feature map 30 b, which is ultimately generated in the deconvolution network 32 [of the deconvolutional output corresponding to the feature map and the deconvolution kernel], may have the same size of pixels as the input image 30 a [tailoring right and lower sides of the interleaving synthetic output, until a size of the interleaving synthetic output after tailoring corresponds to a size], and the output feature map 30 b may thereby be caused to include the location information of the input image 30 a. Thus, the semantic segmentation may be performed by using the output feature map 30 b.”) SONG, Xu, Yazdanbakhsh, Delin, Dikici and ZIMMER are related to the same field of endeavor (i.e.: neural network optimization).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of SONG with those of Xu, Yazdanbakhsh, Delin, Dikici and ZIMMER to add kernel rearrangement, subdivision into sub-kernels, and merging of convolution results to realize deconvolution (SONG, Abstract). Claims 16 and 21 recite limitations analogous to those of claim 7 and are rejected under the same rationale. Allowable Subject Matter Claims 3 – 5, 12 – 14 and 18 – 19 are objected to as being dependent upon a rejected base claim and would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The prior art made of record does not teach, make obvious, or suggest the claim limitations as disclosed in applicant's claims. Claim 3 and analogous claim 12 recite: The method according to claim 1, wherein the performing zero-padding processing on the feature map comprises: determining an upper-side quantity for zero padding and a lower-side quantity for zero padding of the feature map based on a height size of the deconvolution kernel and a stride in a height direction and a zero-padding parameter in the height direction that are used for deconvolution operation, wherein the lower-side quantity for zero padding is one more row than the upper-side quantity for zero padding; and determining a left-side quantity for zero padding and a right-side quantity for zero padding of the feature map based on a width size of the deconvolution kernel and a stride in a width direction and a zero-padding parameter in the width direction that are used for deconvolution operation, wherein the right-side quantity for zero padding is one more column than the left-side quantity for zero padding. Closest prior art: Xu, et al. "Accelerating generative neural networks on unmodified deep learning processors—A software approach.", (2020).
Xu teaches a filter partitioning and reorganization approach to convert a general deconvolution operation to multiple standard convolution operations strictly without incurring much computing redundancy such that deconvolution can be implemented efficiently as convolution. However, Xu does not teach determining zero padding amounts for a feature map in a deconvolution operation based on the deconvolution kernel size, stride and zero padding parameters in each direction. The upper and lower side padding in the height direction are computed such that the lower side padding is one row more than the upper side padding. The left and right side padding in the width direction are computed such that the right side padding is one column more than the left side padding. Yazdanbakhsh, et al. "GANAX: A unified MIMD-SIMD acceleration for generative adversarial networks." (2018). Yazdanbakhsh teaches a unified MIMD-SIMD accelerator architecture that exploits repeated patterns in the computations to create different microprograms that can execute concurrently in SIMD mode. However, Yazdanbakhsh does not teach determining zero padding amounts for a feature map in a deconvolution operation based on the deconvolution kernel size, stride and zero padding parameters in each direction. The upper and lower side padding in the height direction are computed such that the lower side padding is one row more than the upper side padding. The left and right side padding in the width direction are computed such that the right side padding is one column more than the left side padding.
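As background for the padding arithmetic discussed for claim 3, the textbook identity below shows why zero padding of the feature map arises at all when deconvolution is recast as convolution. This is not the claim's per-side formula, which this record does not reproduce; the values and the symmetric k-1 padding are purely illustrative.

```python
import numpy as np

def deconv_via_padded_conv(x, w, stride):
    """Stride-s 1-D transposed convolution (deconvolution) expressed as a
    stride-1 convolution: insert stride-1 zeros between input elements,
    pad k-1 zeros on each side of the feature map, then correlate with
    the flipped kernel. (Illustrative identity only; the claimed method
    computes specific, asymmetric per-side padding amounts from the kernel
    size, stride, and zero-padding parameter, not reproduced here.)"""
    k = len(w)
    up = np.zeros((len(x) - 1) * stride + 1)
    up[::stride] = x                     # zero insertion between elements
    padded = np.pad(up, (k - 1, k - 1))  # zero padding on both sides
    w_flip = w[::-1]                     # correlation with flipped kernel
    return np.array([padded[q:q + k] @ w_flip
                     for q in range(len(padded) - k + 1)])

x = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 0.0, 2.0, 1.0])
out = deconv_via_padded_conv(x, w, 2)   # [1, 0, 4, 1, 7, 2, 6, 3]
```

Because the padding interacts with the stride, the two sides of a dimension generally need different amounts, which is the asymmetry (lower one more row than upper, right one more column than left) that the claim recites.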
Claim 4 and analogous claims 13 and 18 recite: The method according to claim 1, wherein the determining a plurality of convolution kernels based on the deconvolution kernel comprises: determining a quantity and sizes of convolution kernels corresponding to the deconvolution kernel, wherein the quantity of the convolution kernels is equal to a product of a stride in a height direction and a stride in a width direction that are used for deconvolution operation, a height size of the convolution kernel is a function of a height size of the deconvolution kernel and the stride in the height direction and a zero-padding parameter in the height direction that are used for deconvolution operation, and a width size of the convolution kernel is a function of a width size of the deconvolution kernel and the stride in the width direction and a zero-padding parameter in the width direction that are used for deconvolution operation; and for each position in each convolution kernel, determining two-dimensional coordinate values of a corresponding position in the deconvolution kernel based on two-dimensional indexes in the height direction and the width direction of the convolution kernel, the height size and the width size of the convolution kernel, two-dimensional coordinate values of the position, and the stride in the height direction, the stride in the width direction, the zero-padding parameter in the height direction, and the zero-padding parameter in the width direction that are used for deconvolution operation, and taking a weight value of the corresponding position as a weight value of the position in the convolution kernel, wherein when the determined two-dimensional coordinate values of the corresponding position in the deconvolution kernel exceeds a range of a position coordinate in the deconvolution kernel, a weight in the position of the convolution kernel is determined as a zero-valued invalid weight. Closest prior art: Xu, et al.
"Accelerating generative neural networks on unmodified deep learning processors—A software approach.", (2020). Xu teaches a filter partitioning and reorganization approach to convert a general deconvolution operation to multiple standard convolution operations strictly without incurring much computing redundancy such that deconvolution can be implemented efficiently as convolution. However, Xu does not teach determining multiple convolution kernels corresponding to a deconvolution kernel, where the number of convolution kernels equals the product of the height stride and width stride used for deconvolution. Each convolution kernel’s height and width are computed as functions of the deconvolution kernel size, stride and zero padding parameters. For each position in each convolution kernel, a corresponding position in the deconvolution kernel is calculated and if that position falls outside the deconvolution kernel range, the weight is set to zero as an invalid weight. Yazdanbakhsh, et al. "GANAX: A unified MIMD-SIMD acceleration for generative adversarial networks." (2018). Yazdanbakhsh teaches a unified MIMD-SIMD accelerator architecture that exploits repeated patterns in the computations to create different microprograms that can execute concurrently in SIMD mode. However, Yazdanbakhsh does not teach determining multiple convolution kernels corresponding to a deconvolution kernel, where the number of convolution kernels equals the product of the height stride and width stride used for deconvolution. Each convolution kernel’s height and width are computed as functions of the deconvolution kernel size, stride and zero padding parameters. For each position in each convolution kernel, a corresponding position in the deconvolution kernel is calculated and if that position falls outside the deconvolution kernel range, the weight is set to zero as an invalid weight. 
Claim 5 is allowable because of its dependency on claim 4, and claims 14 and 19 recite limitations analogous to those of claim 5 and are therefore also allowable. Conclusion: The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Alsallakh, et al., "Mind the Pad--CNNs Can Develop Blind Spots.", (2020). Alsallakh showed that uneven application of zero padding in convolutional layers induces spatial bias, causing CNNs to develop blind spots in their feature maps. Chang, et al., "Towards design methodology of efficient fast algorithms for accelerating generative adversarial networks on FPGAs." 2020. Chang proposed DeConv layers using Winograd minimal filtering; the algorithm computes low-complexity DeConv using small tiles. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATIYAS T MARU whose telephone number is (571)270-0902 or via email matiyas.maru@uspto.gov. The examiner can normally be reached Monday 8:00am - Friday 4:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michelle Bechtold, can be reached at (571)431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /M.T.M./ Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Mar 30, 2023: Application Filed
Feb 02, 2026: Non-Final Rejection under §101 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586114: GENERATING DIGITAL RECOMMENDATIONS UTILIZING COLLABORATIVE FILTERING, REINFORCEMENT LEARNING, AND INCLUSIVE SETS OF NEGATIVE FEEDBACK (Granted Mar 24, 2026; 2y 5m to grant)
Patent 12572796: METHODS AND SYSTEMS FOR GENERATING RECOMMENDATIONS FOR COUNTERFACTUAL EXPLANATIONS OF COMPUTER ALERTS THAT ARE AUTOMATICALLY DETECTED BY A MACHINE LEARNING ALGORITHM (Granted Mar 10, 2026; 2y 5m to grant)
Patent 12567004: METHOD OF MACHINE LEARNING TRAINING FOR DATA AUGMENTATION (Granted Mar 03, 2026; 2y 5m to grant)
Patent 12561588: Methods and Systems for Generating Example-Based Explanations of Link Prediction Models in Knowledge Graphs (Granted Feb 24, 2026; 2y 5m to grant)
Patent 12561584: TEACHING DATA PREPARATION DEVICE, TEACHING DATA PREPARATION METHOD, AND PROGRAM (Granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 58%
With Interview: 70% (+12.5%)
Median Time to Grant: 4y 6m
PTA Risk: Low

Based on 40 resolved cases by this examiner. Grant probability derived from career allow rate.
