Prosecution Insights
Last updated: April 19, 2026
Application No. 17/697,911

CONVOLUTION METHOD, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM

Non-Final OA: §101, §102, §103, §112
Filed
Mar 17, 2022
Examiner
LAROCQUE, EMILY E
Art Unit
2182
Tech Center
2100 — Computer Architecture & Software
Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
OA Round
1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 81% (above average; 366 granted / 454 resolved; +25.6% vs TC avg)
Interview Lift: +12.2% (moderate) among resolved cases with interview
Typical Timeline: 2y 8m average prosecution; 41 currently pending
Career History: 495 total applications across all art units

Statute-Specific Performance

§101: 29.3% (-10.7% vs TC avg)
§103: 22.2% (-17.8% vs TC avg)
§102: 12.8% (-27.2% vs TC avg)
§112: 29.4% (-10.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 454 resolved cases

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 12-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Claim 12, line 2, recites "a plurality of 1x1 convolution kernel elements in a filter". For antecedent basis reasons, it is unclear whether the filter recited in claim 12 is the same as the filter recited in claim 11. For purposes of examination, the Examiner interprets them as the same. Claims 13-16 inherit the same deficiency as claim 12 by dependence.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Regarding treatment of the claims, apparatus claims 11-19 will be addressed first, followed by method claims 1-10 and computer-readable medium claim 20.
Regarding claim 11, under the Alice framework Step 2A, prong 1, the claim recites mathematical concepts comprising mathematical relationships: adding a plurality of resultant matrices respectively corresponding to a plurality of 1x1 convolution kernel elements in a filter to different sub-regions of a first output matrix, to obtain an accumulating feature of the first output matrix; and extracting a subset from the first output matrix having the accumulating feature as a second output matrix. See, e.g., figure 4, which describes the above claim limitations in terms of mathematical calculations and mathematical relationships. See also specification [0040-0041]. For these reasons, claim 11 recites mathematical concepts.

Under the Step 2A, prong 2 analysis, the additional elements not reciting mathematical concepts are: an electronic device, comprising: a memory storing a computer program; and a processor, adapted to call and execute the computer program stored in the memory to execute operations. These additional elements do no more than generally link the mathematical calculations to a computer in a manner that in effect merely recites "apply it" on a computer. For these reasons, the judicial exception is not integrated into a practical application.

Moreover, under the Step 2B analysis, the claim, considered individually and as an ordered combination, does not include additional elements sufficient to amount to significantly more than the abstract idea. As discussed in the Step 2A, prong 2 analysis, the claim merely generally links the additional elements to the math in a manner that merely recites "apply it" on a computer. For these reasons, the claim considered as a whole does not amount to significantly more than the abstract idea.

Claims 12-16 and 19 are rejected for at least the reasons set forth with respect to claim 11.
Claims 12-16 and 19 contain no further additional elements beyond those recited in claim 11 that would require further analysis under Step 2A, prong 2, or Step 2B.

Regarding claim 17, under the Alice framework Step 2A, prong 2 analysis, the claim recites the following further additional element: reserving a target memory space based on a size of the first output matrix, the target memory space being used to store the first output matrix. This step comprises insignificant extra-solution activity. Under the Step 2B analysis, this step is well-understood, routine, and conventional activity. See MPEP 2106.05(d).IV. See also S. Barker, Memory Management - Intro to Operating Systems, Computer Science 377, University of Massachusetts, lecture notes, 2016 (hereinafter "Barker").

Regarding claim 18, in addition to the claim 17 analysis, under the Step 2A, prong 2 analysis, the claim recites the following further additional element: wherein the target memory space is a contiguous memory. This step likewise comprises insignificant extra-solution activity and, under the Step 2B analysis, is well-understood, routine, and conventional activity. See MPEP 2106.05(d).IV; see also Barker.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4 and 6-7 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by A. Anderson et al., Low-memory GEMM-based convolution algorithms for deep neural networks, arXiv:1709.03395v1 [cs.CV], 2017 (hereinafter "Anderson").

Regarding claim 1, Anderson teaches the following: adding a plurality of resultant matrices respectively corresponding to a plurality of 1x1 convolution kernel elements in a filter to different sub-regions of a first output matrix, to obtain an accumulating feature of the first output matrix (section IV.A, fig. 5: H x W by k²M output for the plurality of resultant matrices, with sub-regions shown in different shadings; section IV.A describes the 1x1 convolution kernel elements; the fig. 1 algorithm describes the adding; and section IV, second paragraph, discloses an instance of fig. 1 for a 1x1 kernel); and extracting a second output matrix from the first output matrix with the accumulating feature, a size of the second output matrix being less than a size of the first output matrix (fig. 5: post-pass shift-add matrix, with the shift-add providing the accumulating feature, the post-pass matrix having a size H x W less than that of the first output matrix).
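For readers less familiar with the shift-add scheme the rejection maps onto claim 1, the accumulation can be sketched in a few lines of NumPy. This is our own single-channel, single-filter simplification for illustration, not code from Anderson or the application; the function name and sizes are ours:

```python
import numpy as np

def shift_add_conv2d(image, kernel):
    """K x K convolution computed as K^2 1x1 convolutions: each 1x1
    kernel element scales the whole image (a "resultant matrix") and is
    added into a shifted sub-region of an enlarged first output matrix;
    a smaller second output matrix is then extracted from it."""
    H, W = image.shape
    K = kernel.shape[0]
    # First output matrix: large enough to hold every shifted sub-region.
    first_out = np.zeros((H + K - 1, W + K - 1))
    for i in range(K):
        for j in range(K):
            # Resultant matrix for one 1x1 kernel element.
            resultant = kernel[i, j] * image
            # Add it to the sub-region determined by the element's
            # relative location (i, j) in the filter.
            first_out[i:i + H, j:j + W] += resultant
    # Second output matrix: the fully accumulated (valid) region,
    # smaller than the first output matrix.
    return first_out[K - 1:H, K - 1:W]
```

The extraction on the last line is what makes the second output matrix smaller than the first, matching the final limitation of claim 1.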
Regarding claim 2, in addition to the teachings addressed in the claim 1 analysis, Anderson teaches the following: wherein the adding a plurality of resultant matrices respectively corresponding to a plurality of 1x1 convolution kernel elements in the filter to different sub-regions of the first output matrix, to obtain the accumulating feature of the first output matrix, comprises: determining, based on an image and a first 1x1 convolution kernel element in the filter, a first resultant matrix corresponding to the first 1x1 convolution kernel element, and adding the first resultant matrix to a respective first sub-region of the first output matrix (fig. 5, H x W by k²M, wherein the first H x W by M output is based on the first 1x1 convolution kernel element; convolution is performed according to the fig. 1 simplified code, which shows adding the first resultant matrix to a respective first sub-region of the first output matrix; section IV, second paragraph, describes an instance of fig. 1 with a 1x1 kernel; and the shift of the shift-and-add of fig. 5 determines a relative location); and performing traversal on remaining 1x1 convolution kernel elements of the plurality of 1x1 convolution kernel elements in the filter, thereby adding each of the plurality of resultant matrices corresponding to a respective one of the plurality of 1x1 convolution kernel elements in the filter to a respective different sub-region of the first output matrix, and obtaining the accumulating feature of the first output matrix (fig. 1).
Regarding claim 3, in addition to the teachings addressed in the claim 2 analysis, Anderson teaches the following: wherein the adding the first resultant matrix to a first sub-region of the first output matrix comprises: determining, based on a relative location of the first 1x1 convolution kernel element in the filter, the respective first sub-region of the first output matrix, and adding the first resultant matrix to the respective first sub-region of the first output matrix (fig. 1).

Regarding claim 4, in addition to the teachings addressed in the claim 2 analysis, Anderson teaches the following: wherein the first resultant matrix is added to the first sub-region of the first output matrix based on the formula C = α(A * B) + βC, where α = 1, β = 1, A represents the first 1x1 convolution kernel element, B represents the image, C represents the first output matrix, and A * B represents the first resultant matrix corresponding to the first 1x1 convolution kernel element.

Regarding claims 7 and 8, in addition to the teachings addressed in the claim 1 analysis, Anderson teaches the following: reserving a target memory space based on the size of the first output matrix, the target memory space being used to store the first output matrix (claim 7), wherein the target memory space is a contiguous memory (claim 8, dependent on claim 7) (section VI.B, second paragraph).

Regarding claim 9, in addition to the teachings addressed in the claim 1 analysis, Anderson teaches the following: wherein the filter has a size of K x K, and the filter comprises K² 1x1 convolution kernel elements (section IV.A, first paragraph).
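The reserved contiguous target memory space of claims 7-8 corresponds to an ordinary pre-allocation. A minimal sketch, with hypothetical sizes of our choosing:

```python
import numpy as np

# Hypothetical sizes for illustration: an H x W image and a K x K filter.
H, W, K = 8, 8, 3

# Reserve the target memory space based on the size of the first output
# matrix (claim 7). NumPy allocates a single C-contiguous block by
# default, which also satisfies claim 8's "contiguous memory".
first_out = np.zeros((H + K - 1, W + K - 1), dtype=np.float32)

print(first_out.flags["C_CONTIGUOUS"])  # True: one contiguous block
```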
Regarding claim 10, in addition to the teachings addressed in the claim 1 analysis, Anderson teaches the following: wherein the adding the plurality of resultant matrices corresponding to the plurality of 1x1 convolution kernel elements in the filter to different sub-regions of the first output matrix comprises: converting the filter with a size of K x K into K² 1x1 convolution kernel elements (section IV.A, first paragraph); determining K² resultant matrices respectively corresponding to the K² 1x1 convolution kernel elements (section IV.A, first paragraph); and adding the K² resultant matrices to different sub-regions of the first output matrix (section IV.A, first and second paragraphs).

Claim 11 is directed to an apparatus configured to execute the steps of the method of claim 1. All steps performed by the method of claim 1 are executed by the apparatus of claim 11 as configured, so the claim 1 analysis applies equally to claim 11. Furthermore, Anderson teaches a memory storing a computer program, and a processor adapted to call and execute the computer program stored in the memory to execute operations of a convolution method (section I, second paragraph; fourth paragraph, fourth bullet; section III.D, first paragraph, GEMM call).
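The claim 10 decomposition, converting a K x K filter into K² 1x1 kernel elements each tagged with the relative location that later determines its sub-region, can be illustrated as follows (a sketch; the helper name is ours, not from the claims or Anderson):

```python
import numpy as np

def filter_to_1x1_elements(filt):
    """Flatten a K x K filter into its K^2 1x1 convolution kernel
    elements, each paired with the relative (row, col) location that
    determines which sub-region its resultant matrix is added to."""
    K = filt.shape[0]
    return [((i, j), filt[i, j]) for i in range(K) for j in range(K)]

elements = filter_to_1x1_elements(np.arange(9.0).reshape(3, 3))
print(len(elements))  # 9: K^2 elements for a 3 x 3 filter
```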
Regarding claim 12, in addition to the teachings addressed in the claim 11 analysis, Anderson teaches the following: wherein the adding a plurality of resultant matrices respectively corresponding to a plurality of 1x1 convolution kernel elements in a filter to different sub-regions of a first output matrix, to obtain an accumulating feature of the first output matrix, comprises: for each of the plurality of 1x1 convolution kernel elements, acquiring a respective resultant matrix based on the 1x1 convolution kernel element and an input matrix, and adding the acquired resultant matrix to a respective sub-region of the first output matrix (fig. 5, H x W by k²M, wherein the first H x W by M output is based on the first 1x1 convolution kernel element; convolution is performed according to the fig. 1 simplified code, which shows adding the first resultant matrix to a respective first sub-region of the first output matrix; and section IV, second paragraph, describes an instance of fig. 1 with a 1x1 kernel).

Claims 13 and 17-19 are directed to an apparatus configured to execute the steps of the method of claims 3, 7-8, and 10. All steps performed by the method of claims 3, 7-8, and 10 are executed by the apparatus of claims 13 and 17-19 as configured, so the analysis of claims 3, 7-8, and 10 applies equally to claims 13 and 17-19.

Claim 20 is directed to a non-transitory computer-readable storage medium having stored thereon a computer program that, when executed by the processor of claim 11, would execute all steps of claim 11. The claim 11 analysis applies equally to claim 20.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Anderson in view of Technical Blog, NVIDIA Developer, CUTLASS: Fast Linear Algebra in CUDA C++, 2017 (hereinafter "NVIDIA Blog").

Regarding claim 4, in addition to the teachings addressed in the claim 2 analysis, Anderson teaches the following: wherein the first resultant matrix is added to the first sub-region of the first output matrix based on the formula C = α(A * B) + βC, where A represents the first 1x1 convolution kernel element, B represents the image, C represents the first output matrix, and A * B represents the first resultant matrix corresponding to the first 1x1 convolution kernel element (section VI.B, first paragraph; fig. 10). Anderson does not explicitly disclose α = 1, β = 1. However, in the same field of endeavor, NVIDIA Blog discloses the above equation in use for matrix multiplication with α = 1, β = 1. It would have been obvious to one of ordinary skill in the art before the effective filing date to choose α = 1, β = 1 according to NVIDIA Blog for a simple GEMM call for the matrix multiplication and accumulation of Anderson (NVIDIA Blog, section "Efficient Matrix Multiplication on GPUs", first paragraph).
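The formula at issue in claims 4 and 14 is the standard GEMM update. A minimal sketch, with our own function and variable names, showing why choosing α = β = 1 turns each call into a pure accumulation into the output matrix:

```python
import numpy as np

def gemm_accumulate(A, B, C, alpha=1.0, beta=1.0):
    """GEMM update C <- alpha * (A @ B) + beta * C. With alpha = beta = 1,
    each call simply adds the resultant matrix A @ B into the existing
    contents of C, i.e., a multiply-accumulate into the output."""
    return alpha * (A @ B) + beta * C

A = np.array([[2.0]])            # a 1x1 kernel element as a 1x1 matrix
B = np.array([[1.0, 2.0, 3.0]])  # one row of image values
C = np.zeros((1, 3))
C = gemm_accumulate(A, B, C)  # C holds A @ B
C = gemm_accumulate(A, B, C)  # a second product accumulates into C
```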
Regarding claim 14, in addition to the teachings addressed in the claim 12 analysis, Anderson teaches the following: for each 1x1 convolution kernel element, adding the resultant matrix corresponding to the 1x1 convolution kernel element to the respective sub-region of the first output matrix according to the formula C = α(A * B) + βC, where A represents the first 1x1 convolution kernel element, B represents the image, C represents the first output matrix, and A * B represents the first resultant matrix corresponding to the first 1x1 convolution kernel element (section VI.B, first paragraph; fig. 10). Anderson does not explicitly disclose α = 1, β = 1. However, in the same field of endeavor, NVIDIA Blog discloses the above equation in use for matrix multiplication with α = 1, β = 1. It would have been obvious to one of ordinary skill in the art before the effective filing date to choose α = 1, β = 1 according to NVIDIA Blog for a simple GEMM call for the matrix multiplication and accumulation of Anderson (NVIDIA Blog, section "Efficient Matrix Multiplication on GPUs", first paragraph).

Allowable Subject Matter

Claims 5-6 and 15-16 would be allowable if rewritten in independent form and rewritten to overcome the respective rejections under 35 USC 112(b) and 35 USC 101. The following is a statement of reasons for the indication of allowable subject matter.
Applicant claims methods, an apparatus, and a non-transitory computer-readable storage medium, wherein the method as in claim 1 comprises: a convolution method, comprising: adding a plurality of resultant matrices respectively corresponding to a plurality of 1x1 convolution kernel elements in a filter to different sub-regions of a first output matrix, to obtain an accumulating feature of the first output matrix; and extracting a second output matrix from the first output matrix with the accumulating feature, a size of the second output matrix being less than a size of the first output matrix.

Claim 2, dependent on claim 1, further comprises: wherein the adding a plurality of resultant matrices respectively corresponding to a plurality of 1x1 convolution kernel elements in the filter to different sub-regions of the first output matrix, to obtain the accumulating feature of the first output matrix, comprises: determining, based on an image and a first 1x1 convolution kernel element in the filter, a first resultant matrix corresponding to the first 1x1 convolution kernel element, and adding the first resultant matrix to a respective first sub-region of the first output matrix; and performing traversal on remaining 1x1 convolution kernel elements of the plurality of 1x1 convolution kernel elements in the filter, thereby adding each of the plurality of resultant matrices corresponding to a respective one of the plurality of 1x1 convolution kernel elements in the filter to a respective different sub-region of the first output matrix, and obtaining the accumulating feature of the first output matrix.

Claim 5, dependent on claim 2, further comprises: wherein the size of the first output matrix is given by a formula [image not reproduced in this record], where H represents a number of pixels of the image in the vertical dimension, and W represents a number of pixels of the image in the horizontal dimension.
The primary reason for the indication of allowable subject matter is the specific sizing of the first output matrix, wherein the size of the first output matrix is a direct result of the convolution algorithm. Anderson discloses the claimed invention according to the above claim mappings, but does not explicitly disclose the specific size of the output matrix claimed.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMILY E LAROCQUE, whose telephone number is (469) 295-9289. The examiner can normally be reached 10:00am - 12:00pm and 2:00pm - 8:00pm ET, M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Caldwell, can be reached at 571-272-3702. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EMILY E LAROCQUE/
Primary Examiner, Art Unit 2182

Prosecution Timeline

Mar 17, 2022
Application Filed
Dec 03, 2025
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602202
Finite State Machine-Based Bit-Stream Generator for Low-Discrepancy Stochastic Computing
2y 5m to grant • Granted Apr 14, 2026
Patent 12596475
COMPRESSION AND DECOMPRESSION OF MULTI-DIMENSIONAL DATA
2y 5m to grant • Granted Apr 07, 2026
Patent 12579414
ARTIFICIAL NEURON
2y 5m to grant • Granted Mar 17, 2026
Patent 12579214
AUGMENTING MATHEMATICAL OPTIMIZATION MODELS GENERATED FROM HISTORICAL DATA
2y 5m to grant • Granted Mar 17, 2026
Patent 12578923
METHOD AND APPARATUS FOR GENERATING ARCHITECTURE SPECIFIC CONVOLUTION GRADIENT KERNELS
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
93%
With Interview (+12.2%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 454 resolved cases by this examiner. Grant probability derived from career allow rate.
