Prosecution Insights
Last updated: April 19, 2026
Application No. 17/859,769

ITERATION ENGINE FOR THE COMPUTATION OF LARGE KERNELS IN CONVOLUTIONAL ACCELERATORS

Non-Final OA §103
Filed: Jul 07, 2022
Examiner: YAARY, MICHAEL D
Art Unit: 2151
Tech Center: 2100 — Computer Architecture & Software
Assignee: STMicroelectronics
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 87% (872 granted / 1001 resolved; +32.1% vs TC avg; above average)
Interview Lift: +8.0% (moderate) among resolved cases with interview
Typical Timeline: 3y 2m avg prosecution; 18 currently pending
Career History: 1019 total applications across all art units

Statute-Specific Performance

§101: 24.5% (-15.5% vs TC avg)
§103: 33.9% (-6.1% vs TC avg)
§102: 21.6% (-18.4% vs TC avg)
§112: 9.0% (-31.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 1001 resolved cases.

Office Action

§103
DETAILED ACTION

1. Claims 1-25 are pending in the application.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1, 6-7, 11, 13-14, 16, 17, 19, 20, 22, and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Boesch et al. (hereafter Boesch) (US Pub. 2018/0189642) in view of Nicol (US Pub. 2020/0167309). Boesch was cited in the IDS filed 04/05/2023.

5.
As to claim 1, Boesch discloses a convolutional accelerator ([0075] convolution accelerator), comprising: a feature line buffer ([0075] feature line buffer); a kernel buffer ([0075] kernel buffer); a multiply-accumulate cluster coupled to the feature line buffer and the kernel buffer ([0075] a multiply-accumulate (MAC) unit module having a plurality of MAC units arranged to multiply data passed from the kernel buffer with data passed from the feature line buffer).

6. Boesch does not teach or suggest at least iteration control circuitry, which, in operation, defines a plurality of sub-tensors of a streamed feature data tensor, wherein the convolutional accelerator, in operation, decomposes a kernel into a plurality of sub-kernels and iteratively convolves the sub-kernels with respective sub-tensors of the defined plurality of sub-tensors of the streamed feature data tensor. However, Nicol discloses at least iteration control circuitry ([0042] iterative techniques), which, in operation, defines a plurality of sub-tensors of a streamed feature data tensor ([0089]-[0091]), wherein the convolutional accelerator, in operation, decomposes a kernel into a plurality of sub-kernels ([0031] and [0058] kernels decomposed/partitioned into sub-kernels) and iteratively convolves the sub-kernels with respective sub-tensors of the defined plurality of sub-tensors of the streamed feature data tensor ([0113] convolution operations).

7. Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the teachings of Boesch by implementing the iteration control circuitry as in Nicol, for the benefit of more efficient data analysis and tensor computations (Nicol [0005]-[0007]).
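The rejected independent claim describes a standard large-kernel decomposition: a big kernel is split into sub-kernels, each sub-kernel is convolved with a correspondingly shifted window (sub-tensor) of the feature data, and the partial results are accumulated into the full-kernel output. A minimal NumPy sketch of the idea, using a single-channel 2-D "valid" correlation as a simplification (all function names and the geometry here are illustrative, not taken from the application or the cited references):

```python
import numpy as np

def corr2d_valid(x, k):
    """Naive 'valid' 2-D cross-correlation (reference implementation)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def decomposed_corr2d(x, k, sub_h, sub_w):
    """Compute the same result by splitting the large kernel into sub-kernels
    and iteratively accumulating each sub-kernel's partial convolution over
    its shifted window (sub-tensor) of the feature data."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    acc = np.zeros((oh, ow))
    for r0 in range(0, kh, sub_h):          # one iteration per sub-kernel row
        for c0 in range(0, kw, sub_w):      # ... per sub-kernel column
            sub = k[r0:r0 + sub_h, c0:c0 + sub_w]
            # The sub-tensor a sub-kernel sees is the feature data shifted
            # by the sub-kernel's position inside the full kernel.
            part = corr2d_valid(x[r0:, c0:], sub)
            acc += part[:oh, :ow]
    return acc

rng = np.random.default_rng(0)
x = rng.standard_normal((12, 12))   # feature data
k = rng.standard_normal((6, 6))     # large kernel, split into 3x3 sub-kernels
assert np.allclose(decomposed_corr2d(x, k, 3, 3), corr2d_valid(x, k))
```

Because correlation is linear in the kernel, summing the shifted partial results reproduces the large-kernel output exactly; this is what lets a fixed-size MAC cluster handle kernels larger than its native window.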
8. As to claims 6, 13, 16, 19, and 24, the combination of Boesch and Nicol discloses wherein the streamed feature data tensor is organized into a number of batches, each batch having a same height, a same width and a same depth, and an iteration for a sub-kernel has an iteration length equal to the number of batches (Boesch [0152] and Nicol [0069]).

9. As to claims 7, 14, and 20, the combination of Boesch and Nicol discloses wherein the streamed feature data tensor is repeatedly streamed to the convolutional accelerator during the iterative convolving of the sub-kernels with the respective sub-tensors (Nicol [0089]-[0091]).

10. As to claims 11, 17, 22, and 25, the claims are rejected for similar reasons as claim 1 above.

11. Claims 2-3, 12, 18, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Boesch and Nicol and further in view of Ovsiannikov (US Pub. 2020/0336155).

12. As to claims 2, 12, and 18, the combination of Boesch and Nicol does not disclose wherein the iteration control circuitry, in operation, generates sets of pointers to define windows of the streamed feature data tensor, the windows corresponding to respective sub-tensors of the plurality of sub-tensors. However, Ovsiannikov discloses wherein the iteration control circuitry, in operation, generates sets of pointers to define windows of the streamed feature data tensor, the windows corresponding to respective sub-tensors of the plurality of sub-tensors ([0124]-[0128] pointers, windows).

13. Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the teachings of Boesch and Nicol with the set of pointers generated as in Ovsiannikov, for the benefit of compressing and decompressing multichannel bit streams in parallel (Ovsiannikov [0002]).

14.
As to claims 3 and 23, the combination of Boesch, Nicol, and Ovsiannikov discloses wherein a set of pointers defining a respective window comprises a first line pointer, a last line pointer, a first column pointer, and a last column pointer (Ovsiannikov [0128]).

Allowable Subject Matter

15. Claims 4-5, 8-10, 15, and 21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Claim 4 recites at least: generates the first line pointer based on a vertical position of the sub-kernel in the kernel, and a vertical iteration offset parameter defined for the kernel decomposition; generates the last line pointer based on the vertical position of the sub-kernel in the kernel, a number of vertical iterations parameter defined for the kernel decomposition, the vertical iteration offset parameter, and a height of the streamed feature data tensor; generates the first column pointer based on the horizontal position of the sub-kernel in the kernel, and a horizontal iteration offset parameter defined for the kernel decomposition.
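The pointer arithmetic indicated as allowable in claim 4 can be illustrated with a small sketch. Everything below is hypothetical: the claims recite only which quantities each pointer is derived from, so the exact formulas, the field names, and the "valid"-convolution geometry are assumptions made for illustration:

```python
def window_pointers(sub_row, sub_col, p):
    """Generate the pointer set (first/last line, first/last column) for the
    window of the streamed feature tensor swept by the sub-kernel at grid
    position (sub_row, sub_col).  All names and formulas are illustrative."""
    kernel_h = p["iter_nr_v"] * p["iter_offset_v"]   # reconstructed kernel height
    kernel_w = p["iter_nr_h"] * p["iter_offset_h"]   # reconstructed kernel width
    r0 = sub_row * p["iter_offset_v"]                # vertical sub-kernel position
    c0 = sub_col * p["iter_offset_h"]                # horizontal sub-kernel position
    first_line = r0                                   # from vertical position + offset param
    last_line = p["tensor_h"] - 1 - (kernel_h - r0 - p["iter_offset_v"])
    first_col = c0                                    # from horizontal position + offset param
    last_col = p["tensor_w"] - 1 - (kernel_w - c0 - p["iter_offset_h"])
    return first_line, last_line, first_col, last_col

# A 6x6 kernel split 2x2 into 3x3 sub-kernels over a 12x12 feature tensor:
params = {"iter_offset_v": 3, "iter_offset_h": 3,
          "iter_nr_v": 2, "iter_nr_h": 2,
          "tensor_h": 12, "tensor_w": 12}
assert window_pointers(0, 0, params) == (0, 8, 0, 8)
assert window_pointers(1, 1, params) == (3, 11, 3, 11)
```

Note how each window spans sub-kernel height + output height - 1 lines (here 3 + 7 - 1 = 9), i.e. exactly the sub-tensor its sub-kernel must see, which is why the last-line pointer depends on the tensor height, the number of vertical iterations, and the vertical offset, as claim 4 recites.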
Claims 8, 15, and 21 recite at least wherein the convolutional accelerator, in operation, defines decomposition control parameters including: an iteration period, ITER_PERIOD, defining a length of an iteration of the convolving of a sub-kernel with a respective sub-tensor; a horizontal offset, ITER_OFFSET_H, defining an offset between adjacent sub-kernels in the horizontal direction; a vertical offset, ITER_OFFSET_V, defining an offset between adjacent sub-kernels in the vertical direction; a number of horizontal operations, ITERNRH, defining a number of horizontal operations performed during an iteration associated with a sub-kernel; and a number of vertical operations, ITERNRV, defining a number of vertical operations performed during an iteration associated with a sub-kernel.

The closest prior art of record, US Pub. 2018/0189642, teaches a configurable accelerator framework device that includes a stream switch and a plurality of convolution accelerators. The stream switch has a plurality of input ports and a plurality of output ports. Each of the input ports is configurable at run time to unidirectionally pass data to any one or more of the output ports via a stream link. Each one of the plurality of convolution accelerators is configurable at run time to unidirectionally receive input data via at least two of the plurality of stream switch output ports, and each one of the plurality of convolution accelerators is further configurable at run time to unidirectionally communicate output data via an input port of the stream switch. However, the prior art of record does not teach the limitations above as claimed.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL D YAARY, whose telephone number is (571) 270-1249. The examiner can normally be reached Mon-Fri 9-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, James Trujillo, can be reached at (571) 272-3677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL D. YAARY/
Primary Examiner, Art Unit 2151
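The decomposition control parameters recited in the allowable claims 8, 15, and 21 map naturally onto a small configuration record. The sketch below is hypothetical: only the parameter names (ITER_PERIOD, ITER_OFFSET_H, ITER_OFFSET_V, ITERNRH, ITERNRV) come from the office action; tying ITER_PERIOD to the batch count follows the claim 6 limitation that an iteration's length equals the number of batches, and the rest of the arithmetic is an assumption:

```python
def decomposition_params(kernel_h, kernel_w, sub_h, sub_w, n_batches):
    """Derive the claimed control parameters from kernel/sub-kernel geometry.
    The mapping is illustrative, not taken from the application."""
    return {
        "ITER_OFFSET_V": sub_h,            # vertical step between adjacent sub-kernels
        "ITER_OFFSET_H": sub_w,            # horizontal step between adjacent sub-kernels
        "ITERNRV": -(-kernel_h // sub_h),  # vertical ops per iteration (ceil division)
        "ITERNRH": -(-kernel_w // sub_w),  # horizontal ops per iteration (ceil division)
        "ITER_PERIOD": n_batches,          # iteration length = number of batches (claim 6)
    }

# A 6x6 kernel split into 3x3 sub-kernels, feature data in 4 batches:
cfg = decomposition_params(kernel_h=6, kernel_w=6, sub_h=3, sub_w=3, n_batches=4)
assert cfg["ITERNRV"] == 2 and cfg["ITERNRH"] == 2 and cfg["ITER_PERIOD"] == 4
```

Ceil division handles kernel dimensions that are not exact multiples of the sub-kernel size, in which case the edge sub-kernels would be ragged.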

Prosecution Timeline

Jul 07, 2022: Application Filed
Dec 19, 2025: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591537: INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND STORAGE MEDIUM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591411: SYSTEM AND METHOD TO ACCELERATE GRAPH FEATURE EXTRACTION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585434: COMPUTING DEVICE AND METHOD (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585430: FLOATING-POINT CONVERSION WITH DENORMALIZATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585725: NON-RECTANGULAR MATRIX COMPUTATIONS AND DATA PATTERN PROCESSING USING TENSOR CORES (granted Mar 24, 2026; 2y 5m to grant)

Study what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 95% (+8.0%)
Median Time to Grant: 3y 2m
PTA Risk: Low

Based on 1001 resolved cases by this examiner. Grant probability derived from career allow rate.
