Prosecution Insights
Last updated: April 19, 2026
Application No. 17/820,900

DEEP NEURAL NETWORK (DNN) ACCELERATORS WITH HETEROGENEOUS TILING

Status: Final Rejection (§103)
Filed: Aug 19, 2022
Examiner: STORK, KYLE R
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 2 (Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 0m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 64% (554 granted / 865 resolved; +9.0% vs TC avg)
Interview Lift: +28.3% (strong), measured across resolved cases with interview
Typical Timeline: 4y 0m average prosecution; 51 applications currently pending
Career History: 916 total applications across all art units
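The headline figures above are simple ratios. A short sanity-check script (assuming, as the dashboard's presentation suggests, that the "with interview" figure is just the career allow rate plus the interview lift in percentage points) reproduces them:

```python
# Reproduce the dashboard's headline examiner statistics.
# Assumption (not stated on the page): "with interview" = career allow rate
# plus the interview lift, in percentage points, capped at 100%.

granted = 554
resolved = 865

allow_rate = granted / resolved * 100            # career allow rate
interview_lift = 28.3                            # percentage points
with_interview = min(allow_rate + interview_lift, 100.0)

print(f"Career allow rate: {allow_rate:.1f}%")       # 64.0%
print(f"With interview:    {with_interview:.1f}%")   # 92.3%, shown as 92%
```

Both results match the displayed 64% and 92% after rounding.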

Statute-Specific Performance

§101: 14.9% (-25.1% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 865 resolved cases
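The "vs TC avg" deltas can be checked against each statute's rate. Back-calculating (assuming the deltas are percentage-point differences) recovers the same 40.0% baseline for all four statutes, which suggests the comparison is against a single Tech Center figure rather than per-statute averages:

```python
# Back out the implied Tech Center baseline from each statute's rate and its
# "vs TC avg" delta (assumed to be percentage-point differences).
rates = {"§101": (14.9, -25.1), "§103": (58.5, 18.5),
         "§102": (12.1, -27.9), "§112": (6.1, -33.9)}

for statute, (rate, delta) in rates.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")  # 40.0% in every row
```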

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This final office action is in response to the amendment filed 12 February 2026. Claims 1-25 are pending. Claims 1, 11, and 21 are independent claims.

Claim Objections

Claims 1-20 are objected to because of the following informalities: Independent claims 1, 11, and 21 are provided with the status identifier "Currently Amended." 37 CFR 1.121 states:

"(2) When claim text with markings is required. All claims being currently amended in an amendment paper shall be presented in the claim listing, indicate a status of 'currently amended,' and be submitted with markings to indicate the changes that have been made relative to the immediate prior version of the claims. The text of any added subject matter must be shown by underlining the added text. The text of any deleted matter must be shown by strike-through except that double brackets placed before and after the deleted characters may be used to show deletion of five or fewer consecutive characters. The text of any deleted subject matter must be shown by being placed within double brackets if strike-through cannot be easily perceived. Only claims having the status of 'currently amended,' or 'withdrawn' if also being amended, shall include markings. If a withdrawn claim is currently amended, its status in the claim listing may be identified as 'withdrawn— currently amended.'"

The examiner has analyzed the claims and determined that: with respect to claim 1, the applicant has amended the claim to recite, "identifying a tile set for executing tensor operations in a deep neural network (DNN), the tile set included in a hardware accelerator, …" (lines 2-3); with respect to claim 11, the applicant has amended the claim to recite, "identifying a tile set for executing tensor operations in a deep neural network (DNN), the tile set included in a hardware accelerator, …" (lines 3-4); and with respect to claim 21, the applicant has amended the claim to recite, "A deep neural network (DNN) accelerator, the DNN accelerator being a hardware accelerator comprising" (lines 1-2). Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 4, 8-9, 11-12, 14, 18-19, 21-23, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Gautham et al. (CN 112149811, published 29 December 2020, hereafter Gautham) in view of Fais et al. (US 2019/0370631, published 5 December 2019, hereafter Fais).

As per independent claim 1, Gautham discloses a method of deep learning, the method comprising:

identifying a tile set for executing tensor operations in a deep neural network (DNN), the tile set comprising a plurality of processing element (PE) arrays having different sizes (page 3, paragraph 7), each PE array comprising PEs arranged in a first number of columns and a second number of rows (page 5, paragraph 2: Here, a DNN includes a plurality of tensors having different sizes. The size of the tensor involved in performing the tensor operation may utilize increased energy efficiency specific to the layers (page 5, paragraph 3)), wherein a PE has a size determined by the first number and the second number (page 8, paragraph 2);

selecting a PE array from the plurality of PE arrays for a convolutional layer in the DNN (page 5, paragraphs 3-4: Here, the flexible scheduling-aware tensor data distribution module (FSAD) is configured to distribute the input feature (IF) and weight/filter (FL) tensor data to the processing element (PE) array based on the optimal scheduling of the layer of the DNN);

determining dimensions of an output tensor of the convolutional layer, the output tensor being a result of a convolutional operation to be performed by the PE array on an input tensor and a filter (page 8, paragraph 2: Here, the tensor data is loaded into the processing element (PE) array comprising N columns and N rows. This selection is based upon the FSAD determining the optimal scheduling (page 5, paragraphs 3-4));

partitioning the output tensor into output tensor segments based on a size of the PE array (page 9, paragraph 3: Here, the data may be partitioned and distributed to a two-dimensional PE array to leverage data parallelism. The selection is based upon the FSAD determining the optimal scheduling (page 5, paragraphs 3-4)); and

assigning workloads of generating the output tensor segments to a group of PEs in the PE array, wherein each PE in the group is to receive a workload of generating a respective output tensor segment (page 9, paragraph 3) and to perform a multiply-accumulation (MAC) operation for generating the respective output tensor segment (page 4, paragraph 2: Here, MAC operations are performed inside the arrays of PEs).

Gautham fails to specifically disclose the tile set included in a hardware accelerator.
However, Fais, which is analogous to the claimed invention because it is directed toward training deep learning models to perform convolutions on data to recognize features in input data (Fais: paragraph 0002), discloses a tile set included in a hardware accelerator (Figure 3; paragraphs 0005 and 0032: Here, the acceleration manager builds the convolutional operation for the input tensor based on tile walks on tiles (Figure 4) to be performed by the hardware accelerator (Figure 1, item 104)). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Fais with Gautham, with a reasonable expectation of success, as it would have allowed for implementing convolutions in a hardware accelerator to improve processing speed using process-specific hardware (Fais: paragraph 0020).

As per dependent claim 2, Gautham discloses:

determining dimensions of output tensors of a plurality of convolutional layers in the DNN (page 8, paragraph 2: Here, the dimensions of the tensors are determined. The size of the tensor involved in performing the tensor operation may utilize increased energy efficiency specific to the layers (page 5, paragraph 3));

identifying a set of dimensions from the dimensions of the output tensors, wherein the set of dimensions are dimensions of output tensors of multiple convolutional layers of the plurality of convolutional layers (page 8, paragraph 2); and

identifying the tile set from a plurality of tile sets based on the set of dimensions, wherein each of the plurality of tile sets is a combination of different PE arrays (page 8, paragraph 2).

As per dependent claim 4, Gautham discloses wherein selecting the PE array from the plurality of PE arrays for the convolutional layer in the DNN comprises selecting a group of PE arrays from the plurality of PE arrays for the convolutional layer in the DNN, wherein the group of PE arrays comprises the PE array (page 5, paragraphs 3-4: Here, the flexible scheduling-aware tensor data distribution module (FSAD) is configured to distribute the input feature (IF) and weight/filter (FL) tensor data to the processing element (PE) array based on the optimal scheduling of the layer of the DNN).

As per dependent claim 8, Gautham discloses wherein assigning the workloads to generate the output tensor segments to the group of PEs in the PE array comprises: for a workload of generating an output tensor segment, identifying a segment of the input tensor and a segment of the filter (page 9, paragraph 3: Here, the data may be partitioned and distributed to a two-dimensional PE array to leverage data parallelism. The selection is based upon the FSAD determining the optimal scheduling (page 5, paragraphs 3-4)); and transmitting the segment of the input tensor and the segment of the filter into a PE in the group, wherein the PE is to perform one or more MAC operations on the segment of the input tensor and the segment of the filter and to output the output tensor segment (page 4, paragraph 2 and page 9, paragraph 3: Here, MAC operations are performed inside the arrays of PEs).

As per dependent claim 9, Gautham discloses wherein the PE comprises: an input register file for storing the segment of the input tensor (page 9, paragraph 3 - page 10, paragraph 1); a weight register file for storing the segment of the filter (page 9, paragraph 3 - page 10, paragraph 1); an output register file for storing the output tensor segment (page 9, paragraph 3 - page 10, paragraph 1; page 5, paragraphs 3-4: Here, the flexible scheduling-aware tensor data distribution module (FSAD) is configured to distribute the input feature (IF) and weight/filter (FL) tensor data to the processing element (PE) array based on the optimal scheduling of the layer of the DNN); and a MAC unit for performing the one or more MAC operations (page 4, paragraph 2: Here, MAC operations are performed inside the arrays of PEs).

As per dependent claim 10, Gautham discloses the limitations similar to those in claim 8, and the same rejection is incorporated herein. Gautham discloses an output segment (page 9, paragraph 3 - page 10, paragraph 1; page 5, paragraphs 3-4: Here, the flexible scheduling-aware tensor data distribution module (FSAD) is configured to distribute the input feature (IF) and weight/filter (FL) tensor data to the processing element (PE) array based on the optimal scheduling of the layer of the DNN). However, Gautham fails to specifically disclose wherein the output is one or more integer values or one or more floating-point values.
However, the examiner takes official notice that it was notoriously well known in the art at the time of the applicant's effective filing date to calculate a value and output an integer or floating-point value. It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined this well-known practice with Gautham, with a reasonable expectation of success, as it would have allowed for outputting data in a known data format type.

With respect to claims 11-12, 14, and 18-20, the applicant discloses limitations substantially similar to those in claims 1-2, 4, and 8-10, respectively. Claims 11-12, 14, and 18-20 are similarly rejected.

With respect to claim 21, the applicant discloses limitations substantially similar to those in claim 1. Claim 21 is similarly rejected.

As per dependent claim 22, Gautham discloses wherein the DNN accelerator further comprises a plurality of tile sets that includes the tile set, and each of the plurality of tile sets is a combination of different PE arrays (page 5, paragraphs 3-4: Here, the flexible scheduling-aware tensor data distribution module (FSAD) is configured to distribute the input feature (IF) and weight/filter (FL) tensor data to the processing element (PE) array based on the optimal scheduling of the layer of the DNN).

As per dependent claim 23, Gautham discloses wherein the DNN comprises a plurality of convolutional layers, and the tile set is selected from the plurality of tile sets based on one or more of the plurality of convolutional layers (page 5, paragraphs 3-4: Here, the flexible scheduling-aware tensor data distribution module (FSAD) is configured to distribute the input feature (IF) and weight/filter (FL) tensor data to the processing element (PE) array based on the optimal scheduling of the layer of the DNN).

With respect to claim 25, the applicant discloses limitations substantially similar to those in claim 1. Claim 25 is similarly rejected.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Gautham and Fais, and further in view of Biswas et al. (US 2020/0134833, published 30 April 2020, hereafter Biswas).

As per dependent claim 3, Gautham and Fais disclose the limitations similar to those in claim 2, and the same rejection is incorporated herein. Gautham discloses identifying the plurality of convolutional layers from all convolutional layers in the DNN (page 5, paragraph 2: Here, a plurality of layers of the DNN are identified and a tensor is selected for implementing the processing of the layer). Gautham fails to specifically disclose wherein the dimensions of the layer are within one or more predetermined dimension ranges. However, Biswas, which is analogous to the claimed invention because it is directed toward a neural network architecture, discloses wherein the dimensions of the layer are within one or more predetermined dimension ranges (Figure 4; paragraph 0059: Here, a selection of a convolutional layer is done based upon a predetermined set of dimensions based upon the number of bytes). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Biswas with Gautham, with a reasonable expectation of success, as it would have allowed for assigning processing to layers capable of performing the processing.

With respect to claim 13, the applicant discloses limitations substantially similar to those in claim 3. Claim 13 is similarly rejected.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Gautham and Fais, and further in view of Zhang et al. (US 2023/0259758, filed 16 February 2022, hereafter Zhang).

As per dependent claim 5, Gautham and Fais disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein.
Gautham further discloses determining the dimensions of the output tensor based on the input tensor (page 5, paragraph 3: Here, the output activation/output feature (OF) is generated with respect to the input feature (IF) at runtime). However, Gautham fails to specifically disclose determining a dimension of the tensor based on a number of kernels in the filter and dimensions of the kernels.

However, Zhang, which is analogous to the claimed invention because it is directed toward adaptive tensors for neural networks, discloses determining a dimension of the tensor based on a number of kernels in the filter and dimensions of the kernels (paragraph 0004: Here, a tensor compute kernel and its shape and dimensions are determined based upon multiple kernel filters). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Zhang with Gautham, with a reasonable expectation of success, as it would have allowed for more efficient use of tensors based upon tailoring the tensor to the kernel filters and dimensions (Zhang: paragraph 0002).

With respect to claim 15, the applicant discloses limitations substantially similar to those in claim 5. Claim 15 is similarly rejected.

Claims 6-7, 16-17, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Gautham and Fais, and further in view of Whatmough et al. (US 2023/0076138, filed 9 September 2021, hereafter Whatmough).

As per dependent claim 6, Gautham and Fais disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Gautham discloses the output tensor comprising a set of output channels (page 5, paragraph 2).
Gautham fails to specifically disclose: each output channel comprising a matrix; and wherein the dimensions of the output tensor comprise a first dimension indicating a number of elements in a row in the matrix, a second dimension indicating a number of elements in a column in the matrix, and a third dimension indicating a number of output channels in the set of output channels.

However, Whatmough, which is analogous to the claimed invention because it is directed toward a tensor output including an output matrix, discloses: each output channel comprising a matrix (paragraph 0045); and the dimensions of the output tensor comprising a first dimension indicating a number of elements in a row in the matrix, a second dimension indicating a number of elements in a column in the matrix, and a third dimension indicating a number of output channels in the set of output channels (paragraph 0045: Here, each tensor includes a height, width, and depth. This includes elements in a row, column, and output channels). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Whatmough with Gautham, with a reasonable expectation of success, as it would have allowed for encoding data in a matrix (Whatmough: paragraph 0021).

As per dependent claim 7, Gautham, Fais, and Whatmough disclose the limitations similar to those in claim 6, and the same rejection is incorporated herein.
Whatmough discloses determining a fourth dimension and a fifth dimension of each output tensor segment based on the first number (paragraph 0045), and determining a sixth dimension based on the second number (paragraph 0045), wherein the fourth dimension indicates a number of elements in a row in the matrix, the fifth dimension indicates a number of elements in a column in the matrix, and the sixth dimension indicates a number of output channels in the set of output channels (paragraph 0045: Here, each cell in the matrix may further include a matrix corresponding to an input data matrix, a weight matrix, and an output matrix). It would have been obvious to one of ordinary skill in the art at the time of the applicant's effective filing date to have combined Whatmough with Gautham, with a reasonable expectation of success, as it would have allowed for encoding data in a matrix (Whatmough: paragraph 0021).

With respect to claims 16-17, the applicant discloses limitations substantially similar to those in claims 6-7, respectively. Claims 16-17 are similarly rejected. With respect to claim 24, the applicant discloses limitations substantially similar to those in claim 6. Claim 24 is similarly rejected.

Response to Arguments

Applicant's arguments have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Gautham and Fais. The factual assertions set forth in the Office Action dated 13 November 2025 have not been traversed. According to MPEP 2144.03(C), the official notice statement is taken to be admitted prior art because the applicant failed to traverse the examiner's assertion.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Yazdanbakhsh et al. (US 2023/0376664): Discloses a hardware accelerator for performing a machine learning task by repeatedly selecting a respective value for each hardware parameter, determining a candidate hardware architecture, evaluating the candidate, and generating a final hardware architecture (Abstract).

Rodgers et al. (US 11853734): Discloses identifying sequences of tileable source code that can be replaced by tensor operations that invoke a special-purpose hardware accelerator, and replacing these instructions with tensor operations that invoke the hardware accelerator (Abstract).

Zivkovic et al. (US 2023/0205730): Discloses a hybrid architecture that designates computationally intensive blocks to a hardware accelerator to maintain flexibility (Abstract).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE R STORK whose telephone number is (571)272-4130. The examiner can normally be reached 8am - 2pm; 4pm - 6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez Rivas, can be reached at 571/272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE R STORK/
Primary Examiner, Art Unit 2128
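To make the rejected claim language concrete, here is a schematic sketch of the flow claim 1 recites: determining output-tensor dimensions from the filter, partitioning the output tensor into segments based on the PE array size, and having each PE generate its segment via multiply-accumulate. All sizes and names are hypothetical illustrations, not the application's or the cited references' actual designs.

```python
import random

# Illustrative sketch only: hypothetical sizes, not the claimed accelerator.

def conv_output_dims(in_h, in_w, num_kernels, k, stride=1, pad=0):
    """Standard convolution shape arithmetic: channel count = number of kernels."""
    out_h = (in_h + 2 * pad - k) // stride + 1
    out_w = (in_w + 2 * pad - k) // stride + 1
    return num_kernels, out_h, out_w

K = 3
inp = [[random.random() for _ in range(10)] for _ in range(10)]   # input tensor
filt = [[random.random() for _ in range(K)] for _ in range(K)]    # one kernel
_, H, W = conv_output_dims(10, 10, 1, K)                          # 8x8 output plane

PE_ROWS, PE_COLS = 4, 4                      # a 4x4 PE array -> 16 segments
seg_h, seg_w = H // PE_ROWS, W // PE_COLS    # each PE owns a 2x2 output segment
out = [[0.0] * W for _ in range(H)]

def pe_mac(i, j):
    """One PE's MAC step: multiply a KxK input window by the filter, accumulate."""
    acc = 0.0
    for a in range(K):
        for b in range(K):
            acc += inp[i + a][j + b] * filt[a][b]
    return acc

for pr in range(PE_ROWS):                    # (pr, pc) models one PE's workload
    for pc in range(PE_COLS):
        for i in range(pr * seg_h, (pr + 1) * seg_h):
            for j in range(pc * seg_w, (pc + 1) * seg_w):
                out[i][j] = pe_mac(i, j)     # the PE fills its output segment
```

The sketch also shows why claim 5's limitation is standard convolution arithmetic: the output's third dimension comes directly from the number of kernels in the filter, and the spatial dimensions from the kernel size, stride, and padding.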

Prosecution Timeline

Aug 19, 2022: Application Filed
Feb 16, 2023: Response after Non-Final Action
Nov 10, 2025: Non-Final Rejection (§103)
Jan 28, 2026: Interview Requested
Feb 11, 2026: Examiner Interview Summary
Feb 11, 2026: Applicant Interview (Telephonic)
Feb 12, 2026: Response Filed
Mar 07, 2026: Final Rejection (§103)
Apr 13, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585935: EXECUTION BEHAVIOR ANALYSIS TEXT-BASED ENSEMBLE MALWARE DETECTOR (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585937: SYSTEMS AND METHODS FOR DEEP LEARNING ENHANCED GARBAGE COLLECTION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585869: RECOMMENDATION PLATFORM FOR SKILL DEVELOPMENT (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579454: PROVIDING EXPLAINABLE MACHINE LEARNING MODEL RESULTS USING DISTRIBUTED LEDGERS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579412: SPIKE NEURAL NETWORK CIRCUIT INCLUDING SELF-CORRECTING CONTROL CIRCUIT AND METHOD OF OPERATION THEREOF (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 64%
With Interview: 92% (+28.3%)
Median Time to Grant: 4y 0m
PTA Risk: Moderate
Based on 865 resolved cases by this examiner. Grant probability derived from career allow rate.
