Prosecution Insights
Last updated: April 18, 2026
Application No. 17/851,306

NEURAL NETWORK COMPRISING MATRIX MULTIPLICATION

Non-Final OA: §103, §112, §DP
Filed: Jun 28, 2022
Examiner: LE, PHAT NGOC
Art Unit: 2182
Tech Center: 2100 — Computer Architecture & Software
Assignee: Imagination Technologies Limited
OA Round: 1 (Non-Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 4y 2m
Grant Probability with Interview: 0%

Examiner Intelligence

Career Allow Rate: 67%, above average (4 granted / 6 resolved; +11.7% vs Tech Center average)
Interview Lift: -66.7% (grant rate 0% with an interview vs 67% without, among resolved cases with an interview)
Typical Timeline: 4y 2m average prosecution; 29 applications currently pending
Career History: 35 total applications across all art units

Statute-Specific Performance

§101: 15.9% (-24.1% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§103: 39.2% (-0.8% vs TC avg)
§112: 33.3% (-6.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 6 resolved cases.

Office Action

Grounds: §103, §112, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

“Mapping unit” in claim 16.

“Memory manipulation module” in claims 13 and 17.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

As to the memory manipulation module of claims 13 and 17, the examiner interprets the means-plus-function limitation to cover the corresponding structure: the memory reading block, internal buffer, and memory writing block disclosed in Fig. 5 and [00107] of the applicant’s specification. As to claim 16’s mapping unit, corresponding structure does not appear to be disclosed in the applicant’s specification.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 16-18 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 16’s limitation of “mapping unit” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to provide an adequate description of the structure, material, or acts to perform the claimed functions of this limitation. See the rejection under 35 U.S.C. 112(b) below for further details. Claims 17-18 are rejected as being dependent on claim 16 without correcting the issue.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 11-12 and 16-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 11-12 recite the limitation "one of the tensors". It is unclear whether the set of tensors from which one is being picked consists of all previously declared tensors or a subset. For examination purposes, the examiner interprets the set of tensors to consist of: a first tensor, a second tensor, a derived first tensor, and a derived second tensor.

The claim limitation “mapping unit” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The specification appears to disclose only the functions of the mapping unit, without any suggestion as to the structure performing those functions. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;

(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(b) Stating on the record what corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms.
The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 8, 14-16, 17-19, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 11, 12-14, 16-18, and 20 of U.S. Patent No. 12,488,253 (hereinafter “the ‘253 patent”). Although the claims at issue are not identical, they are not patentably distinct from each other because application claims 1, 8, 14-16, 17-19, and 20 are anticipated by patent claims 1, 11, 12-14, 16-18, and 20.

Claim 1 in the instant application recites a method of mapping a matrix multiplication operation including at least one element-wise operation, while claim 1 of the ‘253 patent recites a method of mapping a matrix multiplication operation including at least one convolution operation and at least one transformation operation. It is understood by one of ordinary skill in the art that a convolution is made up of a series of element-wise operations, as it is the multiplication of pairs of data and summation of those products (see the Wikipedia page on Convolution, of note the Discrete Convolution section). Therefore, including at least one convolution operation anticipates including at least one element-wise operation. The same analysis applies for claim 16 in the instant application with claim 14 of the ‘253 patent, and for claim 20 in the instant application with claim 20 of the ‘253 patent.

Claim chart: Current Application 17/851,306 vs. U.S. Patent No. 12,488,253.

Application claim 1 / Patent claim 1. Both recite: “A method of implementing, using a neural network accelerator comprising fixed-function hardware, a neural network comprising a plurality of layers, wherein at least one of the layers comprises a matrix multiplication operation defined in two or more dimensions between a first tensor X having dimensions [..., P,..., Q,...] and a second tensor Y having dimensions [..., Q,..., R,...], the method comprising:”. The application continues: “mapping the matrix multiplication operation to a graph of neural network operations including at least one element-wise operation; evaluating the graph of neural network operations to thereby evaluate the matrix multiplication operation; wherein the at least one element-wise operation is evaluated in the fixed-function hardware.” The patent continues: “mapping the matrix multiplication operation to a graph of neural network operations including at least one transformation and at least one convolution operation; and evaluating the graph of neural network operations to thereby evaluate the matrix multiplication operation, wherein the at least one convolution operation is evaluated in the fixed-function hardware; and implementing, in said fixed-function hardware, said neural network to include a matrix multiplication layer configured to perform a matrix multiplication operation in dependence on the evaluation of the graph.”

Application claim 8 / Patent claim 11. Both recite: “The method of claim 1, wherein the first tensor X has dimensions [M, N, P, 1] and the second tensor Y has dimensions [M', N', 1, R].”

Application claim 14 / Patent claim 12. The application recites: “The method of claim 1, further comprising, before mapping the matrix multiplication operation to the graph of neural network operations: analysing the matrix multiplication operation; and determining, based on a result of the analysing, how to implement the matrix multiplication operation, comprising determining that the matrix multiplication operation should be implemented using the at least one element-wise operation, and rejecting at least one alternative method for implementing the matrix multiplication operation.” The patent recites the same limitations, except that the determining step recites “using the at least one transformation and the at least one convolution operation.”

Application claim 15 / Patent claim 13. Both recite (depending respectively from claim 14 / claim 12): “wherein the determining how to implement the matrix multiplication operation is based on one or more of: a size of the first tensor in one or more dimensions; a size of the second tensor in one or more dimensions; a memory-access bandwidth required to implement the matrix multiplication operation using the selected method; a memory size required to implement the matrix multiplication operation using the selected method; a number of hardware passes through the fixed-function hardware that will be required to implement the matrix multiplication operation using the selected method; an execution time on the fixed-function hardware that will be required to implement the matrix multiplication operation using the selected method; a power consumption required to implement the matrix multiplication operation using the selected method; and a capability of the fixed-function hardware.”

Application claim 17 / Patent claims 14 and 16. The application recites: “The data processing system of claim 16, wherein the graph of neural network operations further comprises at least one transformation, applied to the first tensor X and/or the second tensor Y; wherein the data processing system comprises a memory manipulation module for manipulating data stored in a memory; and wherein the data processing system is configured to perform the at least one transformation using the memory manipulation module.” Patent claim 14 recites “the data processing system comprising: a mapping unit, configured to map the matrix multiplication operation to a graph of neural network operations including at least one transformation”, and patent claim 16 recites: “The data processing system of claim 14, further comprising a memory manipulation module, for manipulating data stored in a memory, wherein the at least one transformation is performed using the memory manipulation module.”

Application claim 18 / Patent claim 17. Both recite (depending respectively from claim 17 / claim 16): “wherein the memory manipulation module comprises: an internal buffer; a memory reading block, configured to read data from the memory and write the data to the internal buffer; a memory writing block, configured to read the data from the internal buffer and write the data to the memory; and a control channel between the memory reading block and the memory writing block, wherein the memory reading block and the memory writing block are configured to communicate via the control channel to maintain synchronisation between them when writing the data to the internal buffer and reading the data from the internal buffer, respectively.”

Application claim 19 / Patent claim 18. Both recite: “The method of claim 1, wherein the layer comprising the matrix multiplication operation is a classification layer for classifying an input to the neural network into one of a number of categories.”

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 8-9, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sworna et al. (“A LUT-Based Matrix Multiplication Using Neural Networks,” hereinafter “Sworna”) in view of Zhang et al. (“Caffeine: Towards Uniformed Representation and Acceleration for Deep Convolutional Neural Networks,” provided in the IDS filed 01/31/2023, hereinafter “Zhang”).

As per claim 1, Sworna teaches a method of implementing, using a neural network accelerator comprising fixed-function hardware, a neural network comprising a plurality of layers, wherein at least one of the layers comprises a matrix multiplication operation defined in two or more dimensions between a first tensor X having dimensions [..., P,..., Q,...] and a second tensor Y having dimensions [..., Q,..., R,...] (Sworna: Section IV equation (1), wherein i corresponds to P, k corresponds to Q, and j corresponds to R), the method comprising: evaluating the graph of neural network operations to thereby evaluate the matrix multiplication operation (Sworna: Fig. 2; section IV.4); wherein the at least one element-wise operation is evaluated in the fixed-function hardware (Sworna: Fig. 3; pg. 1984 right col).

However, while Sworna discloses a method of matrix multiplication using neural networks, Sworna does not explicitly disclose the act of mapping the matrix multiplication. Thus, Sworna does not teach mapping the matrix multiplication operation to a graph of neural network operations including at least one element-wise operation. Zhang teaches mapping the matrix multiplication operation to a graph of neural network operations including at least one element-wise operation (Section VI.A, of note the section describes mapping high-level network definitions to customized instructions for specialized hardware). Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify, with a reasonable expectation of success, the method of Sworna with the mapping of Zhang. One would have been motivated to combine these references because both references disclose neural network accelerators, and combining prior art elements according to known methods would yield predictable results (how Sworna can command the proposed matrix multiplication method).

As per claim 2, Sworna/Zhang further teaches the method of claim 1, wherein the graph of neural network operations further comprises at least one transformation, applied to the first tensor X and/or the second tensor Y (Zhang: Section IV.A.3).

As per claim 8, Sworna/Zhang further teaches the method of claim 1, wherein the first tensor X has dimensions [M, N, P, 1] and the second tensor Y has dimensions [M', N', 1, R] (Sworna: pg. 1983 left col; Fig. 2, wherein both input tensors are processed on a one-dimensional vector basis regardless of their original dimensions).

As per claim 9, Sworna/Zhang further teaches the method of claim 1, wherein: the first tensor X has dimensions [M, N, P, 1] and the second tensor Y has dimensions [M', N', 1, R]; and the element-wise operation comprises an element-wise multiplication of the first tensor with the second tensor (Sworna: pg. 1983 left col; Fig. 2, wherein both input tensors are processed on a one-dimensional vector basis regardless of their original dimensions).

As per claim 16, the claim is directed to a data processing system that implements the same or similar features as the method of claim 1, and is therefore rejected for at least the same reasons.

As per claim 20, the claim is directed to a non-transitory computer readable storage medium that implements the same or similar features as the method of claim 1, and is therefore rejected for at least the same reasons.
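As a technical aside, the decomposition the examiner reads onto claims 8-9, a matrix multiplication realised as a single broadcast element-wise multiply followed by a reduction, can be pictured in a few lines of NumPy. This is an illustrative sketch only: the (P, Q, 1) x (1, Q, R) axis layout is an assumption chosen for clarity, not the applicant's or Sworna's actual mapping, and the claimed four-dimensional [M, N, P, 1] x [M', N', 1, R] layout is analogous, with the shared dimension carried in the broadcast axes.

import numpy as np

P, Q, R = 4, 5, 3
X = np.random.rand(P, Q)   # first tensor
Y = np.random.rand(Q, R)   # second tensor

# Insert singleton axes so broadcasting aligns the shared dimension Q.
products = X[:, :, None] * Y[None, :, :]   # element-wise multiply, shape (P, Q, R)
C = products.sum(axis=1)                   # reduce over Q, shape (P, R)

assert np.allclose(C, X @ Y)               # matches a direct matrix multiply

Broadcasting here is implicit; materialising the repeats instead (e.g., with np.broadcast_to or np.repeat before the multiply) corresponds to the explicit-repetition variants discussed for claims 10-12 below.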
Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Sworna/Zhang in further view of Chen et al. (“Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices,” hereinafter “Chen”).

As per claim 10, Sworna/Zhang teaches the method of claim 9. However, Sworna does not explicitly teach wherein the element-wise multiplication is performed using broadcasting over two dimensions. Chen teaches wherein the element-wise multiplication is performed using broadcasting over two dimensions (Chen: Fig. 5(b); section II, wherein the mesh network is 2-dimensional). Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify, with a reasonable expectation of success, the memory accessing of Sworna with the network-on-chip of Chen. One would have been motivated to combine these references because both references disclose neural network accelerators, and the on-chip network of Chen improves energy efficiency and processing speeds (Chen: section I.C).

As per claim 11, Sworna/Zhang teaches the method of claim 9. However, Sworna does not explicitly teach wherein the element-wise multiplication is performed using broadcasting over one dimension and repeating one of the tensors over the other dimension. Chen teaches wherein the element-wise multiplication is performed using broadcasting over one dimension and repeating one of the tensors over the other dimension (Chen: Fig. 8(c); section III.B). Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify, with a reasonable expectation of success, the memory accessing of Sworna with the network-on-chip of Chen, for at least the same reasons as discussed above in claim 10.

As per claim 12, Sworna/Zhang teaches the method of claim 9. However, Sworna does not explicitly teach wherein the element-wise multiplication comprises repeating one of the tensors over one dimension and repeating the other of the tensors over the other dimension. Chen teaches wherein the element-wise multiplication comprises repeating one of the tensors over one dimension and repeating the other of the tensors over the other dimension (Chen: Fig. 8(e); section III.B). Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify, with a reasonable expectation of success, the memory accessing of Sworna with the network-on-chip of Chen, for at least the same reasons as discussed above in claim 10.

Claims 13 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Sworna/Zhang in further view of Kumar et al. (US 20200174747 A1, hereinafter “Kumar”).

As per claim 13, Sworna/Zhang further teaches the method of claim 1, wherein the at least one transformation is performed (Zhang: Section IV.A.3). However, while Sworna/Zhang discloses rearranging the matrix data, Sworna/Zhang does not explicitly disclose circuitry for performing the function. Thus, Sworna/Zhang does not teach performing the transformation at least in part using a memory manipulation module configured to manipulate data stored in a memory. Kumar teaches at least in part using a memory manipulation module configured to manipulate data stored in a memory (Kumar: Fig. 3; [0046]). Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify, with a reasonable expectation of success, the data arrangement of Sworna with the data reordering system of Kumar. One would have been motivated to combine these references because both references disclose neural network accelerators, and the reordering system of Kumar improves computational efficiency (Kumar: [0032]).

As per claim 17, the claim is directed to a data processing system that implements the same or similar features as the method of claim 13, and is therefore rejected for at least the same reasons.

As per claim 18, Sworna/Zhang/Kumar further teaches the data processing system of claim 17, wherein the memory manipulation module comprises: an internal buffer (Kumar: Fig. 3 elements 320, 340; [0049]); a memory reading block, configured to read data from the memory and write the data to the internal buffer (Kumar: Fig. 3 element 304; [0046], [0048]); a memory writing block, configured to read the data from the internal buffer and write the data to the memory (Kumar: Fig. 3 element 350; [0046]); and a control channel between the memory reading block and the memory writing block, wherein the memory reading block and the memory writing block are configured to communicate via the control channel to maintain synchronisation between them when writing the data to the internal buffer and reading the data from the internal buffer, respectively (Kumar: Fig. 3 element 370; [0050]). (A software sketch of this reader/buffer/writer structure follows the § 103 discussion below.)

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Sworna/Zhang in further view of Brothers et al. (US 20170011288 A1, hereinafter “Brothers”).

As per claim 19, Sworna/Zhang further teaches the method of claim 1. However, while Sworna discloses matrix multiplication using neural networks, Sworna does not explicitly disclose that the matrix multiplication may be used as a classification layer. Thus, Sworna does not teach wherein the layer comprising the matrix multiplication operation is a classification layer for classifying an input to the neural network into one of a number of categories. Brothers teaches wherein the layer comprising the matrix multiplication operation is a classification layer for classifying an input to the neural network into one of a number of categories (Brothers: [0025], wherein Brothers states matrix multiplies are applicable to feature classification layers). Therefore, it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify, with a reasonable expectation of success, the matrix multiplication method of Sworna with the teaching of Brothers. One would have been motivated to combine these references because both references disclose matrix-multiplication-related neural networks, and combining prior art elements according to known methods would yield predictable results (performing neural network classification through matrix multiplication).
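The claim-18 structure cited above (a memory reading block and a memory writing block sharing an internal buffer, kept in step over a control channel) can be pictured as a small software analogue. All names here are illustrative assumptions; the claim is directed to fixed-function hardware, and this Python sketch only mirrors the synchronisation idea, with a bounded queue whose blocking behaviour plays the role of the control channel.

import queue
import threading

def memory_manipulation(src, dst, block=4):
    buf = queue.Queue(maxsize=2)       # internal buffer (bounded, double-buffered)
    done = object()                    # sentinel: completion signal on the channel

    def reader():                      # memory reading block: memory -> buffer
        for i in range(0, len(src), block):
            buf.put(src[i:i + block])  # blocking put keeps reader and writer in step
        buf.put(done)

    def writer():                      # memory writing block: buffer -> memory
        while (chunk := buf.get()) is not done:
            dst.extend(chunk)

    t = threading.Thread(target=reader)
    t.start()
    writer()
    t.join()

src, dst = list(range(10)), []
memory_manipulation(src, dst)
assert dst == src

In hardware the synchronisation would be an explicit control channel between the two blocks rather than a blocking queue, but the invariant is the same: the writer never consumes a buffer slot the reader has not yet filled.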
Allowable Subject Matter

Claims 3-7 and 14-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter.

As to claims 3-7, the prior art of record does not teach or suggest a combination as claimed including: reconfiguring the second tensor Y to form a third tensor having dimensions [..., R, Q]; and splitting the third tensor into R constituent tensors each of dimensions [..., 1, Q], wherein the at least one element-wise operation comprises an element-wise multiplication between the first tensor and each of the R constituent tensors.

Sworna discloses a method of performing matrix multiplication with neural networks. Sworna does not suggest transforming one of the input tensors, then splitting the resulting tensor to become inputs for the element-wise multiplication. Therefore, Sworna does not teach or suggest a combination as claimed including the limitations identified above.

Chen discloses a flexible on-chip network to adapt to data reuse and bandwidth requirements (abstract). Chen does not suggest transforming the input tensors and routing the resulting transformed tensor such that it is split for all element-wise operations. Therefore, Chen does not teach or suggest a combination as claimed including the limitations identified above.

Park et al. (US 20190179869 A1, provided in the IDS filed 01/31/2023, hereinafter “Park”) discloses a matrix-multiplication subsystem in an accelerator that comprises input matrix transformation and element-wise multiplication (Fig. 1; [0033]). However, Park discloses that the transformation is done via matrix multiplication (Fig. 3 element 320; [0039]). Park does not suggest splitting the resulting transformed matrix, as recited in the claim, to become inputs for the element-wise multiplication. Therefore, Park does not teach or suggest a combination as claimed including the limitations identified above.

Du et al. (US 20190026626 A1, hereinafter “Du”) discloses a core computing module and multi-ALU device to process nonlinear operations (abstract). Du does not suggest that the set of nonlinear operations includes matrix multiplication. Furthermore, Du does not suggest transforming the input tensors. Therefore, Du does not teach or suggest a combination as claimed including the limitations identified above.
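The claims 3-7 decomposition recited above (transpose Y, split it into R constituent tensors, and realise each output column through an element-wise multiplication with X) also reduces to a few lines of NumPy. As with the earlier sketch, this is an illustrative reading under assumed 2-D shapes; the claims recite additional leading [...] dimensions.

import numpy as np

P, Q, R = 4, 5, 3
X = np.random.rand(P, Q)                   # first tensor
Y = np.random.rand(Q, R)                   # second tensor

Y_t = Y.T                                  # third tensor, dimensions (R, Q)
constituents = np.split(Y_t, R, axis=0)    # R constituent tensors of dimensions (1, Q)

# One element-wise multiplication between X and each constituent tensor,
# followed by a reduction over Q, yields one output column each.
cols = [(X * y).sum(axis=1) for y in constituents]
C = np.stack(cols, axis=1)                 # assemble the (P, R) product

assert np.allclose(C, X @ Y)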
As to claims 14-15, the prior art of record does not teach or suggest a combination as claimed including: before mapping the matrix multiplication operation to the graph of neural network operations: analysing the matrix multiplication operation; and determining, based on a result of the analysing, how to implement the matrix multiplication operation, comprising determining that the matrix multiplication operation should be implemented using the at least one element-wise operation, and rejecting at least one alternative method for implementing the matrix multiplication operation.

Sworna discloses a method of performing matrix multiplication with neural networks. Sworna does not suggest modifying the implementation of the neural network based on the matrix multiplication operation. Therefore, Sworna does not teach or suggest a combination as claimed including the limitations identified above.

Park discloses a matrix-multiplication subsystem in an accelerator that comprises input matrix transformation and element-wise multiplication (Fig. 1; [0033]). However, Park discloses that matrix multiplication is used for transformation (Fig. 3 element 320; [0039]). Furthermore, Park discloses that the method performs a convolution operation (Fig. 3 element 350; [0050]). Park does not suggest that the method implements a matrix multiplication operation mapped onto neural network operations. Therefore, Park does not teach or suggest a combination as claimed including the limitations identified above.

Du discloses a core computing module and multi-ALU device to process nonlinear operations (abstract). Du does not suggest that the set of nonlinear operations includes matrix multiplication. Therefore, Du does not teach or suggest a combination as claimed including the limitations identified above.

Benfield et al. (US 12175222 B1, hereinafter “Benfield”) discloses mapping an input tensor to an output tensor using matrix multiplications (col 20 lines 17-20). Benfield does not suggest mapping matrix multiplication operations onto neural network operations. Therefore, Benfield does not teach or suggest a combination as claimed including the limitations identified above.

Diamat et al. (US 11782706 B1, hereinafter “Diamat”) discloses merging neural network operators into single-entry-single-exit merged operators (abstract; Fig. 6C). Diamat does not suggest that the merging of neural network operators may result in a matrix multiplication operation. Therefore, Diamat does not teach or suggest a combination as claimed including the limitations identified above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHAT N LE, whose telephone number is (571) 272-0546. The examiner can normally be reached Monday-Friday, 8:30 AM-5 PM ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew T Caldwell, can be reached at (571) 272-3702. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/P.N.L./
Phat Le, Examiner, Art Unit 2182
(571) 272-0546

/ANDREW CALDWELL/
Supervisory Patent Examiner, Art Unit 2182

Prosecution Timeline

Jun 28, 2022
Application Filed
Dec 31, 2025
Non-Final Rejection — §103, §112, §DP
Apr 08, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12,541,340: ACCUMULATOR FOR DIGITAL COMPUTATION-IN-MEMORY ARCHITECTURES (granted Feb 03, 2026; 2y 5m to grant)
Patent 12,499,175: MATRIX MULTIPLICATION METHOD AND DEVICE BASED ON WINOGRAD ALGORITHM (granted Dec 16, 2025; 2y 5m to grant)
Study what changed to get these applications past this examiner. Based on the 2 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
Grant Probability with Interview: 0% (-66.7%)
Median Time to Grant: 4y 2m
PTA Risk: Low
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
