Prosecution Insights
Last updated: April 19, 2026
Application No. 18/055,315

ACCELERATING DATA LOAD AND COMPUTATION IN FRONTEND CONVOLUTIONAL LAYER

Office Action: Non-Final (§101, §103, §112)
Filed: Nov 14, 2022
Examiner: SPANN, COURTNEY P
Art Unit: 2183
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 1 (Non-Final)

Predictions
Grant probability: 80% (favorable); 99% with examiner interview
Expected OA rounds: 1-2
Expected time to grant: 2y 11m

Examiner Intelligence

Career allowance rate: 80%, above average (206 granted / 258 resolved; +24.8% vs Tech Center average)
Interview lift: strong, +21.3% on resolved cases with an interview vs without
Typical timeline: 2y 11m average prosecution; 21 applications currently pending
Career history: 279 total applications across all art units

Statute-Specific Performance

§101: 6.4% (-33.6% vs TC avg)
§102: 9.1% (-30.9% vs TC avg)
§103: 44.6% (+4.6% vs TC avg)
§112: 28.3% (-11.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 258 resolved cases.

Office Action

Rejections under §101, §103, and §112
DETAILED ACTION

This action is responsive to the application filed on 11/14/22. Claims 1-25 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a “read module configured to read” and a “padding module configured to receive/modify” in claims 18, 20, 23 and 24. In the disclosure, the “padding module” and “read module” are shown as black-box elements 320 and 340 of Fig. 3. Thus, specific structures do not appear to be disclosed in the specification.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 18-20 and 23-24 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 18, 20, 23 and 24 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as claims 18, 20, 23 and 24 invoke 35 U.S.C. 112(f) but the written description fails to disclose each corresponding structure, material, or acts for the claimed functions.
(See claim construction above regarding the “read module” and “padding module” configured to perform functions.) Claim 19 is dependent upon one of the claims above and therefore is similarly rejected on the same basis based upon dependency.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 16-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

In regards to claim 16, line 10, the limitation stating “the processing element” lacks clarity, because it is unclear which processing element of the “processing element array” of line 8 the limitation is referring to.

In regards to claim 18, lines 5-6 and 8 each include a recitation of the limitation stating “the processing element,” which lacks clarity because it is unclear which processing element of the “processing element array” of claim 16, line 8 the limitation is referring to.

In regards to claim 21, line 12, the limitation stating “the processing element” lacks clarity, because it is unclear which processing element of the “processing element array” of line 10 the limitation is referring to.
In regards to claim 23, lines 5-6 and 8 each include a recitation of the limitation stating “the processing element,” which lacks clarity because it is unclear which processing element of the “processing element array” of claim 21, line 10 the limitation is referring to.

Claims 18, 20, 23 and 24 invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for the claimed functions. As the specification does not provide adequate disclosure, the claim boundaries are not known, thus rendering the claims indefinite. See the claim construction above. For the purposes of prior art examination, the Examiner is interpreting the limitations as logic in the processor for performing the functions.

Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; or

(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the claimed function, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(b) Stating on the record what corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claims 17-20 and 22-25 are dependent upon one or more claims above and therefore are similarly rejected on the same basis based upon dependency.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Regarding claim 1:

Subject Matter Eligibility Analysis Step 1: Claim 1 recites “A method” and thus a process, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A, Prong 1: Claim 1 recites “…deep learning…a convolutional layer in a deep neural network (DNN)…perform multiplication operations…” which describes a process that, under its broadest reasonable interpretation, encompasses mathematical concepts. That is, other than reciting generic computing components (e.g., memory, datastore, processing elements and a multiplier), nothing in the claimed elements precludes the steps from practically being performed in the mind with the aid of pen and paper.
For example, the claim discusses convolutional layers in a DNN for deep learning and performing multiplication operations; thus the limitation encompasses mathematical calculations and/or relationships (MPEP 2106.04(a)(2)(I)) (see [0027-0028 and 0047-0049]: wherein convolution operations of DNNs include multiply-accumulate operations). If a claim limitation, under its broadest reasonable interpretation, covers performance of a mathematical calculation/relationship in the mind with the aid of pen and paper but for the recitation of generic computer components, then it falls within the “Mathematical concepts” grouping of abstract ideas.

Subject Matter Eligibility Analysis Step 2A, Prong 2: Claim 1 further recites additional elements of accelerating deep learning…a memory…a datastore comprising one or more databanks…a databank storing a group of activations in the channel…a processing element comprising a multiplier…storing, in a memory, an input tensor…the input tensor comprising one or more channels, a channel comprising activations arranged in rows and columns; reading at least a portion of the input tensor from the memory into a datastore; and providing a vector to a processing element, the vector comprising one or more activations in the group. These additional elements do not integrate the abstract idea into a practical application because the claim (a) recites at a high level of generality the words “apply it” (or an equivalent) with the judicial exception, or uses mere instructions to implement the abstract idea on a computer, or merely uses a computer (including generic computing components) as a tool to perform the abstract idea (see MPEP 2106.05(f)) (note it can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of an accelerator for machine learning (MPEP 2106.05(h))), and (b) recites insignificant extra-solution activity for a particular type of data (i.e., data gathering and data outputting of vector/tensor data) (see MPEP 2106.05(h) and (g)). Therefore, claim 1 is directed to the abstract idea.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 1, taken alone and in combination, do not provide significantly more than the abstract idea itself, because the claim (a) uses mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, which cannot provide significantly more (e.g., “apply it”) (see MPEP 2106.05(f)), and/or can be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of accelerators (MPEP 2106.05(h)); and (b) recites insignificant extra-solution activity of data gathering and outputting of particular types of data (e.g., tensor/vector data) (see MPEP 2106.05(g) and (h)), which the courts have deemed to be well-understood, routine and conventional activities that do not provide significantly more (MPEP 2106.05(d)). The courts have recognized that receiving or transmitting data over a network (Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362), as well as storing and retrieving information in memory, are well-understood, routine, and conventional functionalities (Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93). Furthermore, based on applicant’s own admission in paragraphs [0027-0028], it is well-known, routine and conventional to use DNN accelerators to accelerate convolution layers. Therefore, based on the discussion of the additional elements above, claim 1 is not patent eligible.
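For readers outside the art, the claim-1 data flow that the examiner characterizes as mathematical concepts (storing an input tensor, reading a portion into a datastore, and providing an activation vector to a processing element's multiplier) can be sketched in a few lines of NumPy. All names, shapes, and values below are illustrative assumptions, not drawn from the application or the cited art:

```python
import numpy as np

def pe_multiply(activation_vec, weight_vec):
    # Processing element: elementwise multiply, the core of a
    # multiply-accumulate (MAC) operation in a convolutional layer.
    return activation_vec * weight_vec

# "Memory": an input tensor with one channel, activations in rows and columns.
input_tensor = np.arange(16, dtype=np.float32).reshape(1, 4, 4)  # (C, H, W)

# "Datastore": a databank holding a group of activations from the channel
# (here, simply the first row).
databank = input_tensor[0, 0, :]

# "Vector" provided to the processing element: one activation per kernel weight.
kernel_row = np.array([1.0, 0.0, -1.0], dtype=np.float32)
vector = databank[:kernel_row.size]

products = pe_multiply(vector, kernel_row)
partial_sum = products.sum()  # accumulate step of the MAC
```

This is exactly the kind of arithmetic the Step 2A analysis treats as performable "with the aid of pen and paper" once the generic components are set aside.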
Claim 2 recites further abstract ideas such as “…wherein the DNN further comprises one or more backend layers, the one or more backend layers comprise one or more other convolutional layers, and the convolutional layer is a frontend layer arranged before the one or more backend layers,” which encompass mathematical concepts (see claim 1 rejection). Additional abstract ideas cannot integrate the abstract idea of claim 1 into a practical application nor provide significantly more than the abstract idea of claim 1. Thus, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Claim 3 further recites “…wherein the activations in the group are in one of the rows of the channel,” which discusses further embellishments of the type of data operated on in claim 1 and thus can be viewed as nothing more than an attempt to generally link the use of the judicial exception to a particular technological field (e.g., a particular type of data) (MPEP 2106.05(h)). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Claim 4 further recites “…wherein the convolutional layer has a kernel comprising weights arranged in rows and columns, and a number of activations in the vector equals a number of weights in a row of the kernel,” which discusses further embellishments of the type of data included in the mathematical calculation of the convolutional layer of claim 1 and thus can be viewed as nothing more than an attempt to generally link the use of the judicial exception to a particular technological field (e.g., a particular type of data) (MPEP 2106.05(h)). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
Claim 5 recites further abstract ideas such as “…wherein the multiplier is to perform the multiplication operations on the vector and a weight vector, and the weight vector comprises weights in one of the rows of the kernel,” which encompasses mathematical concepts and a particular type of data used in the mathematical concepts (see claim 1 rejection). It thus can be viewed as nothing more than an attempt to generally link the use of the judicial exception to a particular technological field (e.g., a particular type of data) (MPEP 2106.05(h)). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Claim 6 further recites “…wherein providing the vector to the processing element comprises: reading a sequence of activations from the datastore into a storage unit of the processing element, wherein a number of the activations in the sequence is larger than the number of the weights in the row of the kernel; and reading a bitmap from the datastore into the storage unit of the processing element, wherein the bitmap comprises a sequence of bits, a number of bits having values of one in the bitmap equals the number of the weights in the row of the kernel, and the bitmap is to be applied on the sequence of activations to extract the one or more activations from the group,” which encompasses insignificant extra-solution activities involving particular types of data (e.g., reading of activations and a bitmap) (MPEP 2106.05(g)-(h)) and an additional abstract idea including a mental process and/or mathematical relationship (applying a bitmap to extract activations). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
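The claim-6 bitmap mechanism referenced above is straightforward to illustrate: a sequence of activations longer than the kernel row is read into the processing element's storage unit together with a bitmap whose one-bits (as many as there are weights in a kernel row) select the activations that form the vector. The concrete values and width below are hypothetical, chosen only to show the selection step:

```python
import numpy as np

# Sequence read from the datastore into the PE storage unit; it is longer
# than the kernel row (per claim 6).
sequence = np.array([5.0, 6.0, 7.0, 8.0, 9.0])

# Bitmap read alongside it: the number of one-bits equals the number of
# weights in a kernel row (assumed here to be 3).
bitmap = np.array([0, 1, 1, 1, 0], dtype=bool)
assert bitmap.sum() == 3  # matches the assumed kernel-row width

# Applying the bitmap extracts the activation group as the vector.
vector = sequence[bitmap]
```

NumPy's boolean indexing stands in for whatever selection hardware the application actually describes; the point is only that the extraction is a pure data-selection step.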
Claim 7 further recites “…wherein: the sequence of activations starts with a first activation and is read from the datastore at a first time, a different sequence of activation starts with a second activation and is read from the datastore at a second time that is different from the first time, and a position of the second activation in the input tensor is determined based on a position of the first activation in the input tensor and a stride size of the convolutional layer,” which encompasses insignificant extra-solution activities involving particular types of data (e.g., reading of activation data) (MPEP 2106.05(g)-(h)) and an additional abstract idea including a mental process and/or mathematical relationship/calculation (determining a position of a second activation based on the position of the first activation in the input tensor and the stride size of the convolutional layer). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Claim 8 further recites “…wherein providing the vector to the processing element comprises: reading a sequence of activations from the datastore; modifying the sequence of activations by adding one or more pad elements into the sequence to generate a new sequence of activations, the one or more pad elements having a predetermined value; writing the new sequence of activations into a storage unit of the processing element; and transferring a bitmap from the datastore into the storage unit of the processing element, wherein the bitmap comprises a sequence of bits that includes one or more bits have a value of zero and one or more bits have a value of one, and the vector is generated based on the bitmap and the new sequence of activations,” which encompasses insignificant extra-solution activities involving particular types of data (e.g., providing a vector, reading of activation data, writing a new sequence, transferring a bitmap) (MPEP 2106.05(g)-(h)) and additional abstract ideas including mental processes and/or mathematical relationships/calculations (modifying the sequence by adding pad elements to generate a new sequence…the vector is generated based on the bitmap and the new sequence). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Claim 9 further recites “…wherein: reading the sequence of activations from the datastore comprises reading the sequence of activations from the datastore at a first time, the one or more pad elements comprises two pad elements, the sequence of activations is read from the datastore at a second time that is later than the first time, and after the sequence of activations is read from the datastore at the second time, another new sequence of activations is generated by adding one pad element into the sequence of activations,” which encompasses insignificant extra-solution activities involving particular types of data (e.g., reading of activation data) (MPEP 2106.05(g)-(h)) and additional abstract ideas including mental processes and/or mathematical relationships/calculations (generating a new sequence of activations by adding one pad element). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
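The two operations recited in claims 7-9, a stride rule for where the next activation sequence starts, and a padding step that inserts pad elements of a predetermined value, can be sketched together. The function names, stride of 2, and pad count are assumptions for illustration only, not taken from the application:

```python
import numpy as np

PAD_VALUE = 0.0  # the "predetermined value" of claim 8 (assumed zero here)
STRIDE = 2       # assumed stride size of the convolutional layer

def next_start(first_start, stride):
    # Claim 7: the position of the second activation is determined from the
    # position of the first activation and the stride size.
    return first_start + stride

def pad_sequence(seq, n_pads):
    # Claim 8: modify the sequence by adding pad elements, generating a
    # new sequence of activations.
    return np.concatenate([np.full(n_pads, PAD_VALUE), seq])

row = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
first = 0
second = next_start(first, STRIDE)            # second read starts 2 later
padded = pad_sequence(row[first:first + 3], 2)  # two pads, as in claim 9's first read
```

Prepending versus appending pads, and the pad count per read, are design choices the claims leave parameterized; the sketch fixes them arbitrarily.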
Claim 10 further recites “…wherein: the vector is a first vector in the input tensor, the multiplier is a first multiplier in the processing element, the method further comprises transmitting a second vector from another databank of the datastore to the processing element, a second multiplier of the processing element is to perform multiplication operations on the second vector, and the first vector is in a different row of the input tensor from the second vector,” which encompasses insignificant extra-solution activities involving particular types of data (e.g., transmitting vector data) (MPEP 2106.05(g)-(h)), merely uses a computer (including generic computing components of multipliers, processing elements, and datastores) as a tool to perform an abstract idea, which cannot provide significantly more (e.g., “apply it”) (see MPEP 2106.05(f)), and recites additional abstract ideas including mathematical relationships/calculations (multiplication operations). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Claim 11 further recites “…wherein: the vector is a first vector in the input tensor, the multiplier is to perform multiplication operations on the first vector at a first time, the multiplier is to perform multiplication operations on a second vector in the input tensor at a second time that is different from the first time, and the second vector comprises one or more activations in the first vector,” which discloses additional limitations that use mere instructions to implement an abstract idea on a computer, or merely use a computer (including generic computer components such as multipliers) as a tool to perform an abstract idea, which cannot provide significantly more (e.g., “apply it”) (see MPEP 2106.05(f)), and/or can be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of vector processing (MPEP 2106.05(h)). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Claim 12 further recites “…wherein: the multiplier is to perform multiplication operations on a third vector in the input tensor at a third time, the second time is after the first time and before the third time, and the third vector comprises one or more activations in the first vector or in the second vector,” which discloses additional limitations that use mere instructions to implement an abstract idea on a computer, or merely use a computer (including generic computer components such as multipliers) as a tool to perform an abstract idea, which cannot provide significantly more (e.g., “apply it”) (see MPEP 2106.05(f)), and/or can be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of vector processing (MPEP 2106.05(h)). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Claim 13 further recites “…wherein transmitting the vector from the datastore to the processing element comprises: reading another vector from the databank into a register file of the processing element, wherein the another vector comprises activations in the first vector and activations in the second vector, and the one or more activations in the first vector are determined based on a stride size of the convolutional layer,” which encompasses insignificant extra-solution activities involving particular types of data (e.g., transmitting the vector, reading vector data comprising activations) (MPEP 2106.05(g)-(h)) and an additional abstract idea including a mental process and/or mathematical relationship/calculation (determining one or more activations based on the stride size of the convolutional layer). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.

Claim 14 further recites “…wherein: the multiplier is a first multiplier of the processing element, the vector is a first vector in the input tensor, the input tensor further comprises a second vector and a third vector that have one or more different activations from the first vector, the first multiplier is to perform the multiplication operations on the first vector in a first operation round of the processing element and is to perform multiplication operations on a second vector in a second operation round of the processing element, and a second multiplier of the processing element is to perform multiplication operations on a third vector in the first operation round and in the second operation round,” which discloses additional limitations that use mere instructions to implement an abstract idea on a computer, or merely use a computer (including generic computer components such as multipliers) as a tool to perform an abstract idea, which cannot provide significantly more (e.g., “apply it”) (see MPEP 2106.05(f)), and/or can be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of vector processing (MPEP 2106.05(h)). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.
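The claim-14 scheduling can be summarized as: across two operation rounds, a first multiplier works on a first vector and then a second vector, while a second multiplier works on a third vector in both rounds. A minimal sketch, with purely hypothetical vectors and a shared weight row:

```python
import numpy as np

# Assumed weight row shared by both multipliers (illustrative values).
weights = np.array([1.0, 2.0, 1.0])

v1 = np.array([1.0, 0.0, 1.0])  # first vector
v2 = np.array([0.0, 1.0, 0.0])  # second vector
v3 = np.array([2.0, 2.0, 2.0])  # third vector

# round number -> (first-multiplier input, second-multiplier input):
# multiplier 1 takes v1 then v2; multiplier 2 takes v3 in both rounds.
schedule = {
    1: (v1, v3),
    2: (v2, v3),
}

# Each multiplier performs its multiplication (dot product) per round.
results = {rnd: (a @ weights, b @ weights) for rnd, (a, b) in schedule.items()}
```

Keeping v3 resident on the second multiplier across rounds is the data-reuse pattern the claim appears to describe; whether the real hardware reuses a register file or rereads the datastore is not settled by the claim language.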
Claim 15, further recites “…wherein: the first multiplier is configured to perform multiplication operations on the second vector in a third operation round of the processing element, and the second operation round is between the first operation round and the third operation round”, which discloses additional limitations which use mere instructions to implement an abstract idea on a computer, or merely uses a computer (including generic computer components such as multipliers) as a tool to perform an abstract idea which cannot provide significantly more (e.g. “apply it”) (see MPEP 2106.05(f)) and/or it can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment vector processing (MPEP 2106.05(h)). Therefore, the claim recites no additional elements which could integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself. Claims 16-20 and 21-25 are similarly rejected on the same basis as claims 1-2 and 6-8 above. (Note: Independent claims 16 and 21 include additional computer components such as a compute block, processing element array, accelerator, and external memory; as well as an additional abstract idea of performing multiply-accumulate operations. However, the additional limitations merely recite generic computing components which fall under MPEP 2106.05(f) or recite an additional abstract idea. Thus, the additional limitations would not integrate the abstract idea into a practical application nor provide significantly more than the abstract idea itself.) Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 11-12, 16-17, 21-22 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Mohapatra, PGPUB No. 2020/0134417 (cited on IDS filed on 11/14/2022), and further in view of McQuillan, GB2585810.

In regards to claim 1, Mohapatra discloses A method for accelerating deep learning ([0022-0023, 0079 and 0099]: wherein a method for performing a deep neural network on an accelerator is disclosed (See Fig. 1)) comprising: storing, in a memory, an input tensor of a convolutional layer in a deep neural network (DNN), the input tensor comprising one or more channels, a channel comprising activations arranged in rows and columns ([0027 and 0030]: wherein storing of the input activation tensor (element 205) of a convolutional layer in a DNN in memory (element 125) is disclosed, the input tensor comprising channels Ic which comprise activations arranged in rows and columns (See Fig. 2)); reading at least a portion of the input tensor from a memory into a datastore ([0039]: wherein input activation data (IF) is read from memory into column-buffer data storage); and providing a vector to a processing element, the vector comprising one or more activations in a group ([0029-0032 and 0036]: wherein the PE array includes a vector-vector template, and ones of the PEs of Fig. 1 multiply a vector of the input activations by a vector of filter weights (See Figs.
1-3 for further clarity)); the processing element comprising a multiplier that is to perform multiplication operations on the vector ([0040 and Fig. 1]: wherein the processing element comprises a multiplier in the MAC unit (element 150)).

Mohapatra does not disclose reading at least a portion of the input tensor from the memory into a datastore, the datastore comprising one or more databanks, a databank storing a group of activations in the channel. Mohapatra discloses reading data from a memory into a datastore (a group of column buffers); however, Mohapatra does not disclose the buffer storage including databanks.

McQuillan discloses reading at least a portion of the input tensor from a memory into a datastore, the datastore comprising one or more databanks (page 10, lines 4-13 and page 13, lines 24-25: wherein the input tensor of memory (element 600) is read into the input buffer (element 200), which includes data banks (see page 8, lines 24-37 and page 9, lines 11-24 for further details on input tensor data of convolutional layers)); a databank storing a group of activations in a channel (page 9, lines 11-24 and page 11, lines 22-33: wherein the databanks store groups of input activation data in a plane).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the neural network accelerator buffer storage of Mohapatra to include databanks as in the input buffer storage of the neural network accelerator of McQuillan. It would have been obvious because it would have been the simple substitution of one known element (buffer storage including databanks as taught in McQuillan) for another (generic buffer storage of Mohapatra) to yield predictable results (buffering input tensor activation data using databanks) (MPEP 2143, Example B).
Furthermore, it would have been obvious because buffering input data used in neural networks evenly in databanks can improve buffer performance and increase data throughput (McQuillan: page 2, lines 1-10).

Claim 16 is similarly rejected on the same basis as claim 1 above, as claim 16 is the compute block corresponding to the method of claim 1. (Examiner notes that claim 16 includes an additional limitation stating "A compute block comprising…a processing element array configured to receive a vector and to perform multiply-accumulate operations based on the vector". However, Mohapatra discloses the above limitation in Figs. 1 and 3 and paragraphs [0036 and 0040], which illustrate a configurable processor array for convolutional networks that performs vector operations including multiply-accumulate operations.)

Claim 21 is similarly rejected on the same basis as claim 1 above, as claim 21 is the accelerator corresponding to the method of claim 1. (Examiner notes that claim 21 includes an additional limitation stating "A deep neural network accelerator, comprising…an external memory…a processing element array configured to receive a vector and to perform multiply-accumulate operations based on the vector". However, Mohapatra discloses the above limitation in Figs. 1, 3 and 27 and paragraphs [0036, 0040, 0091 and 0094], which illustrate a processor platform (element 2700) comprising external memory (element 2714) and a configurable processor array (element 100) for convolutional networks that performs vector operations including multiply-accumulate operations.)

In regards to claim 2, the combination of Mohapatra and McQuillan discloses The method of claim 1 (see rejection of claim 1 above), wherein the DNN further comprises one or more backend layers, the one or more backend layers comprise one or more other convolutional layers, and the convolutional layer is a frontend layer arranged before the one or more backend layers.
(Mohapatra [0019, 0021, 0023 and 0030]: wherein the deep learning network comprises one or more successive layers including intermediate convolutional layers and a first input layer arranged before the intermediate layers | McQuillan: page 14, lines 18-26)

Claim 17 is similarly rejected on the same basis as claim 2 above, as claim 17 is the compute block corresponding to the method of claim 2. Claim 22 is similarly rejected on the same basis as claim 2 above, as claim 22 is the accelerator corresponding to the method of claim 2.

In regards to claim 3, the combination of Mohapatra and McQuillan discloses The method of claim 1 (see rejection of claim 1 above), wherein the activations in the group are in one of the rows of the channel (Mohapatra [0030 and Fig. 2] | McQuillan: page 9, lines 11-24 and page 11, lines 22-33).

In regards to claim 4, the combination of Mohapatra and McQuillan discloses The method of claim 1 (see rejection of claim 1 above), wherein the convolutional layer has a kernel comprising weights arranged in rows and columns, and a number of activations in the vector equals a number of weights in a row of the kernel (Mohapatra [0030-0032 and See Fig. 2]).

In regards to claim 5, the combination of Mohapatra and McQuillan discloses The method of claim 4 (see rejection of claim 4 above), wherein the multiplier is to perform the multiplication operations on the vector and a weight vector, and the weight vector comprises weights in one of the rows of the kernel (Mohapatra [0031-0032, 0040, 0058] and See Figs.
1-2 and 6).

In regards to claim 11, the combination of Mohapatra and McQuillan discloses The method of claim 1 (see rejection of claim 1 above), wherein: the vector is a first vector in the input tensor (Mohapatra [0033-0034 and 0072]: wherein a first vector of input activation data of the input tensor (element 205) is disclosed); the multiplier is to perform multiplication operations on the first vector at a first time (Mohapatra [0034-0036, 0072 and Figs. 1-3 and 16]: wherein the multiplier (of element 150) performs operations on a first vector at a first iteration of the loop); the multiplier is to perform multiplication operations on a second vector in the input tensor at a second time that is different from the first time, and the second vector comprises one or more activations in the first vector (Mohapatra [0034-0036, 0072 and Figs. 1-3 and 16]: wherein the multiplier (of element 150) performs multiplications on a second vector comprising the same input activations as the first vector on a second iteration of the loop different from the first iteration, as input activations are stationary).

In regards to claim 12, the combination of Mohapatra and McQuillan discloses The method of claim 11 (see rejection of claim 11 above), wherein: the multiplier is to perform multiplication operations on a third vector in the input tensor at a third time (Mohapatra [0034-0036, 0072 and Figs. 1-3 and 16]: wherein the multiplier (of element 150) performs multiplications on a third vector comprising the same input activations as the first vector on a third iteration of the loop, as input activations are stationary); the second time is after the first time and before the third time (Mohapatra [0034-0036, 0072 and Figs. 1-3 and 16]: wherein iterations of a loop are successive, thus the second iteration is after the first iteration and before the third iteration); and the third vector comprises one or more activations in the first vector or in the second vector.
(Mohapatra [0034-0036, 0072 and Figs. 1-3 and 16]: wherein the multiplier (of element 150) performs multiplications on a third vector comprising the same input activations as the first vector on a third iteration of the loop, as input activations are stationary)

In regards to claim 25, the combination of Mohapatra and McQuillan discloses The DNN accelerator of claim 21 (see rejection of claim 21 above), wherein the external memory is a dynamic random-access memory (Mohapatra [0091]) and the local memory is a static random-access memory (Mohapatra [0027]).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Mohapatra and McQuillan, and further in view of Hu, PGPUB No. 2021/0264257.

In regards to claim 10, the combination of Mohapatra and McQuillan discloses The method of claim 1 (see rejection of claim 1 above), wherein: the vector is a first vector in the input tensor (Mohapatra [0030-0036 and 0072]: wherein processing of tensor multiplications is done in a vector-vector format, thus a first vector from the input activation tensor is disclosed (See Figs. 2-4 and 16)); the multiplier is a first multiplier in the processing element (Mohapatra [0040 and Fig. 1]: wherein the processing element comprises a multiplier in the MAC unit (element 150)); the method further comprises transmitting a second vector from the datastore to the processing element (Mohapatra [0036, 0047, 0065, Figs. 2 and 16]: wherein the processing element performs convolution operations for the input tensor in a vector-by-vector execution format; thus, the column buffers would transmit a second vector to the processing element. For example, each row of the input tensor (element 205) would include an input vector, and therefore a first and second vector are disclosed); and the first vector is in a different row of the input tensor from the second vector (Mohapatra [0036, 0047, Figs. 2 and 16]: wherein the input tensor of Fig.
2 includes multiple rows, each including vectors used in processing a tensor operation in a vector-vector format).

The combination of Mohapatra and McQuillan does not disclose transmitting a second vector from another databank of the datastore to the processing element, wherein a second multiplier of the processing element is to perform multiplication operations on the second vector.

Hu discloses transmitting a second vector from another databank of the datastore to the processing element ([0034 and 0039]: wherein activation vectors are transferred from different databanks of the unified buffer to the PE (element 160) (See Figs. 1B and 3)); a second multiplier of the processing element is to perform multiplication operations on the second vector ([0027, 0034 and 0044]: wherein each processing element includes an array of multipliers to perform multiplication operations on a second activation vector).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the processing elements of Mohapatra to include an array of multipliers to perform vector multiplications as taught in Hu. It would have been obvious because using an array of multipliers would increase the operation rate (Hu [0044]).

Allowable Subject Matter

Claims 6-9, 13-15, 18-20 and 23-24 would be allowable if rewritten to overcome the respective rejection(s) under 35 U.S.C. 112 and/or 35 U.S.C. 101 set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: The prior art of record, alone or in combination, fails to disclose or render obvious claim 6 filed on 11/14/2022.
The prior art of record has not taught, either individually or in combination and together with all other claimed features, "The method of claim 4, wherein providing the vector to the processing element comprises: reading a sequence of activations from the datastore into a storage unit of the processing element, wherein a number of the activations in the sequence is larger than the number of the weights in the row of the kernel; and reading a bitmap from the datastore into the storage unit of the processing element, wherein the bitmap comprises a sequence of bits, a number of bits having values of one in the bitmap equals the number of the weights in the row of the kernel, and the bitmap is to be applied on the sequence of activations to extract the one or more activations from the group" as claimed in claim 6.

The closest prior art of record, Mohapatra, discloses reading sequences of activations from column buffers into a register file of a processing element; however, Mohapatra does not disclose reading a bitmap, nor storing a bitmap used to extract one or more activations, in a processing element. While Woo (USPAT No. 10,360,163) and Chinya (PGPUB No. 2020/0228137, cited on IDS filed on 11/14/2022) disclose using a bitmap to extract activation values, neither reference discloses a bitmap in which the number of bits having values of one equals the number of the weights in the row of the kernel as claimed. Thus, none of the prior art references discloses all limitations of claim 6 above, which include all limitations of claims 1 and 4. Furthermore, while some limitations may be broadly disclosed in the references above, the specific combination of limitations would not be obvious as claimed absent impermissible hindsight.

Claims 18 and 23 are similarly allowable over the prior art on the same basis as claim 6 above.
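For illustration only, the bitmap-based extraction that claim 6 recites can be sketched as follows. The values are assumed for the example, not taken from the application: a sequence of activations longer than the kernel row is read, and a bitmap whose count of one-bits equals the kernel-row width selects which activations form the vector.

```python
# Hedged sketch of the bitmap extraction recited in claim 6.
# Sequence length, bitmap pattern, and kernel width are illustrative.

def extract_vector(activations, bitmap):
    """Keep only the activations whose corresponding bitmap bit is one."""
    assert len(activations) == len(bitmap)
    return [a for a, bit in zip(activations, bitmap) if bit == 1]

sequence = [7, 0, 3, 0, 5]   # 5 activations read from the datastore
bitmap   = [1, 0, 1, 0, 1]   # three one-bits = kernel row of 3 weights
vector = extract_vector(sequence, bitmap)
# The extracted vector has exactly as many activations as the kernel row
# has weights, which is the specific relationship the examiner found
# missing from Woo and Chinya.
```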
Claims 7 and 19 are dependent upon claims 6 and 18 and are thus allowable over the prior art at least based upon their respective dependencies.

The following is a statement of reasons for the indication of allowable subject matter: The prior art of record, alone or in combination, fails to disclose or render obvious claim 8 filed on 11/14/2022.

The prior art of record has not taught, either individually or in combination and together with all other claimed features, "The method of claim 1, wherein providing the vector to the processing element comprises: reading a sequence of activations from the datastore; modifying the sequence of activations by adding one or more pad elements into the sequence to generate a new sequence of activations, the one or more pad elements having a predetermined value; writing the new sequence of activations into a storage unit of the processing element; and transferring a bitmap from the datastore into the storage unit of the processing element, wherein the bitmap comprises a sequence of bits that includes one or more bits have a value of zero and one or more bits have a value of one, and the vector is generated based on the bitmap and the new sequence of activations."

The closest prior art of record, McQuillan, discloses padding input values; however, McQuillan does not disclose using a bitmap to generate a sequence of activations after the activations have been padded with values. While Woo (USPAT No. 10,360,163) and Chinya (PGPUB No. 2020/0228137, cited on IDS filed on 11/14/2022) disclose using a bitmap to extract activation values, neither reference discloses using a bitmap to generate a new sequence of activations after the activations have been padded with zero values. Thus, none of the prior art references discloses all limitations of claim 8 above.
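As a rough illustration of the pad-then-select flow recited in claim 8: pad elements with a predetermined value are inserted into the activation sequence, and a bitmap over the new sequence generates the vector. The padding amounts, pad value, and bitmap below are assumptions for the example only.

```python
# Hypothetical sketch of claim 8's flow: pad the activation sequence with a
# predetermined value, then generate the vector from a bitmap over the new
# sequence. All values are illustrative, not the application's scheme.

PAD = 0  # the predetermined pad value (zero assumed here)

def pad_sequence(activations, left, right):
    """Add pad elements on both ends to form the new sequence."""
    return [PAD] * left + activations + [PAD] * right

def apply_bitmap(sequence, bitmap):
    """The vector is generated based on the bitmap and the padded sequence."""
    return [a for a, bit in zip(sequence, bitmap) if bit == 1]

new_seq = pad_sequence([4, 9, 2], left=1, right=1)   # [0, 4, 9, 2, 0]
vector = apply_bitmap(new_seq, [1, 1, 1, 0, 0])
# vector == [0, 4, 9]: the bitmap selects from the padded sequence, so a
# pad element can appear in the output vector.
```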
Furthermore, while some limitations may be broadly disclosed in the references above, the specific combination of limitations would not be obvious as claimed absent impermissible hindsight.

Claims 20 and 24 are similarly allowable over the prior art on the same basis as claim 8 above. Claim 9 is dependent upon claim 8 and is thus allowable over the prior art at least based upon its respective dependency.

The following is a statement of reasons for the indication of allowable subject matter: The prior art of record, alone or in combination, fails to disclose or render obvious claim 13 filed on 11/14/2022.

The prior art of record has not taught, either individually or in combination and together with all other claimed features, "The method of claim 11, wherein transmitting the vector from the datastore to the processing element comprises: reading another vector from the databank into a register file of the processing element, wherein the another vector comprises activations in the first vector and activations in the second vector, and the one or more activations in the first vector are determined based on a stride size of the convolutional layer" as claimed in claim 13.

The closest prior art of record, Mohapatra, discloses reading sequences of activations from column buffers into a register file of a processing element, wherein input vectors include the same activation values when input activations are stationary. However, Mohapatra does not disclose reading another vector from the databank into a register file of the processing element, wherein the another vector comprises activations in the first vector and activations in the second vector, and the one or more activations in the first vector are determined based on a stride size of the convolutional layer. Thus, none of the prior art references discloses all limitations of claim 13 above, which include all limitations of claims 1 and 11.
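For intuition only, the stride-dependent sharing that claim 13 turns on can be sketched as sliding a kernel-width window across a row of activations; the function name and values below are illustrative assumptions. How many activations two consecutive vectors share is determined by the stride size.

```python
# Hypothetical sketch: which activations consecutive vectors share is
# determined by the convolutional layer's stride size. Values illustrative.

def windows(activations, kernel_width, stride):
    """Slide a kernel-width window across the activations by `stride`."""
    out = []
    for start in range(0, len(activations) - kernel_width + 1, stride):
        out.append(activations[start:start + kernel_width])
    return out

row = [1, 2, 3, 4, 5, 6]
# Stride 1: consecutive windows overlap in kernel_width - 1 activations.
s1 = windows(row, 3, 1)   # first two windows: [1, 2, 3], [2, 3, 4]
# Stride 2: the overlap shrinks to a single shared activation.
s2 = windows(row, 3, 2)   # [[1, 2, 3], [3, 4, 5]]
```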
Furthermore, while some limitations may be broadly disclosed in the references above, the specific combination of limitations would not be obvious as claimed absent impermissible hindsight.

The following is a statement of reasons for the indication of allowable subject matter: The prior art of record, alone or in combination, fails to disclose or render obvious claim 14 filed on 11/14/2022.

The prior art of record has not taught, either individually or in combination and together with all other claimed features, "The method of claim 1, wherein: the multiplier is a first multiplier of the processing element, the vector is a first vector in the input tensor, the input tensor further comprises a second vector and a third vector that have one or more different activations from the first vector, the first multiplier is to perform the multiplication operations on the first vector in a first operation round of the processing element and is to perform multiplication operations on a second vector in a second operation round of the processing element, and a second multiplier of the processing element is to perform multiplication operations on a third vector in the first operation round and in the second operation round" as claimed in claim 14.

The closest prior art of record, Mohapatra, discloses using a processing element including a single multiplier to perform multiplication operations on a vector of activation data; however, Mohapatra does not disclose a second multiplier of the processing element performing multiplication operations on a third vector in the first operation round and in the second operation round. While Hu discloses using a multiplier array, the reference does not disclose a second multiplier of the processing element that is to perform multiplication operations on a third vector in the first operation round and in the second operation round as claimed.
Furthermore, while some limitations may be broadly disclosed in the references above, the specific combination of limitations would not be obvious as claimed absent impermissible hindsight. Claim 15 is dependent upon claim 14 and is thus allowable over the prior art at least based upon its dependency.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Woo, USPAT No. 10,360,163, using a bitmap to extract activation values and exploit input data sparsity.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to COURTNEY P SPANN, whose telephone number is (571) 431-0692. The examiner can normally be reached M-F, 9am-6pm, EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jyoti Mehta, can be reached at 571-270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/COURTNEY P SPANN/Primary Examiner, Art Unit 2183

Prosecution Timeline

Nov 14, 2022
Application Filed
Dec 30, 2022
Response after Non-Final Action
Feb 03, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596550
Dual-Mode Floating Point Processor Operation
2y 5m to grant Granted Apr 07, 2026
Patent 12585468
APPARATUS AND METHOD USING HINT CAPABILITY FOR CONTROLLING MICRO-ARCHITECTURAL CONTROL FUNCTION
2y 5m to grant Granted Mar 24, 2026
Patent 12572362
PROCESSOR AND METHOD FOR EXECUTING A LOOPING CODE SEGMENT WITH ZERO OVERHEAD
2y 5m to grant Granted Mar 10, 2026
Patent 12566609
MICROPROCESSOR WITH APPARATUS AND METHOD FOR HANDLING OF INSTRUCTIONS WITH LONG THROUGHPUT
2y 5m to grant Granted Mar 03, 2026
Patent 12566724
SEQUENTIAL PROCESSING METHOD AND APPARATUS OF DATA PACKET
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+21.3%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 258 resolved cases by this examiner. Grant probability derived from career allow rate.
