Prosecution Insights
Last updated: April 19, 2026
Application No. 18/136,709

Implementing Traditional Computer Vision Algorithms as Neural Networks

Non-Final OA: §101, §103, §112, §DP

Filed: Apr 19, 2023
Examiner: MAHARAJ, DEVIKA S
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: Imagination Technologies Limited
OA Round: 1 (Non-Final)

Grant Probability: 55% (Moderate)
Predicted OA Rounds: 1-2
Estimated Time to Grant: 5y 0m
Grant Probability with Interview: 63%

Examiner Intelligence

Career Allow Rate: 55% (grants 55% of resolved cases: 43 granted / 78 resolved; at TC average)
Interview Lift: +7.7% (moderate, roughly +8%) in resolved cases with interview
Typical Timeline: 5y 0m average prosecution; 28 applications currently pending
Career History: 106 total applications across all art units

Statute-Specific Performance

§101: 27.4% (-12.6% vs TC avg)
§103: 42.8% (+2.8% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 16.6% (-23.4% vs TC avg)

TC avg = Tech Center average estimate. Based on career data from 78 resolved cases.
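As a quick consistency check on the table above (assuming each "vs TC avg" delta is the examiner's allow rate minus the Tech Center average), every statute row implies the same Tech Center baseline, and the career figures reproduce the headline 55% allow rate:

```python
# Consistency check on the statute-specific table: if each "vs TC avg" delta
# is (examiner allow rate - Tech Center average), then rate - delta should
# recover the same TC baseline for every statute.
rates = {
    "101": (27.4, -12.6),
    "103": (42.8, +2.8),
    "102": (10.1, -29.9),
    "112": (16.6, -23.4),
}

implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
career_allow_rate = round(43 / 78 * 100)  # 43 granted of 78 resolved

print(implied_tc_avg)     # every statute implies a 40.0% TC baseline
print(career_allow_rate)  # 55, matching the Career Allow Rate shown above
```

That all four rows back out the same 40.0% baseline suggests the deltas were computed against a single per-Tech-Center estimate rather than per-statute averages.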

Office Action

Rejections: §101, §103, §112, §DP
DETAILED ACTION

1. This communication is in response to Application No. 18/136,709, filed on April 19, 2023, in which Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

3. The information disclosure statements submitted on 04/19/2023, 02/26/2024, and 04/08/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

4. Claim 10 is objected to because of the following informalities: Claim 10 recites “[…] a max poling function” but should instead recite “[…] a max pooling function […]”. Appropriate correction is required.

Double Patenting

5. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

6. Claims 1, 12-13, and 15-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over Claims 1, 7-8, and 12-17 of U.S. Patent No. 11,636,306. Although the claims at issue are not identical, they are not patentably distinct from each other because the subject matter claimed in the instant application is disclosed in U.S. Patent No. 11,636,306, since the instant application claims common subject matter.

Note: An explanation of the anticipatory relationship of the claim limitations is listed under each independent claim mapping below. The bolded portions below highlight the differences between the instant application and the Patent, which illustrate the anticipatory relationship of the claim limitations at issue:

Instant Application (18/136,709), Claim 1: A method of implementing a non-trainable algorithm formed of matrix and/or vector operations as a neural network, the method comprising: receiving a definition of the non-trainable algorithm that identifies a sequence of one or more matrix and/or vector operations which form the non-trainable algorithm; mapping each of the one or more matrix and/or vector operations to a set of one or more neural network primitives that is mathematically equivalent to that matrix and/or vector operation; linking the set of one or more neural network primitives mapped to each of the one or more matrix and/or vector operations according to the sequence to form a neural network that represents the non-trainable algorithm; and configuring hardware logic capable of implementing a neural network to implement the neural network that represents the non-trainable algorithm.

U.S. Patent No. 11,636,306, Claim 1: A method of implementing a traditional computer vision algorithm as a neural network, the method comprising: receiving a definition of the traditional computer vision algorithm that identifies a sequence of one or more traditional computer vision algorithm operations which form the traditional computer vision algorithm; mapping each of the one or more traditional computer vision algorithm operations to a set of one or more neural network primitives that is mathematically equivalent to that traditional computer vision algorithm operation; linking the set of one or more neural network primitives mapped to each traditional computer vision algorithm operation according to the sequence to form a neural network representing the traditional computer vision algorithm; and configuring hardware logic capable of implementing a neural network to implement the neural network that represents the traditional computer vision algorithm.

Examiner notes that Applicant’s specification Par. [0048] states “In the context of this description, traditional computer vision algorithms can be considered to be any computer vision algorithms which are not in the form of a trainable neural network, e.g. relying on deep or shallow learning techniques.” Therefore, the Examiner asserts that such a “traditional computer vision algorithm”, as claimed by the Patent, would correspond to a “non-trainable algorithm formed of one or more matrix and/or vector operations”, as claimed by the instant claims, or vice versa. The patent claims are essentially a “species” of the generic invention of the instant claims. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
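The mapping-and-linking method recited in instant Claim 1, quoted above, can be sketched as a library lookup followed by sequence-order linking. This is purely illustrative, not the applicant's implementation; every operation and primitive name below is hypothetical:

```python
# Illustrative sketch of instant Claim 1's method: each matrix/vector
# operation in the algorithm definition is looked up in a library of
# mathematically equivalent neural-network primitive sets, and the
# resulting primitives are linked in the original sequence.
# All names are hypothetical, not taken from the application.
PRIMITIVE_LIBRARY = {
    "gaussian_blur": ["convolution"],   # fixed-weight convolution kernel
    "relu":          ["activation"],
    "l2_normalise":  ["normalisation"],
    "downsample":    ["pooling"],
    "add":           ["element_wise"],
}

def to_neural_network(op_sequence):
    """Map each operation to its primitive set, then link them in order."""
    layers = []
    for op in op_sequence:                    # sequence from the definition
        layers.extend(PRIMITIVE_LIBRARY[op])  # mathematically equivalent set
    return layers                             # linked according to the sequence

network = to_neural_network(["gaussian_blur", "relu", "downsample"])
print(network)  # ['convolution', 'activation', 'pooling']
```

The same skeleton covers both claim wordings: whether the inputs are called "traditional computer vision algorithm operations" (the patent) or "matrix and/or vector operations" (the instant claims), the steps are lookup and sequence-preserving linking.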
Although the claims at issue are not identical, they are not patentably distinct from each other because a “traditional computer vision algorithm” directly correlates to a “non-trainable algorithm formed of one or more matrix and/or vector operations”, as also supported by Applicant’s specification; hence the instant application claims common subject matter and the instant claims are anticipated by the patent claims.

Instant Claim 12: The method of claim 1, further comprising training, using one or more neural network training techniques, the neural network that represents the non-trainable algorithm prior to configuring the hardware logic to implement the neural network that represents the non-trainable algorithm.
Patent Claim 7: The method of claim 1, further comprising training, using one or more neural network training techniques, the neural network representing the traditional computer vision algorithm prior to configuring the hardware logic to implement the neural network.

Instant Claim 13: The method of claim 1, wherein the mapping is automatically performed based on a library that comprises a mapping of matrix and/or vector operations to mathematically equivalent sets of one or more neural network primitives.
Patent Claim 8: The method of claim 1, wherein the mapping is automatically performed based on a library that comprises a mapping of traditional computer vision algorithm operations to mathematically equivalent sets of one or more neural network primitives.

Instant Claim 15: The method of claim 1, wherein the hardware logic capable of implementing a neural network comprises a neural network accelerator.
Patent Claim 12: The method of claim 1, wherein the hardware logic capable of implementing a neural network comprises a neural network accelerator.

Instant Claim 16: The method of claim 15, wherein the neural network accelerator is embodied in hardware on an integrated circuit.
Patent Claim 13: The method of claim 12, wherein the neural network accelerator is embodied in hardware on an integrated circuit.
Instant Claim 17: A system for implementing a non-trainable algorithm formed of one or more matrix and/or vector operations as a neural network, the system comprising: hardware logic capable of implementing a neural network; and a converter configured to: receive a definition of the non-trainable algorithm that identifies a sequence of one or more matrix and/or vector operations which form the non-trainable algorithm; map each of the one or more matrix and/or vector operations to a set of one or more neural network primitives that is mathematically equivalent to that matrix and/or vector operation; link the set of one or more neural network primitives mapped to each of the one or more matrix and/or vector operations according to the sequence to form a neural network that represents the non-trainable algorithm; and configure the hardware logic capable of implementing a neural network to implement the neural network that represents the non-trainable algorithm.

Patent Claim 15: A system for implementing a traditional computer vision algorithm as a neural network, the system comprising: hardware logic capable of implementing a neural network; and a converter configured to: receive a definition of the traditional computer vision algorithm that identifies a sequence of one or more traditional computer vision algorithm operations which form the traditional computer vision algorithm; map each of the one or more traditional computer vision algorithm operations to a set of one or more neural network primitives that is mathematically equivalent to that traditional computer vision algorithm operation; link the set of one or more neural network primitives mapped to each traditional computer vision algorithm operation according to the sequence to form a neural network representing the traditional computer vision algorithm; and configure the hardware logic capable of implementing a neural network to implement the neural network that represents the traditional computer vision algorithm.
Examiner notes that Applicant’s specification Par. [0048] states “In the context of this description, traditional computer vision algorithms can be considered to be any computer vision algorithms which are not in the form of a trainable neural network, e.g. relying on deep or shallow learning techniques.” Therefore, the Examiner asserts that such a “traditional computer vision algorithm”, as claimed by the Patent, would correspond to a “non-trainable algorithm formed of one or more matrix and/or vector operations”, as claimed by the instant claims, or vice versa. The patent claims are essentially a “species” of the generic invention of the instant claims. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Although the claims at issue are not identical, they are not patentably distinct from each other because a “traditional computer vision algorithm” directly correlates to a “non-trainable algorithm formed of one or more matrix and/or vector operations”, as also supported by Applicant’s specification; hence the instant application claims common subject matter and the instant claims are anticipated by the patent claims.

Instant Claim 18: A neural network accelerator configured to implement a neural network that represents a non-trainable algorithm that is formed by a sequence of one or more matrix and/or vector operations, the neural network having been generated by mapping each matrix and/or vector operation forming the traditional computer vision algorithm to a mathematically equivalent set of one or more neural network primitives and linking the one or more neural network primitives mapped to each of the one or more matrix and/or vector operations according to the sequence to form the neural network that represents the non-trainable algorithm.

Patent Claim 16: A neural network accelerator configured to implement a neural network that represents a traditional computer vision algorithm that is formed by a sequence of one or more traditional computer vision algorithm operations, the neural network having been generated by mapping each traditional computer vision algorithm operation forming the traditional computer vision algorithm to a mathematically equivalent set of one or more neural network primitives and linking the one or more neural network primitives mapped to each traditional computer vision algorithm operation according to the sequence to form the neural network that represents the traditional computer vision algorithm.

Examiner notes that Applicant’s specification Par. [0048] states “In the context of this description, traditional computer vision algorithms can be considered to be any computer vision algorithms which are not in the form of a trainable neural network, e.g. relying on deep or shallow learning techniques.” Therefore, the Examiner asserts that such a “traditional computer vision algorithm”, as claimed by the Patent, would correspond to a “non-trainable algorithm formed of one or more matrix and/or vector operations”, as claimed by the instant claims, or vice versa. The patent claims are essentially a “species” of the generic invention of the instant claims. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Although the claims at issue are not identical, they are not patentably distinct from each other because a “traditional computer vision algorithm” directly correlates to a “non-trainable algorithm formed of one or more matrix and/or vector operations”, as also supported by Applicant’s specification; hence the instant application claims common subject matter and the instant claims are anticipated by the patent claims.
Instant Claim 19: A computer-implemented automated tool for forming a neural network, the automated tool having access to a library of mappings from matrix and/or vector operations to mathematically equivalent sets of one or more neural network primitives, wherein the automated tool is configured to: receive a definition of a non-trainable algorithm that identifies a sequence of one or more matrix and/or vector operations which form the non-trainable algorithm; use the library to map each of the one or more matrix and/or vector operations to a set of one or more neural network primitives that is mathematically equivalent to that matrix and/or vector operation; link the set of one or more neural network primitives mapped to each of the one or more matrix and/or vector operations according to the sequence to form a neural network that represents the non-trainable algorithm; and output a definition of the neural network that represents the non-trainable algorithm for use in configuring hardware logic to implement the neural network that represents the non-trainable algorithm.

Patent Claim 17: A computer-implemented automated tool for forming a neural network, the automated tool having access to a library of mappings from traditional computer vision algorithm operations to mathematically equivalent sets of one or more neural network primitives, wherein the automated tool is configured to: receive a definition of a traditional computer vision algorithm that identifies a sequence of one or more traditional computer vision algorithm operations which form the traditional computer vision algorithm; use the library to map each of the one or more traditional computer vision algorithm operations to a set of one or more neural network primitives that is mathematically equivalent to that traditional computer vision algorithm operation; link the set of one or more neural network primitives mapped to each computer vision algorithm operation according to the sequence to form a neural network representing the traditional computer vision algorithm; and output a definition of the neural network for use in configuring hardware logic to implement the neural network.

Examiner notes that Applicant’s specification Par. [0048] states “In the context of this description, traditional computer vision algorithms can be considered to be any computer vision algorithms which are not in the form of a trainable neural network, e.g. relying on deep or shallow learning techniques.” Therefore, the Examiner asserts that such a “traditional computer vision algorithm”, as claimed by the Patent, would correspond to a “non-trainable algorithm formed of one or more matrix and/or vector operations”, as claimed by the instant claims, or vice versa. The patent claims are essentially a “species” of the generic invention of the instant claims. It has been held that a generic invention is “anticipated” by a “species” within the scope of the generic invention. See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993).
Although the claims at issue are not identical, they are not patentably distinct from each other because a “traditional computer vision algorithm” directly correlates to a “non-trainable algorithm formed of one or more matrix and/or vector operations”, as also supported by Applicant’s specification; hence the instant application claims common subject matter and the instant claims are anticipated by the patent claims.

Instant Claim 20: A non-transitory computer readable storage medium having stored thereon computer readable instructions that, when executed at a computer system, cause the computer system to perform the method as set forth in claim 1.
Patent Claim 14: A non-transitory computer readable storage medium having stored thereon computer readable instructions that, when executed at a computer system, cause the computer system to perform the method at set forth in claim 1.

Claim Rejections - 35 USC § 112

7. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

8. Claim 18 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 18 recites the limitation "the computer vision algorithm" without any prior recitation of a computer vision algorithm. There is insufficient antecedent basis for this limitation in the claim.
As such, Examiner interprets the limitation to instead be directed to the previously recited “non-trainable algorithm”, as consistent with Claim 18 and all of Claims 1-2.

Claim Rejections - 35 USC § 101

9. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

10. Claims 17-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 17 recites “A system […] comprising: hardware logic capable of implementing a neural network; and a converter […]”, Claim 18 recites “A neural network accelerator […]”, and Claim 19 recites “A computer-implemented automated tool […]”. The means to implement the hardware logic and converter of Claim 17, the accelerator of Claim 18, and the automated tool of Claim 19 may be interpreted as software per se, as these components are not tangibly embodied on any sort of physical medium. Applicant is encouraged to amend the claims to provide further details regarding the hardware and/or tangible embodiment of these components to avoid the interpretation as software per se. Respective dependent claims are rejected under 35 U.S.C. 101 by virtue of their dependency.

11. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1:

Step 1: Claim 1 is a method type claim. Therefore, Claims 1-16 and 20 are directed to either a process, machine, manufacture, or composition of matter.

Step 2A Prong 1: If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by mathematical calculation but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas.

implementing a non-trainable algorithm formed of matrix and/or vector operations as a neural network (mental process – implementing a non-trainable algorithm formed of matrix and/or vector operations as a neural network may be performed manually by a user observing/analyzing the matrix and/or vector operations that form the non-trainable algorithm and using judgement/evaluation to order and implement the operations in a sequence that forms a neural network (with the aid of pen and paper))

mapping each of the one or more matrix and/or vector operations to a set of one or more neural network primitives that is mathematically equivalent to that matrix and/or vector operation (mental process – mapping each of the one or more matrix and/or vector operations to a set of one or more neural network primitives that is mathematically equivalent may be performed manually by a user observing/analyzing both the matrix and/or vector operations and the set of neural network primitives and using judgement/evaluation to map the matrix and/or vector operations to a set of one or more neural network primitives. For example, the user may observe matrix and/or vector operations such as an activation function (i.e., ReLU, sigmoid, etc.) and use judgement to map the operation comprising an activation function to an activation primitive. Similarly, the user may observe a matrix and/or vector operation such as a normalization function and use judgement to map the operation comprising a normalization function to a normalization primitive. See Applicant’s specification Par. [0002-0007], which details the types of matrix and/or vector operations which correlate to a relevant primitive)

linking the set of one or more neural network primitives mapped to each of the one or more matrix and/or vector operations according to the sequence to form a neural network that represents the non-trainable algorithm (mental process – linking the set of primitives mapped to each of the one or more matrix and/or vector operations according to the sequence to form a neural network that represents the non-trainable algorithm may be performed manually by a user observing/analyzing the mapped primitives and the received definition that identifies the sequence of operations and accordingly using judgement/evaluation to link the primitives according to the sequence to form a neural network (with the aid of pen and paper). For example, the user may observe/analyze the definition identifying a sequence comprising an input layer followed by a convolution layer, followed by an activation layer, etc. and use judgement/evaluation to link the according primitives (convolution primitive, activation primitive, etc.) to the identified sequence (See Applicant’s specification Par. [0002-0007], which detail the types of matrix and/or vector operations and how they may be ordered in a sequence))

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
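The mathematical equivalence the analysis above turns on, that a fixed, non-trainable operation can be reproduced exactly by a neural-network primitive, can be seen with a toy example: a Sobel horizontal-gradient filter, a classic traditional computer vision operation, is just a convolution with fixed weights. This sketch is illustrative only and is not taken from the application or the Office Action:

```python
# Illustration: a Sobel horizontal-gradient filter (a non-trainable computer
# vision operation) is mathematically a convolution primitive with fixed
# weights. Pure Python; all names are hypothetical.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation with a 3x3 kernel, as a convolution
    primitive in most neural-network frameworks computes it."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(3) for b in range(3)))
        out.append(row)
    return out

# A vertical edge: intensity steps from 0 to 9 between columns 1 and 2,
# so the fixed-weight "convolution primitive" responds strongly everywhere.
image = [[0, 0, 9, 9]] * 4
print(convolve2d_valid(image, SOBEL_X))  # [[36, 36], [36, 36]]
```

The same result would come from a convolution layer whose weights are frozen to SOBEL_X, which is the sense in which the claims treat such operations as mappable to neural network primitives.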
Additional elements:

receiving a definition of the non-trainable algorithm that identifies a sequence of one or more matrix and/or vector operations which form the non-trainable algorithm (Adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g))

configuring hardware logic capable of implementing a neural network to implement the neural network that represents the non-trainable algorithm (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Examiner’s note: high-level recitation of generically “configuring hardware logic […] to implement the neural network […]” without significantly more)

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Additional elements:

receiving a definition of the non-trainable algorithm that identifies a sequence of one or more matrix and/or vector operations which form the non-trainable algorithm (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)

configuring hardware logic capable of implementing a neural network to implement the neural network that represents the non-trainable algorithm (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Examiner’s note: high-level recitation of generically “configuring hardware logic […] to implement the neural network […]” without significantly more. This cannot provide an inventive concept)

For the reasons above, Claim 1 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent claims 2-16 and 20. The additional limitations of the dependent claims are addressed below.

Regarding Claim 2:

Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 2 depends on.
Step 2A Prong 2 & Step 2B: wherein each set of one or more neural network primitives comprises neural network primitives from a group of neural network primitives supported by the hardware logic capable of implementing a neural network (Field of Use – limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying that each set of the one or more neural network primitives comprises primitives from a group supported by the hardware logic does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h))

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 3:

Step 2A Prong 1: See the rejection of Claim 2 above, which Claim 3 depends on.

Step 2A Prong 2 & Step 2B: wherein the group of neural network primitives comprises a convolution primitive (Field of Use – limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying that the group of primitives comprises a convolution primitive does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h))

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 4:

Step 2A Prong 1: See the rejection of Claim 2 above, which Claim 4 depends on.

Step 2A Prong 2 & Step 2B: wherein the group of neural network primitives comprises a normalisation primitive and/or a fully-connected primitive (Field of Use – limitations that merely indicate a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application; in this case, specifying that the group of primitives comprises a normalization and/or fully-connected primitive does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h))

Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 5:

Step 2A Prong 1: See the rejection of Claim 2 above, which Claim 5 depends on.
Step 2A Prong 2 & Step 2B: wherein the group of neural network primitives comprises a pooling primitive (Field of Use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception does not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application; in this case specifying that the group of primitives comprises a pooling primitive does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h)) Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of claim 1. Regarding Claim 6: Step 2A Prong 1: See the rejection of Claim 2 above, which Claim 6 depends on. Step 2A Prong 2 & Step 2B: wherein the group of neural network primitives comprises an activation primitive (Field of Use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception does not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application; in this case specifying that the group of primitives comprises an activation primitive does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h)) Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of claim 1. Regarding Claim 7: Step 2A Prong 1: See the rejection of Claim 2 above, which Claim 7 depends on. 
Step 2A Prong 2 & Step 2B: wherein the group of neural network primitives comprises an element-wise operations primitive (Field of Use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application; in this case, specifying that the group of primitives comprises an element-wise operations primitive does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h)). Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 8: Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 8 depends on.

wherein mapping a matrix or vector operation to a set of one or more neural network primitives that is mathematically equivalent to that matrix or vector operation comprises identifying the set of one or more neural network primitives and identifying a specific implementation of a plurality of alternative implementations for at least one of the neural network primitives in the set (mental process – identifying the set of one or more neural network primitives and identifying a specific implementation of a plurality of alternative implementations for at least one of the neural network primitives in the set may be performed manually by a user observing/analyzing the set of primitives and accordingly using judgement/evaluation to identify a specific implementation of a plurality for at least one primitive based on said analysis)

Step 2A Prong 2 & Step 2B: Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do
not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 9: Step 2A Prong 1: See the rejection of Claim 8 above, which Claim 9 depends on.

Step 2A Prong 2 & Step 2B: wherein the at least one of the neural network primitives is an activation primitive that can implement any one of a ReLU function, a PReLU function, and one or more alternative non-linear functions (Field of Use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application; in this case, specifying that at least one of the neural network primitives is an activation primitive that can implement any of a ReLU/PReLU/alternative non-linear function does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h)). Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 10: Step 2A Prong 1: See the rejection of Claim 8 above, which Claim 10 depends on.
Step 2A Prong 2 & Step 2B: wherein the at least one of the neural network primitives is a pooling primitive that can implement any one of: a max poling function, a mean pooling function, and one or more other pooling functions (Field of Use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application; in this case, specifying that at least one of the neural network primitives is a pooling primitive that can implement any of a max pooling/mean pooling/other pooling function does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h)). Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 11: Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 11 depends on.
Step 2A Prong 2 & Step 2B: wherein the non-trainable algorithm is one of: a scientific computing algorithm, a computer game animation algorithm, an audio processing algorithm, a signal processing algorithm, and a ray tracing algorithm (Field of Use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application; in this case, specifying that the non-trainable algorithm is one of a scientific computing algorithm/computer game animation algorithm/audio processing algorithm/signal processing algorithm/ray tracing algorithm does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h)). Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 12: Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 12 depends on.

Step 2A Prong 2 & Step 2B: training, using one or more neural network training techniques, the neural network that represents the non-trainable algorithm prior to configuring the hardware logic to implement the neural network that represents the non-trainable algorithm (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner's note: high-level recitation of training a machine learning model with generic techniques without significantly more.
This cannot provide an inventive concept.) Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 13: Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 13 depends on.

Step 2A Prong 2 & Step 2B: wherein the mapping is automatically performed based on a library that comprises a mapping of matrix and/or vector operations to mathematically equivalent sets of one or more neural network primitives (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner's note: high-level recitation of mapping automatically “based on” a library without significantly more. This cannot provide an inventive concept.) Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 14: Step 2A Prong 1: See the rejection of Claim 13 above, which Claim 14 depends on.
when the library comprises more than one mapping for a matrix and/or vector operation of the one or more matrix and/or vector operations forming the non-trainable algorithm, selecting one of the more than one mapping based on one or more of: the hardware logic capable of implementing a neural network, one or more other matrix and/or vector operations forming the non-trainable algorithm, and the set of one or more neural network primitives selected for one or more other matrix and/or vector operations forming the non-trainable algorithm (mental process – selecting one of the more than one mappings, when the library comprises more than one mapping, may be performed manually by a user observing/analyzing the plurality of mappings, the hardware logic, the one or more matrix/vector operations, and the primitives, and using judgement/evaluation to select one mapping based on said analysis)

Step 2A Prong 2 & Step 2B: Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 15: Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 15 depends on.
Step 2A Prong 2 & Step 2B: wherein the hardware logic capable of implementing a neural network comprises a neural network accelerator (Field of Use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application; in this case, specifying that the hardware comprises an accelerator does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h)). Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Regarding Claim 16: Step 2A Prong 1: See the rejection of Claim 15 above, which Claim 16 depends on.

Step 2A Prong 2 & Step 2B: wherein the neural network accelerator is embodied in hardware on an integrated circuit (Field of Use – limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application; in this case, specifying that the accelerator is embodied in hardware on an integrated circuit does not integrate the exception into a practical application nor amount to significantly more – See MPEP 2106.05(h)). Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Independent Claim 17 recites substantially the same limitations as Claim 1, in the form of a system, including generic computer components.
The claim is likewise directed to performing mental processes without significantly more, and it is therefore rejected under the same rationale. Claim 17 is also a system-type claim but reads as software per se. For the reasons above, Claim 17 is rejected as being directed to an abstract idea without significantly more.

Regarding Claim 18: Step 1: Claim 18 is a system-type claim but reads as software per se – see the preceding rejection of Claim 17 above.

2A Prong 1: If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by mathematical calculation but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas.
implement a neural network that represents a non-trainable algorithm that is formed by a sequence of one or more matrix and/or vector operations (mental process – implementing a non-trainable algorithm formed of matrix and/or vector operations as a neural network may be performed manually by a user observing/analyzing the matrix and/or vector operations that form the non-trainable algorithm and using judgement/evaluation to order and implement the operations in a sequence that forms a neural network (with the aid of pen and paper))

the neural network having been generated by mapping each matrix and/or vector operation forming the traditional computer vision algorithm to a mathematically equivalent set of one or more neural network primitives (mental process – mapping each of the one or more matrix and/or vector operations to a set of one or more neural network primitives that is mathematically equivalent may be performed manually by a user observing/analyzing both the matrix and/or vector operations and the set of neural network primitives and using judgement/evaluation to map the matrix and/or vector operations to a set of one or more neural network primitives. For example, the user may observe matrix and/or vector operations such as an activation function (e.g., ReLU, sigmoid) and use judgement to map the operation comprising an activation function to an activation primitive. Similarly, the user may observe a matrix and/or vector operation such as a normalization function and use judgement to map the operation comprising a normalization function to a normalization primitive. See Applicant’s specification Par.
[0002-0007] which detail the types of matrix and/or vector operations which correlate to a relevant primitive)

linking the one or more neural network primitives mapped to each of the one or more matrix and/or vector operations according to the sequence to form the neural network that represents the non-trainable algorithm (mental process – linking the set of primitives mapped to each of the one or more matrix and/or vector operations according to the sequence to form a neural network that represents the non-trainable algorithm may be performed manually by a user observing/analyzing the mapped primitives and the received definition that identifies the sequence of operations and accordingly using judgement/evaluation to link the primitives according to the sequence to form a neural network (with the aid of pen and paper). For example, the user may observe/analyze the definition identifying a sequence comprising an input layer followed by a convolution layer, followed by an activation layer, etc., and use judgement/evaluation to link the corresponding primitives (convolution primitive, activation primitive, etc.) to the identified sequence (See Applicant’s specification Par. [0002-0007] which detail the types of matrix and/or vector operations and how they may be ordered in a sequence))

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements: neural network accelerator (recited at a high level of generality (i.e., as a generic accelerator configured to perform the specific operations of claim 18) such that it amounts to no more than mere instructions to apply the exception using generic computer components).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements: neural network accelerator (mere instructions to apply the exception using generic computer components cannot provide an inventive concept.) For the reasons above, Claim 18 is rejected as being directed to an abstract idea without significantly more.

Regarding Claim 19: Step 1: Claim 19 is a system-type claim but reads as software per se – see the preceding rejection of Claim 18 above.

2A Prong 1: If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation by mathematical calculation but for the recitation of generic computer components, then it falls within the “Mathematical Concepts” grouping of abstract ideas.

[…] forming a neural network (mental process – forming a neural network may be performed manually by a user observing/analyzing a plurality of matrix and/or vector operations and accordingly ordering them in sequence to form a neural network (with the aid of pen and paper))

use the library to map each of the one or more matrix and/or vector operations to a set of one or more neural network primitives that is mathematically equivalent to that matrix and/or vector operation (mental process – mapping each of the one or more matrix and/or vector operations to a set of one or more neural network primitives that is mathematically equivalent may be performed manually by a user observing/analyzing the library, the matrix and/or vector operations, and the set of neural network primitives and using judgement/evaluation to map the matrix and/or vector operations to a set of one or more neural network primitives based on referencing the library.
For example, the user may observe matrix and/or vector operations such as an activation function (e.g., ReLU, sigmoid) and use judgement to map the operation comprising an activation function to an activation primitive. Similarly, the user may observe a matrix and/or vector operation such as a normalization function and use judgement to map the operation comprising a normalization function to a normalization primitive. See Applicant’s specification Par. [0002-0007] which detail the types of matrix and/or vector operations which correlate to a relevant primitive)

link the set of one or more neural network primitives mapped to each of the one or more matrix and/or vector operations according to the sequence to form a neural network that represents the non-trainable algorithm (mental process – linking the set of primitives mapped to each of the one or more matrix and/or vector operations according to the sequence to form a neural network that represents the non-trainable algorithm may be performed manually by a user observing/analyzing the mapped primitives and the received definition that identifies the sequence of operations and accordingly using judgement/evaluation to link the primitives according to the sequence to form a neural network (with the aid of pen and paper). For example, the user may observe/analyze the definition identifying a sequence comprising an input layer followed by a convolution layer, followed by an activation layer, etc., and use judgement/evaluation to link the corresponding primitives (convolution primitive, activation primitive, etc.) to the identified sequence (See Applicant’s specification Par. [0002-0007] which detail the types of matrix and/or vector operations and how they may be ordered in a sequence))

2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements: a computer-implemented automated tool […] (recited at a high level of generality (i.e., as a generic automated tool for performing the specific operations of claim 19) such that it amounts to no more than mere instructions to apply the exception using generic computer components)

[…] the automated tool having access to a library of mappings from matrix and/or vector operations to mathematically equivalent sets of one or more neural network primitives […] (Adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g))

receive a definition of a non-trainable algorithm that identifies a sequence of one or more matrix and/or vector operations which form the non-trainable algorithm (Adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g))

output a definition of the neural network that represents the non-trainable algorithm […] (Adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g))

configuring hardware logic to implement the neural network that represents the non-trainable algorithm (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner’s note: high-level recitation of generically “configuring hardware logic […] to implement the neural network […]” without significantly more)

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements: a computer-implemented automated tool […] (mere instructions to apply the exception using generic computer components cannot provide an inventive concept)

[…] the automated tool having access to a library of mappings from matrix and/or vector operations to mathematically equivalent sets of one or more neural network primitives […] (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)

receive a definition of a non-trainable algorithm that identifies a sequence of one or more matrix and/or vector operations which form the non-trainable algorithm (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim). Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)

output a definition of the neural network that represents the non-trainable algorithm […] (MPEP 2106.05(d)(II) indicates that merely “Receiving or transmitting data over a network” is a well-understood, routine, conventional function when it is claimed in a merely generic manner (as it is in the present claim).
Thereby, a conclusion that the claimed limitation is well-understood, routine, conventional activity is supported under Berkheimer)

configuring hardware logic to implement the neural network that represents the non-trainable algorithm (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f) – Examiner’s note: high-level recitation of generically “configuring hardware logic […] to implement the neural network […]” without significantly more)

For the reasons above, Claim 19 is rejected as being directed to an abstract idea without significantly more.

Regarding Claim 20: Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 20 depends on.

Step 2A Prong 2 & Step 2B: a non-transitory computer readable storage medium having stored thereon computer readable instructions that, when executed at a computer system, cause the computer system to perform the method as set forth in claim 1 (mere instructions to apply the exception using generic computer components cannot provide an inventive concept). Accordingly, under Step 2A Prong 2 and Step 2B, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, as discussed above in the rejection of Claim 1.

Claim Rejections - 35 USC § 103

12. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

13. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Patel et al. (hereinafter Patel) (US PG-PUB 20180082172), in view of Srivastava et al. (hereinafter Srivastava) (US PG-PUB 20190205747).

Regarding Claim 1, Patel teaches a method of implementing a non-trainable algorithm formed of matrix and/or vector operations as a neural network (Patel, Par. [0012], “The present invention relates to the field of machine learning, and more particularly, to a compilation procedure for converting a probabilistic description of an inference task into a neural network for realizing the inference task.”, thus, methods of implementing a non-trainable algorithm formed of matrix and/or vector operations as a neural network are disclosed. Examiner asserts that instant claim 11 defines the non-trainable algorithm to be one of an audio or signal processing algorithm – similarly, Patel Claims 5 & 6 explicitly limit the inference task to audio and/or signal processing, hence the probabilistic description of an audio/signal processing inference task (comprising input/measurement data in the form of a vector fed into the network for further processing operations – See Patel Par.
[0020]) is analogous to the claimed non-trainable algorithm comprising matrix and/or vector operations), the method comprising: receiving a definition of the non-trainable algorithm that identifies a sequence of one or more matrix and/or vector operations which form the non-trainable algorithm (Patel, Par. [0408-0409], “At 1010, model input is received, where the model input specifies a generative probabilistic model that characterizes a conditional probability distribution for measurement data given a set of latent variables. […] At 1015, a factor graph corresponding to the generative probabilistic model may be generated, wherein the factor graph includes a measurement data node, latent variable nodes and factor nodes.”, thus, a model input specifying the model that characterizes the algorithm is received and then a factor graph identifying a sequence and dependency of one or more operations forming this algorithm is obtained. This is better depicted by Figure 2B, which according to Par. [0028] depicts the factor graph representation of the deep rendering model designed specifically for inference algorithms such as the max-sum message passing); mapping each of the one or more matrix and/or vector operations to a set of one or more neural network primitives (Patel discloses the matrix and/or vector operations being mapped to one or more layers and/or operations of a neural network (analogous to a primitive), but does not explicitly recite the limitation “a set of one or more neural network primitives” – therefore, See introduction of Srivastava reference below for explicit recitation of “a set of one or more neural network primitives”) that is mathematically equivalent to that matrix and/or vector operation (Patel, Par. 
[0410], “At 1020, each factor node of the factor graph may be processed (e.g., expanded) based on a specified inference task and a specified kind of message passing algorithm, wherein each factor node is processed to determine (or expanded into) a corresponding sequence of arithmetic operations, e.g., as variously described above. The factor graph and the sequences of arithmetic operations specify a structure of a neural network for performance of the inference task. Each arithmetic operation of each node-specific sequence may correspond to a respective layer of the neural network. For example, the “max” element of a given node may correspond to a max pooling layer. As another example, the “sum” element of a given node may correspond to a convolutional layer of the neural network.”, therefore, one or more matrix and/or vector operations (arithmetic operations) are mapped to a set of one or more neural network layers that are mathematically equivalent to that matrix and/or vector operation. For example, the “max” element may correspond to a max pooling layer, the “sum” element may correspond to a convolutional layer); linking the set of one or more neural network primitives (Patel discloses the matrix and/or vector operations being mapped to one or more layers and/or operations of a neural network (analogous to a primitive), but does not explicitly recite the limitation “a set of one or more neural network primitives” – therefore, See introduction of Srivastava reference below for explicit recitation of “a set of one or more neural network primitives”) mapped to each of the one or more matrix and/or vector operations according to the sequence to form a neural network that represents the non-trainable algorithm (Patel, Par. [0413], “At 1030, information specifying a trained state of the neural network may be stored in memory, where the information includes the sequences of arithmetic operations and the determined parameter values. 
The information may also include structural information specifying the structure of the factor graph” & Par. [0415], “In some embodiments, the method 1000 may also include executing the neural network based on the stored information.”, therefore, the set of one or more neural network layers/operations mapped to each of one or more matrix and/or vector operations are linked according to the sequence to form a trained neural network); and configuring hardware logic capable of implementing a neural network to implement the neural network that represents the non-trainable algorithm (Patel, Par. [0407], “The method 1000 may be implemented by a computer system (or more generally, by a set of one or more computer systems), by one or more programmable hardware elements such as FPGAs, by dedicated digital circuitry such as one or more ASICs, or by any combination of the foregoing.”, therefore, hardware logic (comprising one or more hardware elements) is configured for implementing a neural network representing the non-trainable algorithm according to the aforementioned methods (See Patel Figure 10 which depicts the method of constructing a neural network)).

While Patel discloses the matrix and/or vector operations being mapped to one or more layers and/or operations of a neural network (See Patel Par. [0410]), Patel does not explicitly recite “a set of one or more neural network primitives.” However, Srivastava explicitly teaches a set of one or more neural network primitives (Srivastava, Par. [0173], “The machine learning framework 1504 can provide a library of machine learning primitives.
Machine learning primitives are basic operations that are commonly performed by machine learning algorithms.”, therefore, a set of one or more neural network primitives is disclosed).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of implementing a non-trainable algorithm formed of matrix and/or vector operations as a neural network, as disclosed by Patel, to include a set of one or more neural network primitives, as disclosed by Srivastava. One of ordinary skill in the art would have been motivated to make this modification to enable the use of a library of neural network primitives, which may provide highly optimized, pre-built implementations of essential neural network operations – hence reducing development time and maximizing performance across diversified hardware (Srivastava, Par. [0173], “Without the machine learning framework 1504, developers of machine learning algorithms would be required to create and optimize the main computational logic associated with the machine learning algorithm, then re-optimize the computational logic as new parallel processors are developed. Instead, the machine learning application can be configured to perform the necessary computations using the primitives provided by the machine learning framework 1504.”).

Regarding Claim 2, Patel in view of Srivastava teaches the method of claim 1, wherein each set of one or more neural network primitives comprises neural network primitives from a group of neural network primitives supported by the hardware logic capable of implementing a neural network (Srivastava, Par. [0173], “Hardware acceleration for the machine learning application 1502 can be enabled via a machine learning framework 1504. The machine learning framework 1504 can provide a library of machine learning primitives.
Machine learning primitives are basic operations that are commonly performed by machine learning algorithms. Without the machine learning framework 1504, developers of machine learning algorithms would be required to create and optimize the main computational logic associated with the machine learning algorithm, then re-optimize the computational logic as new parallel processors are developed. Instead, the machine learning application can be configured to perform the necessary computations using the primitives provided by the machine learning framework 1504.”, therefore, the set of one or more neural network primitives comprises primitives from a group supported by the hardware logic capable of implementing a neural network). The reasons of obviousness have been noted in the rejection of Claim 1 above and applicable herein. Regarding Claim 3, Patel in view of Srivastava teaches the method of claim 2, wherein the group of neural network primitives comprises a convolution primitive (Srivastava, Par. [0173], “Exemplary primitives include tensor convolutions, activation functions, and pooling, which are computational operations that are performed while training a convolutional neural network (CNN).”, thus, the group of neural network primitives comprises a convolution primitive). The reasons of obviousness have been noted in the rejection of Claim 1 above and applicable herein. Regarding Claim 4, Patel in view of Srivastava teaches the method of claim 2, wherein the group of neural network primitives comprises a normalisation primitive and/or a fully-connected primitive (Patel, Par. [0079], “We will now demonstrate that the sequence of operations in the MS-RMC in Eq. 7 coincides exactly with the operations involved in one layer of a DCN (or, more generally, a max-out neural network (10)): image normalization, linear template matching, thresholding, and max pooling. See FIG. 
2C.”, thus, Patel discloses the matrix and/or vector operations being mapped to one or more layers and/or operations of a neural network (although Patel does not explicitly recite the term neural network primitives). Hence, Patel teaches the group of neural network operations comprising a normalization primitive). Regarding Claim 5, Patel in view of Srivastava teaches the method of claim 2, wherein the group of neural network primitives comprises a pooling primitive (Srivastava, Par. [0173], “Exemplary primitives include tensor convolutions, activation functions, and pooling, which are computational operations that are performed while training a convolutional neural network (CNN).”, thus, the group of neural network primitives comprises a pooling primitive). The reasons of obviousness have been noted in the rejection of Claim 1 above and applicable herein. Regarding Claim 6, Patel in view of Srivastava teaches the method of claim 2, wherein the group of neural network primitives comprises an activation primitive (Srivastava, Par. [0173], “Exemplary primitives include tensor convolutions, activation functions, and pooling, which are computational operations that are performed while training a convolutional neural network (CNN).”, thus, the group of neural network primitives comprises an activation primitive). The reasons of obviousness have been noted in the rejection of Claim 1 above and applicable herein. Regarding Claim 7, Patel in view of Srivastava teaches the method of claim 2, wherein the group of neural network primitives comprises an element-wise operations primitive (Srivastava, Par. [0173], “The machine learning framework 1504 can also provide primitives to implement basic linear algebra subprograms performed by many machine-learning algorithms, such as matrix and vector operations.”, thus, the group of neural network primitives comprises an element-wise operations primitive). 
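The equivalences recited across Claims 3 through 7 (convolution, pooling, activation, and element-wise primitives standing in for plain matrix/vector operations) can be illustrated with a minimal NumPy sketch. This is an illustration only; the function names below are hypothetical and are not drawn from Patel or Srivastava.

```python
import numpy as np

# Illustrative sketch (not from either cited reference): two ordinary
# vector operations re-expressed through neural-network primitives.

def conv_primitive(x, kernel):
    """Valid-mode 1-D convolution (correlation form), standing in for a
    hardware convolution primitive."""
    n, k = len(x), len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(n - k + 1)])

def relu_primitive(x):
    """Activation primitive f(x) = max(0, x)."""
    return np.maximum(0.0, x)

x = np.array([1.0, -2.0, 3.0, 4.0])

# A sliding-window sum (a plain vector operation) is mathematically
# equivalent to convolution with an all-ones kernel.
windowed_sum = np.array([x[i] + x[i + 1] for i in range(len(x) - 1)])
assert np.allclose(windowed_sum, conv_primitive(x, np.ones(2)))

# An element-wise clamp at zero is exactly the ReLU activation primitive.
assert np.allclose(np.where(x > 0, x, 0.0), relu_primitive(x))
```

The same pattern extends to the pooling primitive of Claim 5, where a windowed maximum over the input plays the role of the plain vector operation.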
The reasons of obviousness have been noted in the rejection of Claim 1 above and applicable herein. Regarding Claim 8, Patel in view of Srivastava teaches the method of claim 1, wherein mapping a matrix or vector operation to a set of one or more neural network primitives that is mathematically equivalent to that matrix or vector operation comprises identifying the set of one or more neural network primitives and identifying a specific implementation of a plurality of alternative implementations for at least one of the neural network primitives in the set (Patel, Par. [0201], “Thus far, we have talked about estimating the right parameters of fixed architectures. But another problem is actually finding good architectures—i.e. structure learning. Currently, determining the right deep architectures quite difficult, typically requiring exhaustive search over a large space (e.g. number of layers, filters per layer, filter sizes, layer types) and lots of intuition. In this section, armed with the DRM, we show how to infer such parameters using the EM algorithm.”, therefore, mapping includes identifying different architectures (including different layer types which may correspond to different neural network primitives) and identifying a specific implementation of the plurality of different architectures). Regarding Claim 9, Patel in view of Srivastava teaches the method of claim 8, wherein the at least one of the neural network primitives is an activation primitive that can implement any one of a ReLU function, a PReLU function, and one or more alternative non-linear functions (Srivastava, Par. [0187], “In the detector stage 1618, each linear activation is processed by a non-linear activation function. The non-linear activation function increases the nonlinear properties of the overall network without affecting the receptive fields of the convolution layer. Several types of non-linear activation functions may be used. 
One particular type is the rectified linear unit (ReLU), which uses an activation function defined as ƒ(x)=max(0,x), such that the activation is thresholded at zero.”, thus, at least one of the neural network primitives is an activation primitive that can implement any one of a ReLU/PReLU/alternative non-linear function). The reasons of obviousness have been noted in the rejection of Claim 1 above and applicable herein. Regarding Claim 10, Patel in view of Srivastava teaches the method of claim 8, wherein the at least one of the neural network primitives is a pooling primitive that can implement any one of: a max poling function, a mean pooling function, and one or more other pooling functions (Srivastava, Par. [0188], “The pooling stage 1620 uses a pooling function that replaces the output of the second convolutional layer 1606 with a summary statistic of the nearby outputs. The pooling function can be used to introduce translation invariance into the neural network, such that small translations to the input do not change the pooled outputs. Invariance to local translation can be useful in scenarios where the presence of a feature in the input data is more important than the precise location of the feature. Various types of pooling functions can be used during the pooling stage 1620, including max pooling, average pooling, and L2-norm pooling.”, therefore, at least one of the neural network primitives is a pooling primitive that can implement any one of a max pooling/mean pooling/other pooling function). The reasons of obviousness have been noted in the rejection of Claim 1 above and applicable herein. Regarding Claim 11, Patel in view of Srivastava teaches the method of claim 1, wherein the non-trainable algorithm is one of: a scientific computing algorithm, a computer game animation algorithm, an audio processing algorithm, a signal processing algorithm, and a ray tracing algorithm (Patel, Par. 
[0429], “In some embodiments, the operational measurement data represents a measured audio signal, and the inference data represents at least one of the following: a category to which the audio signal belongs; phonemes of a speech signal present in the audio signal; words being spoken in the audio signal; a determination of a language being spoken in the audio signal; […]”, thus, the non-trainable algorithm may be one of an audio processing and/or signal processing algorithm). Regarding Claim 12, Patel in view of Srivastava teaches the method of claim 1, further comprising training, using one or more neural network training techniques, the neural network that represents the non-trainable algorithm prior to configuring the hardware logic to implement the neural network that represents the non-trainable algorithm (Patel, Par. [0413], “At 1030, information specifying a trained state of the neural network may be stored in memory, where the information includes the sequences of arithmetic operations and the determined parameter values. The information may also include structural information specifying the structure of the factor graph.”, thus, the neural network representing the non-trainable algorithm (audio processing algorithm) may be trained prior to configuring the hardware logic to implement the neural network, as the trained state of the neural network may be stored in memory). Regarding Claim 13, Patel in view of Srivastava teaches the method of claim 1, wherein the mapping is automatically performed based on a library that comprises a mapping of matrix and/or vector operations to mathematically equivalent sets of one or more neural network primitives (Srivastava, Par. [0173], “The machine learning framework 1504 can provide a library of machine learning primitives. 
Machine learning primitives are basic operations that are commonly performed by machine learning algorithms.”, thus, the mapping may be automatically performed based on a library that comprises a mapping of matrix/vector operations to mathematically equivalent sets of primitives). The reasons of obviousness have been noted in the rejection of Claim 1 above and applicable herein. Regarding Claim 14, Patel in view of Srivastava teaches the method of claim 13, further comprising, when the library comprises more than one mapping for a matrix and/or vector operation of the one or more matrix and/or vector operations forming the non-trainable algorithm, selecting one of the more than one mapping based on one or more of: the hardware logic capable of implementing a neural network, one or more other matrix and/or vector operations forming the non-trainable algorithm, and the set of one or more neural network primitives selected for one or more other matrix and/or vector operations forming the non-trainable algorithm (Srivastava, Par. [0173], “Hardware acceleration for the machine learning application 1502 can be enabled via a machine learning framework 1504. The machine learning framework 1504 can provide a library of machine learning primitives. Machine learning primitives are basic operations that are commonly performed by machine learning algorithms. Without the machine learning framework 1504, developers of machine learning algorithms would be required to create and optimize the main computational logic associated with the machine learning algorithm, then re-optimize the computational logic as new parallel processors are developed. Instead, the machine learning application can be configured to perform the necessary computations using the primitives provided by the machine learning framework 1504.”, therefore, the set of one or more neural network primitives comprises primitives from a group supported by the hardware logic capable of implementing a neural network). 
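The mapping selection contemplated by Claims 13 and 14 (a library that may hold several candidate primitive sequences per operation, with one candidate chosen according to what the target hardware supports) can be sketched as follows. Every identifier here is hypothetical and is not drawn from Patel or Srivastava.

```python
# Hypothetical library: each matrix/vector operation maps to one or more
# candidate sequences of neural-network primitives (Claim 13), and a
# candidate is selected based on the hardware's supported primitives
# (Claim 14). All names are illustrative.
MAPPING_LIBRARY = {
    "windowed_max": [["pooling/max"]],
    "windowed_sum": [["convolution/ones_kernel"],
                     ["fully_connected/banded_weights"]],
    "clamp_at_zero": [["activation/relu"]],
}

def select_mapping(op, supported_primitives):
    """Return the first candidate whose primitives the hardware supports."""
    for candidate in MAPPING_LIBRARY[op]:
        if all(p in supported_primitives for p in candidate):
            return candidate
    raise ValueError(f"no supported mapping for {op!r}")

# An accelerator lacking a fully-connected unit still obtains a valid
# mapping for the windowed sum via its convolution primitive.
hw = {"pooling/max", "convolution/ones_kernel", "activation/relu"}
assert select_mapping("windowed_sum", hw) == ["convolution/ones_kernel"]
```

A design note on the sketch: ordering the candidates in the library is one simple way to encode the Claim 14 preference among alternative mappings.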
The reasons of obviousness have been noted in the rejection of Claim 1 above and applicable herein. Regarding Claim 15, Patel in view of Srivastava teaches the method of claim 1, wherein the hardware logic capable of implementing a neural network comprises a neural network accelerator (Patel, Par. [0465], “In some embodiments, the computer system 1100 may include other devices, e.g., devices such as one or more graphics accelerators, one or more speakers, a sound card, a video camera and a video card, a data acquisition system.”, therefore, the hardware logic capable of implementing a neural network may comprise a neural network accelerator). Regarding Claim 16, Patel in view of Srivastava teaches the method of claim 15, wherein the neural network accelerator is embodied in hardware on an integrated circuit (Patel, Par. [0039], “The term “memory medium” includes within its scope of meaning the possibility that a given memory medium might be a union of two or more memory media that reside at different locations, e.g., in different portions of an integrated circuit or on different integrated circuits in an electronic system or on different computers in a computer network.”, thus, the neural network accelerator may be embodied in hardware on an integrated circuit). Regarding Claim 17, Patel in view of Srivastava teaches a system (Patel, Figure 11, label 1100 depicting a computer system) for implementing a non-trainable algorithm formed of one or more matrix and/or vector operations as a neural network (Patel, Par. [0012], “The present invention relates to the field of machine learning, and more particularly, to a compilation procedure for converting a probabilistic description of an inference task into a neural network for realizing the inference task.”, thus, methods of implementing a non-trainable algorithm formed of matrix and/or vector operations as a neural network are disclosed. 
Examiner asserts that instant claim 11 defines the non-trainable algorithm to be one of an audio or signal processing algorithm – similarly, Patel Claims 5 & 6 explicitly limit the inference task to audio and/or signal processing, hence the probabilistic description of an audio/signal processing inference task (comprising input/measurement data in the form of a vector fed into the network for further processing operations – See Patel Par. [0020]) is analogous to the claimed non-trainable algorithm comprising matrix and/or vector operations), the system comprising: hardware logic capable of implementing a neural network (Patel, Par. [0407], “The method 1000 may be implemented by a computer system (or more generally, by a set of one or more computer systems), by one or more programmable hardware elements such as FPGAs, by dedicated digital circuitry such as one or more ASICs, or by any combination of the foregoing.”, therefore, hardware logic (comprising one or more hardware elements) is configured for implementing a neural network representing the non-trainable algorithm according to the aforementioned methods (See Patel Figure 10 which depicts the method of constructing a neural network)); and a converter (Patel, Figure 11, label 1110 which depicts a processing unit for implementing the steps of Claim 17. Examiner notes that the ‘converter’ is interpreted as any computing-based device capable of performing the steps of the claim, as supported by Applicant’s specification Par. [0067]) configured to: […] The rest of the claim language in Claim 17 recites substantially the same limitations as Claim 1, in the form of a system, therefore it is rejected under the same rationale. The reasons of obviousness have been noted in the rejection of Claim 1 above and applicable herein. Regarding Claim 18, Patel teaches a neural network accelerator (Patel, Par. 
[0465], “In some embodiments, the computer system 1100 may include other devices, e.g., devices such as one or more graphics accelerators, one or more speakers, a sound card, a video camera and a video card, a data acquisition system.”, thus, a neural network accelerator is disclosed) configured to implement a neural network that represents a non-trainable algorithm that is formed by a sequence of one or more matrix and/or vector operations (Patel, Par. [0012], “The present invention relates to the field of machine learning, and more particularly, to a compilation procedure for converting a probabilistic description of an inference task into a neural network for realizing the inference task.”, thus, methods of implementing a non-trainable algorithm formed of matrix and/or vector operations as a neural network are disclosed. Examiner asserts that instant claim 11 defines the non-trainable algorithm to be one of an audio or signal processing algorithm – similarly, Patel Claims 5 & 6 explicitly limit the inference task to audio and/or signal processing, hence the probabilistic description of an audio/signal processing inference task (comprising input/measurement data in the form of a vector fed into the network for further processing operations – See Patel Par. [0020]) is analogous to the claimed non-trainable algorithm comprising matrix and/or vector operations), the neural network having been generated by mapping each matrix and/or vector operation forming the traditional computer vision algorithm to a mathematically equivalent set of one or more neural network (Patel, Par. [0410], “At 1020, each factor node of the factor graph may be processed (e.g., expanded) based on a specified inference task and a specified kind of message passing algorithm, wherein each factor node is processed to determine (or expanded into) a corresponding sequence of arithmetic operations, e.g., as variously described above. 
The factor graph and the sequences of arithmetic operations specify a structure of a neural network for performance of the inference task. Each arithmetic operation of each node-specific sequence may correspond to a respective layer of the neural network. For example, the “max” element of a given node may correspond to a max pooling layer. As another example, the “sum” element of a given node may correspond to a convolutional layer of the neural network.”, therefore, one or more matrix and/or vector operations (arithmetic operations) are mapped to a set of one or more neural network layers that are mathematically equivalent to that matrix and/or vector operation. For example, the “max” element may correspond to a max pooling layer, the “sum” element may correspond to a convolutional layer. Further, regarding the limitation “traditional computer vision algorithm”, See the 35 U.S.C. 112(b) rejection above – as such, the limitation is interpreted as any “non-trainable algorithm” consistent with the preceding claim language) primitives (Patel discloses the matrix and/or vector operations being mapped to one or more layers and/or operations of a neural network (analogous to a primitive), but does not explicitly recite the limitation “a set of one or more neural network primitives” – therefore, See introduction of Srivastava reference below for explicit recitation of “a set of one or more neural network primitives”) and linking the one or more neural network primitives (Patel discloses the matrix and/or vector operations being mapped to one or more layers and/or operations of a neural network (analogous to a primitive), but does not explicitly recite the limitation “a set of one or more neural network primitives” – therefore, See introduction of Srivastava reference below for explicit recitation of “a set of one or more neural network primitives”) mapped to each of the one or more matrix and/or vector operations according to the sequence to form the neural network that 
represents the non-trainable algorithm (Patel, Par. [0413], “At 1030, information specifying a trained state of the neural network may be stored in memory, where the information includes the sequences of arithmetic operations and the determined parameter values. The information may also include structural information specifying the structure of the factor graph” & Par. [0415], “In some embodiments, the method 1000 may also include executing the neural network based on the stored information.”, therefore, the set of one or more neural network layers/operations mapped to each of one or more matrix and/or vector operations are linked according to the sequence to form a trained neural network). While Patel discloses the matrix and/or vector operations being mapped to one or more layers and/or operations of a neural network (See Patel Par. [0410]), Patel does not explicitly recite a set of one or more neural network primitives. However, Srivastava explicitly teaches a set of one or more neural network primitives (Srivastava, Par. [0173], “The machine learning framework 1504 can provide a library of machine learning primitives. Machine learning primitives are basic operations that are commonly performed by machine learning algorithms.”, therefore, a set of one or more neural network primitives are disclosed). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the neural network accelerator configured to implement a neural network that represents a non-trainable algorithm formed by a sequence of one or more matrix and/or vector operations, as disclosed by Patel, to include a set of one or more neural network primitives, as disclosed by Srivastava. 
One of ordinary skill in the art would have been motivated to make this modification to enable the use of a library of neural network primitives, which may provide highly optimized, pre-built implementations of essential neural network operations – hence reducing development time and maximizing performance across diversified hardware (Srivastava, Par. [0173], “Without the machine learning framework 1504, developers of machine learning algorithms would be required to create and optimize the main computational logic associated with the machine learning algorithm, then re-optimize the computational logic as new parallel processors are developed. Instead, the machine learning application can be configured to perform the necessary computations using the primitives provided by the machine learning framework 1504.”). Regarding Claim 19, Patel teaches a computer-implemented automated tool for forming a neural network (Patel, Claim 18, “A computer system for constructing a neural network, the computer system comprising: a processor; and a memory storing program instructions, wherein the program instructions, when executed by the processor, cause the processor to: […]”, therefore, a computer-implemented automated (see title of Patel’s invention for explicit recitation of ‘automated’) tool for forming a neural network is disclosed), the automated tool having access to a library of mappings from matrix and/or vector operations to mathematically equivalent sets of one or more neural network primitives (See introduction of Srivastava reference below for teaching of a library of mappings from matrix/vector operations to mathematically equivalent sets of one or more neural network primitives), wherein the automated tool is configured to: receive a definition of a non-trainable algorithm that identifies a sequence of one or more matrix and/or vector operations which form the non-trainable algorithm (Patel, Par. 
[0408-0409], “At 1010, model input is received, where the model input specifies a generative probabilistic model that characterizes a conditional probability distribution for measurement data given a set of latent variables. […] At 1015, a factor graph corresponding to the generative probabilistic model may be generated, wherein the factor graph includes a measurement data node, latent variable nodes and factor nodes.”, thus, a model input specifying the model that characterizes the algorithm is received and then a factor graph identifying a sequence and dependency of one or more operations forming this algorithm is obtained. This is better depicted by Figure 2B, which according to Par. [0028] depicts the factor graph representation of the deep rendering model designed specifically for inference algorithms such as the max-sum message passing); use the library (See introduction of Srivastava reference below for teaching of a library of mappings from matrix/vector operations to mathematically equivalent sets of one or more neural network primitives) to map each of the one or more matrix and/or vector operations to a set of one or more neural network primitives that is mathematically equivalent to that matrix and/or vector operation (Patel, Par. [0410], “At 1020, each factor node of the factor graph may be processed (e.g., expanded) based on a specified inference task and a specified kind of message passing algorithm, wherein each factor node is processed to determine (or expanded into) a corresponding sequence of arithmetic operations, e.g., as variously described above. The factor graph and the sequences of arithmetic operations specify a structure of a neural network for performance of the inference task. Each arithmetic operation of each node-specific sequence may correspond to a respective layer of the neural network. For example, the “max” element of a given node may correspond to a max pooling layer. 
As another example, the “sum” element of a given node may correspond to a convolutional layer of the neural network.”, therefore, one or more matrix and/or vector operations (arithmetic operations) are mapped to a set of one or more neural network layers that are mathematically equivalent to that matrix and/or vector operation. For example, the “max” element may correspond to a max pooling layer, the “sum” element may correspond to a convolutional layer); link the set of one or more neural network primitives (Patel discloses the matrix and/or vector operations being mapped to one or more layers and/or operations of a neural network (analogous to a primitive), but does not explicitly recite the limitation “a set of one or more neural network primitives” – therefore, See introduction of Srivastava reference below for explicit recitation of “a set of one or more neural network primitives”) mapped to each of the one or more matrix and/or vector operations according to the sequence to form a neural network that represents the non-trainable algorithm (Patel, Par. [0413], “At 1030, information specifying a trained state of the neural network may be stored in memory, where the information includes the sequences of arithmetic operations and the determined parameter values. The information may also include structural information specifying the structure of the factor graph” & Par. [0415], “In some embodiments, the method 1000 may also include executing the neural network based on the stored information.”, therefore, the set of one or more neural network layers/operations mapped to each of one or more matrix and/or vector operations are linked according to the sequence to form a trained neural network); and output a definition of the neural network that represents the non-trainable algorithm for use in configuring hardware logic to implement the neural network that represents the non-trainable algorithm (Patel, Par. 
[0413], “At 1030, information specifying a trained state of the neural network may be stored in memory, where the information includes the sequences of arithmetic operations and the determined parameter values. The information may also include structural information specifying the structure of the factor graph.”, thus, a definition of the neural network that represents the non-trainable algorithm is outputted for use in configuring hardware logic to implement the neural network that represents the non-trainable algorithm (See Par. [0407] which describes how hardware logic may be configured to implement the neural network)). While Patel discloses the matrix and/or vector operations being mapped to one or more layers and/or operations of a neural network (See Patel Par. [0410]), Patel does not explicitly recite: (i) the automated tool having access to a library of mappings from matrix and/or vector operations to mathematically equivalent sets of one or more neural network primitives; and (ii) a set of one or more neural network primitives. However, Srivastava teaches: the automated tool having access to a library of mappings from matrix and/or vector operations to mathematically equivalent sets of one or more neural network primitives (Srivastava, Par. [0173], “Hardware acceleration for the machine learning application 1502 can be enabled via a machine learning framework 1504. The machine learning framework 1504 can provide a library of machine learning primitives. Machine learning primitives are basic operations that are commonly performed by machine learning algorithms. […] Exemplary primitives include tensor convolutions, activation functions, and pooling, which are computational operations that are performed while training a convolutional neural network (CNN). 
The machine learning framework 1504 can also provide primitives to implement basic linear algebra subprograms performed by many machine-learning algorithms, such as matrix and vector operations.”, thus, the automated tool may have access to a library of mappings from matrix/vector operations to mathematically equivalent sets of one or more neural network primitives); and a set of one or more neural network primitives (Srivastava, Par. [0173], “The machine learning framework 1504 can provide a library of machine learning primitives. Machine learning primitives are basic operations that are commonly performed by machine learning algorithms.”, therefore, a set of one or more neural network primitives are disclosed). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the computer-implemented automated tool for forming a neural network, as disclosed by Patel, to include the automated tool having access to a library of mappings from matrix and/or vector operations to mathematically equivalent sets of one or more neural network primitives and a set of one or more neural network primitives, as disclosed by Srivastava. One of ordinary skill in the art would have been motivated to make this modification to enable the use of a library of neural network primitives, which may provide highly optimized, pre-built implementations of essential neural network operations – hence reducing development time and maximizing performance across diversified hardware (Srivastava, Par. [0173], “Without the machine learning framework 1504, developers of machine learning algorithms would be required to create and optimize the main computational logic associated with the machine learning algorithm, then re-optimize the computational logic as new parallel processors are developed. 
Instead, the machine learning application can be configured to perform the necessary computations using the primitives provided by the machine learning framework 1504.”). Regarding Claim 20, Patel in view of Srivastava teaches a non-transitory computer readable storage medium having stored thereon computer readable instructions that, when executed at a computer system, cause the computer system to perform the method (Patel, Par. [0470], “In some embodiments, a non-transitory computer-readable memory medium may be configured so that it stores program instructions and/or data, where the program instructions, if executed by a computer system, cause the computer system to perform a method, e.g., any of the method embodiments described herein, or, any combination of the method embodiments described herein, or, any subset of any of the method embodiments described herein, or, any combination of such subsets.”, thus, a non-transitory computer-readable storage medium having stored thereon computer readable instructions to be executed by a computer system is disclosed) as set forth in claim 1 (See rejection of Claim 1 above – Claim 20 recites substantially the same limitations and is therefore rejected under the same rationale). Conclusion 14. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Devika S Maharaj whose telephone number is (571)272-0829. The examiner can normally be reached Monday - Thursday 8:30am - 5:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov can be reached at (571)270-3428. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DEVIKA S MAHARAJ/
Examiner, Art Unit 2123
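The §103 analysis above turns on re-expressing a traditional computer-vision operation through a generic neural-network primitive drawn from a mapping library. As a rough illustration of that idea only, the sketch below maps a classical Sobel horizontal-gradient filter onto a fixed-weight convolution primitive. All names here (`conv2d_primitive`, `PRIMITIVE_LIBRARY`, the Sobel example) are hypothetical and are not taken from the application or the cited references.

```python
import numpy as np

def conv2d_primitive(image, kernel):
    """Minimal NN-style 2D convolution primitive.

    Cross-correlation form (as deep-learning frameworks use),
    valid padding, stride 1. Purely illustrative.
    """
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Traditional CV algorithm: Sobel horizontal-gradient filter.
SOBEL_X = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

# Hypothetical "library of mappings": each classical operation becomes a
# fixed-weight instance of the convolution primitive.
PRIMITIVE_LIBRARY = {
    "sobel_x": lambda img: conv2d_primitive(img, SOBEL_X),
}

# Toy 5x5 "image" with a constant horizontal gradient of 1 per pixel.
image = np.arange(25, dtype=float).reshape(5, 5)
edges = PRIMITIVE_LIBRARY["sobel_x"](image)
print(edges.shape)  # (3, 3)
print(edges[0, 0])  # 8.0 — uniform response to a uniform gradient
```

In a real framework the Sobel kernel would simply become the frozen weight tensor of a convolution layer, so the classical algorithm runs unchanged on neural-network accelerator hardware.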

Prosecution Timeline

Apr 19, 2023
Application Filed
Feb 17, 2026
Non-Final Rejection — §101, §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585948
NEURAL PROCESSING DEVICE AND METHOD FOR PRUNING THEREOF
2y 5m to grant Granted Mar 24, 2026
Patent 12579426
Training a Neural Network having Sparsely-Activated Sub-Networks using Regularization
2y 5m to grant Granted Mar 17, 2026
Patent 12572795
ANSWER SPAN CORRECTION
2y 5m to grant Granted Mar 10, 2026
Patent 12561577
AUTOMATIC FILTER SELECTION IN DECISION TREE FOR MACHINE LEARNING CORE
2y 5m to grant Granted Feb 24, 2026
Patent 12554969
METHOD AND SYSTEM FOR THE AUTOMATIC SEGMENTATION OF WHITE MATTER HYPERINTENSITIES IN MAGNETIC RESONANCE BRAIN IMAGES
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
55%
Grant Probability
63%
With Interview (+7.7%)
5y 0m
Median Time to Grant
Low
PTA Risk
Based on 78 resolved cases by this examiner. Grant probability derived from career allow rate.
