Prosecution Insights
Last updated: April 19, 2026
Application No. 18/301,921

NEURAL NETWORK ARCHITECTURE
Non-Final Office Action (§101, §102, §112)

Filed: Apr 17, 2023
Examiner: STARKS, WILBERT L
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: Arm Limited
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
Grant Probability with Interview: 80%

Examiner Intelligence

Career Allow Rate: 76% (493 granted / 653 resolved; +20.5% vs TC avg). Grants above average.
Interview Lift: +4.4% (minimal), based on resolved cases with vs. without interview.
Typical Timeline: 3y 6m average prosecution; 47 applications currently pending.
Career History: 700 total applications across all art units.

Statute-Specific Performance

§101: 40.3% (+0.3% vs TC avg)
§103: 13.1% (-26.9% vs TC avg)
§102: 35.7% (-4.3% vs TC avg)
§112: 6.0% (-34.0% vs TC avg)

Comparisons are against an estimated Tech Center average. Based on career data from 653 resolved cases.

Office Action

Rejections: §101, §102, §112
DETAILED ACTION

Claims 1-20 have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Where applicant acts as his or her own lexicographer to specifically define a term of a claim contrary to its ordinary meaning, the written description must clearly redefine the claim term and set forth the uncommon definition so as to put one reasonably skilled in the art on notice that the applicant intended to so redefine that claim term. Process Control Corp. v. HydReclaim Corp., 190 F.3d 1350, 1357, 52 USPQ2d 1029, 1033 (Fed. Cir. 1999).

The term “modulate” in claims 1-9 and 18-20 is not defined in the Specification, but is used in the claims in an apparently colloquial sense to mean “change magnitude or frequency of something.” Modulating an effect on features is an undefined transformation based on an undefined tensor. Further, through incorporation by reference into dependent claims 2 and 19, the term is applied to “de-noising,” “de-mosaicing,” etc. Application of the term “modulate” to those processes is not explained.

Claim Rejections - 35 U.S.C. § 101

35 U.S.C. § 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
The invention, as taught in Claims 1-20, is directed to “mental steps” and “mathematical steps” without significantly more. The claims recite:

• generate a first output tensor
• input tensor
• first output tensor comprising values to impart an effect to one or more features in the input tensor
• generate a second output tensor based on the input tensor
• modulating the effect to be imparted to the one or more features based, at least in part, on the second output tensor

Claim 1

Step 1 inquiry: Does this claim fall within a statutory category?

The preamble of the claim recites “1. A method comprising…” Therefore, it is a “method” (or “process”), which is a statutory category of invention. Therefore, the answer to the inquiry is: “YES.”

Step 2A (Prong One) inquiry: Are there limitations in Claim 1 that recite abstract ideas?

YES. The following limitations in Claim 1 recite abstract ideas that fall within at least one of the groupings of abstract ideas enumerated in the 2019 PEG. Specifically, they are “mental steps” and “mathematical steps”:

• generate a first output tensor
• input tensor
• first output tensor comprising values to impart an effect to one or more features in the input tensor
• generate a second output tensor based on the input tensor
• modulating the effect to be imparted to the one or more features based, at least in part, on the second output tensor

Step 2A (Prong Two) inquiry: Are there additional elements or a combination of elements in the claim that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception?

Applicant’s claims contain the following “additional elements”:

(1) An “executing”
(2) A “first neural network”/“second neural network”

An “executing” is a broad term which is described at a high level and includes general purpose computers. M.P.E.P. § 2106.04(d)(I) recites:

The courts have also identified limitations that did not integrate a judicial exception into a practical application:

• Merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f);
• Adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g); and
• Generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h).

This “executing” limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).

A “first neural network”/“second neural network” is a broad term which is described at a high level. Applicant’s Claim 1 merely teaches the embodiment where the claimed “model” is in the form of a “neural network”, which is an “additional element”. The neural network is not used to calculate anything at all. It does not integrate the abstract idea to a practical application, nor is it anything significantly more than the abstract idea. (See, 2106.05(a)(II).) This “first neural network”/“second neural network” limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).

The answer to the inquiry is “NO”, no additional elements integrate the claimed abstract idea into a practical application.

Step 2B inquiry: Does the claim provide an inventive concept, i.e., does the claim recite additional element(s) or a combination of elements that amount to significantly more than the judicial exception in the claim?

Applicant’s claims contain the following “additional elements”:

(1) An “executing”
(2) A “first neural network”/“second neural network”

An “executing” is a broad term which is described at a high level and includes general purpose computers. M.P.E.P. § 2106.05(I)(A)(i-ii) recites:

Limitations that the courts have found not to be enough to qualify as “significantly more” when recited in a claim with a judicial exception include:

i. Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984 (see MPEP § 2106.05(f));

ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));

Further, M.P.E.P. § 2106.05(f) recites:

2106.05(f) Mere Instructions To Apply An Exception [R-10.2019]

Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. As explained by the Supreme Court, in order to make a claim directed to a judicial exception patent-eligible, the additional element or combination of elements must do “‘more than simply stat[e] the [judicial exception] while adding the words ‘apply it’”. Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965). Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984 (warning against a § 101 analysis that turns on “the draftsman’s art”).

Further, M.P.E.P. § 2106.05(f)(2) recites:

(2) Whether the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer” does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). In contrast, a claim that purports to improve computer capabilities or to improve an existing technology may integrate a judicial exception into a practical application or provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). See MPEP §§ 2106.04(d)(1) and 2106.05(a) for a discussion of improvements to the functioning of a computer or to another technology or technical field.

Further, Applicant's Specification, paragraph [0066] recites:

[0066] Computing devices such as cloud server 702, smartphone 724, and other such devices that may employ signal processing and/or filtering architectures can take many forms and can include many features or functions including those already described and those not described herein. Figure 8 shows a block diagram of a general-purpose computerized system, consistent with an example embodiment. Figure 8 illustrates only one particular example of computing device 800, and other computing devices 800 may be used in other embodiments. Although computing device 800 is shown as a standalone computing device, computing device 800 may be any component or system that includes one or more processors or another suitable computing environment for executing software instructions in other examples, and need not include all of the elements shown here.

Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).

A “first neural network”/“second neural network” is a broad term which is described at a high level. Applicant’s Specification recites:

[0025] Neural networks and layers of Figure 2 may perform respective functions particularly efficiently in part due to isolating functions of feature detection and filtering into different networks, and in part due to improved nonlinearity. More specifically, multiplying values of output tensors of respective neural networks 208 and 212 may introduce a significant nonlinearity, allowing fewer neural network layers having fewer nodes in neural networks of Figure 2 than in a typical signal processing and/or filtering neural network architecture to produce a desired result. Efficiencies gained by a reduced size and improved nonlinearity in combining values of output tensors of neural network layers 208 and 212 may be further enhanced by an ability of processing stage 202 to perform multiple functions at the same time by concurrently executing two different neural networks having different objectives in parallel. Neural network layers 206-208 and 210-212 may in one example, be convolutional neural network layers, but in other examples may be any type of neural network as are commonly known or may become known in the art.

Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).

Therefore, the answer to the inquiry is “NO”, no additional elements provide an inventive concept that is significantly more than the claimed abstract ideas.

Claim 1 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 2

Claim 2 recites: 2. The method of claim 1, wherein the effect comprises tone mapping, color grading, mesh shading, de-mosaicing or de-noising, super-resolution, or a combination thereof.

Applicant’s Claim 2 merely teaches mathematical image processing functions. It does not integrate the abstract idea to a practical application, nor is it anything significantly more than the abstract idea. (See, 2106.05(a)(II).) Claim 2 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 3

Claim 3 recites: 3. The method of claim 1, wherein the second output tensor comprises coefficients based, at least in part, on detection of at least one of the one or more features in the input tensor.

Applicant’s Claim 3 merely teaches a mathematical quantity called a “tensor”. It does not integrate the abstract idea to a practical application, nor is it anything significantly more than the abstract idea. (See, 2106.05(a)(II).) Claim 3 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 4

Claim 4 recites: 4. The method of claim 3, wherein modulating the effect to be imparted to the one or more features further comprises applying the coefficients to the first output tensor to compute residual values, and combining the computed residual values with the input tensor or a tensor derived from the first input tensor to impart the effect.

Applicant’s Claim 4 merely teaches mathematical “applying” of coefficients and mathematical “combining”. It does not integrate the abstract idea to a practical application, nor is it anything significantly more than the abstract idea. (See, 2106.05(a)(II).) Claim 4 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 5

Claim 5 recites: 5. The method of claim 1, wherein the input tensor is determined based, at least in part, on image intensity values of one or more image frames.

Applicant’s Claim 5 merely teaches the mathematical calculation of a tensor. It does not integrate the abstract idea to a practical application, nor is it anything significantly more than the abstract idea. (See, 2106.05(a)(II).) Claim 5 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 6

Claim 6 recites: 6. The method of claim 1, wherein at least one of the first neural network and the second neural network comprise convolutional neural networks.

Applicant’s Claim 6 merely teaches a generic convolutional neural network. It does not integrate the abstract idea to a practical application, nor is it anything significantly more than the abstract idea. (See, 2106.05(a)(II).) Claim 6 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 7

Claim 7 recites: 7. The method of claim 1, further comprising a third neural network to generate a third output tensor based, at least in part, on the input tensor, wherein the effect to be imparted to the one or more features is based, at least in part, on at least one of the first output tensor and the third output tensor as selectively determined by the second output tensor.

Applicant’s Claim 7 merely teaches mathematical calculation of a tensor. It does not integrate the abstract idea to a practical application, nor is it anything significantly more than the abstract idea. (See, 2106.05(a)(II).) Claim 7 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 8

Claim 8 recites: 8. The method of claim 1, further comprising multiplying one or more values in the first output tensor by one or more values in the second output tensor to produce a product tensor, and adding the product tensor to the input tensor or a tensor derived from the first input tensor.

Applicant’s Claim 8 merely teaches mathematical multiplication and addition. It does not integrate the abstract idea to a practical application, nor is it anything significantly more than the abstract idea. (See, 2106.05(a)(II).) Claim 8 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 9

Claim 9 recites: 9. The method of claim 1, wherein the executing the first neural network, executing the second neural network, and modulating the effect are employed to form one or more layers of a larger network architecture.
Applicant’s Claim 9 merely teaches a generic neural network being executed on a generic computer and mathematical “modulation”. It does not integrate the abstract idea to a practical application, nor is it anything significantly more than the abstract idea. (See, 2106.05(a)(II).) Claim 9 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 10

Step 1 inquiry: Does this claim fall within a statutory category?

The preamble of the claim recites “10. A computing device, comprising…” Therefore, it is a “device” (or “apparatus”), which is a statutory category of invention. Therefore, the answer to the inquiry is: “YES.”

Step 2A (Prong One) inquiry: Are there limitations in Claim 10 that recite abstract ideas?

YES. The following limitations in Claim 10 recite abstract ideas that fall within at least one of the groupings of abstract ideas enumerated in the 2019 PEG. Specifically, they are “mental steps” and “mathematical steps”:

• input tensor
• produce a first output tensor
• first output tensor to indicate one or more detected features in the input tensor
• modulating the effect to be imparted to the one or more features based, at least in part, on the second output tensor
• produce a second output tensor comprising an effect applied to the input tensor
• apply the effect of the second output tensor to the one or more detected features in the first output tensor
• apply the combined output tensor to the input tensor

Step 2A (Prong Two) inquiry: Are there additional elements or a combination of elements in the claim that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception?

Applicant’s claims contain the following “additional elements”:

(1) An “execute”/“one or more processors”
(2) A “first neural network”/“second neural network”
(3) A “memory comprising one or more storage devices”

An “executing” is a broad term which is described at a high level and includes general purpose computers. M.P.E.P. § 2106.04(d)(I) recites:

The courts have also identified limitations that did not integrate a judicial exception into a practical application:

• Merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f);
• Adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g); and
• Generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h).

This “executing” limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).

A “first neural network”/“second neural network” is a broad term which is described at a high level. Applicant’s Claim 10 merely teaches the embodiment where the claimed “model” is in the form of a “neural network”, which is an “additional element”. The neural network is not used to calculate anything at all. It does not integrate the abstract idea to a practical application, nor is it anything significantly more than the abstract idea. (See, 2106.05(a)(II).) This “first neural network”/“second neural network” limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).

A “memory comprising one or more storage devices” is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(II) recites:

The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. *** iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93;

This “memory comprising one or more storage devices” limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).

The answer to the inquiry is “NO”, no additional elements integrate the claimed abstract idea into a practical application.

Step 2B inquiry: Does the claim provide an inventive concept, i.e., does the claim recite additional element(s) or a combination of elements that amount to significantly more than the judicial exception in the claim?

Applicant’s claims contain the following “additional elements”:

(1) An “executing”
(2) A “first neural network”/“second neural network”
(3) A “memory comprising one or more storage devices”

An “executing” is a broad term which is described at a high level and includes general purpose computers. M.P.E.P. § 2106.05(I)(A)(i-ii) recites:

Limitations that the courts have found not to be enough to qualify as “significantly more” when recited in a claim with a judicial exception include:

i. Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984 (see MPEP § 2106.05(f));

ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));

Further, M.P.E.P. § 2106.05(f) recites:

2106.05(f) Mere Instructions To Apply An Exception [R-10.2019]

Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. As explained by the Supreme Court, in order to make a claim directed to a judicial exception patent-eligible, the additional element or combination of elements must do “‘more than simply stat[e] the [judicial exception] while adding the words ‘apply it’”. Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965). Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984 (warning against a § 101 analysis that turns on “the draftsman’s art”).

Further, M.P.E.P. § 2106.05(f)(2) recites:

(2) Whether the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer” does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). In contrast, a claim that purports to improve computer capabilities or to improve an existing technology may integrate a judicial exception into a practical application or provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). See MPEP §§ 2106.04(d)(1) and 2106.05(a) for a discussion of improvements to the functioning of a computer or to another technology or technical field.

Further, Applicant's Specification, paragraph [0066] recites:

[0066] Computing devices such as cloud server 702, smartphone 724, and other such devices that may employ signal processing and/or filtering architectures can take many forms and can include many features or functions including those already described and those not described herein. Figure 8 shows a block diagram of a general-purpose computerized system, consistent with an example embodiment. Figure 8 illustrates only one particular example of computing device 800, and other computing devices 800 may be used in other embodiments. Although computing device 800 is shown as a standalone computing device, computing device 800 may be any component or system that includes one or more processors or another suitable computing environment for executing software instructions in other examples, and need not include all of the elements shown here.

Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).

A “first neural network”/“second neural network” is a broad term which is described at a high level. Applicant’s Specification recites:

[0025] Neural networks and layers of Figure 2 may perform respective functions particularly efficiently in part due to isolating functions of feature detection and filtering into different networks, and in part due to improved nonlinearity. More specifically, multiplying values of output tensors of respective neural networks 208 and 212 may introduce a significant nonlinearity, allowing fewer neural network layers having fewer nodes in neural networks of Figure 2 than in a typical signal processing and/or filtering neural network architecture to produce a desired result. Efficiencies gained by a reduced size and improved nonlinearity in combining values of output tensors of neural network layers 208 and 212 may be further enhanced by an ability of processing stage 202 to perform multiple functions at the same time by concurrently executing two different neural networks having different objectives in parallel. Neural network layers 206-208 and 210-212 may in one example, be convolutional neural network layers, but in other examples may be any type of neural network as are commonly known or may become known in the art.
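For orientation, the multiplicative combination described in paragraph [0025] (and recited in claims 8, 11, and 12) can be sketched as plain tensor arithmetic. This is an illustrative sketch only: the shapes, weights, and stand-in two-layer networks below are hypothetical, not the applicant's implementation.

```python
# Sketch of the dual-network "modulation" described in paragraph [0025]:
# two small networks process the same input tensor; their outputs are
# multiplied element-wise (introducing a nonlinearity), and the product
# is added back to the input as a residual. All shapes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def tiny_network(x, w1, w2):
    """A stand-in two-layer network: linear -> ReLU -> linear."""
    h = np.maximum(x @ w1, 0.0)  # ReLU
    return h @ w2

# Hypothetical input tensor: 4 "pixels" x 8 channels.
x = rng.normal(size=(4, 8))

# Hypothetical weights for the two networks.
w = [rng.normal(size=(8, 8)) * 0.1 for _ in range(4)]

first_out = tiny_network(x, w[0], w[1])   # values imparting an effect
second_out = tiny_network(x, w[2], w[3])  # values modulating that effect

# Claim 8 / claims 11-12 style combination: element-wise product of the
# two outputs, then residual addition back to the input tensor.
product = first_out * second_out
output = x + product

assert output.shape == x.shape
```

The element-wise product of the two network outputs appears to be the "modulation" the claims recite, and adding the product back to the input is the residual combination of claims 8 and 12.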
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)). A “memory comprising one more storage devices” is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(II) recites: The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. *** iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93; Further, Applicant’s Specification recites: [0070] One or more storage devices 812 may be configured to store information within computing device 800 during operation. Storage device 812, in some examples, is known as a computer-readable storage medium. In some examples, storage device 812 comprises temporary memory, meaning that a primary purpose of storage device 812 is not long-term storage. Storage device 812 in some examples is a volatile memory, meaning that storage device 812 does not maintain stored contents when computing device 800 is turned off. In other examples, data is loaded from storage device 812 into memory 804 during operation. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. In some examples, storage device 812 is used to store program instructions for execution by processors 802. Storage device 812 and memory 804, in various examples, are used by software or applications running on computing device 500 such as image processor 822 to temporarily store information during program execution. 
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)). Therefore, the answer to the inquiry is “NO”, no additional elements provide an inventive concept that is significantly more than the claimed abstract ideas the claimed abstract idea into a practical application. Claim 10 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101. Claim 11 Claim 11 recites: 11. The computing device of claim 10, wherein the one or more processors are further operable to multiply the first output tensor by the second output tensor to produce the combined output tensor. Applicant’s Claim 11 merely teaches mathematical multiplication. It does not integrate the abstract idea to a practical application, nor is it anything significantly more than the abstract idea. (See, 2106.05(a)(II).) Claim 11 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101. Claim 12 Claim 12 recites: 12. The computing device of claim 10, wherein the one or more processors are further operable to add the combined output tensor to the input tensor to produce a processing unit output. Applicant’s Claim 12 merely teaches one or more generic processors. It does not integrate the abstract idea to a practical application, nor is it anything significantly more than the abstract idea. (See, 2106.05(a)(II).) Claim 12 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101. Claim 13 Claim 13 recites: 13. The computing device of claim 10, wherein the first output tensor comprises coefficients based, at least in part, on detection of at least one of the one more features in the input tensor. Applicant’s Claim 13 merely teaches a mathematical tensor. It does not integrate the abstract idea to a practical application, nor is it anything significantly more than the abstract idea. (See, 2106.05(a)(II).) Claim 13 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101. 
Claim 14

Claim 14 recites:

14. The computing device of claim 10, wherein the input tensor is derived, at least in part, on image signal intensity values of one or more image frames.

Applicant’s Claim 14 merely teaches mathematical calculation of a tensor. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).) Claim 14 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 15

Claim 15 recites:

15. The computing device of claim 10, wherein the first neural network or the second neural network, or a combination thereof, comprise a convolutional neural network.

Applicant’s Claim 15 merely teaches a generic convolutional neural network. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).) Claim 15 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 16

Claim 16 recites:

16. The computing device of claim 10, wherein the one or more processors are further operable to execute a third neural network to generate a third output tensor based, at least in part, on the input tensor, wherein the effect to be imparted to the one or more detected features is based, at least in part, on the second output tensor or the third output tensor, or a combination thereof, as selectively determined by the first output tensor.

Applicant’s Claim 16 merely teaches one or more processors that may be operable to perform certain functions. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).) Claim 16 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 17

Claim 17 recites:

17.
The computing device of claim 10, wherein the effect applied in the second neural network comprises tone mapping, color grading, mesh shading, de-mosaicing or de-noising, super-resolution, or a combination thereof.

Applicant’s Claim 17 merely teaches mathematical image processing functions. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).) Claim 17 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 18

Step 1 inquiry: Does this claim fall within a statutory category?

The preamble of the claim recites “18. A computer-readable medium with instructions stored thereon, the instructions to be executable by one or more processors to cause a computerized system to…” Because the preamble does not recite a “non-transitory computer-readable medium,” the claim encompasses transitory signals and fails to fall within a statutory category. Therefore, the answer to the inquiry is: “NO.”

Step 2A (Prong One) inquiry: Are there limitations in Claim 18 that recite abstract ideas?

YES. The following limitations in Claim 18 recite abstract ideas that fall within at least one of the groupings of abstract ideas enumerated in the 2019 PEG. Specifically, they are “mental steps” and “mathematical steps”:

• generate a first output tensor
• input tensor
• first output tensor comprising values to impart an effect to one or more features in the input tensor
• generate a second output tensor based on the input tensor
• modulating the effect to be imparted to the one or more features based, at least in part, on the second output tensor

Step 2A (Prong Two) inquiry: Are there additional elements or a combination of elements in the claim that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception?
Applicant’s claims contain the following “additional elements”:

(1) An “executing”
(2) A “first neural network”/“second neural network”

An “executing” is a broad term which is described at a high level and includes general purpose computers. M.P.E.P. § 2106.04(d)(I) recites:

The courts have also identified limitations that did not integrate a judicial exception into a practical application:
• Merely reciting the words “apply it” (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f);
• Adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g); and
• Generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h).

This “executing” limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See M.P.E.P. § 2106.05(I)(A).)

A “first neural network”/“second neural network” is a broad term which is described at a high level. Applicant’s Claim 18 merely teaches the embodiment where the claimed “model” is in the form of a “neural network”, which is an “additional element”. The neural network is not used to calculate anything at all. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).) This “first neural network”/“second neural network” limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See M.P.E.P. § 2106.05(I)(A).)

The answer to the inquiry is “NO”: no additional elements integrate the claimed abstract idea into a practical application.
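As technical context for the recited two-network arrangement: the elementwise product of two branch outputs is itself a nonlinear function of the input, even when each branch alone is linear. A toy check, using hypothetical single-layer linear branches as stand-ins (the claim does not specify the networks):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-layer linear branches standing in for the claimed
# first and second neural networks.
W1 = rng.random((4, 4))
W2 = rng.random((4, 4))

def two_branch(x):
    # Elementwise product of the two branch outputs.
    return (W1 @ x) * (W2 @ x)

x = rng.random(4)

# Each branch alone is linear, but their product is quadratic in the
# input: doubling the input quadruples the output, rather than doubling it.
assert np.allclose(two_branch(2 * x), 4 * two_branch(x))
```

This quadratic scaling is the same nonlinearity the specification invokes (paragraph [0025], quoted in the Step 2B analysis below) to justify using fewer, smaller layers.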
Step 2B inquiry: Does the claim provide an inventive concept, i.e., does the claim recite additional element(s) or a combination of elements that amount to significantly more than the judicial exception in the claim?

Applicant’s claims contain the following “additional elements”:

(1) An “executing”
(2) A “first neural network”/“second neural network”

An “executing” is a broad term which is described at a high level and includes general purpose computers. M.P.E.P. § 2106.05(I)(A)(i-ii) recites:

Limitations that the courts have found not to be enough to qualify as “significantly more” when recited in a claim with a judicial exception include:
i. Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984 (see MPEP § 2106.05(f));
ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));

Further, M.P.E.P.
§ 2106.05(f) recites:

2106.05(f) Mere Instructions To Apply An Exception [R-10.2019]

Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. As explained by the Supreme Court, in order to make a claim directed to a judicial exception patent-eligible, the additional element or combination of elements must do “‘more than simply stat[e] the [judicial exception] while adding the words ‘apply it’”. Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965). Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984 (warning against a § 101 analysis that turns on “the draftsman’s art”).

Further, M.P.E.P. § 2106.05(f)(2) recites:

(2) Whether the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v.
AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer” does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). In contrast, a claim that purports to improve computer capabilities or to improve an existing technology may integrate a judicial exception into a practical application or provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). See MPEP §§ 2106.04(d)(1) and 2106.05(a) for a discussion of improvements to the functioning of a computer or to another technology or technical field.

Further, Applicant’s Specification, paragraph [0066] recites:

[0066] Computing devices such as cloud server 702, smartphone 724, and other such devices that may employ signal processing and/or filtering architectures can take many forms and can include many features or functions including those already described and those not described herein. Figure 8 shows a block diagram of a general-purpose computerized system, consistent with an example embodiment. Figure 8 illustrates only one particular example of computing device 800, and other computing devices 800 may be used in other embodiments. Although computing device 800 is shown as a standalone computing device, computing device 800 may be any component or system that includes one or more processors or another suitable computing environment for executing software instructions in other examples, and need not include all of the elements shown here.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See M.P.E.P. § 2106.05(II).)

A “first neural network”/“second neural network” is a broad term which is described at a high level. Applicant’s Specification recites:

[0025] Neural networks and layers of Figure 2 may perform respective functions particularly efficiently in part due to isolating functions of feature detection and filtering into different networks, and in part due to improved nonlinearity. More specifically, multiplying values of output tensors of respective neural networks 208 and 212 may introduce a significant nonlinearity, allowing fewer neural network layers having fewer nodes in neural networks of Figure 2 than in a typical signal processing and/or filtering neural network architecture to produce a desired result. Efficiencies gained by a reduced size and improved nonlinearity in combining values of output tensors of neural network layers 208 and 212 may be further enhanced by an ability of processing stage 202 to perform multiple functions at the same time by concurrently executing two different neural networks having different objectives in parallel. Neural network layers 206-208 and 210-212 may in one example, be convolutional neural network layers, but in other examples may be any type of neural network as are commonly known or may become known in the art.

Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See M.P.E.P. § 2106.05(II).) Therefore, the answer to the inquiry is “NO”: no additional elements provide an inventive concept that is significantly more than the claimed abstract idea. Claim 18 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 19

Claim 19 recites:

19.
The computer-readable medium of claim 18, wherein the effect comprises tone mapping, color grading, mesh shading, de-mosaicing or de-noising, super-resolution, or a combination thereof.

Applicant’s Claim 19 merely teaches mathematical image processing functions. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).) Claim 19 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim 20

Claim 20 recites:

20. The computer-readable medium of claim 18, wherein the instructions to be further executable by the one or more processors to modulate the effect to be imparted to the one or more features based, at least in part, on: application of coefficients in the second output tensor to the first output tensor to compute residual values, and combination of the computed residual values with the input tensor to impart the effect.

Applicant’s Claim 20 merely teaches a mathematical “application” and “combination”. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).) Claim 20 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
§ 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 5-6, 9, and 18-19 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Tan, et al., Color Image Demosaicking Via Deep Residual Learning, 2017 IEEE International Conference on Multimedia and Expo (ICME), 10 JUL 2017, pp. 793-798 in its entirety. Specifically:

Claim 1

Claim 1’s “executing a first neural network to generate a first output tensor based, at least in part, on an input tensor, the first output tensor comprising values to impart (interpreted as: “to make known”) an effect to one or more features in the input tensor” is anticipated by Tan, et al., page 3, Fig. 1, where it shows the box labeled: “First stage: Estimate the intermediate G channel.” The prior art “first stage” anticipates the “first neural network.” The prior art “intermediate G channel information” anticipates the claimed “first output tensor”. The prior art “initial image” anticipates the claimed “input tensor.”

Claim 1’s “executing a second neural network to generate a second output tensor based on the input tensor; and” is anticipated by Tan, et al., page 3, Fig. 1, where it shows the box labeled: “Second stage: Recover the RGB channels.” The claimed “second output tensor” is anticipated by the prior art “output RGB.” It is “based on” the “input tensor” because it is calculated by the progress of the input tensor through the cascaded networks.

Claim 1’s “modulating (interpreted to mean “amplified” or increased in magnitude) the effect to be imparted to the one or more features based, at least in part, on the second output tensor” is anticipated by Tan, et al., page 3, Fig.
1, where it shows the box labeled: “Second stage: Recover the RGB channels.” The output of the prior art from the “residual learning” is “modulated” by the “intermediate R/B” tensor.

Claim 2

Claim 2’s “2. The method of claim 1, wherein the effect comprises tone mapping, color grading, mesh shading, de-mosaicing or de-noising, super-resolution, or a combination thereof” is anticipated by Tan, et al., page 2, right column, first full paragraph, where it recites:

The contribution of this work is summarized as follows. (1) We propose an end-to-end deep residual demosaicking model by taking advantage of the recent development of CNN technologies (2) We design a customized CNN model for CDM, which adopts a two-stage architecture to incorporate the demosaicking domain knowledge. Specifically, the network first constrains the G channel and then restores the full color images with the guidance of tentative G image. (3) We present a new dataset for more comprehensively evaluating the CDM algorithms. Experiments show that our method significantly outperforms state-of-the-arts on the Kodak, McMaster, and the new dataset both quantitatively and qualitatively.

Claim 3

Claim 3’s “3. The method of claim 1, wherein the second output tensor comprises coefficients based, at least in part, on detection of at least one of the one more features in the input tensor” is anticipated by Tan, et al., page 2, right column, last partial paragraph, where it recites:

Fig. 1 shows our proposed CNN architecture for CDM. The proposed network contains two basic modules of K-layer CNNs, stacked by convolutional layers, batch normalization and ReLU nonlinearity layers. For each module, the first layer uses 64 filters of size 3×3 to generate 64 feature maps (i.e., the claimed detected “features”), while the last convolutional layer adopts the filter of size 3×3×64 to generate the corresponding output.
These feature maps are part of the calculation of the prior art “Output RGB” which anticipates the claimed “second output tensor.” Therefore, the output is “based on” the prior art feature maps.

Claim 5

Claim 5’s “5. The method of claim 1, wherein the input tensor is determined based, at least in part, on image intensity values of one or more image frames” is anticipated by Tan, et al., page 3, left column, last full paragraph, where it recites:

To sum up, the proposed CNN based CDM model has two distinct characteristics. First, instead of using the CFA image as input, we take the initial images by simple bilinear interpolation as input, and then the residual learning strategy is used to reconstruct the demosaicked images. Second, the proposed model adopts a two-stage scheme to make use of the detailed G channel information to guide the reconstruction of R/B channels.

The prior art “bilinear interpolation” is performed based on the intensity values in the image.

Claim 6

Claim 6’s “6. The method of claim 1, wherein at least one of the first neural network and the second neural network comprise convolutional neural networks” is anticipated by Tan, et al., page 2, right column, last partial paragraph, where it recites:

Fig. 1 shows our proposed CNN architecture for CDM. The proposed network contains two basic modules of K-layer CNNs, stacked by convolutional layers, batch normalization and ReLU nonlinearity layers. For each module, the first layer uses 64 filters of size 3×3 to generate 64 feature maps, while the last convolutional layer adopts the filter of size 3×3×64 to generate the corresponding output.

Claim 9

Claim 9’s “9. The method of claim 1, wherein the executing the first neural network, executing the second neural network, and modulating the effect are employed to form one or more layers of a larger network architecture” is anticipated by Tan, et al., page 2, right column, last partial paragraph, where it recites:

Fig.
1 shows our proposed CNN architecture for CDM. The proposed network contains two basic modules of K-layer CNNs, stacked by convolutional layers, batch normalization and ReLU nonlinearity layers. For each module, the first layer uses 64 filters of size 3×3 to generate 64 feature maps, while the last convolutional layer adopts the filter of size 3×3×64 to generate the corresponding output.

Claim 18

Claim 18’s “execute a first neural network to generate a first output tensor based on an input tensor, the first output tensor comprising values to impart an effect to one or more features in the input tensor;” is anticipated by Tan, et al., page 3, Fig. 1, where it shows the box labeled: “First stage: Estimate the intermediate G channel.” The prior art “first stage” anticipates the “first neural network.” The prior art “intermediate G channel information” anticipates the claimed “first output tensor”. The prior art “initial image” anticipates the claimed “input tensor.”

Claim 18’s “execute a second neural network to generate a second output tensor based on the input tensor; and” is anticipated by Tan, et al., page 3, Fig. 1, where it shows the box labeled: “Second stage: Recover the RGB channels.” The claimed “second output tensor” is anticipated by the prior art “output RGB.” It is “based on” the “input tensor” because it is calculated by the progress of the input tensor through the cascaded networks.

Claim 18’s “modulate (interpreted to mean “amplified” or increased in magnitude) the effect to be imparted to the one or more features based, at least in part, on the second output tensor.” is anticipated by Tan, et al., page 3, Fig. 1, where it shows the box labeled: “Second stage: Recover the RGB channels.” The output of the prior art from the “residual learning” is “modulated” by the “intermediate R/B” tensor.

Claim 19

Claim 19’s “19.
The computer-readable medium of claim 18, wherein the effect comprises tone mapping, color grading, mesh shading, de-mosaicing or de-noising, super-resolution, or a combination thereof.” is anticipated by Tan, et al., page 2, right column, first full paragraph, where it recites:

The contribution of this work is summarized as follows. (1) We propose an end-to-end deep residual demosaicking model by taking advantage of the recent development of CNN technologies (2) We design a customized CNN model for CDM, which adopts a two-stage architecture to incorporate the demosaicking domain knowledge. Specifically, the network first constrains the G channel and then restores the full color images with the guidance of tentative G image. (3) We present a new dataset for more comprehensively evaluating the CDM algorithms. Experiments show that our method significantly outperforms state-of-the-arts on the Kodak, McMaster, and the new dataset both quantitatively and qualitatively.

Claims 4, 7-8, 10-17, and 20 are not rejected over the prior art since, when reading the claims in light of the specification, as per MPEP § 2111.01, none of the references of record, whether taken alone or in combination, discloses or suggests the combination of limitations specified in Claim 4. Specifically:

Claim 4’s "...applying the coefficients to the first output tensor..."
Claim 4’s "...combining the computed residual values with the input tensor or a tensor derived from the first input tensor..."

Further, none of the references of record, whether taken alone or in combination, discloses or suggests the combination of limitations specified in Claim 7. Specifically:

Claim 7’s "...as selectively determined by the second output tensor..."

Further, none of the references of record, whether taken alone or in combination, discloses or suggests the combination of limitations specified in Claim 8.
Specifically:

Claim 8’s "...multiplying one or more values in the first output tensor by one or more values in the second output tensor to produce a product tensor, and adding the product tensor to the input tensor or a tensor derived from the first input tensor..."

Further, none of the references of record, whether taken alone or in combination, discloses or suggests the combination of limitations specified in independent Claim 10 (and its dependent claims 11-17). Specifically:

Claim 10’s "...apply the effect of the second output tensor to the one or more detected features in the first output tensor to produce a combined output tensor..."
Claim 10’s "...apply the combined output tensor to the input tensor..."

Further, none of the references of record, whether taken alone or in combination, discloses or suggests the combination of limitations specified in Claim 20. Specifically:

Claim 20’s "...application of coefficients in the second output tensor to the first output tensor to compute residual values..."
Claim 20’s "...combination of the computed residual values with the input tensor to impart the effect..."

Conclusion

Any inquiries concerning this communication or earlier communications from the examiner should be directed to Wilbert L. Starks, Jr., who may be reached Monday through Friday between 8:00 a.m. and 5:00 p.m. EST, via telephone at (571) 272-3691 or email: Wilbert.Starks@uspto.gov. If you need to send an Official facsimile transmission, please send it to (571) 273-8300. If attempts to reach the examiner are unsuccessful, the Examiner’s Supervisor (SPE), Kakali Chaki, may be reached at (571) 272-3719. Hand-delivered responses should be delivered to the Customer Service Window, Randolph Building, 401 Dulany Street, Alexandria, VA 22313, located on the first floor of the south side of the Randolph Building.
Finally, information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Moreover, status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have any questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) toll-free at 1-866-217-9197.

/WILBERT L STARKS/
Primary Examiner, Art Unit 2122

WLS
05 JAN 2026

Prosecution Timeline

Apr 17, 2023
Application Filed
Jan 06, 2026
Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561587
DATA PROCESSING METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12555007
METHOD AND SYSTEM FOR INFERRING DEVICE FINGERPRINT
2y 5m to grant Granted Feb 17, 2026
Patent 12541694
GENERATING A DOMAIN-SPECIFIC KNOWLEDGE GRAPH FROM UNSTRUCTURED COMPUTER TEXT
2y 5m to grant Granted Feb 03, 2026
Patent 12525251
METHOD, SYSTEM AND PROGRAM PRODUCT FOR PERCEIVING AND COMPUTING EMOTIONS
2y 5m to grant Granted Jan 13, 2026
Patent 12518149
IMPLICIT VECTOR CONCATENATION WITHIN 2D MESH ROUTING
2y 5m to grant Granted Jan 06, 2026


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
80%
With Interview (+4.4%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 653 resolved cases by this examiner. Grant probability derived from career allow rate.
