DETAILED ACTION
Claims 4, 6, 11, 12, 16-20, 27, and 28 have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 U.S.C. § 101
35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
The invention, as recited in Claims 4, 6, 11, 12, 16-20, 27, and 28, is directed to “mental steps” and “mathematical steps” without significantly more.
The claims recite:
• information on the hardware and a structure of the neural network (i.e., mental steps)
• determining a mapping parameter corresponding to an arbitrary mapping model based on the information and the structure (i.e., mental steps)
• determining the target mapping model based on the mapping parameter (i.e., mathematical steps)
• determining an operation performance for the arbitrary mapping model based on a partition structure of the neural network (i.e., mathematical steps)
• determining a memory access size (i.e., mental steps)
• determining the mapping parameter based on the operation performance and the memory access size (i.e., mental steps)
• optimized code (i.e., software per se)
• dividing the operation performance by the memory access size, where the target mapping model represents a maximized mapping parameter (i.e., mathematical steps)
• determining a utilization rate of the processing elements based on the partition structure of the neural network (i.e., mental steps)
• determining the operation performance based on the utilization rate (i.e., mental steps)
Claim 4
Step 1 inquiry: Does this claim fall within a statutory category?
The preamble of the claim recites “4. A processor-implemented method of generating a code, the method comprising…” The claim is therefore directed to a “method” (or “process”), which is a statutory category of invention. The answer to the inquiry is: “YES.”
Step 2A (Prong One) inquiry:
Are there limitations in Claim 4 that recite abstract ideas?
YES. The following limitations in Claim 4 recite abstract ideas that fall within at least one of the groupings of abstract ideas enumerated in the 2019 PEG. Specifically, they are “mental steps” and “mathematical steps”:
• information on the hardware and a structure of the neural network (i.e., mental steps)
• determining a mapping parameter corresponding to an arbitrary mapping model based on the information and the structure (i.e., mental steps)
• determining the target mapping model based on the mapping parameter (i.e., mathematical steps)
• determining an operation performance for the arbitrary mapping model based on a partition structure of the neural network (i.e., mathematical steps)
• determining a memory access size (i.e., mental steps)
• determining the mapping parameter based on the operation performance and the memory access size (i.e., mental steps)
• optimized code (i.e., software per se)
• dividing the operation performance by the memory access size, where the target mapping model represents a maximized mapping parameter (i.e., mathematical steps)
• determining a utilization rate of the processing elements based on the partition structure of the neural network (i.e., mental steps)
• determining the operation performance based on the utilization rate (i.e., mental steps)
Step 2A (Prong Two) inquiry:
Are there additional elements or a combination of elements in the claim that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception?
Applicant’s claims contain the following “additional elements”:
(1) "one or more processors"
(2) "receiving information on hardware configured to perform a neural network operation of a neural network"
(3) "target mapping model"/"neural network"/"coding... to perform the preset neural network operation based on the target mapping model"
The "one or more processors" limitation is a broad term which is described at a high level and includes general purpose computers. M.P.E.P. § 2106.05(f) recites:
2106.05(f) Mere Instructions To Apply An Exception [R-10.2019]
Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. As explained by the Supreme Court, in order to make a claim directed to a judicial exception patent-eligible, the additional element or combination of elements must do “‘more than simply stat[e] the [judicial exception] while adding the words ‘apply it’”. Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965). Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984 (warning against a § 101 analysis that turns on “the draftsman’s art”).
Further, M.P.E.P. § 2106.05(f)(2) recites:
(2) Whether the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Auto, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer” does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). In contrast, a claim that purports to improve computer capabilities or to improve an existing technology may integrate a judicial exception into a practical application or provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). See MPEP §§ 2106.04(d)(1) and 2106.05(a) for a discussion of improvements to the functioning of a computer or to another technology or technical field.
This "one or more processors" limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
The "receiving information on hardware configured to perform a neural network operation of a neural network" limitation is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using a conventional computer to receive data is well-understood, routine, and conventional. Thus, it adds nothing significantly more to the judicial exception.
This "receiving information on hardware configured to perform a neural network operation of a neural network" limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
The "target mapping model"/"neural network"/"coding... to perform the preset neural network operation based on the target mapping model" limitations are broad terms which are described at a high level. Applicant’s Specification recites:
[0057] The neural network may be trained to perform a desired operation by mapping input data and output data that have a nonlinear relationship therebetween through deep learning to perform various tasks. For example, a neural network may be trained through deep learning as a problem-solving process for optimization to find a point where energy is minimized while training the neural network using provided training data. Through deep learning, for example, supervised or unsupervised learning, weighted connections and other parameters corresponding to an architecture or a model of the neural network may be obtained, and the input data and the output data may be mapped to each other based on the obtained weight. In an example training, a parameter of each of the nodes of the neural network may be adjusted while an error of a result output by the output layer may be propagated backward along the neural network.
[0058] Various deep neural network structures are being studied. Since a large quantity of data is needed to perform an operation of the neural network, partially use data in a device having a small on-chip memory may be needed.
[0059] The neural network may include a deep neural network. The neural network may include neural networks such as, for example, for example, a convolutional neural network (CNN), a recurrent neural network (RNN), perceptron, feed forward (FF), a radial basis network (RBF), deep feed forward (DFF), a long short term memory (LSTM), a gated recurrent unit (GRU), an autoencoder (AE), a variational autoencoder (VAE), a denoising autoencoder (DAE), a sparse autoencoder (SAE), Markov Chain (MC), a Hopfield network (HN), a Boltzmann machine (BM), a restricted Boltzmann machine (RBM), a Depp (sic.) belief network (DBN), a deep convolutional network (DCN), a deconvolutional network (DN), a deep convolutional inverse graphics network (DCIGN), a generative adversarial network (GAN), a liquid state machine (LSM), an extreme learning machine (ELM), an echo state network (ESN), a deep residual network (DRN), a differentiable neural computer (DNC), a neural turning machine (NTM), a capsule network (CN), a Kohonen network (KN), and an attention network (AN).
The claimed “target mapping model” is merely the training of a neural network model, as taught in Applicant's Specification, paragraph [0057].
The neural network and its various components (e.g., structure, coding, and looping) are claimed as a general class of supervised learning networks. Applicant's Specification, paragraph [0059], above shows some of the numerous generic structures encompassed by the limitation.
This "target mapping model"/"neural network"/"coding... to perform the preset neural network operation based on the target mapping model" limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
The answer to the inquiry is “NO”, no additional elements integrate the claimed abstract idea into a practical application.
Step 2B inquiry:
Does the claim provide an inventive concept, i.e., does the claim recite additional element(s) or a combination of elements that amount to significantly more than the judicial exception in the claim?
Applicant’s claims contain the following “additional elements”:
(1) "one or more processors"
(2) "receiving information on hardware configured to perform a neural network operation of a neural network"
(3) "target mapping model"/"neural network"/"coding... to perform the preset neural network operation based on the target mapping model"
The "one or more processors" limitation is a broad term which is described at a high level and includes general purpose computers. M.P.E.P. § 2106.05(f) recites:
2106.05(f) Mere Instructions To Apply An Exception [R-10.2019]
Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. As explained by the Supreme Court, in order to make a claim directed to a judicial exception patent-eligible, the additional element or combination of elements must do “‘more than simply stat[e] the [judicial exception] while adding the words ‘apply it’”. Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965). Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984 (warning against a § 101 analysis that turns on “the draftsman’s art”).
Further, M.P.E.P. § 2106.05(f)(2) recites:
(2) Whether the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Auto, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer” does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). In contrast, a claim that purports to improve computer capabilities or to improve an existing technology may integrate a judicial exception into a practical application or provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). See MPEP §§ 2106.04(d)(1) and 2106.05(a) for a discussion of improvements to the functioning of a computer or to another technology or technical field.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
The "receiving information on hardware configured to perform a neural network operation of a neural network" limitation is a broad term which is described at a high level. M.P.E.P. § 2106.05(d)(I)(2) recites in part:
2. A factual determination is required to support a conclusion that an additional element (or combination of additional elements) is well-understood, routine, conventional activity. Berkheimer v. HP, Inc., 881 F.3d 1360, 1368, 125 USPQ2d 1649, 1654 (Fed. Cir. 2018). However, this does not mean that a prior art search is necessary to resolve this inquiry. Instead, examiners should rely on what the courts have recognized, or those in the art would recognize, as elements that are well-understood, routine, conventional activity in the relevant field when making the required determination. For example, in many instances, the specification of the application may indicate that additional elements are well-known or conventional. See, e.g., Intellectual Ventures v. Symantec, 838 F.3d at 1317; 120 USPQ2d at 1359 ("The written description is particularly useful in determining what is well-known or conventional"); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (relying on specification’s description of additional elements as "well-known", "common" and "conventional"); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 614, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (Specification described additional elements as "either performing basic computer functions such as sending and receiving data, or performing functions ‘known’ in the art.").
Further, M.P.E.P. § 2106.05(d)(II) recites:
The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); …
Merely using a conventional computer to receive data is well-understood, routine, and conventional. Thus, it adds nothing significantly more to the judicial exception.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
The "target mapping model"/"neural network"/"coding... to perform the preset neural network operation based on the target mapping model" limitations are broad terms which are described at a high level. Applicant’s Specification recites:
[0057] The neural network may be trained to perform a desired operation by mapping input data and output data that have a nonlinear relationship therebetween through deep learning to perform various tasks. For example, a neural network may be trained through deep learning as a problem-solving process for optimization to find a point where energy is minimized while training the neural network using provided training data. Through deep learning, for example, supervised or unsupervised learning, weighted connections and other parameters corresponding to an architecture or a model of the neural network may be obtained, and the input data and the output data may be mapped to each other based on the obtained weight. In an example training, a parameter of each of the nodes of the neural network may be adjusted while an error of a result output by the output layer may be propagated backward along the neural network.
[0058] Various deep neural network structures are being studied. Since a large quantity of data is needed to perform an operation of the neural network, partially use data in a device having a small on-chip memory may be needed.
[0059] The neural network may include a deep neural network. The neural network may include neural networks such as, for example, for example, a convolutional neural network (CNN), a recurrent neural network (RNN), perceptron, feed forward (FF), a radial basis network (RBF), deep feed forward (DFF), a long short term memory (LSTM), a gated recurrent unit (GRU), an autoencoder (AE), a variational autoencoder (VAE), a denoising autoencoder (DAE), a sparse autoencoder (SAE), Markov Chain (MC), a Hopfield network (HN), a Boltzmann machine (BM), a restricted Boltzmann machine (RBM), a Depp (sic.) belief network (DBN), a deep convolutional network (DCN), a deconvolutional network (DN), a deep convolutional inverse graphics network (DCIGN), a generative adversarial network (GAN), a liquid state machine (LSM), an extreme learning machine (ELM), an echo state network (ESN), a deep residual network (DRN), a differentiable neural computer (DNC), a neural turning machine (NTM), a capsule network (CN), a Kohonen network (KN), and an attention network (AN).
The claimed “target mapping model” is merely the training of a neural network model, as taught in Applicant's Specification, paragraph [0057].
The neural network and its various components (e.g., structure, coding, and looping) are claimed as a general class of supervised learning networks. Applicant's Specification, paragraph [0059], above shows some of the numerous generic structures encompassed by the limitation.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See, M.P.E.P. § 2106.05(II)).
Therefore, the answer to the inquiry is “NO”; no additional elements provide an inventive concept that amounts to significantly more than the claimed abstract ideas.
Claim 4 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 6
Claim 6 recites:
6. (Original) The method of claim 4, wherein the calculating of the memory access size comprises:
calculating a number of data reload of the arbitrary mapping model based on the loop structure; and
calculating the memory access size based on the number of data reload and the partition structure.
Applicant’s Claim 6 merely recites the mathematical calculation of numbers. It does not integrate the abstract idea into a practical application, nor does it add significantly more than the abstract idea. (See, M.P.E.P. § 2106.05(a)(II).)
Claim 6 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 11
Step 1 inquiry: Does this claim fall within a statutory category?
The preamble of the claim recites “11. An apparatus, the apparatus comprising…” The claim is therefore directed to an “apparatus” (i.e., a “machine”), which is a statutory category of invention. The answer to the inquiry is: “YES.”
Step 2A (Prong One) inquiry:
Are there limitations in Claim 11 that recite abstract ideas?
YES. The following limitations in Claim 11 recite abstract ideas that fall within at least one of the groupings of abstract ideas enumerated in the 2019 PEG. Specifically, they are “mental steps” and “mathematical steps”:
• information on the hardware and a structure of the neural network (i.e., mental steps)
• generate coding configured to cause the hardware to perform the preset neural network operation based on the target mapping model (i.e., mental steps)
Step 2A (Prong Two) inquiry:
Are there additional elements or a combination of elements in the claim that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception?
Applicant’s claims contain the following “additional elements”:
(1) "one or more processors"/"processing element"/"hardware"
(2) "target mapping model"/"neural network"/"coding... to perform the preset neural network operation based on the target mapping model"/"generating the target mapping model based on information on the hardware and a structure of the neural network"/"a loop structure"
A “processor” is a broad term which is described at a high level and includes general purpose computers. M.P.E.P. § 2106.05(f) recites:
2106.05(f) Mere Instructions To Apply An Exception [R-10.2019]
Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. As explained by the Supreme Court, in order to make a claim directed to a judicial exception patent-eligible, the additional element or combination of elements must do “‘more than simply stat[e] the [judicial exception] while adding the words ‘apply it’”. Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965). Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984 (warning against a § 101 analysis that turns on “the draftsman’s art”).
Further, M.P.E.P. § 2106.05(f)(2) recites:
(2) Whether the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Auto, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer” does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). In contrast, a claim that purports to improve computer capabilities or to improve an existing technology may integrate a judicial exception into a practical application or provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). See MPEP §§ 2106.04(d)(1) and 2106.05(a) for a discussion of improvements to the functioning of a computer or to another technology or technical field.
This “processor” limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See, M.P.E.P. § 2106.05(I)(A)).
The "target mapping model"/"neural network"/"coding... to perform the preset neural network operation based on the target mapping model"/"generating the target mapping model based on information on the hardware and a structure of the neural network"/"a loop structure" limitations are broad terms which are described at a high level. Applicant’s Specification recites:
[0057] The neural network may be trained to perform a desired operation by mapping input data and output data that have a nonlinear relationship therebetween through deep learning to perform various tasks. For example, a neural network may be trained through deep learning as a problem-solving process for optimization to find a point where energy is minimized while training the neural network using provided training data. Through deep learning, for example, supervised or unsupervised learning, weighted connections and other parameters corresponding to an architecture or a model of the neural network may be obtained, and the input data and the output data may be mapped to each other based on the obtained weight. In an example training, a parameter of each of the nodes of the neural network may be adjusted while an error of a result output by the output layer may be propagated backward along the neural network.
[0058] Various deep neural network structures are being studied. Since a large quantity of data is needed to perform an operation of the neural network, partially use data in a device having a small on-chip memory may be needed.
[0059] The neural network may include a deep neural network. The neural network may include neural networks such as, for example, for example, a convolutional neural network (CNN), a recurrent neural network (RNN), perceptron, feed forward (FF), a radial basis network (RBF), deep feed forward (DFF), a long short term memory (LSTM), a gated recurrent unit (GRU), an autoencoder (AE), a variational autoencoder (VAE), a denoising autoencoder (DAE), a sparse autoencoder (SAE), Markov Chain (MC), a Hopfield network (HN), a Boltzmann machine (BM), a restricted Boltzmann machine (RBM), a Depp (sic.) belief network (DBN), a deep convolutional network (DCN), a deconvolutional network (DN), a deep convolutional inverse graphics network (DCIGN), a generative adversarial network (GAN), a liquid state machine (LSM), an extreme learning machine (ELM), an echo state network (ESN), a deep residual network (DRN), a differentiable neural computer (DNC), a neural turning machine (NTM), a capsule network (CN), a Kohonen network (KN), and an attention network (AN).
The claimed “target mapping model” is merely the training of a neural network model, as taught in Applicant's Specification, paragraph [0057].
The neural network and its various components (e.g., structure, coding, and looping) are claimed as a general class of supervised learning networks. Applicant's Specification, paragraph [0059], above shows some of the numerous generic structures encompassed by the limitation.
This “target mapping model”/“neural network”/“coding... to perform the preset neural network operation based on the target mapping model”/“generating the target mapping model based on information on the hardware and a structure of the neural network”/“a loop structure” limitation does not integrate the additional element into a practical application and represents “insignificant extra-solution activity”. (See M.P.E.P. § 2106.05(I)(A).)
The answer to the inquiry is “NO”: no additional elements integrate the claimed abstract idea into a practical application.
Step 2B inquiry:
Does the claim provide an inventive concept, i.e., does the claim recite additional element(s) or a combination of elements that amount to significantly more than the judicial exception in the claim?
Applicant’s claims contain the following “additional elements”:
(1) A "one or more processors"/"processing element"/"hardware"
(2) A "target mapping model"/"neural network"/"coding... to perform the preset neural network operation based on the target mapping model"/"generating the target mapping model based on information on the hardware and a structure of the neural network"/"a loop structure"
A “processor” is a broad term which is described at a high level and includes general-purpose computers. M.P.E.P. § 2106.05(f) recites:
2106.05(f) Mere Instructions To Apply An Exception [R-10.2019]
Another consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two or recites significantly more than a judicial exception in Step 2B is whether the additional elements amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer. As explained by the Supreme Court, in order to make a claim directed to a judicial exception patent-eligible, the additional element or combination of elements must do “‘more than simply stat[e] the [judicial exception] while adding the words ‘apply it’”. Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 72, 101 USPQ2d 1961, 1965 (2012)). Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1983. See also 573 U.S. at 224, 110 USPQ2d at 1984 (warning against a § 101 analysis that turns on “the draftsman’s art”).
Further, M.P.E.P. § 2106.05(f)(2) recites:
(2) Whether the claim invokes computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Auto, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, “claiming the improved speed or efficiency inherent with applying the abstract idea on a computer” does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015). In contrast, a claim that purports to improve computer capabilities or to improve an existing technology may integrate a judicial exception into a practical application or provide significantly more. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314-15, 120 USPQ2d 1091, 1101-02 (Fed. Cir. 2016); Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36, 118 USPQ2d 1684, 1688-89 (Fed. Cir. 2016). See MPEP §§ 2106.04(d)(1) and 2106.05(a) for a discussion of improvements to the functioning of a computer or to another technology or technical field.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See M.P.E.P. § 2106.05(II).)
A "target mapping model"/"neural network"/"coding... to perform the preset neural network operation based on the target mapping model"/"generating the target mapping model based on information on the hardware and a structure of the neural network"/"a loop structure" is a broad term which is described at a high level. Applicant’s Specification recites:
[0057] The neural network may be trained to perform a desired operation by mapping input data and output data that have a nonlinear relationship therebetween through deep learning to perform various tasks. For example, a neural network may be trained through deep learning as a problem-solving process for optimization to find a point where energy is minimized while training the neural network using provided training data. Through deep learning, for example, supervised or unsupervised learning, weighted connections and other parameters corresponding to an architecture or a model of the neural network may be obtained, and the input data and the output data may be mapped to each other based on the obtained weight. In an example training, a parameter of each of the nodes of the neural network may be adjusted while an error of a result output by the output layer may be propagated backward along the neural network.
[0058] Various deep neural network structures are being studied. Since a large quantity of data is needed to perform an operation of the neural network, partially use data in a device having a small on-chip memory may be needed.
[0059] The neural network may include a deep neural network. The neural network may include neural networks such as, for example, for example, a convolutional neural network (CNN), a recurrent neural network (RNN), perceptron, feed forward (FF), a radial basis network (RBF), deep feed forward (DFF), a long short term memory (LSTM), a gated recurrent unit (GRU), an autoencoder (AE), a variational autoencoder (VAE), a denoising autoencoder (DAE), a sparse autoencoder (SAE), Markov Chain (MC), a Hopfield network (HN), a Boltzmann machine (BM), a restricted Boltzmann machine (RBM), a Depp (sic.) belief network (DBN), a deep convolutional network (DCN), a deconvolutional network (DN), a deep convolutional inverse graphics network (DCIGN), a generative adversarial network (GAN), a liquid state machine (LSM), an extreme learning machine (ELM), an echo state network (ESN), a deep residual network (DRN), a differentiable neural computer (DNC), a neural turning machine (NTM), a capsule network (CN), a Kohonen network (KN), and an attention network (AN).
The claimed “target mapping model” is merely the training of a neural network model, as taught in Applicant's Specification, paragraph [0057].
The neural network and its various components (e.g., structure, coding, and looping) are claimed as a general class of supervised learning networks. Applicant's Specification, paragraph [0059], above shows some of the numerous generic structures encompassed by the limitation.
Therefore, the claim as a whole does not amount to significantly more than the exception itself (i.e., there is no inventive concept in the claim). (See M.P.E.P. § 2106.05(II).)
Therefore, the answer to the inquiry is “NO”: no additional elements provide an inventive concept amounting to significantly more than the claimed abstract idea, nor do they integrate the claimed abstract idea into a practical application.
Claim 11 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 12
Claim 12 recites:
12. (Previously Presented) The apparatus of claim 11,
wherein the information on the hardware comprises any one or any combination of a number of the plural processing elements, a structure of the plural processing elements, a memory bandwidth, a frequency, and a memory size.
Applicant’s Claim 12 merely recites pure mathematical data. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).)
Claim 12 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 16
Claim 16 recites:
16. (Previously Presented) The apparatus of claim 14, wherein, for the calculating of the memory access size, the one or more processors are configured to:
calculate a number of data reload of the arbitrary mapping model based on the loop structure; and
calculate the memory access size based on the number of data reload and the partition structure.
Applicant’s Claim 16 merely teaches mathematical calculation of numbers. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).)
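For illustration of the kind of arithmetic characterized above, a minimal sketch follows. The function names, the tiling scheme, and all numbers are hypothetical; they are not taken from the claims or the specification.

```python
# Hypothetical sketch of the kind of calculation claim 16 recites:
# a memory access size computed from a data-reload count and a
# partition (tile) structure. All names and formulas are illustrative.

def data_reload_count(loop_order: list, tile_sizes: dict, totals: dict) -> int:
    """Count how many tile fetches a loop nest implies."""
    reloads = 1
    for dim in loop_order:
        # each loop level multiplies the number of tile fetches
        reloads *= -(-totals[dim] // tile_sizes[dim])  # ceiling division
    return reloads

def memory_access_size(reloads: int, tile_bytes: int) -> int:
    """Total bytes moved = reload count multiplied by bytes per tile."""
    return reloads * tile_bytes

# example: 8 items tiled by 4, 64 channels tiled by 16 -> 2 * 4 = 8 reloads
reloads = data_reload_count(["n", "c"], {"n": 4, "c": 16}, {"n": 8, "c": 64})
print(memory_access_size(reloads, tile_bytes=1024))  # 8 * 1024 = 8192
```

As the sketch shows, the recited calculation reduces to counting loop iterations and multiplying.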
Claim 16 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 17
Claim 17 recites:
17. (Currently Amended) The apparatus of claim 11, wherein, for the generating of the target mapping model, the one or more processors are configured to:
calculate, based on the information of the hardware and the structure, a respective mapping parameter corresponding to each of plural arbitrary mapping models; and
determine an arbitrary mapping model, among the plural arbitrary mapping models, with a maximum corresponding mapping parameter among the respective mapping parameters to be the target mapping model.
Applicant’s Claim 17 merely teaches the mathematical step of calculating a parameter and the mental step of “determining an arbitrary mapping model, among the plural arbitrary mapping models”. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).)
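The selection characterized above amounts to computing a ratio for each candidate and taking a maximum. A minimal sketch follows; the candidate names and numbers are hypothetical and are not taken from the claims or the specification.

```python
# Illustrative sketch of the selection claim 17 recites: compute a
# mapping parameter for each candidate mapping model and keep the one
# with the maximum value. Candidate data below is hypothetical.

candidates = {
    # model_name: (operation_performance, memory_access_size)
    "model_a": (1000.0, 512.0),
    "model_b": (1200.0, 2048.0),
    "model_c": (900.0, 256.0),
}

def mapping_parameter(perf: float, access_size: float) -> float:
    # the recited parameter: operation performance / memory access size
    return perf / access_size

target = max(candidates, key=lambda m: mapping_parameter(*candidates[m]))
print(target)  # model_c: 900 / 256 is the largest ratio
```

As the sketch shows, the recited determination reduces to a division and a comparison across candidates.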
Claim 17 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 18
Claim 18 recites:
18. (Previously Presented) The apparatus of claim 13, wherein, for the generating of the target mapping model, the one or more processors are configured to:
prune an inadequate mapping model based on a partition structure of the neural network and the loop structure.
Applicant’s Claim 18 merely teaches the mathematical step of pruning “an inadequate mapping model based on a partition structure of the neural network and the loop structure” (i.e., using fewer parameters in the calculations). It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).)
Claim 18 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 19
Claim 19 recites:
19. (Previously Presented) The apparatus of claim 18, wherein, for the pruning of the inadequate mapping model, the one or more processors are configured to:
prune the inadequate mapping model based on a partition structure of the neural network according to a utilization rate of the plural processing elements.
Applicant’s Claim 19 merely teaches the mathematical step of pruning “the inadequate mapping model based on a partition structure of the neural network according to a utilization rate of the plural processing elements” (i.e., using fewer parameters in the calculations). It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).)
Claim 19 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 20
Claim 20 recites:
20. (Previously Presented) The apparatus of claim 18, wherein, for the pruning of the inadequate mapping model, the one or more processors are configured to:
prune the inadequate mapping model based on a number of iterations of the loop structure.
Applicant’s Claim 20 merely teaches the mathematical step of pruning “the inadequate mapping model based on a number of iterations of the loop structure” (i.e., using fewer parameters in the calculations). It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).)
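The pruning characterized above amounts to discarding candidates whose loop trip counts exceed a bound. A minimal sketch follows; the threshold and the candidate data are hypothetical and are not taken from the claims or the specification.

```python
# Hypothetical sketch of the pruning claim 20 recites: discard
# candidate mapping models whose loop structures imply too many
# iterations. Threshold and candidates are illustrative only.

from math import prod

candidate_loop_bounds = {
    "model_a": [4, 8, 16],   # trip counts of the nested loops
    "model_b": [64, 64, 8],
    "model_c": [2, 4, 4],
}

MAX_ITERATIONS = 1000  # assumed pruning threshold

surviving = {
    name: bounds
    for name, bounds in candidate_loop_bounds.items()
    if prod(bounds) <= MAX_ITERATIONS  # total iterations of the nest
}
print(sorted(surviving))  # model_b (32768 iterations) is pruned
```

As the sketch shows, the recited pruning reduces to multiplying loop counts and comparing against a bound.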
Claim 20 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 27
Claim 27 recites:
27. (Currently Amended) The apparatus of claim 11,
wherein, for the generating of the coding, the one or more processors are configured to generate the coding by changing preset coding, which includes the loop structure, that is configured to cause at least one processing element to perform the preset neural network operation, and
wherein the preset coding exists prior to the generating of the target mapping model.
Applicant’s Claim 27 merely teaches the mental step of “changing preset coding”. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).)
Claim 27 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Claim 28
Claim 28 recites:
28. (Currently Amended) The apparatus of claim 11,
wherein the loop structure is a loop structure of operations of a layer of the neural network, and is included in a first source code configured to cause at least one processing element to perform the preset neural network operation, and
wherein the generated coding includes a second source code.
Applicant’s Claim 28 merely teaches pure software to implement a neural network operation. It does not integrate the abstract idea into a practical application, nor is it anything significantly more than the abstract idea. (See M.P.E.P. § 2106.05(a)(II).)
Claim 28 is, therefore, NOT ELIGIBLE subject matter under 35 U.S.C. § 101.
Response to Arguments
Applicant's arguments filed 04 DEC 2025 have been fully considered but they are not persuasive. Specifically, Applicant argues:
Argument 1
i. Claims 1-28 do not recite “mental processes” abstract ideas
The January 2019 Guidance and the USPTO's October 2019 Update: Subject Matter Eligibility ("October 2019 Guidance") make clear that the present claims do not recite "mental processes."
Applicant respectfully submits that, e.g.,
***
as recited in claim 4, cannot practically be performed in the human mind, as the human mind is not equipped to perform such complex operations.
Accordingly, Applicant respectfully submits the present claims do not recite any mental processes deemed to be abstract ideas.
The determinations recited in the claims are well within the capability of the human mind. For example, “determining the operation performance based on the utilization rate” is merely making a performance judgment by looking at the utilization rate.
Applicant's argument is unpersuasive.
The rejections stand.
Argument 2
In the present application, with respect to whether the present claims recite mathematical concepts, Applicant respectfully submits that the features of the present claims are more analogous to those of Example 38 than those of Example 41, wherein, while some of the features of the independent claims may be based on mathematical concepts, the mathematical concepts are not recited in the claims.
Accordingly, Applicant respectfully submits the present claims do not recite any mathematical concepts deemed to be abstract ideas.
Applicant recites in the claim “…dividing the operation performance by the memory access size…” This is clearly the mathematical operation of “division.”
Applicant's argument is unpersuasive.
The rejections stand.
Argument 3
It is respectfully submitted that the present claims improve the technical fields of computer capabilities, specifically, optimizing hardware utilization for neural networks.
The steps of calculating utilization rates based on partition structures to generate optimized code are not merely mental steps but complex technical operations performed by a processor to solve a technical problem, particularly, hardware inefficiency via neural network hardware optimization, code generation for specialized processors, and automated hardware-software co-design by optimizing the utilization of processing elements in neural network accelerators through mapping model generation and code optimization.
***
These improvements are realized in the claimed features. For example, independent claim 4 recites, inter alia
"generating: a target mapping model that maps a preset neural network operation of a neural network with respect to plural processing elements set to perform the preset neural network operation, where the plural processing elements are included in hardware configured to perform neural network operations, and where the hardware physically exists prior to the generating of the target mapping model" which generates hardware-optimized target mapping models that maximize processing element utilization by calculating and optimizing a mapping parameter defined as operation performance divided by memory access size ([00138]- [00139]); "generating: optimized code configured to cause the hardware to perform the preset neural network operation based on the target mapping model" which generates optimized code automatically tailored to specific hardware configurations, eliminating the need for manual optimization and improving hardware utilization rates ([00163]-[00164]) and improves actual hardware performance by reducing memory access overhead and increasing processing element utilization through intelligent tiling and dataflow optimization ([0082]-[0095]).
Accordingly, the claimed invention is not merely an abstract mathematical concept or mental process, but rather a concrete technological method for optimizing the interaction between software and specialized neural network hardware, resulting in measurable improvements in hardware performance and efficiency.
The invention represents a significant technological advancement in the field of neural network hardware optimization, providing automated tools that enable more efficient utilization of expensive specialized processing hardware.
Accordingly, the present claims recite features that reflect an improvement in the functioning of a computer, or an improvement to another technology or technical field, and therefore the claimed features are integrated into a practical application, and therefore the claims are not "directed to" an abstract idea, and therefore the claims recite patent-eligible subject matter.
Accordingly, Applicant respectfully submits the rejection of the claims under 35 U.S.C.§ 101 is deficient and Applicant respectfully requests the rejection be withdrawn.
The claims are for generating “optimized code” and “parameters”. It is the abstractions in the parameters and the “code” that are improved. The computer itself is not improved.
Applicant's argument is unpersuasive.
The rejections stand.
Argument 4
In rejecting claim 4, the Office Action asserts that Sarwar discloses every feature of claim 4.
Applicant respectfully disagrees. Sarwar is related to an incremental learning method for deep convolutional neural networks using partial network sharing. Page 11 of Sarwar merely discusses the existing hardware architecture used for evaluation purposes. There is no discussion of
"generating: a target mapping model mapping the neural network operation on processing elements, of the hardware, available to perform the neural network operation based on the information and a structure of the neural network, where the hardware physically exists prior to the generating of the target mapping model; generating optimized code to configure the hardware to perform the neural network operation of the neural network based on the target mapping model, wherein the generating of the target mapping model comprises: determining a mapping parameter corresponding to an arbitrary mapping model based on the information and the structure; and determining the target mapping model based on the mapping parameter, and wherein the determining of the mapping parameter comprises: determining an operation performance for the arbitrary mapping model based on a partition structure of the neural network; determining a memory access size for the arbitrary mapping model based on a loop structure included in the neural network operation; and determining the mapping parameter comprises dividing the operation performance by the memory access size, where the target mapping model represents a maximized mapping parameter, and wherein the determining of the operation performance comprises: determining a utilization rate of the processing elements based on the partition structure of the neural network; and determining the operation performance based on the utilization rate,"
as recited by amended claim 4.
The 35 U.S.C. § 102 rejections are withdrawn.
Argument 5
Page 11 of Sarwar sets forth:
***
As shown above, Sarwar focuses on an evaluation methodology, in particular, how to measure and validate the proposed incremental learning method of Sarwar. Sarwar discloses an incremental learning methodology (using partial network sharing) and merely evaluates the energy consumption results of that methodology. Sarwar does not disclose a system that generates a target mapping model based on hardware information to optimize code. Sarwar also does not discuss "determining a utilization rate of the processing elements based on the partition structure of the neural network; and determining the operation performance based on the utilization rate."
The 35 U.S.C. § 102 rejections are withdrawn.
Argument 6
The Office Action cited the "energy consumption per iteration" in Sarwar as corresponding to the claimed generation of a mapping model. However, Sarwar merely measures the energy result of a training process. It does not calculate a "utilization rate" from a "partition structure" as an input/intermediate step to generate an optimized mapping model.
The 35 U.S.C. § 102 rejections are withdrawn.
Argument 7
Furthermore, the "iteration" in Sarwar refers to a training step, which is different from the "loop structure" optimization as disclosed in the present disclosure.
The 35 U.S.C. § 102 rejections are withdrawn.
Argument 8
Sarwar first discusses the CMOS digital baseline architecture that is used to analyze energy consumption. In particular, Sarwar uses a weight stationary dataflow with many cores architecture, where each core has Matrix Vector Multiplication (MVM) units with 32KB memory and 32 MACs as a hardware evaluation setup. The evaluation uses IBM 32nm technology and CACTI memory modeling for accurate energy estimation.
Sarwar's baseline is the comparison standard against which Sarwar's method is evaluated: specifically, an incrementally trained network without any network sharing. In other words, separate, complete networks are trained for each new task with no parameter sharing between tasks.
Sarwar's baseline demonstrates the benefits of Sarwar's partial network sharing approach by showing quantitative improvements such as 2.45x energy reduction, 1.55-6x training time reduction, and 67-99% storage requirement reduction compared to the standard incrementally trained network without any network sharing.
However, there is still no discussion in Sarwar regarding "generating: a target mapping model mapping the neural network operation on processing elements, of the hardware, available to perform the neural network operation based on the information and a structure of the neural network, where the hardware physically exists prior to the generating of the target mapping model; generating optimized code to configure the hardware to perform the neural network operation of the neural network based on the target mapping model, wherein the generating of the target mapping model comprises: determining a mapping parameter corresponding to an arbitrary mapping model based on the information and the structure; and determining the target mapping model based on the mapping parameter, and wherein the determining of the mapping parameter comprises: determining an operation performance for the arbitrary mapping model based on a partition structure of the neural network; determining a memory access size for the arbitrary mapping model based on a loop structure included in the neural network operation; and determining the mapping parameter comprises dividing the operation performance by the memory access size, where the target mapping model represents a maximized mapping parameter, and wherein the determining of the operation performance comprises: determining a utilization rate of the processing elements based on the partition structure of the neural network; and determining the operation performance based on the utilization rate," as recited by amended claim 4.
The 35 U.S.C. § 102 rejections are withdrawn.
Argument 9
In the present case, the Office Action has not established that each element of the claims is disclosed in Sarwar. There is no discussion in Sarwar regarding any generation of a target mapping model, any generation of optimized code, or any "utilization rate of the processing elements" determined based on a "partition structure of the neural network."
It is respectfully submitted that, based on the above explanation of the actual disclosure of Sarwar, Sarwar does not disclose or suggest all the claimed features of independent claim 4.
The 35 U.S.C. § 102 rejections are withdrawn.
Argument 10
Accordingly, based on the above explanation of the actual disclosure of Sarwar, it is respectfully submitted that Sarwar further does not disclose or suggest all the claimed features of independent claims 4 and 11, respectively.
Accordingly, independent claims 4 and 11 are all directed to patentable subject matter.
The 35 U.S.C. § 102 rejections are withdrawn.
Regarding the 35 U.S.C. § 101 rejections, similar arguments for similar claims are similarly unpersuasive.
Applicant's argument is unpersuasive.
The rejections stand.
Argument 11
The dependent claims are directed to patentable subject matter by virtue of their dependency as well as for the additional features recited. Accordingly, withdrawal of the rejection is respectfully requested.
The 35 U.S.C. § 102 rejections are withdrawn.
Regarding the 35 U.S.C. § 101 rejections, since the independent claims recite no patent-eligible subject matter, the dependent claims inherit no such subject matter by virtue of their dependency alone.
Applicant's argument is unpersuasive.
The rejections stand.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiries concerning this communication or earlier communications from the examiner should be directed to Wilbert L. Starks, Jr., who may be reached Monday through Friday, between 8:00 a.m. and 5:00 p.m. EST, via telephone at (571) 272-3691 or email at Wilbert.Starks@uspto.gov.
If you need to send an Official facsimile transmission, please send it to (571) 273-8300.
If attempts to reach the examiner are unsuccessful the Examiner’s Supervisor (SPE), Kakali Chaki, may be reached at (571) 272-3719.
Hand-delivered responses should be delivered to the Receptionist at the Customer Service Window, Randolph Building, 401 Dulany Street, Alexandria, VA 22313, located on the first floor of the south side of the Randolph Building.
Finally, information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Moreover, status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have any questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) toll-free at 1-866-217-9197.
/WILBERT L STARKS/
Primary Examiner, Art Unit 2122
WLS
20 MAR 2026