DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are presented for examination.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on May 24, 2023 and January 29, 2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Objections
Claims 1, 12, 19, and 20 are objected to because of the following informalities:
Claims 1 and 12: “a circuit performance modeling” should read “circuit performance modeling”
Claim 19: “comprising of” should read “comprising”
Claims 19 and 20: “computing program product” should read “computer program product”
All claims dependent on a claim objected to hereunder are also objected to.
Appropriate correction is required.
Specification
The disclosure is objected to because of the following informalities:
Abstract: “a circuit performance modeling” should read “circuit performance modeling”
[0002]: “Artificial Intelligence” should read “artificial intelligence”
[0005]: “a circuit performance modeling” should read “circuit performance modeling”
[0013]: “illustrates is an” should read “illustrates an”
[0027]: “a circuit performance modeling” should read “circuit performance modeling”
[0032]: “an additional act that includes the Bi-LSTM outputs” should read “an additional act that includes the Bi-LSTM outputting”
[0036]: “an additional act that includes the stochastic gradient descent-based module processes parameters” should read “an additional act that includes the stochastic gradient descent-based module processing parameters”
[0038]: “a circuit performance modeling” should read “circuit performance modeling”
[0041]: “the embedding allows mapped” should read “the embedding allows mapping”
[0043]: “and map the electric circuit” should read “and mapping the electric circuit”
[0046]: “th use” should read “the use”
[0050]: “circuit. transformer learning” should read “circuit. Transformer learning”
[0052]: “Long-Term Short Term” should read “Long Short-Term”
[0054]: “transformer based” should read “transformer-based”
[0055]: “and efficiency rating” should read “an efficiency rating”
[0057]: “Long Term Short Term” should read “Long Short-Term”
[0059]: “Bi-LTSM 212” should read “Bi-LSTM 212”
[0061]: “parts of a sequence the sequence” should read “parts of a sequence”
[0067]: “Long Term Short Term” should read “Long Short-Term”
[0072]: “and stored” should read “are stored”, and “854.The” should read “854. The”
[0073]: “network module 865” should read “network module 815”
[0075]: “multiple, coordinated” should read “multiple coordinated”
[0076]: “install advisor engine 800” should read “install advisor engine 862”
[0079]: “Install Advisor Engine 800” should read “Install Advisor Engine 862”
[0083]: “network module 865” should read “network module 815”
[0085]: “Cloud orchestration module 842” should read “Cloud orchestration module 841”
Appropriate correction is required.
Drawings
The drawings are objected to because:
In Fig. 1B, “deep learning-base model” should read “deep learning-based model.”
In Fig. 2A, reference character 220 is included, but is not mentioned in the description.
In Fig. 2G, reference character 217 is not depicted, but is mentioned in the description of this figure (see [0060]).
In Fig. 6A/6B, reference character 613 is included, but is not mentioned in the description.
In Fig. 8, reference character 862 is not depicted, but is mentioned in the description of this figure (see [0073]).
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the broadest reasonable interpretation of a computer program product comprising one or more “computer-readable storage devices” encompasses signals per se. Paragraph [0071] of the specification defines a “computer-readable storage medium” as excluding signals per se; therefore, the examiner suggests that the claims be amended to change “device” to “medium,” or to recite one or more “non-transitory” computer-readable storage devices, in order to overcome this rejection.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”).
Claim 1
Step 1: The claim recites a computing device, and is therefore directed to the statutory category of machines.
Step 2A Prong 1: The claim recites:
“identifying and extracting paths of an electric circuit between a plurality of designated components that represent the electric circuit”: This limitation encompasses mentally identifying and extracting paths of a graphical representation of an electric circuit.
“converting at least one of the extracted paths to a path embedding comprising a vector of a fixed length”: This limitation encompasses mentally converting at least one of the extracted paths to a path embedding comprising a vector of a fixed length.
“predicting…characteristics of the designated components that represent the electric circuit based on an input of circuit parameters of the electric circuit”: This limitation encompasses mentally predicting characteristics of the designated components that represent the electric circuit based on circuit parameters of the electric circuit.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “A computing device comprising: a processor; a storage device coupled to the processor, wherein the storage device stores instructions to cause the processor to perform acts to provide a circuit performance modeling.” However, this limitation amounts to mere instructions to apply a judicial exception using a generic computer (MPEP § 2106.05(f)). The claim also recites that the prediction is performed “by a circuit representation-learning model,” however this limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)).
Step 2B: The claim does not contain significantly more than the judicial exception. The computing device and circuit representation-learning model limitations amount to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)) as stated above. As an ordered whole, the claim is directed to a generic computer performing a mentally performable process of identifying paths of a circuit, converting at least one of the paths to a vector of a fixed length, and predicting characteristics of the circuit. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.
Claim 2
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites:
“mapping…the represented electric circuit to a scalar value”: This limitation encompasses mentally mapping the represented electric circuit to a scalar value.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the mapping is done “by a multi-layer perceptron network,” however this limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)).
Step 2B: The claim does not contain significantly more than the judicial exception. The multi-layer perceptron network limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)) as stated above.
Claim 3
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites the same judicial exceptions as claim 2.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “wherein the circuit representation-learning model comprises a transformer model configured to… [perform the judicial exception].” However, this limitation merely further limits the circuit representation-learning model to a transformer model, and still amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)).
Step 2B: The claim does not contain significantly more than the judicial exception. The transformer model limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)) as stated above.
Claim 4
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites the same judicial exceptions as claim 3.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “wherein the transformer model includes a stack of multi-head attention modules, and wherein the instructions cause the processor to perform an additional act comprising operating attention mechanism functions in parallel.” However, this limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)).
Step 2B: The claim does not contain significantly more than the judicial exception. The transformer model and operating attention mechanism functions in parallel limitations amount to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)) as stated above.
Claim 5
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites:
“embedding the circuit parameters of the electric circuit…”: This limitation encompasses mentally embedding the circuit parameters.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “as an input to the transformer model.” However, this limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)).
Step 2B: The claim does not contain significantly more than the judicial exception. The “as an input to the transformer model” limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)) as stated above.
Claim 6
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites the same judicial exceptions as claim 1.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “wherein the converting of at least one of the extracted paths to a path embedding is performed by a bidirectional Long Short-Term Memory (Bi-LSTM) network.” However, this limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)). The claim also recites “wherein the instructions cause the processor to perform an additional act comprising outputting, by the Bi-LSTM network, the vector of a fixed length for each path embedding.” However, this limitation amounts to the insignificant extra-solution activity of mere data gathering and outputting (MPEP § 2106.05(g)).
Step 2B: The claim does not contain significantly more than the judicial exception. The Bi-LSTM network limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)) as stated above. The limitation of outputting the vector of a fixed length for each path embedding, in addition to being insignificant extra-solution activity, is also directed to the well-understood, routine, and conventional activity of storing and retrieving information in memory (MPEP § 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93).
Claim 7
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites:
“representing the electric circuit as a device embedding…”: This limitation encompasses mentally representing the electric circuit as a device embedding by mentally converting the representation of the circuit into a vector embedding.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the device embedding is “input to the Bi-LSTM network.” However, this limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)).
Step 2B: The claim does not contain significantly more than the judicial exception. The “input to the Bi-LSTM network” limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)) as stated above.
Claim 8
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites the same judicial exceptions as claim 2.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “training, by a training model, the circuit representation-learning model to perform circuit performance modeling from beginning-to-end.” However, this limitation merely generally links the above-mentioned abstract ideas to the technological environment of model training (MPEP § 2106.05(h)).
Step 2B: The claim does not contain significantly more than the judicial exception. The training limitation amounts to generally linking the judicial exception to a particular technological environment (MPEP § 2106.05(h)) as stated above.
Claim 9
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites the same judicial exceptions as claim 8.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “wherein the training model comprises a stochastic gradient descent-based model.” However, this limitation merely further limits the training model that performs the training limitation of claim 8, which is still merely generally linking the above-mentioned abstract ideas to the technological environment of model training (MPEP § 2106.05(h)).
Step 2B: The claim does not contain significantly more than the judicial exception. The training limitation amounts to generally linking the judicial exception to a particular technological environment (MPEP § 2106.05(h)) as stated above.
Claim 10
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites the same judicial exceptions as claim 9.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim additionally recites that the circuit representation-learning model comprises a “transformer model.” However, this limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)). The claim also recites “processing, by the stochastic gradient descent-based model, parameters in the path embedding, the transformer model, and the multi-layer perceptron network.” However, this limitation merely generally links the above-mentioned abstract ideas to the technological environment of model training (MPEP § 2106.05(h)).
Step 2B: The claim does not contain significantly more than the judicial exception. The “transformer model” limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)), and the processing parameters limitation amounts to generally linking the above-mentioned abstract ideas to the technological environment of model training (MPEP § 2106.05(h)) as stated above.
Claim 11
Step 1: A machine, as above.
Step 2A Prong 1: The claim recites the same judicial exceptions as claim 10 above.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “wherein the multi-layer perceptron network has an input size that is the same as an output size of the transformer model.” However, this limitation merely further limits the multi-layer perceptron network that performs the mental process of mapping the circuit to a scalar value, and still amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)).
Step 2B: The claim does not contain significantly more than the judicial exception. The multi-layer perceptron network limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)) as stated above.
Claim 12
Step 1: The claim recites a computer-implemented method of a circuit performance modeling, and therefore is directed to the statutory category of processes.
Step 2A Prong 1: The claim recites:
“identifying and extracting paths between a plurality of designated components that represent an electric circuit”: This limitation encompasses mentally identifying and extracting paths of a graphical representation of an electric circuit.
“converting one or more of the extracted paths to respective path embeddings including a corresponding vector of a fixed length”: This limitation encompasses mentally converting at least one of the extracted paths to a path embedding comprising a vector of a fixed length.
“predicting characteristics of the represented electric circuit based on an input of circuit parameters and the path embeddings of the electric circuit”: This limitation encompasses mentally predicting characteristics of the represented electric circuit based on circuit parameters and the path embeddings of the electric circuit.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the method is “computer-implemented,” however this limitation amounts to mere instructions to apply a judicial exception using a generic computer (MPEP § 2106.05(f)).
Step 2B: The claim does not contain significantly more than the judicial exception. The “computer-implemented” limitation amounts to mere instructions to apply a judicial exception using a generic computer (MPEP § 2106.05(f)) as stated above. As an ordered whole, the claim is directed to a mentally performable process of identifying paths of a circuit, converting at least one of the paths to a path embedding, and predicting characteristics of the circuit. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.
Claim 13
Step 1: A process, as above.
Step 2A Prong 1: The claim recites:
“mapping…the represented electric circuit to a scalar value”: This limitation encompasses mentally mapping the represented electric circuit to a scalar value.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the mapping is done “by a multi-layer perceptron network,” however this limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)).
Step 2B: The claim does not contain significantly more than the judicial exception. The multi-layer perceptron network limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)) as stated above.
Claim 14
Step 1: A process, as above.
Step 2A Prong 1: The claim recites:
“predicting the characteristics of the represented electric circuit…”: This limitation encompasses mentally predicting the characteristics of the represented electric circuit.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the predicting is done “by a transformer model.” However, this limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)).
Step 2B: The claim does not contain significantly more than the judicial exception. The transformer model limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)) as stated above.
Claim 15
Step 1: A process, as above.
Step 2A Prong 1: The claim recites:
“embedding the circuit parameters of the electric circuit…”: This limitation encompasses mentally embedding the circuit parameters.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “as an input to the transformer model.” However, this limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)).
Step 2B: The claim does not contain significantly more than the judicial exception. The “as an input to the transformer model” limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)) as stated above.
Claim 16
Step 1: A process, as above.
Step 2A Prong 1: The claim recites the same judicial exceptions as claim 12.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “training, by a training model, a circuit representation-learning model from beginning-to-end for circuit performance modeling.” However, this limitation merely generally links the above-mentioned abstract ideas to the technological environment of model training (MPEP § 2106.05(h)).
Step 2B: The claim does not contain significantly more than the judicial exception. The training limitation amounts to generally linking the judicial exception to a particular technological environment (MPEP § 2106.05(h)) as stated above.
Claim 17
Step 1: A process, as above.
Step 2A Prong 1: The claim recites the same judicial exceptions as claim 16.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “training the training model to process parameters for the path embeddings; predicting the characteristics of the represented electric circuit; and mapping the represented electric circuit to a scalar value by a multi-layer perceptron network.” However, this limitation merely generally links the above-mentioned abstract ideas to the technological environment of model training (MPEP § 2106.05(h)).
Step 2B: The claim does not contain significantly more than the judicial exception. The “training the training model to process parameters” limitation amounts to generally linking the above-mentioned abstract ideas to the technological environment of model training (MPEP § 2106.05(h)) as stated above.
Claim 18
Step 1: For the purpose of the abstract idea rejection, the examiner will assume the claim is directed to the statutory category of articles of manufacture.
Step 2A Prong 1: The claim recites:
“…identify and extract paths between a plurality of designated components that represent an electric circuit”: This limitation encompasses mentally identifying and extracting paths of a graphical representation of an electric circuit.
“…convert one or more of the extracted paths to respective path embeddings including a vector of a fixed length”: This limitation encompasses mentally converting at least one of the extracted paths to a path embedding comprising a vector of a fixed length.
“…predict… characteristics of the represented electric circuit based on an input of circuit parameters and the path embeddings of the electric circuit”: This limitation encompasses mentally predicting characteristics of the represented electric circuit based on circuit parameters and the path embeddings of the electric circuit.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “A computer program product comprising: one or more computer-readable storage devices and program instructions stored on at least one of the one or more computer-readable storage devices, the program instructions executable by a processor, the program instructions comprising: program instructions to… [perform the judicial exception].” However, this limitation amounts to mere instructions to apply a judicial exception using a generic computer (MPEP § 2106.05(f)). The claim also recites that the prediction is performed “by a transformer model.” However, this limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)).
Step 2B: The claim does not contain significantly more than the judicial exception. The computer program product and transformer model limitations amount to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)) as stated above. As an ordered whole, the claim is directed to a generic computer program product performing a mentally performable process of identifying paths of a circuit, converting at least one of the paths to a path embedding, and predicting characteristics of the circuit. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.
Claim 19
Step 1: An article of manufacture, as above.
Step 2A Prong 1: The claim recites:
“mapping… of the represented electric circuit to a scalar value”: This limitation encompasses mentally mapping the represented electric circuit to a scalar value.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the mapping is performed “by a multi-layer perceptron network,” however this limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)).
Step 2B: The claim does not contain significantly more than the judicial exception. The multi-layer perceptron network limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)) as stated above.
Claim 20
Step 1: An article of manufacture, as above.
Step 2A Prong 1: The claim recites the same judicial exceptions as claim 19.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “program instructions to operate attention mechanism functions in parallel, and to concatenate and linearly transform outputs of the attention mechanism functions for input to the multi-layer perceptron network.” However, this limitation amounts to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)).
Step 2B: The claim does not contain significantly more than the judicial exception. The limitations of operating attention mechanism functions in parallel and concatenating and linearly transforming outputs of the attention mechanism functions amount to mere instructions to apply a judicial exception using a generic computer programmed with a generic class of computer algorithms (MPEP § 2106.05(f)) as stated above.
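As general technical background only, and not as a characterization of the claims or of any cited reference, the following minimal Python (PyTorch) sketch shows attention mechanism functions operated in parallel, with their outputs concatenated and linearly transformed for input to a multi-layer perceptron; all names and dimensions are hypothetical.

import torch
import torch.nn as nn

d_model, n_heads, seq_len = 32, 4, 6
x = torch.randn(1, seq_len, d_model)  # hypothetical embedded circuit sequence
# One single-head attention module per head; the heads run independently ("in parallel").
heads = [nn.MultiheadAttention(d_model, num_heads=1, batch_first=True) for _ in range(n_heads)]
outs = [h(x, x, x)[0] for h in heads]            # each output: (1, seq_len, d_model)
concat = torch.cat(outs, dim=-1)                 # concatenate the head outputs
proj = nn.Linear(n_heads * d_model, d_model)     # linearly transform the concatenation
mlp_input = proj(concat)                         # fed to the multi-layer perceptron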
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 6, and 12 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Cao et al. (US20240265181) (“Cao”).
Regarding claim 1, Cao discloses “A computing device comprising: a processor; a storage device coupled to the processor, wherein the storage device stores instructions to cause the processor to perform acts to provide a circuit performance modeling ([0054]: “For example, the voltage threshold classification network comprises three voltage threshold types, the number of training batches is 512, the learning rate is 0.001, the optimizer is an adaptive moment estimation optimizer Adam, and the loss function is a cross entropy loss function”; the examiner notes that this implies the method is performed by a computer), the acts comprising:
identifying and extracting paths of an electric circuit between a plurality of designated components that represent the electric circuit ([0009]: “S1: …extracting a critical path passing through each gate cell in the circuit to obtain a path feature sequence”);
converting at least one of the extracted paths to a path embedding comprising a vector of a fixed length ([0023]: “In S3, the path feature sequence obtained in S1 is input to the BLSTM, the feature sequence is normalized first, the feature sequence is then compressed to remove invalid fill values generated in the forming process of the feature sequence, and then the feature sequence is forward and backward input to LSTM layers respectively to obtain a forward LSTM embedding vector and a backward LSTM embedding vector; and the forward LSTM embedding vector and the backward LSTM embedding vector are merged and then transformed by a weight matrix, the compressed sequence is then filled again to facilitate subsequent data processing; and finally, the sequence is input to a pooling layer to be subjected to dimension reduction to obtain a final LSTM embedding vector, such that the relation between the path-level topological information and the leakage power optimization result is established”; the examiner notes that “a final LSTM embedding vector” corresponds to a “path embedding” because it is an embedding that represents “path-level topological information,” and the vector is “of a fixed length” because “the compressed sequence is then filled again to facilitate subsequent data processing” refers to padding the sequence to a maximum path length, see [0045]: “sequence data is filled to a maximum path length to solve the problem of length inconsistency of the sequence data caused by length inconsistency of the paths”); and
predicting, by a circuit representation-learning model, characteristics of the designated components that represent the electric circuit based on an input of circuit parameters of the electric circuit ([0014]: “S5: merging an output of a GNN model obtained in S2, an output of the BLSTM obtained in S3 and an output of the ANN obtained in S4, and inputting a vector obtained after merging to a voltage threshold classification network, wherein voltage threshold classification network, after being trained, is able to establish a relation between the circuit-level topological information, the path-level topological information, the topological information of the gate cells and the voltage threshold types of the gate cells after leakage power optimization to predict the voltage threshold types of the gate cells in the circuit after optimization”; the examiner notes that “voltage threshold classification network” corresponds to a “circuit representation-learning model,” “voltage threshold types of the gate cells” corresponds to “characteristics of the designated components that represent the electric circuit” and “circuit-level topological information” (output of the GNN model obtained in S2; see also [0038]) corresponds to “an input of circuit parameters of the electric circuit”).
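As general technical background only, and not as part of Cao’s disclosure, the notion of identifying and extracting paths between designated components of a represented circuit can be sketched as follows (Python with NetworkX; the netlist and component names are hypothetical).

import networkx as nx

# Hypothetical netlist graph: nodes are designated components, edges are connections.
g = nx.DiGraph([("in", "inv1"), ("inv1", "nand1"), ("nand1", "out"), ("in", "nand1")])
paths = list(nx.all_simple_paths(g, source="in", target="out"))
# paths == [['in', 'inv1', 'nand1', 'out'], ['in', 'nand1', 'out']]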
Regarding claim 6, the rejection of claim 1 is incorporated. Cao further discloses “wherein the converting of at least one of the extracted paths to a path embedding is performed by a bidirectional Long Short-Term Memory (Bi-LSTM) network, and wherein the instructions cause the processor to perform an additional act comprising outputting, by the Bi-LSTM network, the vector of a fixed length for each path embedding” ([0012]: “S3: inputting the path feature sequence obtained in S1 to a bi-directional long short-term memory (BLSTM)” and [0023]: “In S3, the path feature sequence obtained in S1 is input to the BLSTM, the feature sequence is normalized first, the feature sequence is then compressed to remove invalid fill values generated in the forming process of the feature sequence, and then the feature sequence is forward and backward input to LSTM layers respectively to obtain a forward LSTM embedding vector and a backward LSTM embedding vector; and the forward LSTM embedding vector and the backward LSTM embedding vector are merged and then transformed by a weight matrix, the compressed sequence is then filled again to facilitate subsequent data processing; and finally, the sequence is input to a pooling layer to be subjected to dimension reduction to obtain a final LSTM embedding vector, such that the relation between the path-level topological information and the leakage power optimization result is established”).
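As general technical background only, the cited Bi-LSTM behavior (converting variable-length path feature sequences to a fixed-length vector per path) can be sketched as follows (Python/PyTorch); the feature sizes and the mean-pooling step are hypothetical choices, not taken from Cao.

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence

# Two hypothetical path feature sequences of unequal length (8 features per path element).
paths = [torch.randn(5, 8), torch.randn(3, 8)]
padded = pad_sequence(paths, batch_first=True)   # fill to the maximum path length
bilstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True, bidirectional=True)
out, _ = bilstm(padded)                          # (2, max_len, 32)
embedding = out.mean(dim=1)                      # pooling yields a fixed-length vector per path
print(embedding.shape)                           # torch.Size([2, 32])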
Regarding claim 12, Cao discloses “A computer-implemented method of a circuit performance modeling, the method comprising:
identifying and extracting paths between a plurality of designated components that represent an electric circuit ([0009]: “S1: …extracting a critical path passing through each gate cell in the circuit to obtain a path feature sequence”);
converting one or more of the extracted paths to respective path embeddings including a corresponding vector of a fixed length ([0023]: “In S3, the path feature sequence obtained in S1 is input to the BLSTM, the feature sequence is normalized first, the feature sequence is then compressed to remove invalid fill values generated in the forming process of the feature sequence, and then the feature sequence is forward and backward input to LSTM layers respectively to obtain a forward LSTM embedding vector and a backward LSTM embedding vector; and the forward LSTM embedding vector and the backward LSTM embedding vector are merged and then transformed by a weight matrix, the compressed sequence is then filled again to facilitate subsequent data processing; and finally, the sequence is input to a pooling layer to be subjected to dimension reduction to obtain a final LSTM embedding vector, such that the relation between the path-level topological information and the leakage power optimization result is established”; the examiner notes that “a final LSTM embedding vector” corresponds to a “path embedding” because it is an embedding that represents “path-level topological information,” and the vector is “of a fixed length” because “the compressed sequence is then filled again to facilitate subsequent data processing” refers to padding the sequence to a maximum path length, see [0045]: “sequence data is filled to a maximum path length to solve the problem of length inconsistency of the sequence data caused by length inconsistency of the paths”); and
predicting characteristics of the represented electric circuit based on an input of circuit parameters and the path embeddings of the electric circuit ([0014]: “S5: merging an output of a GNN model obtained in S2, an output of the BLSTM obtained in S3 and an output of the ANN obtained in S4, and inputting a vector obtained after merging to a voltage threshold classification network, wherein voltage threshold classification network, after being trained, is able to establish a relation between the circuit-level topological information, the path-level topological information, the topological information of the gate cells and the voltage threshold types of the gate cells after leakage power optimization to predict the voltage threshold types of the gate cells in the circuit after optimization”; the examiner notes that “voltage threshold types of the gate cells” corresponds to “characteristics of the represented electric circuit,” “circuit-level topological information” (output of the GNN model obtained in S2; see also [0038]) corresponds to “an input of circuit parameters of the electric circuit” and “path-level topological information” (output of the BLSTM obtained in S3; see also [0039]) corresponds to “an input of the path embeddings of the electric circuit”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Cao in view of Wu et al. (US20220261654) (“Wu”).
Regarding claim 2, the rejection of claim 1 is incorporated. Cao does not appear to explicitly disclose the further limitations of the claim.
However, Wu discloses “mapping, by a multi-layer perceptron network… [a] represented electric circuit to a scalar value” (Wu, [0067]: “The artificial neural network is used as an approximate but extremely fast simulator of the circuit under consideration in order to quickly explore the parameter space. First, we decide on the input parameter set I. This set of parameters I=Id∪Ip is decomposed into two groups Id and Ip. The design (or controllable) parameters Id are system parameters that the circuit designer can change (and can be continuous or discrete variables) such as device size, circuit topology, and resistor values. Circuit topology can be expressed via the set of circuit elements that have nonzero resistances or capacitances. Ip are the process, yield, and environmental parameters such as operating temperature, process corner, supply voltage variations, and other operating conditions. The output parameters O are the performance parameters such as power consumption, timing offset, group delay, and area” and [0071]: “From the 5000 simulations results for various input parameters I=Id∪Ip, 4950 of these results were used for training and 50 for testing. As the neural network 226, a multi-layer perceptron (MLP) with 7 layers was used with a total of 2070 neurons and 602369 weights with rectified linear unit (ReLu) activation functions. The results of the training are shown in FIG. 10 where the neural network can predict Vcmoff very well when it is small”; the examiner notes that the multi-layer perceptron maps the input parameters, which represent an electric circuit, to a predicted offset voltage Vcmoff, which is a scalar value).
Wu and the instant application both relate to circuit performance modeling using neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the disclosure of Cao to include “mapping, by a multi-layer perceptron network, the represented electric circuit to a scalar value” as disclosed by Wu, and one would have been motivated to do so for the purpose of allowing for automatic robust optimization and tuning of circuit designs using a trained machine learning model, which is faster and uses fewer computational resources than circuit simulators (see Wu, [0006]).
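As general technical background only, the mapping performed by such a multi-layer perceptron (a parameter vector in, a single scalar out) can be sketched as follows (Python/PyTorch); the input size and layer widths are hypothetical, not Wu’s seven-layer network.

import torch
import torch.nn as nn

params = torch.randn(1, 12)      # hypothetical circuit parameters (sizes, temperature, supply, ...)
mlp = nn.Sequential(
    nn.Linear(12, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),            # single output neuron: one scalar per circuit instance
)
scalar = mlp(params)             # e.g., a predicted offset voltage; shape (1, 1)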
Regarding claim 13, the rejection of claim 12 is incorporated. Cao does not appear to explicitly disclose the further limitations of the claim.
However, Wu discloses “mapping, by a multi-layer perceptron network, the represented electric circuit to a scalar value” (Wu, [0067]: “The artificial neural network is used as an approximate but extremely fast simulator of the circuit under consideration in order to quickly explore the parameter space. First, we decide on the input parameter set I. This set of parameters I=Id∪Ip is decomposed into two groups Id and Ip. The design (or controllable) parameters Id are system parameters that the circuit designer can change (and can be continuous or discrete variables) such as device size, circuit topology, and resistor values. Circuit topology can be expressed via the set of circuit elements that have nonzero resistances or capacitances. Ip are the process, yield, and environmental parameters such as operating temperature, process corner, supply voltage variations, and other operating conditions. The output parameters O are the performance parameters such as power consumption, timing offset, group delay, and area” and [0071]: “From the 5000 simulations results for various input parameters I=Id∪Ip, 4950 of these results were used for training and 50 for testing. As the neural network 226, a multi-layer perceptron (MLP) with 7 layers was used with a total of 2070 neurons and 602369 weights with rectified linear unit (ReLu) activation functions. The results of the training are shown in FIG. 10 where the neural network can predict Vcmoff very well when it is small”; the examiner notes that the multi-layer perceptron maps the input parameters, which represent an electric circuit, to a predicted offset voltage Vcmoff, which is a scalar value).
Wu and the instant application both relate to circuit performance modeling using neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the disclosure of Cao to include “mapping, by a multi-layer perceptron network, the represented electric circuit to a scalar value” as disclosed by Wu, and one would have been motivated to do so for the purpose of allowing for automatic robust optimization and tuning of circuit designs using a trained machine learning model, which is faster and uses fewer computational resources than circuit simulators (see Wu, [0006]).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Cao in view of Cao et al. (US20230195986) (“Cao1”).
Regarding claim 7, the rejection of claim 6 is incorporated. Cao does not appear to explicitly disclose the further limitations of the claim.
However, Cao1 discloses “representing… [an] electric circuit as a device embedding input to… [a] Bi-LSTM network” (Cao1, [0018]: “In step S3, the topology information of the path includes the cell type, the cell size, and the corresponding load capacitance sequence, for two category-type sequence features of the cell type and the cell size, the problem of inconsistent sequence lengths of the two sequence features is first resolved through padding, then padded sequences are inputted into an embedding layer, a vector representation of an element is obtained through network learning, load capacitances are binned and filled by using a padding operation to a uniform length, and vector expressions are learned by using the embedding layer, next, vector splicing is performed on the foregoing expressions obtained after the learning using the embedding layer, and finally spliced vectors are inputted into the bi-directional long short-term memory neural network (BLSTM) to perform training”; the examiner notes that the vector expressions learned by using the embedding layer correspond to device embeddings because they are embeddings that represent information of cells on a path in a circuit).
Cao1 and the instant application both relate to predicting circuit performance using neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the method of Cao to include representing the electric circuit as a device embedding input to the Bi-LSTM network, as disclosed by Cao1, and one would have been motivated to do so for the purpose of achieving prediction with higher precision through more effective feature engineering processing in a case of low simulation overheads (see Cao1, Abstract).
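As general technical background only, the cited device-embedding step (categorical per-element features mapped to learned vectors, spliced, and input to a Bi-LSTM) can be sketched as follows (Python/PyTorch); vocabulary sizes and dimensions are hypothetical, not taken from Cao1.

import torch
import torch.nn as nn

# Hypothetical categorical features per path element, padded with index 0.
cell_type = torch.tensor([[3, 1, 4, 0]])                 # (batch, max_len)
cell_size = torch.tensor([[2, 2, 1, 0]])
type_emb = nn.Embedding(10, 8, padding_idx=0)
size_emb = nn.Embedding(5, 4, padding_idx=0)
x = torch.cat([type_emb(cell_type), size_emb(cell_size)], dim=-1)  # spliced vectors (1, 4, 12)
bilstm = nn.LSTM(input_size=12, hidden_size=16, batch_first=True, bidirectional=True)
out, _ = bilstm(x)               # the device embeddings serve as the Bi-LSTM input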
Claims 14, 15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Cao in view of Cao et al. (US20240273272) (“Cao2”).
Regarding claim 14, the rejection of claim 12 is incorporated. Cao does not appear to explicitly disclose the further limitations of the claim.
However, Cao2 discloses “predicting…[a] characteristic… of… [a] represented electric circuit by a transformer model” (Cao2, [0026]: “the post-routing path delay prediction method for a digital integrated circuit disclosed by the invention captures the timing and physical correlation between cells in a path by means of the self-attention mechanism of a transformer network, thus being able to directly predict a path delay”; the examiner notes that a “path delay” corresponds to a characteristic of a represented electric circuit).
Cao2 and the instant application both relate to circuit performance prediction using a transformer network and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the “predicting characteristics of the represented electric circuit” step disclosed by Cao, to be performed by a transformer model as disclosed by Cao2, and one would have been motivated to do so for the purpose of capturing the timing and physical correlation of all stages of cells in a path (see Cao2, Abstract).
Regarding claim 15, the rejection of claim 14 is incorporated. Cao does not appear to explicitly disclose the further limitations of the claim.
However, Cao2 further discloses “embedding… circuit parameters of… [an] electric circuit as an input to… [a] transformer model” (Cao2, [0018]: “S41: converting an input feature sequence with a dimension of (samples, max_len) into a tensor with a dimension of (samples, max_len, dimk), wherein samples is the number of samples, max_len is a maximum path length, din is a designated word vector dimension of a kth feature in an embedding layer, k=1, 2, . . . , n, and n is the number of input feature sequences” and [0019]: “S42: merging the n new tensors obtained in S41 to obtain a tensor with a dimension of (samples,max_len,dim), which is used as an input X of the multi-head self-attention mechanism”; the examiner notes that “an input feature sequence” corresponds to circuit parameters of an electric circuit, see [0012]: “In S2, static timing analysis is performed on the circuit after the placement in S1, and timing and physical information of all the stages of cell in the path is extracted from the static timing analysis report and the layout information to form feature sequences of the path”).
Cao2 and the instant application both relate to circuit performance prediction using a transformer network and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the method of Cao to include embedding the circuit parameters of the electric circuit as an input to the transformer model, as disclosed by Cao2, and one would have been motivated to do so for the purpose of capturing the timing and physical correlation of all stages of cells in a path (see Cao2, Abstract).
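As general technical background only, the cited embedding of feature sequences into a (samples, max_len, dim) tensor used as the input X of multi-head self-attention can be sketched as follows (Python/PyTorch); vocabulary sizes, dimensions, and head count are hypothetical, not taken from Cao2.

import torch
import torch.nn as nn

samples, max_len = 2, 6
f1 = torch.randint(0, 20, (samples, max_len))    # hypothetical integer feature sequence 1
f2 = torch.randint(0, 20, (samples, max_len))    # hypothetical integer feature sequence 2
emb1, emb2 = nn.Embedding(20, 16), nn.Embedding(20, 16)
x = torch.cat([emb1(f1), emb2(f2)], dim=-1)      # merged tensor (samples, max_len, 32)
attn = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)
y, _ = attn(x, x, x)                             # self-attention over the embedded sequence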
Regarding claim 18, Cao discloses “A computer program product comprising: one or more computer-readable storage devices and program instructions stored on at least one of the one or more computer-readable storage devices, the program instructions executable by a processor ([0054]: “For example, the voltage threshold classification network comprises three voltage threshold types, the number of training batches is 512, the learning rate is 0.001, the optimizer is an adaptive moment estimation optimizer Adam, and the loss function is a cross entropy loss function”; the examiner notes that this implies the method is performed by a computer), the program instructions comprising:
program instructions to identify and extract paths between a plurality of designated components that represent an electric circuit ([0009]: “S1: …extracting a critical path passing through each gate cell in the circuit to obtain a path feature sequence”);
program instructions to convert one or more of the extracted paths to respective path embeddings including a vector of a fixed length ([0023]: “In S3, the path feature sequence obtained in S1 is input to the BLSTM, the feature sequence is normalized first, the feature sequence is then compressed to remove invalid fill values generated in the forming process of the feature sequence, and then the feature sequence is forward and backward input to LSTM layers respectively to obtain a forward LSTM embedding vector and a backward LSTM embedding vector; and the forward LSTM embedding vector and the backward LSTM embedding vector are merged and then transformed by a weight matrix, the compressed sequence is then filled again to facilitate subsequent data processing; and finally, the sequence is input to a pooling layer to be subjected to dimension reduction to obtain a final LSTM embedding vector, such that the relation between the path-level topological information and the leakage power optimization result is established”; the examiner notes that “a final LSTM embedding vector” corresponds to a “path embedding” because it is an embedding that represents “path-level topological information,” and the vector is “of a fixed length” because “the compressed sequence is then filled again to facilitate subsequent data processing” refers to padding the sequence to a maximum path length, see [0045]: “sequence data is filled to a maximum path length to solve the problem of length inconsistency of the sequence data caused by length inconsistency of the paths”); and
program instructions to predict… characteristics of the represented electric circuit based on an input of circuit parameters and the path embeddings of the electric circuit ([0014]: “S5: merging an output of a GNN model obtained in S2, an output of the BLSTM obtained in S3 and an output of the ANN obtained in S4, and inputting a vector obtained after merging to a voltage threshold classification network, wherein voltage threshold classification network, after being trained, is able to establish a relation between the circuit-level topological information, the path-level topological information, the topological information of the gate cells and the voltage threshold types of the gate cells after leakage power optimization to predict the voltage threshold types of the gate cells in the circuit after optimization”; the examiner notes that “voltage threshold types of the gate cells” corresponds to “characteristics of the represented electric circuit,” “circuit-level topological information” (output of the GNN model obtained in S2; see also [0038]) corresponds to “an input of circuit parameters of the electric circuit” and “path-level topological information” (output of the BLSTM obtained in S3; see also [0039]) corresponds to “an input of the path embeddings of the electric circuit”).
Cao does not appear to explicitly disclose that the prediction is performed “by a transformer model.”
However, Cao2 discloses “a transformer model configured to predict… [a] characteristic… of… [a] represented electric circuit” (Cao2, [0026]: “the post-routing path delay prediction method for a digital integrated circuit disclosed by the invention captures the timing and physical correlation between cells in a path by means of the self-attention mechanism of a transformer network, thus being able to directly predict a path delay”; the examiner notes that a “path delay” corresponds to a characteristic of a represented electric circuit).
Cao2 and the instant application both relate to circuit performance prediction using a transformer network and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the “predict characteristics of the represented electric circuit based on an input of circuit parameters and the path embeddings of the electric circuit” step disclosed by Cao, to be performed by a transformer model as disclosed by Cao2, and one would have been motivated to do so for the purpose of capturing the timing and physical correlation of all stages of cells in a path (see Cao2, Abstract).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Cao in view of Zhang et al. (NPL: Student Network Learning via Evolutionary Knowledge Distillation) (“Zhang”).
Regarding claim 16, the rejection of claim 12 is incorporated. Cao further discloses “training… a circuit representation-learning model from beginning-to-end for circuit performance modeling” (Cao, [0054]: “For example, the voltage threshold classification network comprises three voltage threshold types, the number of training batches is 512, the learning rate is 0.001, the optimizer is an adaptive moment estimation optimizer Adam, and the loss function is a cross entropy loss function”).
Cao does not appear to explicitly disclose that the training is done “by a training model.”
However, Zhang discloses “training, by a training model, a… [student] model” (Zhang, III: “In our evolutionary knowledge distillation (EKD) approach, the teacher and student network are trained almost synchronously, and an evolutionary teacher can provide supervision information for the learning of student”; the examiner notes that the evolutionary teacher network corresponds to “a training model,” which trains the student model by providing supervision information).
Zhang and the instant application both relate to neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified Cao to have the training be performed by a training model, as disclosed by Zhang, and one would have been motivated to do so for the purpose of achieving more robust performance and improved generalization ability of the student model (see Zhang, II.A).
Claims 3, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Cao in view of Wu, and further in view of Cao2.
Regarding claim 3, the rejection of claim 2 is incorporated. Neither Cao nor Wu appears to explicitly disclose the further limitations of the claim.
However, Cao2 discloses “wherein… [a] circuit representation-learning model comprises a transformer model configured to predict… [a] characteristic… of… designated components that represent… [an] electric circuit” (Cao2, [0026]: “the post-routing path delay prediction method for a digital integrated circuit disclosed by the invention captures the timing and physical correlation between cells in a path by means of the self-attention mechanism of a transformer network, thus being able to directly predict a path delay”; the examiner notes that a “path delay” corresponds to a characteristic of a represented electric circuit).
Cao2 and the instant application both relate to circuit performance prediction using a transformer network and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the “predict characteristics of the designated components that represent the electric circuit” step disclosed by the combination of Cao and Wu, to be performed by a transformer model as disclosed by Cao2, and one would have been motivated to do so for the purpose of capturing the timing and physical correlation of all stages of cells in a path (see Cao2, Abstract).
Regarding claim 19, the rejection of claim 18 is incorporated. Neither Cao nor Cao2 appears to explicitly disclose the further limitations of the claim.
However, Wu discloses “mapping, by a multi-layer perceptron network, of the represented electric circuit to a scalar value” (Wu, [0067]: “The artificial neural network is used as an approximate but extremely fast simulator of the circuit under consideration in order to quickly explore the parameter space. First, we decide on the input parameter set I. This set of parameters I=Id∪Ip is decomposed into two groups Id and Ip. The design (or controllable) parameters Id are system parameters that the circuit designer can change (and can be continuous or discrete variables) such as device size, circuit topology, and resistor values. Circuit topology can be expressed via the set of circuit elements that have nonzero resistances or capacitances. Ip are the process, yield, and environmental parameters such as operating temperature, process corner, supply voltage variations, and other operating conditions. The output parameters O are the performance parameters such as power consumption, timing offset, group delay, and area” and [0071]: “From the 5000 simulations results for various input parameters I=Id∪Ip, 4950 of these results were used for training and 50 for testing. As the neural network 226, a multi-layer perceptron (MLP) with 7 layers was used with a total of 2070 neurons and 602369 weights with rectified linear unit (ReLu) activation functions. The results of the training are shown in FIG. 10 where the neural network can predict Vcmoff very well when it is small”; the examiner notes that the multi-layer perceptron maps the input parameters, which represent an electric circuit, to a predicted offset voltage Vcmoff, which is a scalar value).
Wu and the instant application both relate to circuit performance modeling using neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the combination of Cao and Cao2 to include “mapping, by a multi-layer perceptron network, of the represented electric circuit to a scalar value” as disclosed by Wu, and one would have been motivated to do so for the purpose of allowing for automatic robust optimization and tuning of circuit designs using a trained machine learning model, which is faster and uses fewer computational resources than circuit simulators (see Wu, [0006]).
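For illustration only, the kind of MLP regressor Wu describes (seven layers with ReLU activations, mapping circuit and process parameters to one performance scalar) can be sketched as follows. The 20-parameter input and the layer widths are hypothetical and do not reproduce Wu's 2070-neuron, 602369-weight network.

# Minimal sketch (PyTorch; hypothetical sizes): a 7-layer ReLU MLP mapping a
# vector of circuit/process parameters to a single scalar performance value.
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(20, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1),               # scalar output, e.g. an offset voltage
)
params = torch.randn(8, 20)         # batch of 8 circuits, 20 parameters each
pred = mlp(params)                  # shape (8, 1): one scalar per circuit

The final one-unit layer is what makes the output “a scalar value” in the sense of the claim.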
Regarding claim 20, the rejection of claim 19 is incorporated. Cao does not appear to explicitly disclose the further limitations of the claim.
However, Cao2 further discloses “program instructions to operate attention mechanism functions in parallel, and to concatenate and linearly transform outputs of the attention mechanism functions for input into the… [feedforward neural network]” (Cao2, [0019]: “performing an attention function on the h groups of matrices Qi, Ki and Vi parallelly” and [0020]: “merging calculation results headi of the h-head attention mechanism, and performing linear transform is performed by means of a trainable matrix WO to obtain an output MultiHead(X) of the multi-head self-attention mechanism” and [0021]: “MultiHead (X)=Concat (head1, head2, . . . , headh)WO” and [0023]: “S44: inputting the output, normalized in S43, of the multi-head self-attention mechanism to the fully connected feedforward neural network”).
Cao2 and the instant application both relate to circuit performance prediction using a transformer network and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified Cao to include “program instructions to operate attention mechanism functions in parallel, and to concatenate and linearly transform outputs of the attention mechanism functions for input into the… [feedforward neural network],” as disclosed by Cao2, and one would have been motivated to do so for the purpose of capturing the timing and physical correlation of all stages of cells in a path (see Cao2, Abstract).
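For illustration only, the quoted multi-head mechanism of Cao2 ([0019]-[0021], [0023]), in which h attention functions are operated in parallel, their outputs concatenated and linearly transformed by a trainable matrix WO, and the result fed to a feedforward network, can be sketched as follows. The framework and all dimensions are assumptions.

# Minimal sketch (PyTorch; hypothetical dimensions): h attention heads are
# computed in parallel, concatenated, transformed by an output matrix Wo,
# and passed to a feedforward network.
import torch
import torch.nn as nn
import torch.nn.functional as F

def multi_head(X, Wq, Wk, Wv, Wo, h):
    B, T, d = X.shape
    dk = d // h
    # project and split into h heads; all heads are evaluated in one
    # batched (parallel) operation
    Q = (X @ Wq).view(B, T, h, dk).transpose(1, 2)  # (B, h, T, dk)
    K = (X @ Wk).view(B, T, h, dk).transpose(1, 2)
    V = (X @ Wv).view(B, T, h, dk).transpose(1, 2)
    scores = Q @ K.transpose(-2, -1) / dk ** 0.5    # scaled dot product
    heads = F.softmax(scores, dim=-1) @ V           # (B, h, T, dk)
    # concatenate the heads and apply the output transform
    concat = heads.transpose(1, 2).reshape(B, T, d)
    return concat @ Wo                              # MultiHead(X)

d, h = 64, 8
X = torch.randn(2, 10, d)
Wq, Wk, Wv, Wo = (torch.randn(d, d) for _ in range(4))
out = multi_head(X, Wq, Wk, Wv, Wo, h)              # (2, 10, 64)
ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))
y = ffn(out)  # feedforward network applied to the attention output

Because the h heads reduce to independent batched matrix products, they are computed in one parallel operation, which is the parallelism the quoted passage refers to.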
Neither Cao nor Cao2 appears to explicitly disclose that the network is a “multi-layer perceptron network.”
However, Wu discloses “a multi-layer perceptron network” (Wu, [0067]: “As the neural network 226, a multi-layer perceptron (MLP) with 7 layers was used with a total of 2070 neurons and 602369 weights with rectified linear unit (ReLu) activation functions”).
Wu and the instant application both relate to circuit performance modeling using neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the combination of Cao and Cao2 to have the input be into the multi-layer perceptron network disclosed by Wu, and one would have been motivated to do so for the purpose of allowing for automatic robust optimization and tuning of circuit designs using a trained machine learning model, which is faster and uses fewer computational resources than circuit simulators (see Wu, [0006]).
Claims 8, 9, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Cao in view of Wu, and further in view of Zhang.
Regarding claim 8, the rejection of claim 2 is incorporated. Cao as modified by Wu further discloses “wherein the instructions cause the processor to perform an additional act comprising training… the circuit representation-learning model to perform circuit performance modeling from beginning-to-end” (Cao, [0054]: “For example, the voltage threshold classification network comprises three voltage threshold types, the number of training batches is 512, the learning rate is 0.001, the optimizer is an adaptive moment estimation optimizer Adam, and the loss function is a cross entropy loss function”).
Neither Cao nor Wu appears to explicitly disclose that the training is done “by a training model.”
However, Zhang discloses “training, by a training model, a… [student] model” (Zhang, III: “In our evolutionary knowledge distillation (EKD) approach, the teacher and student network are trained almost synchronously, and an evolutionary teacher can provide supervision information for the learning of student”; the examiner notes that the evolutionary teacher network corresponds to “a training model,” which trains the student model by providing supervision information).
Zhang and the instant application both relate to neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the combination of Cao and Wu to have the training be performed by a training model, as disclosed by Zhang, and one would have been motivated to do so for the purpose of achieving more robust performance and improved generalization ability of the student model (see Zhang, II.A).
Regarding claim 9, the rejection of claim 8 is incorporated. Neither Cao nor Wu appears to explicitly disclose the further limitation of the claim.
However, Zhang further discloses “wherein the training model comprises a stochastic gradient descent-based model” (Zhang, III, Algorithm 1, line 8: “compute the total loss of teacher with Eq. (11)” and line 9: “Compute gradient to model parameters wt and update with the SGD [stochastic gradient descent] optimizer”).
Zhang and the instant application both relate to neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the combination of Cao and Wu to have the training be performed by a stochastic gradient descent-based training model, as disclosed by Zhang, and one would have been motivated to do so for the purpose of achieving more robust performance and improved generalization ability of the student model (see Zhang, II.A).
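For context, the SGD step invoked in lines 8-9 of Zhang's Algorithm 1 is the standard stochastic gradient descent update (textbook form, not a quotation from Zhang):

    w_{t+1} = w_t − η ∇_w L(w_t),

where w_t are the model parameters, η is the learning rate, and L is the loss computed in line 8.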
Regarding claim 17, the rejection of claim 16 is incorporated. Cao further discloses “training …[a BLSTM] model to process parameters for the path embeddings” (Cao, [0012]: “S3: inputting the path feature sequence obtained in S1 to a bi-directional long short-term memory (BLSTM), wherein the BLSTM network, after being trained, is able to model path-level topological information of the critical path passing through each gate cell in the circuit to establish a relation between path-level topological information and the leakage power optimization result”) and “predicting the characteristics of the represented electric circuit” (Cao, [0014]: “…wherein voltage threshold classification network, after being trained, is able to establish a relation between the circuit-level topological information, the path-level topological information, the topological information of the gate cells and the voltage threshold types of the gate cells after leakage power optimization to predict the voltage threshold types of the gate cells in the circuit after optimization”).
Cao does not appear to explicitly disclose the further limitations of the claim.
However, Wu discloses “mapping the represented electric circuit to a scalar value by a multi-layer perceptron network” (Wu, [0067]: “The artificial neural network is used as an approximate but extremely fast simulator of the circuit under consideration in order to quickly explore the parameter space. First, we decide on the input parameter set I. This set of parameters I=Id∪Ip is decomposed into two groups Id and Ip. The design (or controllable) parameters Id are system parameters that the circuit designer can change (and can be continuous or discrete variables) such as device size, circuit topology, and resistor values. Circuit topology can be expressed via the set of circuit elements that have nonzero resistances or capacitances. Ip are the process, yield, and environmental parameters such as operating temperature, process corner, supply voltage variations, and other operating conditions. The output parameters O are the performance parameters such as power consumption, timing offset, group delay, and area” and [0071]: “From the 5000 simulations results for various input parameters I=Id∪Ip, 4950 of these results were used for training and 50 for testing. As the neural network 226, a multi-layer perceptron (MLP) with 7 layers was used with a total of 2070 neurons and 602369 weights with rectified linear unit (ReLu) activation functions. The results of the training are shown in FIG. 10 where the neural network can predict Vcmoff very well when it is small”; the examiner notes that the multi-layer perceptron maps the input parameters, which represent an electric circuit, to a predicted offset voltage Vcmoff, which is a scalar value).
Wu and the instant application both relate to circuit performance modeling using neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified Cao to include “mapping the represented electric circuit to a scalar value by a multi-layer perceptron network” as disclosed by Wu, and one would have been motivated to do so for the purpose of allowing for automatic robust optimization and tuning of circuit designs using a trained machine learning model, which is faster and uses fewer computational resources than circuit simulators (see Wu, [0006]).
Neither Cao nor Wu appears to explicitly disclose “training the training model to process parameters…”
However, Zhang discloses “training the training model to process parameters” (Zhang, III, Algorithm 1, line 8: “compute the total loss of teacher with Eq. (11)” and line 9: “Compute gradient to model parameters wt and update with the SGD [stochastic gradient descent] optimizer”).
Zhang and the instant application both relate to neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the combination of Cao and Wu to include “training the training model to process parameters for the path embeddings” as disclosed by Zhang, given the path embeddings disclosed by Cao, and one would have been motivated to do so for the purpose of achieving more robust performance and improved generalization ability of the student model (see Zhang, II.A).
Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Cao in view of Wu and Cao2, and further in view of An et al. (NPL: LPViT: A Transformer Based Model for PCB Image Classification and Defect Detection) (“An”).
Regarding claim 4, the rejection of claim 3 is incorporated. Neither Cao nor Wu appears to explicitly disclose the further limitations of the claim.
However, Cao2 further discloses “wherein the instructions cause the processor to perform an additional act comprising operating attention mechanism functions in parallel” (Cao2, [0019]: “performing an attention function on the h groups of matrices Qi, Ki and Vi parallelly”).
Cao2 and the instant application both relate to circuit performance prediction using a transformer network and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the combination of Cao and Wu to include operating attention mechanism functions in parallel as disclosed by Cao2, and one would have been motivated to do so for the purpose of capturing the timing and physical correlation of all stages of cells in a path (see Cao2, Abstract).
Cao, Wu, and Cao2 do not appear to explicitly disclose “wherein the transformer model includes a stack of multi-head attention modules.”
However, An discloses “wherein… [a] transformer model includes a stack of multi-head attention modules” (An, Fig. 2 depicts a transformer architecture with inputs being fed through a stack of two multi-head attention modules).
An and the instant application both relate to circuit performance prediction using a transformer network and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the combination of Cao, Wu, and Cao2 such that the transformer model includes a stack of multi-head attention modules, as disclosed by An, and one would have been motivated to do so for the purpose of enhancing feature extraction capability (see An, III.A.1).
Regarding claim 5, the rejection of claim 4 is incorporated. Cao, Wu, and An do not appear to explicitly disclose the further limitations of the claim.
However, Cao2 further discloses “embedding…circuit parameters of… [an] electric circuit as an input to… [a] transformer model” (Cao2, [0018]: “S41: converting an input feature sequence with a dimension of (samples, max_len) into a tensor with a dimension of (samples, max_len, dim_k), wherein samples is the number of samples, max_len is a maximum path length, dim_k is a designated word vector dimension of a k-th feature in an embedding layer, k=1, 2, . . . , n, and n is the number of input feature sequences” and [0019]: “S42: merging the n new tensors obtained in S41 to obtain a tensor with a dimension of (samples, max_len, dim), which is used as an input X of the multi-head self-attention mechanism”; the examiner notes that “an input feature sequence” corresponds to circuit parameters of an electric circuit, see [0012]: “In S2, static timing analysis is performed on the circuit after the placement in S1, and timing and physical information of all the stages of cell in the path is extracted from the static timing analysis report and the layout information to form feature sequences of the path”).
Cao2 and the instant application both relate to circuit performance prediction using a transformer network and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the combination of Cao, Wu, and An to include embedding the circuit parameters of the electric circuit as an input to the transformer model, as disclosed by Cao2, and one would have been motivated to do so for the purpose of capturing the timing and physical correlation of all stages of cells in a path (see Cao2, Abstract).
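For illustration only, the embedding step quoted above from Cao2 ([0018]-[0019]), in which each of n feature sequences of shape (samples, max_len) is embedded to (samples, max_len, dim_k) and the results are merged into one (samples, max_len, dim) tensor X, can be sketched as follows. Vocabulary sizes and dimensions are hypothetical.

# Minimal sketch (PyTorch; hypothetical sizes): n per-feature embedding layers
# each map an index sequence of shape (samples, max_len) to a tensor of shape
# (samples, max_len, dim_k); merging along the last axis gives the attention
# input X of shape (samples, max_len, dim).
import torch
import torch.nn as nn

samples, max_len = 4, 10
vocab_sizes = [50, 30, 20]          # n = 3 categorical path features
dims = [8, 4, 4]                    # dim_k per feature; dim = 16 in total
embeds = nn.ModuleList(nn.Embedding(v, d) for v, d in zip(vocab_sizes, dims))

feats = [torch.randint(0, v, (samples, max_len)) for v in vocab_sizes]
X = torch.cat([emb(f) for emb, f in zip(embeds, feats)], dim=-1)
print(X.shape)  # torch.Size([4, 10, 16]): input to the self-attention stack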
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Cao in view of Zhang, Wu, and Cao2.
Regarding claim 10, the rejection of claim 9 is incorporated. Cao further discloses “processing…parameters in the path embedding” (Cao, [0012]: “S3: inputting the path feature sequence obtained in S1 to a bi-directional long short-term memory (BLSTM), wherein the BLSTM network, after being trained, is able to model path-level topological information of the critical path passing through each gate cell in the circuit to establish a relation between path-level topological information and the leakage power optimization result”), but does not appear to explicitly disclose the further limitations of the claim.
However, Wu further discloses “processing…parameters in the multi-layer perceptron network” (Wu, [0071]: “From the 5000 simulations results for various input parameters I=Id∪Ip, 4950 of these results were used for training and 50 for testing. As the neural network 226, a multi-layer perceptron (MLP) with 7 layers was used with a total of 2070 neurons and 602369 weights with rectified linear unit (ReLu) activation functions. The results of the training are shown in FIG. 10”).
Wu and the instant application both relate to circuit performance modeling using neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the disclosure of Cao to include “processing… parameters in the multi-layer perceptron network” as disclosed by Wu, and one would have been motivated to do so for the purpose of allowing for automatic robust optimization and tuning of circuit designs using a trained machine learning model, which is faster and uses fewer computational resources than circuit simulators (see Wu, [0006]).
Neither Cao nor Wu appears to explicitly disclose the further limitations of the claim.
However, Zhang discloses “processing, by the stochastic gradient descent-based model, parameters...” (Zhang, III, Algorithm 1, line 8: “compute the total loss of teacher with Eq. (11)” and line 9: “Compute gradient to model parameters wt and update with the SGD [stochastic gradient descent] optimizer”).
Zhang and the instant application both relate to neural networks and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the combination of Cao and Wu to include “processing, by the stochastic gradient descent-based model, parameters in the path embedding… and the multi-layer perceptron network,” as disclosed by Zhang, and one would have been motivated to do so for the purpose of achieving more robust performance and improved generalization ability of the student model (see Zhang, II.A).
Cao, Wu, and Zhang do not appear to explicitly disclose that the circuit representation-learning model comprises a “transformer model,” or “processing…parameters in the transformer model.”
However, Cao2 discloses “wherein the circuit representation-learning model comprises a transformer model” (Cao2, [0036]: “S4: a post-routing path delay prediction model is established, the sample data preprocessed in S3 is input to a transformer network, a pre-routing path delay and output data of the transformer network are merged, and dimension reduction is performed to obtain a predicted pre-routing and post-routing path delay residual, and the pre-routing path delay and the predicted pre-routing and post-routing path delay residual are added to obtain a predicted post-routing path delay finally”), and “processing…parameters in the transformer model” (Cao2, [0037]: “S5: the model established in S4 is trained and verified… During training, an Adam optimizer is used, the learning rate is 0.001, the number of training batches is 1080, and the loss function is the root-mean-square error (RMSE)”).
Cao2 and the instant application both relate to circuit performance prediction using a transformer network and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the combination of Cao, Wu, and Zhang to have the circuit representation-learning model comprise a transformer model and to include “processing, by the stochastic gradient descent-based model, parameters in the path embedding, the transformer model, and the multi-layer perceptron network,” as disclosed by Cao2, and one would have been motivated to do so for the purpose of capturing the timing and physical correlation of all stages of cells in a path (see Cao2, Abstract), and for improving the accuracy of the prediction.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Cao in view of Zhang, Wu, Cao2, and An.
Regarding claim 11, the rejection of claim 10 is incorporated. Cao, Zhang, Wu, and Cao2 do not appear to explicitly disclose the further limitations of the claim.
However, An discloses “…[a] multi-layer perceptron network has an input size that is the same as an output size of… [a] transformer model” (An, III.B: “3) TRANSFORMER The transformer module is used to extract features, which are then fed into the final classifier to produce the required classification results… 4) CLASSIFIER After enough good features have been retrieved, all that is required for reliable classification results is a simple network topology, and a shallow network like MLP(multilayer perceptron) can achieve incredibly high metrics” and Figure 3, “ViT architecture,” which depicts the Transformer Encoder feeding the MLP Head; the examiner notes that the input size of the multi-layer perceptron network must be the same as the output size of the transformer model, given that the output features of the transformer module are input to the multi-layer perceptron network classifier).
An and the instant application both relate to circuit performance prediction using a transformer network and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the combination of Cao, Zhang, Wu, and Cao2 to include “wherein the multi-layer perceptron network has an input size that is the same as an output size of the transformer model,” as disclosed by An, and one would have been motivated to do so for the purpose of transforming features into sequence information to understand relationships between them, thus increasing performance metrics (see An, III.B.4).
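For illustration only, the dimensional constraint relied on above (the classifier head's input width equals the transformer encoder's output width because the encoder's features feed the head directly) can be sketched as follows. The dimensions, and the use of PyTorch's built-in encoder in place of An's LPViT model, are assumptions.

# Minimal sketch (PyTorch; hypothetical sizes): the MLP head's input width
# must equal the encoder's output width d_model, since the encoder's output
# features are fed directly to the head.
import torch
import torch.nn as nn

d_model = 64                                   # encoder output size
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=2)
mlp_head = nn.Sequential(nn.Linear(d_model, 32), nn.ReLU(), nn.Linear(32, 3))

x = torch.randn(2, 10, d_model)
features = encoder(x)                          # (2, 10, 64)
logits = mlp_head(features[:, 0])              # head input size == d_model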
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1 and 12 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 12 and 1 of copending Application No. 18/455,745 in view of Cao.
Regarding claims 1 and 12, reference claims 12 and 1 anticipate all the limitations of instant claims 1 and 12 respectively, except for the vector being “of a fixed length,” and the prediction being based on “an input of circuit parameters of the electric circuit.” However, Cao discloses “a vector of a fixed length” ([0023]: “In S3, the path feature sequence obtained in S1 is input to the BLSTM… the compressed sequence is then filled again to facilitate subsequent data processing; and finally, the sequence is input to a pooling layer to be subjected to dimension reduction to obtain a final LSTM embedding vector, such that the relation between the path-level topological information and the leakage power optimization result is established”; the examiner notes that “a final LSTM embedding vector” corresponds to a path embedding because it is an embedding that represents “path-level topological information,” and the vector is “of a fixed length” because “the compressed sequence is then filled again to facilitate subsequent data processing” refers to padding the sequence to a maximum path length, see [0045]: “sequence data is filled to a maximum path length to solve the problem of length inconsistency of the sequence data caused by length inconsistency of the paths”) and “an input of circuit parameters of the electric circuit” ([0014]: “S5: merging an output of a GNN model obtained in S2, an output of the BLSTM obtained in S3 and an output of the ANN obtained in S4, and inputting a vector obtained after merging to a voltage threshold classification network, wherein voltage threshold classification network, after being trained, is able to establish a relation between the circuit-level topological information, the path-level topological information, the topological information of the gate cells and the voltage threshold types of the gate cells after leakage power optimization to predict the voltage threshold types of the gate cells in the circuit after optimization”; the examiner notes that “circuit-level topological information” (output of GNN model obtained in S2, see also [0038]) corresponds to “an input of circuit parameters of the electric circuit”).
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the reference application to have the vectors be of a fixed length and the prediction to be based on an input of circuit parameters, as disclosed by Cao, and one would have been motivated to do so for the purpose of solving the problem of length inconsistency of the paths (see Cao, [0019]) and to establish a relationship between parameters of the circuit and a circuit optimization result (see Cao, [0011]).
Claim 18 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 11 of copending Application No. 18/455,745 in view of Cao and further in view of Cao2.
Regarding claim 18, reference claim 11 anticipates all the limitations of instant claim 18, except for the vector being “of a fixed length,” the prediction being based on “an input of circuit parameters of the electric circuit,” and the prediction being performed by “a transformer model.” Claim 18 is a computer program product claim corresponding to device claim 1/method claim 12, and the rejection follows the same rationale as the nonstatutory double patenting rejection of claims 1 and 12 above, except insofar as claim 18 further recites that the prediction is done by a “transformer model.” Cao does not appear to explicitly disclose this limitation. However, Cao2 discloses “predict, by a transformer model, [a] characteristic… of… [a] represented electric circuit” (Cao2, [0026]: “the post-routing path delay prediction method for a digital integrated circuit disclosed by the invention captures the timing and physical correlation between cells in a path by means of the self-attention mechanism of a transformer network, thus being able to directly predict a path delay”; the examiner notes that a “path delay” corresponds to a characteristic of a represented electric circuit).
Cao2 and the instant application both relate to circuit performance prediction using a transformer network and are analogous. It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have modified the combination of the reference application and Cao, to have the prediction be performed by a transformer model as disclosed by Cao2, and one would have been motivated to do so for the purpose of capturing the timing and physical correlation of all stages of cells in a path (see Cao2, Abstract).
The claims of the instant application and the claims of the reference application are compared in the table below.
Instant Application
Reference Application 18/455,745
1. A computing device comprising:
a processor;
a storage device coupled to the processor, wherein the storage device stores instructions to cause the processor to perform acts to provide a circuit performance modeling, the acts comprising:
identifying and extracting paths of an electric circuit between a plurality of designated components that represent the electric circuit;
converting at least one of the extracted paths to a path embedding comprising a vector of a fixed length; and
predicting, by a circuit representation-learning model, characteristics of the designated components that represent the electric circuit based on an input of circuit parameters of the electric circuit.
12. A system for circuit generation, comprising:
a hardware processor; and
a memory that stores a computer program which, when executed by the hardware processor, causes the hardware processor to:
generate a circuit design;
extract paths from the circuit design, with the paths representing sequences of connected circuit components from one terminal of the circuit to another;
embed the extracted paths as respective vectors in a latent space; and
determine a property of the circuit design using an ensemble of trained surrogate models that accept a sequence of the vectors as input.
12. A computer-implemented method of a circuit performance modeling, the method comprising:
identifying and extracting paths between a plurality of designated components that represent an electric circuit;
converting one or more of the extracted paths to respective path embeddings including a corresponding vector of a fixed length; and
predicting characteristics of the represented electric circuit based on an input of circuit parameters and the path embeddings of the electric circuit.
1. A computer-implemented method for circuit generation, comprising:
generating a circuit design;
extracting paths from the circuit design, with the paths representing sequences of connected circuit components from one terminal of the circuit to another;
embedding the extracted paths as respective vectors in a latent space; and
determining a property of the circuit design using an ensemble of trained surrogate models that accept a sequence of the vectors as input.
18. A computer program product comprising:
one or more computer-readable storage devices and program instructions stored on at least one of the one or more computer-readable storage devices, the program instructions executable by a processor, the program instructions comprising:
program instructions to identify and extract paths between a plurality of designated components that represent an electric circuit;
program instructions to convert one or more of the extracted paths to respective path embeddings including a vector of a fixed length; and
program instructions to predict, by a transformer model, characteristics of the represented electric circuit based on an input of circuit parameters and the path embeddings of the electric circuit.
11. A computer program product for circuit generation, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions being executable by a hardware processor to cause the hardware processor to:
generate a circuit design;
extract paths from the circuit design, with the paths representing sequences of connected circuit components from one terminal of the circuit to another;
embed the extracted paths as respective vectors in a latent space; and
determine a property of the circuit design using an ensemble of trained surrogate models that accept a sequence of the vectors as input.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GWYNEVERE A DETERDING whose telephone number is (571) 272-7657. The examiner can normally be reached Mon-Fri. 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar, can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/G.A.D./Examiner, Art Unit 2125
/KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125