Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Remarks
This Office Action is responsive to Applicants' Amendment filed on October 1, 2025, in which claims 1, 10, and 16 are currently amended. Claims 1-20 are currently pending.
Response to Arguments
The previous rejections of claims 1-20 under 35 U.S.C. § 112(b) are hereby withdrawn in view of Applicant's amendments and remarks.
Applicant's arguments with respect to the rejection of claims 1-9 under 35 U.S.C. 101 have been fully considered but are not persuasive.
Specifically, with respect to Applicant's arguments on p. 8 of the Remarks submitted 10/1/2025 that the claims are directed to a technological improvement (Step 2A, Prong Two), Examiner respectfully disagrees. Under Step 2A, Prong Two, the generic computer components and the insignificant extra-solution activity of gathering and outputting data (see MPEP 2106.05(g) and 2106.05(d)(II)) do not integrate the judicial exception of performing matrix multiplication (mathematical calculations and relationships) into a practical application, nor are the claims seen as providing a technical improvement, because the claimed method can be implemented on generic computer components (as recited). Merely using generic computer components to perform the judicial exception (matrix multiplication) more efficiently is not an improvement to the computer itself. Examiner further notes MPEP 2106.05(a): "An important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome," as well as MPEP 2106.07(a)(II): "employing well-known computer functions to execute an abstract idea, even when limiting the use of the idea to one particular environment, does not integrate the exception into a practical application".
Applicant's arguments with respect to the rejection of claims 1-20 under 35 U.S.C. 103 have been considered and are persuasive; however, they are moot in view of the new ground of rejection set forth below.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 10, and 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1, 10, and 16 recite "the transmitting" in "wherein the transmitting occurs in a vertical direction". There is insufficient antecedent basis for this limitation in the claim. Specifically, claims 1, 10, and 16 introduce both "transmit the first partial matrix multiplication results" and "transmit the second partial matrix multiplication results", such that it is unclear which transmitting step "the transmitting" refers to. Amending to recite "wherein the transmitting of the second partial matrix multiplication results occurs in a vertical direction" is recommended.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-9 are rejected under 35 USC § 101 because the claimed invention is directed to non-statutory subject matter.
Regarding Claim 1: Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 1 is directed to a method, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: Claim 1, under its broadest reasonable interpretation, recites a series of mathematical calculations and relationships. For example, but for the recitation of generic computer components, the limitations in the context of this claim encompass neural network processing through mathematical calculations and relationships, including the following:
performing matrix multiplication for a forwards training pass using the weights in the first core in a first format to generate first partial matrix multiplication results (mathematical calculations and relationships)
performing matrix multiplication for a backwards training pass using the weights in the first core in a second format to generate second partial matrix multiplication results (mathematical calculations and relationships)
Therefore, claim 1 recites an abstract idea which is a judicial exception.
Step 2A Prong Two Analysis: Claim 1 recites the additional elements "layer" and "wherein the pinning results in the first set of weights remaining in the first memory between the forwards training pass and the backwards training pass." However, these additional features are computer components recited at a high level of generality, such that they amount to no more than mere instructions to apply the judicial exception using a generic computer component. An additional element that merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, does not integrate the judicial exception into a practical application. Claim 1 also recites the additional elements "storing weights for a first set of partial matrix multiplication operations for a first layer in a first local memory of a first core", "transmitting the first partial matrix multiplication results towards a core at an end of a row in an array of cores for that includes the first core, wherein the transmitting occurs in a horizontal direction within the array, and wherein the core at the end of the row is configured to accumulate the first partial matrix multiplication results", "pinning the first set of weights in the first local memory; performing matrix multiplication for a backwards training pass using the weights in the first core in a second format to generate second partial matrix multiplication results […] wherein the pinning results in the first set of weights remaining in the first memory between the forwards training pass and the backwards training pass", and "transmitting the second partial matrix multiplication results towards a core at an end of a column in an array of cores that includes the first core, wherein the transmitting occurs in a vertical direction within the array, and wherein the core at the end of the column is configured to accumulate the second partial matrix multiplication results", which amount to gathering and outputting data, i.e., insignificant extra-solution activity. "Pinning" is explicitly described in the instant specification as synonymous with storing ([¶0032] "The weight memory 306 is configured to store ("pin") weights through one or even multiple backwards and forwards passes such that the weights do not need to be moved out to memory"). Therefore, claim 1 is directed to a judicial exception.
Step 2B Analysis: Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the lack of integration of the abstract idea into a practical application, the additional elements recited in claim 1 amount to no more than mere instructions to apply the judicial exception using a generic computer component and insignificant extra-solution activity. The gathering and outputting of data is seen as well-understood, routine, and conventional in the art (See MPEP 2106.05(d)(II)).
For the reasons above, claim 1 is rejected as being directed to non-patentable subject matter under §101. This rejection applies equally to dependent claims 2-9. The additional limitations of the dependent claims are addressed briefly below:
Dependent claim 2 recites additional instructions to apply the judicial exception using generic computer components "wherein the first layer is a general matrix multiply layer."
Dependent claim 3 recites additional mathematical calculations and relationships “the weights in the second format are organized as a matrix that is a transpose of the weights in first format”
Dependent claim 4 recites additional instructions to apply the judicial exception using generic computer components “the first layer is a convolution layer”
Dependent claim 5 recites additional mathematical calculations and relationships “the weights in the second format are organized as a matrix that is a convolution-based reshape of the weights in the first format” as well as additional instructions to apply the judicial exception using generic computer components “in the convolution-based reshape, columns include filters in the same input channel while in the weights in the first format, columns include filters in the same output channel”
Dependent claim 6 recites additional mathematical calculations and relationships “the forward training pass and the backwards pass include a plurality of matrix multiplication sub-operations involving portions of a larger matrix, each matrix multiplication sub-operation occurring on a machine learning accelerator core and generating a partial matrix multiplication result”
Dependent claim 7 recites additional observation, evaluation, and judgement “selecting a first set of connections for the forward training pass and selecting a second set of connections for the backwards pass”
Dependent claim 8 recites additional instructions to apply the judicial exception using generic computer components “the one or more connections are unidirectional”
Dependent claim 9 recites additional insignificant extra-solution activity of gathering data “the pinning includes pinning the weights in the first core between the forwards pass and the backwards pass” where in light of the instant specification “pinning” is interpreted as synonymous with storing in memory ([¶0032] “The weight memory 306 is configured to store (“pin”) weights through one or even multiple backwards and forwards passes such that the weights do not need to be moved out to memory”)
Therefore, when considering the additional elements separately and in combination, they do not amount to significantly more than the judicial exception. Accordingly, claims 1-9 are rejected under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 6-11, and 15-20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Chung ("EXPLOITING THE INHERENT PARALLELISMS OF BACK-PROPAGATION NEURAL NETWORKS TO DESIGN A SYSTOLIC ARRAY", 1991).
FIG. 1(a) of Chung
FIG. 1(b) of Chung
Regarding claim 1, Chung teaches A method, comprising: storing weights for a first set of partial matrix multiplication operations for a first layer in a first local memory of a first core ([p. 4 §3] "Each basic cell wji is a computational element which contains the weight value and the weight increment associated to the connection between neuron i of a layer and neuron j of the next layer" [p. 5] "The internal data path of the basic cell is shown in Figure 3. It consists of three multipliers, three adders, two registers for weight value and weight increment" Basic cell interpreted as core; cell registers interpreted as local memory.)
performing matrix multiplication for a forwards training pass using the weights in the first core in a first format to generate first partial matrix multiplication results ([Abstract] "The design is based on the classical systolic algorithm of matrix-by-vector multiplication, and exploits the inherent parallelisms of back-propagation neural networks. This design executes the forward and backward passes in parallel, and exploits the pipelined parallelism of multiple patterns in each pass. The estimated performance of this design shows that the pipelining of multiple patterns is an important factor in VLSI neural network implementations")
transmitting the first partial matrix multiplication results towards a core at an end of a row in an array of cores for that includes the first core, wherein the transmitting occurs in a horizontal direction within the array, and wherein the core at the end of the row is configured to accumulate the first partial matrix multiplication results ([p. 4 §3] "The 0f unit executes the thresholding and the activation function as shown in Equations (1.2) and (2.2)" The 0f unit in the horizontal direction explicitly performs forward propagation (threshold/activation). See also FIG. 1(b) and Eqns. 1.2 and 2.2. FIG. 1(b) explicitly shows that each cell outputs the previous running sums (accumulated partial multiplication results) plus the local product (also a partial multiplication result) in both the horizontal (ES = NW x W + WS) and vertical (SE = WN x W + NE) directions.)
pinning the first set of weights in the first local memory; ([p. 3 §2] "we use the batch updating method because the weights must not be changed for the forward pass and the backward pass of a pattern as shown in Equations (1)-(6), although the systolic array proposed in this paper can support both of the weight updating methods." [p. 4 §3] "The weight matrix is shared between the forward and backward passes" [p. 4 §3] "Each basic cell wji is a computational element which contains the weight value and the weight increment associated to the connection between neuron i of a layer and neuron j of the next layer" [p. 5] "The internal data path of the basic cell is shown in Figure 3. It consists of three multipliers, three adders, two registers for weight value and weight increment")
performing matrix multiplication for a backwards training pass using the weights in the first core in a second format to generate second partial matrix multiplication results; transmitting the second partial matrix multiplication results towards a core at an end of a column in an array of cores that includes the first core, wherein the transmitting occurs in a vertical direction within the array, and wherein the core at the end of the column is configured to accumulate the second partial matrix multiplication results. ([p. 4 §3] "the g unit executes the derivative of the activation function as shown in Equation (4). The g unit has a FIFO queue to contain the outputs of a neuron, that are fed back to the basic cells to compute the weight updates in the backward pass" The g unit in the vertical direction explicitly supports the backward pass (derivative of the activation and weight updates). See also FIG. 1(b) and Eqns. 1.2 and 2.2. FIG. 1(b) explicitly shows that each cell outputs the previous running sums (accumulated partial multiplication results) plus the local product (also a partial multiplication result) in both the horizontal (ES = NW x W + WS) and vertical (SE = WN x W + NE) directions, where ES and SE are interpreted as different formats using the weight stored in the core.)
wherein the pinning results in the first set of weights remaining in the first memory between the forwards training pass and the backwards training pass. ([p. 3 §2] "we use the batch updating method because the weights must not be changed for the forward pass and the backward pass of a pattern as shown in Equations (1)-(6), although the systolic array proposed in this paper can support both of the weight updating methods." [p. 4 §3] "The weight matrix is shared between the forward and backward passes").
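For illustration only, the accumulation pattern cited above from FIG. 1(b) of Chung can be sketched as follows. This is an Examiner-supplied sketch; the variable names, array size, and values are assumptions and are not taken from Chung or the instant claims.

```python
import numpy as np

# Illustrative sketch of the FIG. 1(b) accumulation pattern (assumed names and
# dimensions). Each cell (i, j) holds a pinned weight W[i, j] that stays
# resident across both passes. Forward pass: partial products accumulate
# horizontally along each row (running sum ES = NW x W + WS in Chung's
# notation); backward pass: partial products accumulate vertically along
# each column using the same pinned weights.
rows, cols = 4, 4
W = np.random.rand(rows, cols)   # weights "pinned" in the cells

x = np.random.rand(cols)         # forward activations entering the array
forward_acc = np.zeros(rows)     # running sums moving toward the end of each row
for j in range(cols):            # horizontal direction
    forward_acc += W[:, j] * x[j]

delta = np.random.rand(rows)     # backward errors entering the array
backward_acc = np.zeros(cols)    # running sums moving toward the end of each column
for i in range(rows):            # vertical direction; same weights, transposed use
    backward_acc += W[i, :] * delta[i]

# The same pinned weights yield W @ x in the forward (horizontal) direction
# and W.T @ delta in the backward (vertical) direction.
assert np.allclose(forward_acc, W @ x)
assert np.allclose(backward_acc, W.T @ delta)
```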
Regarding claim 2, Chung teaches The method of claim 1, wherein the first layer is a general matrix multiply layer. (Chung [p. 4 §3] "The weight matrix is shared between the forward and backward passes, and the data paths of the activations forward-propagated and the errors back-propagated are disjoint. The 0f unit executes the thresholding and the activation function as shown in Equations (1.2) and (2.2), and the g unit executes the derivative of the activation function as shown in Equation (4)." See also FIG. 1(b)).
Regarding claim 6, Chung teaches The method of claim 1, wherein: the forward training pass and the backwards pass include a plurality of matrix multiplication sub-operations involving portions of a larger matrix, each matrix multiplication sub-operation occurring on a machine learning accelerator core and generating a partial matrix multiplication result; and the method further comprises: selecting one or more connections between machine learning accelerator cores through which to accumulate partial matrix multiplication results for summation. (Chung [p. 4 §3] "The 0f unit executes the thresholding and the activation function as shown in Equations (1.2) and (2.2)" The 0f unit in the horizontal direction explicitly performs forward propagation (threshold/activation). See also FIG. 1(b) and Eqns. 1.2 and 2.2. FIG. 1(b) explicitly shows that each cell outputs the previous running sums (accumulated partial multiplication results) plus the local product (also a partial multiplication result) in both the horizontal (ES = NW x W + WS) and vertical (SE = WN x W + NE) directions.).
Regarding claim 7, Chung teaches The method of claim 6, wherein selecting the one or more connections comprises: selecting a first set of connections for the forward training pass and selecting a second set of connections for the backwards pass. (Chung [p. 4 §3] "The 0f unit executes the thresholding and the activation function as shown in Equations (1.2) and (2.2)" The 0f unit in the horizontal direction explicitly performs forward propagation (threshold/activation). See also FIG. 1(b) and Eqns. 1.2 and 2.2. FIG. 1(b) explicitly shows that each cell outputs the previous running sums (accumulated partial multiplication results) plus the local product (also a partial multiplication result) in both the horizontal (ES = NW x W + WS) and vertical (SE = WN x W + NE) directions.).
Regarding claim 8, Chung teaches The method of claim 6, wherein the one or more connections are unidirectional. (Chung [p. 4 §3] "The 0f unit executes the thresholding and the activation function as shown in Equations (1.2) and (2.2)" See FIG. 1. The connections are unidirectional in either the vertical or horizontal direction).
Regarding claim 9, Chung teaches The method of claim 1, wherein the pinning includes pinning the weights in the first core between the forwards pass and the backwards pass. (Chung [p. 3 §2] "we use the batch updating method because the weights must not be changed for the forward pass and the backward pass of a pattern as shown in Equations (1)-(6), although the systolic array proposed in this paper can support both of the weight updating methods." [p. 4 §3] "The weight matrix is shared between the forward and backward passes").
Regarding claim 10, claim 10 is directed towards a machine learning accelerator core for performing the method of claim 1. Therefore, the rejection applied to claim 1 also applies to claim 10.
Claim 10 also recites the additional elements "a matrix multiplication unit; a reshape engine; and a weight memory" (Chung [p. 4 §3] "Each basic cell wji is a computational element which contains the weight value and the weight increment associated to the connection between neuron i of a layer and neuron j of the next layer" [p. 5] "The internal data path of the basic cell is shown in Figure 3. It consists of three multipliers, three adders, two registers for weight value and weight increment" Multiplier interpreted as matrix multiplication unit. Cell registers interpreted as weight memory. Internal data path interpreted as reshape engine.).
Similarly, regarding claims 11 and 15, claims 11 and 15 are directed towards a machine learning accelerator core for performing the method of claims 2 and 9, respectively. Therefore, the rejections applied to claims 2 and 9 also apply to claims 11 and 15.
Regarding claim 16, Chung teaches a machine learning accelerator for performing the method of claim 1. Therefore, the rejection applied to claim 1 also applies to claim 16.
Claim 16 also recites the additional elements "a plurality of machine learning accelerator cores that includes a first machine learning accelerator core, wherein the first machine learning accelerator core comprises: a matrix multiplication unit; a reshape engine; and a weight memory" (Chung [p. 4 §3] "Each basic cell wji is a computational element which contains the weight value and the weight increment associated to the connection between neuron i of a layer and neuron j of the next layer" [p. 5] "The internal data path of the basic cell is shown in Figure 3. It consists of three multipliers, three adders, two registers for weight value and weight increment" Cell interpreted as core. Multiplier interpreted as matrix multiplication unit. Cell registers interpreted as weight memory. Internal data path interpreted as reshape engine.).
Similarly, regarding claims 17-20, claims 17-20 are directed towards a machine learning accelerator for performing the methods of claims 6-9, respectively. Therefore, the rejections applied to claims 6-9 also apply to claims 17-20.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3 and 12 are rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Chung and Lauterbach (US20210248453A1).
Regarding claim 3, Chung teaches The method of claim 2.
However, Chung does not explicitly teach wherein the weights in the second format are organized as a matrix that is a transpose of the weights in first format.
Lauterbach, in the same field of endeavor, teaches the weights in the second format are organized as a matrix that is a transpose of the weights in first format. ([¶0496] "In some embodiments and/or usage scenarios, the delta pass and the chain pass are placed offset by one layer so the activations are stored in the same layers as the weights used in the backward direction. Activations are stored by the receiving layer such that in the delta pass and the chain pass, the activations are used directly without additional communication. In addition to storing activations, a weight transpose is performed to implement the delta pass. The weight transpose, in some embodiments and/or usage scenarios, is implemented by replicating the weights, using additional memory capacity and additional communication when updating the weights. In some embodiments and/or usage scenarios, the weight transpose is implemented by transposing the delta broadcast in the vertical dimension.").
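As general back-propagation background (standard notation supplied by the Examiner, not quoted from Lauterbach or the instant claims), the delta pass consumes the same stored weights in transposed form:

$$ y = W x \quad \text{(forward pass)}, \qquad \delta_x = W^{\top}\,\delta_y \quad \text{(delta/backward pass)} $$

which is consistent with organizing the weights in the second format as a transpose of the weights in the first format, as recited in claim 3.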
Chung as well as Lauterbach are directed towards neural network accelerators. Therefore, Chung as well as Lauterbach are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Chung with the teachings of Lauterbach by partitioning neurons to a systolic array that performed partial matrix multiplication operations. Lauterbach provides as additional motivation for combination ([¶0094] “In an aspect conceptually related to a scaled compute fabric for accelerated deep learning, techniques in advanced deep learning provide improvements in one or more of accuracy, performance, energy efficiency, and cost”). This motivation for combination also applies to the remaining claims which depend on this combination.
Regarding claim 12, claim 12 is directed towards a machine learning accelerator core for performing the method of claim 3. Therefore, the rejection applied to claim 3 also applies to claim 12.
Claims 4 and 13 are rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Chung and Ma ("An Equivalence of Fully Connected Layer and Convolutional Layer", 2017).
Regarding claim 4, Chung teaches The method of claim 1.
However, Chung does not explicitly teach wherein the first layer is a convolution layer.
Ma, in the same field of endeavor, teaches the first layer is a convolution layer. ([Abstract] "This article demonstrates that convolutional operation can be converted to matrix multiplication, which has the same calculation way with fully connected layer. The article is helpful for the beginners of the neural network to understand how fully connected layer and the convolutional layer work in the backend").
Chung as well as Ma are directed towards neural networks. Therefore, Chung as well as Ma are reasonably pertinent analogous art. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Chung with the teachings of Ma by using the accelerator in Chung for convolutional neural networks using the equivalence reinforced by Ma. Ma provides as additional motivation for combination ([Abstract] "This article demonstrates that convolutional operation can be converted to matrix multiplication, which has the same calculation way with fully connected layer.”).
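For illustration of the equivalence relied upon from Ma, the following Examiner-supplied sketch computes a small convolution (as used in neural networks, i.e., cross-correlation) both directly and as a matrix multiplication. The shapes and values are assumptions and are not taken from Ma or the instant claims.

```python
import numpy as np

# A 1-channel 4x4 input convolved with a 3x3 filter (no padding, stride 1)
# equals a matrix multiply between an "im2col"-style patch matrix and the
# flattened filter.
x = np.random.rand(4, 4)
k = np.random.rand(3, 3)

# Direct convolution (valid, stride 1) -> 2x2 output.
direct = np.array([[np.sum(x[i:i+3, j:j+3] * k) for j in range(2)]
                   for i in range(2)])

# Same result as matrix multiplication: each row of `patches` is one
# flattened 3x3 window of the input.
patches = np.array([x[i:i+3, j:j+3].ravel() for i in range(2) for j in range(2)])
as_matmul = (patches @ k.ravel()).reshape(2, 2)

assert np.allclose(direct, as_matmul)
```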
Regarding claim 13, claim 13 is directed towards an accelerator core for performing the method of claim 4. Therefore, the rejection applied to claim 4 also applies to claim 13.
Claims 5 and 14 are rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Chung and Ma, and further in view of Prasad (US20210034708A1).
Regarding claim 5, the combination of Chung and Ma teaches The method of claim 4.
However, the combination of Chung and Ma does not explicitly teach wherein the weights in the second format are organized as a matrix that is a convolution-based reshape of the weights in the first format, wherein, in the convolution-based reshape, columns include filters in the same input channel while in the weights in the first format, columns include filters in the same output channel.
Prasad, in the same field of endeavor, teaches The method of claim 4, wherein the weights in the second format are organized as a matrix that is a convolution-based reshape of the weights in the first format, ([¶0105] "The convolutional layer 440 applies the filters to produce a set of vectors. The set of vectors are aggregated to form columns of a matrix, referred to herein as an “attention matrix.” The max pooling layer 442 produces a vector that includes a respective maximum value from each of row of the attention matrix. The vector produced by the max pooling layer 442 may be referred to herein as an “attention vector.” The multiplication layer 450 transposes the word embedding document matrix 420, and multiplies the transposed document matrix by the attention vector. ")
wherein, in the convolution-based reshape, columns include filters in the same input channel while in the weights in the first format, columns include filters in the same output channel.([¶0105] "The product output by the multiplication layer 450 is an intermediate representation 422 b generated by the hidden layer sequence 414 b. The hidden layer sequence 414 b thereby performs “attention-based word embedding.”" Intermediate representation interpreted as synonymous with output channel.).
The combination of Chung and Ma as well as Prasad are directed towards neural network systems. Therefore, the combination of Chung and Ma as well as Prasad are reasonably pertinent analogous art. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Chung and Ma with the teachings of Prasad by using a convolution layer as the first layer in a neural network. While using a convolutional layer as the first layer in a neural network would have been obvious to one of ordinary skill in the art, Prasad reinforces the obviousness of doing so.
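For illustration of the claimed "convolution-based reshape" addressed above, the following Examiner-supplied sketch shows one way a convolution weight tensor can be flattened so that columns are grouped by output channel in a first format and by input channel in a second format. The tensor layout, shapes, and axis ordering are assumptions and are not taken from Prasad or the instant specification.

```python
import numpy as np

# Assumed convolution weight tensor of shape (C_out, C_in, R, S).
#   First (forward) format:  one column per output channel -> (C_in*R*S, C_out)
#   Second (backward) format: one column per input channel  -> (C_out*R*S, C_in)
C_out, C_in, R, S = 8, 3, 3, 3
W = np.random.rand(C_out, C_in, R, S)

# First format: columns grouped by output channel.
W_fwd = W.reshape(C_out, C_in * R * S).T                          # (C_in*R*S, C_out)

# Second format: columns grouped by input channel, obtained by swapping the
# channel axes before flattening (a convolution-based reshape of W_fwd).
W_bwd = W.transpose(1, 0, 2, 3).reshape(C_in, C_out * R * S).T    # (C_out*R*S, C_in)

print(W_fwd.shape, W_bwd.shape)
```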
Regarding claim 14, claim 14 is directed towards a machine learning accelerator core for performing the method of claim 5. Therefore, the rejection applied to claim 5 also applies to claim 14.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY VINCENT BOSTWICK whose telephone number is (571)272-4720. The examiner can normally be reached M-F 7:30am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang can be reached on (571)270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SIDNEY VINCENT BOSTWICK/Examiner, Art Unit 2124
/MIRANDA M HUANG/Supervisory Patent Examiner, Art Unit 2124