DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending in the application.
Claim Objections
The following claims are objected to because of informalities; appropriate correction is required.
Claim 1 recites “cause the one or more processing devices to execute functions comprising: receiving, at a computing device, …, and generating, at the computing device, …”. It is not clear which device (the one or more processing devices or the computing device) performs the recited acts. Claim 10 has a similar issue.
In claim 10, “the image” (page 6, line 5, and page 7, last line) has no antecedent basis. For purposes of the prior art rejection, the examiner interprets “the image” as “the input” for consistency.
Claims 3, 6, 12, and 16 each recite “the first set of connection weights”, which has no antecedent basis.
Claim 18 should be dependent upon claim 10.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
Claim 1 “one or more processing devices to execute”, “receiving, at a computing device” and “generating, at the computing device”.
Claim 10 “configured to run at one or more processing devices”, “receiving, at a computing device” and “generating, at the computing device”.
Claim 19 “receive, at a computing device” and “generate, at the computing device”.
The corresponding structure disclosed in the specification is: FIG. 1B 202; and para. [0047] “one or more central processing units (CPUs), one or more microprocessors” etc. In this instance, the structure corresponding to a 35 U.S.C. 112(f) claim limitation for a computer-implemented function must include the algorithm needed to transform the general purpose computer or microprocessor disclosed in the specification into the special purpose computer programmed to perform the disclosed algorithm. The corresponding algorithm disclosed in the specification is: FIG. 2-3; para. [0071]-[0098]. For purposes of the prior art rejection, the examiner interprets the above one or more processing devices/computing device as covering any structure performing the identical or equivalent function.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3-8, 10, 12-14 and 16-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Seung et al. (Seung, H. Sebastian, and Jonathan Zung. "A correlation game for unsupervised learning yields computational interpretations of Hebbian excitation, anti-Hebbian inhibition, and synapse elimination." arXiv preprint arXiv:1704.00646 (2017), hereafter Seung), in view of Choi et al. (US Publication 2022/0051017 A1, hereafter Choi).
As per claim 1, Seung teaches a method (Abstract) for extracting object representations from inputs, the method comprising:
receiving an input comprising pixelized information (Fig. 1 u1…u4 representing an input; page 3 top 2 para. “Our neural networks will learn to transform a sequence of input vectors u(1),...,u(T) into a sequence of output vectors x(1),...,x(T)”, “Define the input matrix U = [u(1),...,u(T)] as the matrix containing input vectors u(t) as its columns. The element Uat is the ath component of u(t)”; page 7 section 5 Empirical results with MNIST “To illustrate the properties of the algorithm, we ran it on the MNIST dataset”; Note the MNIST dataset is a large collection of handwritten digits (0-9) used for training and testing image classification models); and
generating an object representation from the input using a bi-layer neural network comprising an input layer of input nodes and a representation layer of representation nodes (Fig. 1 shows a bi-layer neural network comprising an input layer of 4 input nodes and a representation layer of 3 representation nodes; Abstract “Through empirical studies, we show that this facilitates the learning of sensory features that resemble parts of objects”; page 10 section 7 Discussion “3. By sparsening connectivity, synaptic competition and elimination facilitate the learning of features that resemble parts of objects”);
wherein:
all input nodes are connected to all representation nodes through a first set of weighted connections having differing values and all representation nodes are connected to all other representation nodes through a second set of weighted connections having differing values (Fig. 1 each of the 4 input nodes is connected to all 3 representation nodes through a first set of weighted connections having differing values W, and each of the 3 representation nodes is connected to all other representation nodes through a second set of weighted connections L (see annotated Figure 1 below));
[Annotated Figure 1 of Seung (media_image1.png): feedforward weights W connect each input node to every representation node; lateral weights L connect the representation nodes to one another.]
a weight matrix stores connection weights corresponding to the first set of weighted connections between the input nodes of the input layer and the representation nodes of the representation layer (Fig. 1 “W”; eqn. (10) “Wia”), wherein:
a connection weight stored in the weight matrix is strengthened when an input node and a representation node are both active (page 6 the sentence following eqn. (15) “The first term is Hebbian, as it causes strengthening of Wia when xi and ua are coactive”);
in response to detecting that two representation nodes are co-active, the connection weights between input nodes to both representation nodes are weakened (page 9 2nd para. “How are the lateral connections related to feedforward excitation? If two neurons receive similar feedforward weights, their activities will be highly correlated in the absence of lateral inhibition. To weaken the correlation, the learning algorithm is expected to strengthen the inhibitory connection between the two neurons”);
the input nodes of the input layer receive a first set of values corresponding to the pixelized information of the input (Fig. 1 u1…u4);
a second set of values for the representation nodes in the representation layer is calculated based, at least in part, on inputs received via (i) the first set of weighted connections between the input nodes and the representation nodes and (ii) the second set of weighted connections among the representation nodes (Seung proposes a correlation game theory. As shown in eqn. (8)-(9), the goal of excitation is to maximize the payoff function (8), and the goal of inhibition is to minimize the same payoff function. That is, excitation aims to maximize output-input correlations while inhibition aims to decorrelate the outputs (page 5 top 5 lines). The optimizations in eqn. (9) can be performed by an iterative online method based on one input vector at each time step (page 5 section 3 Neural network algorithm top 3 lines). In each time step, given a stimulus vector u (input), the activities xi of the output neurons are updated using eqn. (14), in which the first set of weighted connections between the input nodes and the representation nodes Wia and the second set of weighted connections among the representation nodes Lij are used); and
the second set of values for the representation nodes in the representation layer is utilized to generate the object representation for the input (page 7 section 5 Empirical results with MNIST “The learned connectivity is shown in Figure 3. Most connection strengths have been driven to zero; these synapses have effectively been eliminated. Each vector of convergent connections is displayed as an image. The features can be interpreted as “parts” of handwritten digits”).
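For context only, the settling dynamics discussed in the parentheticals above can be sketched in a few lines of Python. This is a minimal sketch assuming an update of the form x <- ReLU(Wu - Lx) iterated to a fixed point; the function name, the iteration cap, and the exact update rule are illustrative and are not taken verbatim from Seung's eqn. (14).

```python
import numpy as np

def representation_activities(W, L, u, n_iter=200, tol=1e-6):
    # Iterate lateral-inhibition dynamics to an approximate fixed point.
    # The feedforward drive W @ u is excitatory; the lateral term L @ x
    # is inhibitory. Activities are kept nonnegative, as in Seung.
    x = np.zeros(W.shape[0])
    for _ in range(n_iter):
        x_new = np.maximum(0.0, W @ u - L @ x)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```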
Seung, however, does not teach a system for performing the method. Specifically, the system comprises one or more processing devices, and one or more non-transitory computer-readable storage devices storing computing instructions configured to be executed on the one or more processing devices and cause the one or more processing devices to execute functions.
Choi in an analogous field discloses a system for identifying one or more objects in one or more images using one or more neural networks (Abstract; FIG. 1). Specifically, Choi teaches that the system comprises one or more processing devices, and one or more non-transitory computer-readable storage devices storing computing instructions configured to be executed on the one or more processing devices and cause the one or more processing devices to execute functions including receiving an input image and generating an object representation from the input image using the one or more neural networks (FIG. 2 shows feature map 206 is generated using one or more neural networks. From the feature map 206, object classification (object identification) and object position in the image can be identified. Therefore the feature map is an object representation of the input image 202. See FIG. 1; para. [0066]-[0067], [0089]-[0091]). Choi further teaches a system comprising one or more processing devices (FIG. 10 processor #1002) and one or more non-transitory computer-readable storage devices storing computing instructions configured to be executed on the one or more processing devices and cause the one or more processing devices to execute functions (FIG. 10 memory #1020).
It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified the teaching of Seung to incorporate the teaching of Choi to include a system comprising one or more processing devices, and one or more non-transitory computer-readable storage devices storing computing instructions configured to be executed on the one or more processing devices and cause the one or more processing devices to execute functions. Doing so would allow implementation of an active learning method for object detection as mentioned by Choi (para. [0257]).
As per claim 3, dependent upon claim 1, Seung in view of Choi teaches that a learning mechanism continuously updates the first set of connection weights as additional inputs are processed by the bi-layer neural network (Seung proposes a correlation game theory. As shown in eqn. (8)-(9), the goal of excitation is to maximize the payoff function (8), and the goal of inhibition is to minimize the same payoff function. That is, excitation aims to maximize output-input correlations while inhibition aims to decorrelate the outputs (page 5 top 5 lines). The optimizations in eqn. (9) can be performed by an iterative online method based on one input vector at each time step (page 5 section 3 Neural network algorithm top 3 lines). In each time step, given a stimulus vector u (input), the activities xi of the output neurons are updated using eqn. (14). After convergence of x, the first set of connection weights W is updated as shown in eqn. (15), and the second set of connection weights L is updated as shown in eqn. (17). After both sets of connection strengths are updated, the dynamics (14) (i.e., updating output based on input) is run for the next stimulus vector in the next time step, and so on, resulting in continuous updating of the first set of connection weights as additional inputs are processed by the bi-layer neural network. See section 3 Neural network algorithm and section 4 Biological interpretation).
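For context only, the online loop just described (settle the activities for one input, then update the feedforward and lateral weights, then move to the next input) can be sketched as follows, reusing representation_activities from the sketch above. The specific decay terms and the learning rates eta_w and eta_l are illustrative placeholders and do not reproduce the exact forms of Seung's eqns. (15) and (17).

```python
import numpy as np

def train_online(inputs, n_out, eta_w=0.05, eta_l=0.05, seed=0):
    # Online loop: for each input vector u, settle the activities x,
    # then apply a Hebbian update to the feedforward weights W and an
    # anti-Hebbian update to the lateral weights L.
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    W = rng.uniform(0.0, 0.1, size=(n_out, n_in))  # feedforward weights
    L = np.zeros((n_out, n_out))                   # lateral inhibitory weights
    for u in inputs:
        x = representation_activities(W, L, u)     # settling sketch above
        # Hebbian term x_i * u_a strengthens W when input and output
        # units are coactive; the decay term is a placeholder.
        W += eta_w * (np.outer(x, u) - W * x[:, None] ** 2)
        # Anti-Hebbian term x_i * x_j strengthens inhibition between
        # coactive outputs; outer(x, x) is symmetric, so L stays
        # symmetric (Lij = Lji).
        L += eta_l * (np.outer(x, x) - L)
        np.fill_diagonal(L, 0.0)                   # no self-inhibition
    return W, L
```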
As per claim 4, dependent upon claim 3, Seung in view of Choi teaches the learning mechanism includes a stochastic gradient descent method (Seung page 5 eqn. (11) and the sentence preceding eqn. (11) “For the other optimizations in Eq. (9), we perform gradient ascent with respect to W and gradient descent with respect to L”).
As per claim 5, dependent upon claim 1, Seung in view of Choi teaches the second set of values for the representation nodes in the representation layer and the first set of values for the input nodes in the input layer are all non-negative values (Seung page 3 top para. “Our neural networks will learn to transform a sequence of input vectors u(1),...,u(T) into a sequence of output vectors x(1),...,x(T). Both input and output will be assumed nonnegative”).
As per claim 6, dependent upon claim 1, Seung in view of Choi teaches:
a second set of connection weights for the second set of weighted connections is determined such that weights between any two representation nodes in the representation layer are the same in both directions (Seung page 5 section 3 Neural network algorithm first para. “The conjugate variables Wia in the Legendre-Fenchel transform of Eq. (4) are now feedforward connections from the input to the output. The Lagrange multipliers Lij are now lateral connections between the outputs. The lateral connections are assumed to be symmetric (Lij = Lji), which guarantees that the dynamics will converge to a local maximum of the objective function if there is no runaway instability [Hahnloser et al., 2003]”);
in response to detecting that two representation nodes are co-active, the connection weights between the two representation nodes are strengthened (Seung page 9 2nd para. “If two neurons receive similar feedforward weights, their activities will be highly correlated in the absence of lateral inhibition. To weaken the correlation, the learning algorithm is expected to strengthen the inhibitory connection between the two neurons”); and
the second set of connection weights for the second set of weighted connections is continuously updated based, at least in part, on changes in the first set of connection weights (Seung proposes a correlation game theory. As shown in eqn. (8)-(9), the goal of excitation is to maximize the payoff function (8), and the goal of inhibition is to minimize the same payoff function. That is, excitation aims to maximize output-input correlations while inhibition aims to decorrelate the outputs (page 5 top 5 lines). The optimizations in eqn. (9) can be performed by an iterative online method based on one input vector at each time step (page 5 section 3 Neural network algorithm top 3 lines). In each time step, given a stimulus vector u (input), the activities xi of the output neurons are updated using eqn. (14), which involves both the first set of connection weights and the second set of connection weights. After convergence of x, the first set of connection weights W is updated as shown in eqn. (15), and the second set of connection weights L is updated as shown in eqn. (17). The dynamics (14) is then run for the next stimulus vector in the next time step, and so on, so both sets of connection weights are continuously updated as additional inputs are processed by the bi-layer neural network. Because changes in the first set of connection weights in one time step affect the updating of xi in the next time step via eqn. (14), the second set of connection weights is updated based, at least in part, on changes in the first set of connection weights. See section 3 Neural network algorithm and section 4 Biological interpretation.)
As per claim 7, dependent upon claim 1, Seung in view of Choi teaches the object representations include data related to object identification and data related to position information (Choi FIG. 2 shows feature map 206 is generated using one or more neural networks. From the feature map 206, object classification (object identification) and object position in the image can be identified. Therefore the feature map is an object representation from the input image 202. See FIG. 1; para. [0066]-[0067], [0089]-[0091]).
As per claim 8, dependent upon claim 1, Seung in view of Choi teaches the second set of weighted connections is inhibitory (Seung page 6 two lines following eqn. (14) “The output neurons try to turn each other off through the lateral inhibitory connections Lij”).
As per claim 10, an independent claim, Seung in view of Choi teaches a method (Seung Abstract) for extracting object representations from inputs, the method implemented via execution of computing instructions configured to run at one or more processing devices and stored on non-transitory computer-readable media (See rejections applied to claim 1 with respect to Choi’s teaching of a system), the method comprising:
receiving, at a computing device, an input comprising pixelized information (See rejections applied to claim 1); and
generating, at the computing device, an object representation from the input using a bi-layer neural network comprising an input layer of input nodes and a representation layer of representation nodes (See rejections applied to claim 1);
wherein:
all input nodes are connected to all representation nodes through a first set of weighted connections having differing values and all representation nodes are connected to all other representation nodes through a second set of weighted connections having differing values (See rejections applied to claim 1);
a weight matrix stores connection weights corresponding to the first set of weighted connections between the input nodes of the input layer and the representation nodes of the representation layer (See rejections applied to claim 1), wherein:
a connection weight stored in the weight matrix is strengthened when an input node and a representation node are both active (See rejections applied to claim 1);
in response to detecting that two representation nodes are co-active, the connection weights between input nodes to both representation nodes are weakened (See rejections applied to claim 1);
the input nodes of the input layer receive a first set of values corresponding to the pixelized information of the input (See rejections applied to claim 1);
a second set of values for the representation nodes in the representation layer is calculated based, at least in part, on inputs received via (i) the first set of weighted connections between the input nodes and the representation nodes and (ii) the second set of weighted connections among the representation nodes (See rejections applied to claim 1); and
the second set of values for the representation nodes in the representation layer is utilized to generate the object representation for the input (See rejections applied to claim 1).
Claim 12, dependent upon claim 10, is rejected as applied to claim 3 above.
Claim 13, dependent upon claim 12, is rejected as applied to claim 4 above.
Claim 14, dependent upon claim 10, is rejected as applied to claim 5 above.
Claim 16, dependent upon claim 10, is rejected as applied to claim 6 above.
Claim 17, dependent upon claim 10, is rejected as applied to claim 7 above.
Claim 18, dependent upon claim 10, is rejected as applied to claim 8 above.
Claim 19, an independent medium claim, is rejected as applied to claim 1 above.
Claim(s) 2, 11 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Seung in view of Choi, as applied above to claims 1, 10 and 19 respectively, and further in view of Desjardins et al. (US Patent 10,762,421 B2, hereafter Desjardins).
As per claim 2, Seung in view of Choi teaches the system of claim 1, wherein:
a first set of connection weights associated with the first set of weighted connections between the input nodes of the input layer and the representation nodes of the representation layer is selected to minimize the chances that two representation nodes in the representation layer are active at the same time (Seung eqn. (8)-(9); page 5 top para. “excitation aims to maximize output-input correlations while inhibition aims to decorrelate the outputs”; page 9 second para. “If two neurons receive similar feedforward weights, their activities will be highly correlated in the absence of lateral inhibition. To weaken the correlation, the learning algorithm is expected to strengthen the inhibitory connection between the two neurons”); and
the connection weights in the weight matrix are updated as additional inputs are received (See rejections applied to claim 3 above).
Seung in view of Choi, however, does not teach:
the first set of connection weights associated with the first set of weighted connections is initially calculated using estimates of the eigenvectors of the variance- covariance matrix based on an input matrix created from vector representations of a selected set of inputs.
Desjardins in an analogous field teaches a whitened neural network layer (Abstract). The whitened neural network layer then applies a whitening weight matrix to the intermediate whitened activation to generate the whitened activation. The whitening weight matrix is a matrix whose elements are derived based on eigenvalues of a matrix of the covariance of input activations, i.e., of output activations generated by the layer below the whitened neural network layer. For example, the whitening weight matrix may be the inverse square root of the covariance matrix. In some implementations, the whitening weight matrix may be represented as a PCA-whitening matrix whose rows are obtained from an eigen decomposition of the covariance matrix of the input activations (col. 4 ln 45-56). Desjardins further discloses a formula for calculating the whitening weight matrix (reproduced below from col. 4).
[Formula from Desjardins, col. 4, for calculating the whitening weight matrix (media_image2.png).]
If the whitening algorithm is applied to the input layer, the first set of connection weights (corresponding to the whitening weight matrix U) is calculated using the formula above (media_image3.png), which is based on the eigenvectors of the variance-covariance matrix of an input matrix created from vector representations of a selected set of inputs.
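For context only, a PCA-whitening matrix of the kind Desjardins describes can be estimated as sketched below; the function name and the eps regularizer are assumptions added for illustration and do not appear in the patent.

```python
import numpy as np

def pca_whitening_matrix(inputs, eps=1e-8):
    # inputs: matrix whose columns are vector representations of a
    # selected set of inputs. Returns U = diag(lambda)^(-1/2) V^T,
    # whose rows come from the eigendecomposition of the
    # variance-covariance matrix, cov = V diag(lambda) V^T.
    X = inputs - inputs.mean(axis=1, keepdims=True)  # center each feature
    cov = X @ X.T / X.shape[1]                       # variance-covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)           # symmetric eigendecomposition
    return np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
```

Multiplying the centered input matrix by the returned matrix yields features with approximately identity covariance, which is the decorrelation property the whitened layer exploits.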
It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified the teaching of Seung and Choi to incorporate the teaching of Desjardins to calculate the initial first set of connection weights using estimates of the eigenvectors of the variance-covariance matrix based on an input matrix created from vector representations of a selected set of inputs. The motivation for applying whitening is to allow the training process to remain as computationally efficient as, or become more computationally efficient than, training a neural network that does not include whitened neural network layers, as recognized by Desjardins (col. 2 ln 4-7).
Claim 11, dependent upon claim 10, is rejected as applied to claim 2 above.
Claim 20, dependent upon claim 19, is rejected as applied to claim 2 above.
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Seung in view of Choi, as applied above to claim 4, and further in view of Ravi et al. (US Publication 2020/0151570 A1, hereafter Ravi).
As per claim 9, Seung in view of Choi teaches using a learning mechanism to update the first set of connection weights (See rejections applied to claim 4 above). Seung further teaches a step size when updating the first set of connection weights (Seung page 6 eqn. (15) ηw). Seung in view of Choi does not further specify that the step size is between 0 and 1.
Ravi in an analogous field discloses a method for training artificial neural networks using a conditional gradient descent process (Abstract; para. [0046]). Ravi specifically discloses a predetermined step size for the gradient descent which is between 0 and 1 (para. [0046] “where the subscript t is the iteration index and η is a predetermined step size for the gradient descent, typically between 0 and 1”).
It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified the teaching of Seung and Choi to incorporate the teaching of Ravi to apply a step size between 0 and 1 for gradient descent in neural network training. Doing so would ensure stable and efficient convergence towards a minimum of an objective function, as would have been appreciated by a person with ordinary skill in the art.
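For context only, the effect of confining the step size to the range (0, 1) can be illustrated with a trivial sketch; the names below are hypothetical and do not reproduce Ravi's formulation.

```python
import numpy as np

def gradient_step(w, grad, eta=0.1):
    # One descent step; a step size between 0 and 1 keeps each update
    # a fraction of the gradient magnitude, which helps damp
    # oscillation around a minimum of the objective (illustrative).
    assert 0.0 < eta < 1.0, "step size chosen between 0 and 1"
    return w - eta * np.asarray(grad)
```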
Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Seung in view of Choi, as applied above to claim 10, and further in view of Bahroun et al. (Bahroun Y, Soltoggio A. Online representation learning with single and multi-layer hebbian networks for image classification. In International conference on artificial neural networks 2017 Sep 11 (pp. 354-363). Cham: Springer International Publishing., hereafter Bahroun).
As per claim 15, dependent upon claim 10, Seung in view of Choi does not teach the recited limitation that the representation layer includes more representation nodes than the input layer includes input nodes.
Bahroun discloses a bi-layer neural network for generating object representation and image classification (Abstract; Fig. 1). Bahroun further discusses the performance of the neural network with respect to different sizes (m) of the hidden layers. If the number of neurons exceeds the size of the input (m > n), the representation is called overcomplete. If the number of neurons is fewer than the size of the input (m < n), the representation is called undercomplete (Fig. 1; page 357-358 bridging para.).
It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have modified the teaching of Seung and Choi to incorporate the teaching of Bahroun to include a bi-layer neural network with more representation nodes in the representation layer than input nodes in the input layer, i.e., overcomplete. Doing so would allow more flexibility in matching the output structure with the input as recognized by Bahroun (page 358 line 2-6 from top).
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XUEMEI G CHEN whose telephone number is (571)270-3480. The examiner can normally be reached Monday-Friday 9am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John M Villecco can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XUEMEI G CHEN/Primary Examiner, Art Unit 2661