Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement filed 03/27/2024 fails to comply with the provisions of 37 CFR 1.97, 1.98 and MPEP § 609 because non-patent literature documents Nos. 19, 29, 31, 35, 37, 43, 46, 49, and 50 in the first IDS (containing 50 NPLs), filed 03/27/2024, are missing their dates of publication, and non-patent literature documents Nos. 9 and 11 in the second IDS (containing 40 NPLs), filed 03/27/2024, are also missing their dates of publication. It has been placed in the application file, but the information referred to therein has not been considered as to the merits. Applicant is advised that the date of any re-submission of any item of information contained in this information disclosure statement or the submission of any missing element(s) will be the date of submission for purposes of determining compliance with the requirements based on the time of filing the statement, including all certification requirements for statements under 37 CFR 1.97(e). See MPEP § 609.05(a).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Batra et al. (US-Patent 12,211,307).
In regards to claim 1, Batra et al. discloses a method for human pose estimation,
(see at least: Fig. 1, and col. 1, lines 49-50, “devices, systems, and methods for body pose estimation”), comprising:
obtaining, with a processor, a plurality of keypoints corresponding to a plurality of joints of a human in an image, (see at least: Fig. 1, first stage TCN, plurality of joints of a human body; and col. 3, lines 39-40, where a network estimates two-dimensional body joints in an image);
masking, with the processor, a subset of keypoints in the plurality of keypoints corresponding to occluded joints of the human, (see at least: Fig. 1, first stage TCN, where subset of points corresponding to occluded joints of the human, are masked with rectangular boxes at t=1, t=2, and t=w, on the left side (input) of the first stage TCN 110. Further, col. 6, lines 33-42, “Occluded joints”, where some joints are occluded due to the relative orientation between the human and the camera, and external occlusion caused by other objects in the scene. See also, col. 7, lines 20-21, “masking is performed over the joints”);
determining, with the processor, a reconstructed subset of keypoints by reconstructing the masked subset of keypoints using a machine learning model, (see at least: Fig. 1, see first stage TCN 110, “i.e., machine learning model”, which outputs the masked subset of keypoints of the human body. Further, col. 4, lines 40-45, the first stage 110 is a TCN that accepts a window of two-dimensional poses as input and outputs three-dimensional poses, which implicitly comprises the reconstructed masked subset of keypoints of the human body); and
forming, with the processor, a refined plurality of keypoints based on the plurality of keypoints and the reconstructed subset of keypoints, the refined plurality of keypoints being used by a system to perform a task, (see at least: Fig. 1, where the TCN refiner 120, receives as input the outputs of the first stage 110, and outputs a refined human body, (t=1 …t=W), including the refined keypoints and the masked keypoints reconstructed from first stage TCN 110. See also, col. 4, lines 45-51, the outputs of the first stage 110 are passed to the second stage 120 which is a temporal refiner network that improves the estimated three-dimensional poses, which technically produces reliable three-dimensional poses, (see col. 2, line 29) that enable the human body or robot to perform one or more tasks, “i.e., the refined plurality of keypoints being implicitly used to perform a task”).
Therefore, the disclosure of Batra is functionally equivalent to the recited limitations of claim 1, as addressed above.
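For illustration only, and not as part of the prior-art mapping, the claim-1 method as read onto Batra can be sketched in a few lines. The function names and the zero-fill masking convention are assumptions for the sketch; `reconstruct` is a hypothetical stand-in for a model such as Batra's first-stage TCN 110, not Batra's actual implementation:

```python
import numpy as np

def refine_keypoints(keypoints, occluded_mask, reconstruct):
    """Mask the occluded subset of keypoints, reconstruct it with a model,
    and merge the reconstruction back into the full set.

    keypoints:     (J, 2) array of 2D joint locations.
    occluded_mask: (J,) boolean array, True where a joint is occluded.
    reconstruct:   callable filling in masked joints (stand-in for a
                   learned model such as a TCN).
    """
    masked = keypoints.copy()
    masked[occluded_mask] = 0.0            # mask the occluded subset
    reconstructed = reconstruct(masked, occluded_mask)
    refined = keypoints.copy()
    refined[occluded_mask] = reconstructed[occluded_mask]  # substitute back
    return refined
```

The refined set keeps the original keypoints where joints were visible and substitutes the model's reconstruction only where joints were masked.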
In regards to claim 5, Batra obviously discloses the limitations of claim 1.
Batra further discloses that the masking the subset of keypoints further comprises: obtaining, with the processor, a respective confidence value for each keypoint in the plurality of keypoints; and determining, with the processor, the subset of keypoints as those keypoints in the plurality of keypoints having respective confidence values that are less than a predetermined threshold, (see at least: col. 7, lines 12-18, using a binary channel as an explicit occlusion indicator, where at inference time, the two-dimensional detector confidence can be thresholded and used for the purpose of detecting occluded joints, [i.e., the confidence value for each keypoint is obtained, implicit by the two-dimensional detector confidence, and the subset of keypoints corresponding to occluded keypoints is implicitly determined by thresholding the confidence]).
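For illustration only, the confidence-thresholding reading of claim 5 above amounts to the following minimal sketch; the function name and the default threshold value are assumptions, not taken from Batra:

```python
import numpy as np

def occluded_subset(confidences, threshold=0.5):
    """Select the subset of keypoints treated as occluded: those whose
    detector confidence is below a predetermined threshold."""
    return np.flatnonzero(np.asarray(confidences) < threshold)
```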
In regards to claim 19, Batra obviously discloses the limitations of claim 1.
Batra further discloses forming the refined plurality of keypoints further comprising: forming, with the processor, a refined plurality of keypoints by substituting the reconstructed subset of keypoints in place of the masked subset of keypoints in the plurality of keypoints, (see at least: col. 9, lines 45-51, the outputs of the first stage 110 are passed to the second stage 120 which is a temporal refiner network that improves the estimated three-dimensional poses, which implicitly refines the plurality of keypoints by substituting the reconstructed subset of keypoints in place of the masked subset of keypoints in the plurality of keypoints, based on the occlusion-solving from the enforcement of temporal consistency).
In regards to claim 20, Batra obviously discloses the limitations of claim 1.
Batra further discloses wherein the machine learning model has been previously trained by randomly masking keypoints in a training dataset and learning to predict the masked keypoints, (see at least: col. 4, lines 14-15, keypoints are randomly masked during training to provide occlusion data augmentation, [i.e., the machine learning model has implicitly been previously trained by randomly masking keypoints in a training dataset and learning to predict the masked keypoints]).
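For illustration only, the occlusion data augmentation reading of claim 20 can be sketched as below; the masking probability, the zero-fill convention, and the function name are assumptions for the sketch, not details from Batra:

```python
import numpy as np

def randomly_mask_keypoints(keypoints, mask_prob=0.15, seed=0):
    """Occlusion data augmentation: randomly zero out joints so the model
    learns to predict (reconstruct) the masked keypoints during training."""
    rng = np.random.default_rng(seed)
    mask = rng.random(keypoints.shape[0]) < mask_prob
    masked = keypoints.copy()
    masked[mask] = 0.0
    return masked, mask
```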
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Batra et al. (US-Patent 12,211,307) in view of Zheng et al. (US-PGPUB 20230196617).
In regards to claim 2, Batra obviously discloses the limitations of claim 1.
Batra further discloses the obtaining the plurality of keypoints further comprising: receiving, with the processor, the image from an image sensor, the image capturing the human, (see at least: col. 7, lines 58-63, implicit by obtaining a plurality of two-dimensional images of a body); and determining, with the processor, the plurality of keypoints corresponding to the plurality of joints of the human, (see at least: the two-dimensional location, in the two-dimensional image, of one or more joints of the body, [i.e., implicitly determining the plurality of keypoints corresponding to the joints of the human body by determining the location of one or more joints of the body]).
Batra does not expressly disclose using a keypoint detection model for determining the plurality of keypoints corresponding to the plurality of joints of the human.
Zheng discloses using a keypoint detection model for determining the plurality of keypoints corresponding to the plurality of joints of the human, (see at least: Fig. 2, Par. 0023, based on an RGB image of the person, a plurality of body keypoints 206 may be extracted from the image, for example, using a first neural network 204, (e.g., a body keypoint extraction network)).
Batra and Zheng are combinable because they are both concerned with body joint detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Batra to use the body keypoint extraction network 204, as taught by Zheng, in order to extract a plurality of body keypoints (joints), (Par. 0023).
In regards to claim 3, the combined teaching of Batra and Zheng as a whole discloses the limitations of claim 2.
Furthermore, Zheng discloses generating, with the processor, a plurality of heatmaps based on the image; and determining, with the processor, the plurality of keypoints based on the plurality of heatmaps, each respective joint in the plurality of keypoints being determined based on a corresponding respective heatmap in the plurality of heatmaps, where the body keypoints may refer to body parts and/or joints, (see at least: Par. 0023, the extracted body keypoints 206 are in the form of one or more heat maps representing the body keypoints 206, [i.e., a plurality of heatmaps are implicitly generated; the extracted body keypoints are determined based on the heatmaps; and each respective joint, (body keypoints refer to joints), is implicitly determined based on a corresponding respective heatmap of the one or more heatmaps]).
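For illustration only, the one-heatmap-per-joint reading of claim 3 can be sketched as below; taking each keypoint at the peak of its heatmap is a common convention assumed for the sketch, not a detail cited from Zheng:

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps):
    """One heatmap per joint: each keypoint is the (row, col) location of
    the peak of its corresponding heatmap.

    heatmaps: (J, H, W) array, one H x W map per joint.
    """
    num_joints, _, width = heatmaps.shape
    flat_idx = heatmaps.reshape(num_joints, -1).argmax(axis=1)
    return np.stack([flat_idx // width, flat_idx % width], axis=1)
```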
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Batra et al. and Zheng et al., as applied to claim 3 above, and further in view of Brown et al. (US-Patent 11,875,529).
The combined teaching of Batra and Zheng as a whole discloses the limitations of claim 3.
The combined teaching of Batra and Zheng as a whole does not expressly disclose determining, with the processor, a plurality of confidence values for the plurality of keypoints based on the plurality of heatmaps, each respective confidence value being determined based on a corresponding respective heatmap in the plurality of heatmaps.
Brown et al. discloses determining, with the processor, a plurality of confidence values for the plurality of keypoints based on the plurality of heatmaps, each respective confidence value being determined based on a corresponding respective heatmap in the plurality of heatmaps, (see at least: col. 5, lines 47-57, the Fig. 8 depth estimator generates a separate depth heatmap for each kind of joint, where the value (i.e., darkness) of each point along a given heatmap may represent the confidence of the corresponding joint being at that depth, and the confidence values of all depths may be zero for a not-visible joint, [i.e., generating a plurality of confidence values for the plurality of keypoints based on the plurality of heatmaps, each respective confidence value being determined based on a corresponding respective heatmap in the plurality of heatmaps, implicit by estimating the confidence value for each joint in a given heatmap]).
Batra, Zheng, and Brown are combinable because they are all concerned with body joint detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Batra and Zheng to generate a confidence value for each joint in the heatmap, as taught by Brown, in order to estimate a 3D location for each joint, (Brown, col. 5, lines 59-60).
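For illustration only, the per-heatmap confidence reading of claim 4 can be sketched as below; taking each joint's confidence as the peak value of its heatmap is an assumption for the sketch, consistent with Brown's observation that an all-zero map corresponds to a not-visible joint:

```python
import numpy as np

def confidences_from_heatmaps(heatmaps):
    """Per-joint confidence read off the corresponding heatmap: the peak
    value of each map; an all-zero map (not-visible joint) gives 0."""
    return heatmaps.reshape(heatmaps.shape[0], -1).max(axis=1)
```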
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Batra et al. (US-Patent 12,211,307) in view of Yip et al. (US-PGPUB 20240289882).
Batra obviously discloses the limitations of claim 1.
Batra does not expressly disclose wherein the machine learning model incorporates a Transformer-based neural network architecture and uses multi-scale graph convolution.
However, Yip discloses wherein the machine learning model incorporates a Transformer-based neural network architecture and uses multi-scale graph convolution, (see at least: Par. 0012, adopting a transformer module with a learnable transformer neural network layer, "i.e., a Transformer-based neural network architecture"; and in the transformer graph convolved dynamic mode decomposition (TGCDMD), the structural dependence of various assets is captured by a weighted graph, which is fed into a next step that integrates a graph convolution layer and DMD graph convolution, implying the multi-scale graph convolution).
Batra and Yip are combinable because they are both concerned with object recognition. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Batra to incorporate the TGCDMD, as taught by Yip, with Batra's body pose estimator 100, in order to model the poses faster.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Batra et al. (US-Patent 12,211,307) in view of Cai et al. (US-PGPUB 20220335654).
Batra obviously discloses the limitations of claim 1.
Batra does not expressly disclose that the determining the reconstructed subset of keypoints further comprising: determining, with the processor, an initial feature embedding based on the plurality of keypoints.
However, Cai et al. discloses determining, with the processor, an initial feature embedding based on the plurality of keypoints, (see at least: Par. 0043, calculate the initial feature of the first point cloud data based on initial weight information of the first encoder, [i.e., the encoded initial feature is calculated based on the point cloud data, "plurality of keypoints"]).
Batra and Cai are combinable because they are both concerned with object recognition. Therefore, it would have been obvious to a person of ordinary skill in the art to modify Batra to calculate the initial feature of the first point cloud data, as taught by Cai, in order to generate a point cloud encoder, (Cai et al., Par. 0036).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Batra et al. and Cai et al., as applied to claim 7 above, and further in view of Xing et al. (US-PGPUB 20230028046).
The combined teaching of Batra and Cai as a whole discloses the limitations of claim 7.
The combined teaching of Batra and Cai as a whole does not expressly disclose determining the initial feature embedding using multi-scale graph convolution.
However, Xing discloses determining the initial feature embedding using multi-scale graph convolution, (see at least: Par. 0087, based on the initial feature of the node and the initial feature of each target node corresponding to the node, (i.e., each associated feature of the node), the weight of each associated feature is determined through a graph convolution network, "i.e., implicitly using multi-scale graph convolution", and then the initial feature of the node and the initial feature of each target node may be weighted according to their respective corresponding weights, to obtain each weighted initial feature, "initial feature embedding", [i.e., determining the initial feature embedding, "weighted initial feature", using multi-scale graph convolution, "a graph convolution network based on multiple weights"]).
Batra, Cai, and Xing are combinable because they are all concerned with feature detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Batra and Cai to use the weight-based graph convolution network, as taught by Xing, in order to determine one or more weighted initial features, (Xing, Par. 0087).
Claims 9-15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Batra et al. and Cai et al., as applied to claim 7 above, and further in view of Zhao et al. (US-PGPUB 20210133535).
In regards to claim 9, the combined teaching of Batra and Cai as a whole discloses the limitations of claim 7.
The combined teaching of Batra and Cai as a whole does not expressly disclose determining, with the processor, based on the initial feature embedding, a plurality of attended feature embeddings using an encoder of the machine learning model, the encoder having a Transformer-based neural network architecture.
However, Zhao discloses determining, with the processor, based on the initial feature embedding, a plurality of attended feature embeddings using an encoder of the machine learning model, the encoder having a Transformer-based neural network architecture, (see at least: Par. 0026-0028, where the equivalent number of attention matrices “i.e., the plurality of attended feature embeddings” is determined based on the query, key, and value matrices for each set of weight matrices, “i.e., the initial feature embedding”, using an encoder of the transformer model, which is a deep learning model, “the encoder having a Transformer-based neural network architecture”).
Batra, Cai, and Zhao are combinable because they are all concerned with feature detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Batra and Cai to use the transformer model, as taught by Zhao, to calculate an equivalent number of attention matrices using the query, key, and value matrices for each set of weight matrices, (Zhao, Par. 0028).
In regards to claim 10, the combined teaching of Batra, Cai, and Zhao as a whole discloses the limitations of claim 9.
Zhao further discloses wherein the encoder has a plurality of encoding layers, the plurality of encoding layers having a sequential order, each respective encoding layer determining a respective attended feature embedding of the plurality of attended feature embeddings, (see at least: Par. 0026-0028, the encoder includes a set of encoding layers that processes the input iteratively one layer after another, "plurality of encoding layers having a sequential order", and the process undertaken by each self-attention layer includes taking in an input including a list of fixed-length vectors, and splitting the input into a set of query, key, and value matrices, "i.e., each respective encoding layer determining a respective attended feature embedding").
In regards to claim 11, the combined teaching of Batra, Cai, and Zhao as a whole discloses the limitations of claim 10.
Zhao further discloses determining, with the processor, each respective attended feature embedding of the plurality of attended feature embeddings, in a respective encoding layer of the plurality of encoding layers, based on a previous feature embedding, (see at least: Par. 0026, the encoder includes a set of encoding layers that processes the input iteratively one layer after another; and Par. 0028, calculating an equivalent number of attention matrices, “each respective attended feature embedding”, using the query, key, and value matrices for each set of weight matrices, “based on a previous feature embedding”); and
wherein (i) for a first encoding layer of the plurality of encoding layers, the previous feature embedding is the initial feature embedding and (ii) for each encoding layer of the plurality of encoding layers other than the first encoding layer, the previous feature embedding is that which is output by a previous encoding layer of the plurality of encoding layers, (see at least: Par. 0026, the encoder includes a set of encoding layers that processes the input iteratively one layer after another, [i.e., implying that the first layer encodes the initial feature embedding, and that each layer after the first encodes the previous feature output by the preceding layer]).
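For illustration only, the sequential-layer reading of claims 10-11 above reduces to the following minimal sketch; the function name and the representation of layers as callables are assumptions for the sketch:

```python
def run_encoder(layers, initial_embedding):
    """Apply encoding layers in sequential order: the first layer receives
    the initial feature embedding, each later layer receives the previous
    layer's output, and one attended embedding is collected per layer."""
    attended = []
    embedding = initial_embedding
    for layer in layers:
        embedding = layer(embedding)
        attended.append(embedding)
    return attended
```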
In regards to claim 12, the combined teaching of Batra, Cai, and Zhao as a whole discloses the limitations of claim 11.
The combined teaching of Batra, Cai, and Zhao as a whole does not expressly disclose determining, with the processor, a respective attention matrix based on the previous feature embedding; and determining, with the processor, the respective attended feature embedding based on the attention matrix and the previous attended feature embedding.
However, Zhao discloses determining, with the processor, a respective attention matrix based on the previous feature embedding, (see at least: Par. 0029, the output of the attention layer is a matrix, "attention matrix", including a vector for each entry in the input sequence, "the previous feature embedding"); and determining, with the processor, the respective attended feature embedding based on the attention matrix and the previous attended feature embedding, (see at least: Par. 0028-0029, the matrix serves as the input of the feed-forward neural network, where the output of the feed-forward neural network implicitly corresponds to the respective attended feature embedding, [i.e., the output of the feed-forward neural network, "respective attended feature embedding", is implicitly determined based on the attention matrix and the vector entry in the input sequence, "previous attended feature embedding"]).
Batra, Cai, and Zhao are combinable because they are all concerned with feature detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Batra and Cai to use multi-head attention, as taught by Zhao, in order to compute the final attention vector for every entry, (Zhao, Par. 0028).
In regards to claim 13, the combined teaching of Batra, Cai, and Zhao as a whole discloses the limitations of claim 12.
Zhao further discloses that determining the respective attention matrix further comprises: determining, with the processor, a respective multi-head self-attention matrix, (see at least: Par. 0027-0028, passing the vectors into a self-attention layer (multi-head attention) to compute the final attention vector for every entry, implying the determining of a respective multi-head self-attention matrix).
In regards to claim 14, the combined teaching of Batra, Cai, and Zhao as a whole discloses the limitations of claim 12.
Zhao further discloses determining, with the processor, respective Key, Query, and Value matrices based on the previous feature embedding; and determining, with the processor, the respective attention matrix based on the previous feature embedding and the respective Key, Query, and Value matrices, (see at least: Par. 0028, splitting the input into a set of query, key, and value matrices, …and calculating an equivalent number of attention matrices using the query, key, and value matrices for each set of weight matrices).
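For illustration only, the Query/Key/Value reading of claim 14 corresponds to the standard single-head self-attention computation sketched below; the projection-matrix arguments and the scaled-softmax form are assumptions drawn from the general transformer formulation, not from Zhao's specific disclosure:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention: project the input into Query, Key, and
    Value matrices, form the attention matrix via a row-wise softmax of
    the scaled scores, and apply it to the values."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # rows sum to 1
    return attn @ v, attn
```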
In regards to claim 15, the combined teaching of Batra, Cai, and Zhao as a whole discloses the limitations of claim 12.
Zhao further discloses that determining the respective attention matrix further comprises: determining, with the processor, the respective Key, Query, and Value matrices using multi-scale graph convolution, (see at least: Par. 0026, the transformer model is a deep learning model, implying the graph convolution; and Par. 0028, multiplying the input by a set of weight matrices, where each set includes a query weight matrix, a key weight matrix, and a value weight matrix, to determine the respective Key, Query, and Value matrices, the set of weight matrices implying the multi-scale aspect of the transformer model or graph convolution).
In regards to claim 17, the combined teaching of Batra, Cai, and Zhao as a whole discloses the limitations of claim 10.
Batra further discloses determining, with the processor, the reconstructed subset of keypoints based on a final attended feature embedding of the plurality of attended feature embeddings, the final attended feature embedding being output by a final encoding layer of the plurality of encoding layers, (see at least: col. 3, lines 57-59, a network includes an attention mechanism added to the TCN, which implies that the reconstructed subset of keypoints is based on a final attended feature output by a final encoding layer of the plurality of encoding layers, as the attention mechanism added to the TCN implicitly comprises a plurality of encoding layers).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Batra et al., Cai et al., and Zhao et al., as applied to claim 17 above, and further in view of Hwang et al. (US-Patent 12,131,493).
The combined teaching of Batra, Cai, and Zhao as a whole discloses the limitations of claim 17.
Batra further discloses determining, with the processor, the reconstructed subset of keypoints based on the final attended feature embedding using sequence, (see at least: col. 3, lines 57-59, a network includes an attention mechanism added to the TCN, which implies that the reconstructed subset of keypoints is based on a final attended feature output. Further, col. 4, lines 65-67, "the input sequence", implying the use of a sequence).
The combined teaching of Batra, Cai, and Zhao as a whole does not expressly disclose determining the reconstructed subset of keypoints based on the final attended feature embedding using excitation.
Hwang discloses an encoder performing convolution by performing an excitation process for generating first channel information, (see at least: step S210 in Fig. 4; col. 8, lines 17-25; and claim 1).
Batra, Cai, Zhao, and Hwang are combinable because they are all concerned with feature detection. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the combined teaching of Batra, Cai, and Zhao to perform an excitation process, as taught by Hwang, in order to generate channel information, (col. 8, lines 17-25).
Allowable Subject Matter
Claim 16 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
With respect to claim 16, the prior art of record, alone or in reasonable combination, does not teach or suggest the following limitation(s), in consideration of the claim as a whole:
“determining, with the processor, a respective intermediate feature embedding based on the attention matrix and the previous attended feature embedding; and determining, with the processor, the respective attended feature embedding based on the respective intermediate feature embedding using a multi-layer perceptron”
The relevant prior art of record, Zhao et al, (US-PGPUB 20210133535) discloses
determining, with the processor, a respective attention matrix based on the previous feature embedding, (see at least: Par. 0029, the output of the attention layer is a matrix, "attention matrix", including a vector for each entry in the input sequence, "the previous feature embedding"); and determining, with the processor, the respective attended feature embedding based on the attention matrix and the previous attended feature embedding, (see at least: Par. 0028-0029, the matrix serves as the input of the feed-forward neural network, where the output of the feed-forward neural network implicitly corresponds to the respective attended feature embedding); but fails to teach or suggest, either alone or in combination with the other cited references, determining a respective intermediate feature embedding based on the attention matrix and the previous attended feature embedding; and determining the respective attended feature embedding based on the respective intermediate feature embedding using a multi-layer perceptron.
A further prior art of record, Batra et al., discloses a method for human pose
estimation, (see at least: Fig. 1, and col. 1, lines 49-50, “devices, systems, and methods for body pose estimation”), comprising:
obtaining, with a processor, a plurality of keypoints corresponding to a plurality of joints of a human in an image, (see at least: Fig. 1, first stage TCN, plurality of joints of a human body; and col. 3, lines 39-40, where a network estimates two-dimensional body joints in an image);
masking, with the processor, a subset of keypoints in the plurality of keypoints corresponding to occluded joints of the human, (see at least: Fig. 1, first stage TCN, where subset of points corresponding to occluded joints of the human, are masked with rectangular boxes at t=1, t=2, and t=w, on the left side (input) of the first stage TCN 110. Further, col. 6, lines 33-42, “Occluded joints”, where some joints are occluded due to the relative orientation between the human and the camera, and external occlusion caused by other objects in the scene. See also, col. 7, lines 20-21, “masking is performed over the joints”);
determining, with the processor, a reconstructed subset of keypoints by reconstructing the masked subset of keypoints using a machine learning model, (see at least: Fig. 1, and col. 4, lines 40-45, “see the rejection of claim 1 for more details”); and
forming, with the processor, a refined plurality of keypoints based on the plurality of keypoints and the reconstructed subset of keypoints, the refined plurality of keypoints being used by a system to perform a task, (see at least: Fig. 1, col. 2, line 29, and col. 4, lines 45-51, “see the rejection of claim 1 for more details”).
However, Batra fails to teach or suggest, either alone or in combination with the other cited references, determining a respective intermediate feature embedding based on the attention matrix and the previous attended feature embedding; and determining the respective attended feature embedding based on the respective intermediate feature embedding using a multi-layer perceptron.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMARA ABDI whose telephone number is (571)272-0273. The examiner can normally be reached 9:00am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMARA ABDI/Primary Examiner, Art Unit 2668 01/16/2026