Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/21/2026 has been entered.
Response to Arguments
Claim Objections
Certain claim objections are raised and maintained as set forth in the Detailed Action below.
35 U.S.C. 112 Rejection
The rejection of claim 17 under 35 U.S.C. 112 is maintained for the reasons discussed in the Detailed Action below.
35 U.S.C. 103 Rejection
Applicant’s arguments with respect to the claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
DETAILED ACTION
Claim Objections
Claims 1, 3, 15 and 17-19 are objected to because of the following informalities:
For claim 1, Examiner believes this claim should be amended in the following manner:
A method comprising, by a computing system:
accessing a set of depth measurements and amplitude signals of a brightness image of a scene generated using a depth sensor of an artificial reality device, the amplitude signals representative of brightness values of pixels of the brightness image;
generating, based on the brightness image, a plurality of segmentation masks respectively associated with a plurality of object types;
segmenting, using the plurality of segmentation masks, the set of depth measurements into subsets of depth measurements respectively associated with the plurality of object types;
determining, for each object type of the plurality of object types, at least one three-dimensional (3D) model that best fits a subset of the subsets of depth measurements;
refining, using 3D models determined for the plurality of object types, the subsets of depth measurements;
generating, using the subsets of [[the]] depth measurements that are refined, K-depth meshes for mixed reality rendering; and
using refined depth measurements for mixed reality rendering.
For claim 3, Examiner believes this claim should be amended in the following manner:
The method of Claim 1, wherein determining, for each object type of the plurality of object types, the at least one 3D model of the 3D models that best fits [[a]] the subset of depth measurements corresponding to the object type of the plurality of object types comprises:
determining parameters of the object type at a current time instance;
selecting a particular 3D model from a plurality of 3D models that corresponds to the object type; and
generating the at least one 3D model for the object type by optimizing the particular 3D model according to the parameters of the object type at the current time instance and such that the generated at least one 3D model best fits the subset of depth measurements corresponding to the object type.
For claim 15, Examiner believes this claim should be amended in the following manner:
One or more computer-readable non-transitory storage media embodying software that is operable when executed by one or more processors to:
access a set of depth measurements and amplitude signals of a brightness image of a scene generated using a depth sensor of an artificial reality device, the amplitude signals representative of brightness values of pixels of the brightness image;
generate, based on the brightness image, a plurality of segmentation masks respectively associated with a plurality of object types;
segment, using the plurality of segmentation masks, the set of depth measurements into subsets of depth measurements respectively associated with the plurality of object types;
determine, for each object type of the plurality of object types, at least one three-dimensional (3D) model that best fits a subset of the subsets of depth measurements;
refine, using 3D models determined for the plurality of object types, the subsets of depth measurements;
generate, using the subsets of [[the]] depth measurements that are refined, K-depth meshes for mixed reality rendering; and
use refined depth measurements for mixed reality rendering.
For claim 17, Examiner believes this claim should be amended in the following manner:
The one or more computer-readable non-transitory storage media of Claim 15, wherein to determine, for each object type of the plurality of object types, the at least one 3D model of the 3D models that best fits the subset of depth measurements corresponding to the object type, the software is further operable when executed to:
determine parameters of the object type at a current time instance;
select a particular 3D model from a plurality of 3D models that corresponds to the object type; and
generate the at least one 3D model for the object type by optimizing the particular 3D model according to the parameters of the object type at the current time instance and such that the generated at least one 3D model best fits the subset of depth measurements corresponding to the object type.
For claim 18, Examiner believes this claim should be amended in the following manner:
An artificial reality device comprising:
one or more sensors;
one or more processors; and
one or more computer-readable non-transitory storage media coupled to the one or more processors and comprising instructions operable when executed by the one or more processors to cause the artificial reality device to:
access a set of depth measurements and amplitude signals of a brightness image of a scene generated using a depth sensor of the artificial reality device, the amplitude signals representative of brightness values of pixels of the brightness image;
generate, based on the brightness image, a plurality of segmentation masks respectively associated with a plurality of object types;
segment, using the plurality of segmentation masks, the set of depth measurements into subsets of depth measurements respectively associated with the plurality of object types;
determine, for each object type of the plurality of object types, at least one three-dimensional (3D) model that best fits a subset of the subsets of depth measurements;
refine, using 3D models determined for the plurality of object types, the subsets of depth measurements;
generate, using the subsets of [[the]] depth measurements that are refined, K-depth meshes for mixed reality rendering; and
use refined depth measurements for mixed reality rendering.
For claim 19, Examiner believes this claim should be amended in the following manner:
The artificial reality device of Claim 18, wherein to refine the subsets of depth measurements using the 3D models determined for the plurality of object types, the instructions are further operable when executed by the one or more processors to cause the artificial reality device to:
replace, for each object type, the subset of depth measurements corresponding to the object type with depth information represented by the at least one 3D model associated with the object type, wherein the depth information represented by the at least one 3D model associated with the object type is relatively more accurate than the subset of depth measurements corresponding to the object type.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 17 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
For dependent claim 17, parent claim 15 establishes a first “at least one 3D model” and claim 17 goes on to establish a second “at least one 3D model”. Claim 17 then recites the phrase “the at least one 3D model”, and it is unclear which of the previously established first “at least one 3D model” and second “at least one 3D model” is referenced by that phrase. Examiner has suggested amendments in the claim objections discussed above to resolve the ambiguity.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1, 2, 7, 8, 12, 15, 16, 18 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wantland et al. (U.S. Patent Application Publication 2021/0042950 A1, hereinafter “Wantland”) in view of Pugh et al. (U.S. Patent Application Publication 2021/0142497 A1, hereinafter “Pugh”), Schmidt (U.S. Patent Application Publication 2018/0211398 A1) and Powers et al. (U.S. Patent Application Publication 2019/0318547 A1, hereinafter “Powers”).
For claim 1, Wantland discloses a method comprising, by a computing system (disclosing a method implemented by a computing device (page 1/par. 3)): accessing a set of depth measurements and an image of a scene generated using one or more sensors of an artificial reality device (disclosing a mobile device as an artificial reality device for presenting augmented reality with a camera as a sensor for generating an image of a scene and a depth map as a set of depth measurements (page 2/par. 24; and page 7/par. 79)); generating, based on the image, a plurality of segmentation masks respectively associated with a plurality of object types (disclosing generation, based on the image, of segmentation masks associated with object types where each segmentation mask identifies pixels in the image corresponding to an object type associated with that segmentation mask (page 1/par. 19; page 4/par. 48; and page 8/par. 91)); and segmenting, using the plurality of segmentation masks, the set of depth measurements into subsets of depth measurements respectively associated with the plurality of object types (disclosing the segmentation masks segment the depth map into portions as subsets of the depth map where each portion corresponds to an object type of the object types (page 7/par. 81; and page 8/par. 91)).
Wantland does not disclose determining, for each object type of a plurality of object types, at least one three-dimensional (3D) model that best fits a subset of depth measurements; refining, using 3D models determined for the plurality of object types, the subsets of depth measurements; and using refined depth measurements for mixed reality rendering.
However, these limitations are well-known in the art as disclosed in Pugh.
Pugh similarly discloses a system and method for generating segmentation masks associated with object classes as object types for the presentation of mixed reality (page 1/par. 21; and page 2/par. 28). Pugh discloses determining, for each object class, a 3D CAD model that best fits a portion of a depth map corresponding to the object class (page 2/par. 28; page 9/par. 97; and page 18/par. 195). Pugh discloses the 3D CAD models determined for the object classes refine the portions of the depth maps for the object classes (page 2/par. 28; page 8/par. 93; page 18/par. 195 and 205). Pugh explains the refined depth maps are used to render the mixed reality (page 2/par. 28; and page 18/par. 195 and 205). It follows Wantland may be accordingly modified with the teachings of Pugh to determine 3D models that best fit its subsets of depth measurements corresponding to its plurality of object types to refine its subsets of depth measurements for mixed reality rendering.
A person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention would find it obvious to modify Wantland with the teachings of Pugh. Pugh is analogous art in dealing with a system and method for generating segmentation masks associated with object classes as object types for the presentation of mixed reality (page 1/par. 21; and page 2/par. 28). Pugh discloses its use of 3D models is advantageous in replacing real objects in an image with corresponding virtual objects to appropriately render mixed reality (page 18/par. 195 and 205). Consequently, a PHOSITA would incorporate the teachings of Pugh into Wantland for replacing real objects in an image with corresponding virtual objects to appropriately render mixed reality.
Wantland as modified by Pugh does not specifically disclose a depth sensor for accessing a set of depth measurements and amplitude signals of an image as a brightness image, the amplitude signals representative of brightness values of pixels of the brightness image.
However, these limitations are well-known in the art as disclosed in Schmidt.
Schmidt similarly discloses a system and method for capturing images of a scene for the presentation of artificial reality (page 1/par. 1-2 and 17). Schmidt explains its system implements a depth sensor for accessing a set of depth measurements and amplitude signals of an image as a brightness image where the amplitude signals represent brightness values of pixels of the brightness image (page 1/par. 18; page 11/par. 99; and page 12/par. 102). It follows Wantland and Pugh may be accordingly modified with the teachings of Schmidt to implement a depth sensor for accessing its set of depth measurements and amplitude signals of its image as a brightness image where the amplitude signals represent brightness values of pixels of the brightness image.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Wantland and Pugh with the teachings of Schmidt. Schmidt is analogous art in dealing with a system and method for capturing images of a scene for the presentation of artificial reality (page 1/par. 1-2 and 17). Schmidt discloses its use of a depth sensor is advantageous for appropriately determining a depth of spatial features of a scene to appropriately present artificial reality (page 11/par. 99; and page 12/par. 102). Consequently, a PHOSITA would incorporate the teachings of Schmidt into Wantland and Pugh for appropriately determining a depth of spatial features of a scene to appropriately present artificial reality.
Wantland as modified by Pugh and Schmidt does not disclose generating K-depth meshes for mixed reality rendering.
However, these limitations are well-known in the art as disclosed in Powers. Powers similarly discloses a system and method for presenting mixed reality (par. 2). Powers explains its system renders a model of a virtual scene to present the mixed reality where the model may be represented with meshes integrated with a K-depth manifold to generate K-depth meshes (par. 30, 35-36 and 135). It follows Wantland, Pugh and Schmidt may be accordingly modified with the teachings of Powers to generate K-depth meshes for mixed reality rendering using its subsets of depth measurements that are refined.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Wantland, Pugh and Schmidt with the teachings of Powers. Powers is analogous art in dealing with a system and method for presenting mixed reality (par. 2). Powers discloses its use of K-depth manifolds is advantageous for appropriately rendering objects in mixed reality with respect to occlusions (par. 135). Consequently, a PHOSITA would incorporate the teachings of Powers into Wantland, Pugh and Schmidt for appropriately rendering objects in mixed reality with respect to occlusions. Therefore, claim 1 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
For claim 2, depending on claim 1, Wantland as modified by Pugh, Schmidt and Powers discloses wherein refining the subsets of depth measurements using the 3D models determined for the plurality of object types comprises: replacing, for each object type of the plurality of object types, the subset of depth measurements corresponding to the object type with depth information represented by the at least one 3D model of the 3D models associated with the object type, wherein the depth information represented by the at least one 3D model associated with the object type is relatively more accurate than the subset of depth measurements corresponding to the object type (Pugh similarly discloses a system and method for generating segmentation masks associated with object classes as object types for the presentation of mixed reality (page 1/par. 21; and page 2/par. 28); Pugh discloses determining, for each object class, a 3D CAD model that best fits a portion of a depth map corresponding to the object class (page 2/par. 28; page 9/par. 97; and page 18/par. 195); Pugh discloses the 3D CAD models determined for the object classes refine the portions of the depth maps for the object classes (page 2/par. 28; page 8/par. 93; page 18/par. 195 and 205); Pugh explains the 3D CAD models refine the depth maps by replacing the portions of the depth maps with depth information represented by the 3D CAD models, where that depth information is more accurate than the replaced portions of the depth maps associated with the object classes (pages 11-12/par. 131; and page 18/par. 195 and 205); and it follows Wantland may be accordingly modified with the teachings of Pugh to determine 3D models that best fit its subsets of depth measurements corresponding to its plurality of object types to refine its subsets of depth measurements for mixed reality rendering and to replace its subsets of depth measurements with depth information represented by the 3D models to improve accuracy of its depth measurements).
For claim 7, depending on claim 1, Wantland as modified by Pugh, Schmidt and Powers discloses wherein the mixed reality rendering comprises one or more of: passthrough rendering; occlusion detection or rendering; or light rendering (Pugh similarly discloses a system and method for generating segmentation masks associated with object classes as object types for the presentation of mixed reality (page 1/par. 21; and page 2/par. 28); Pugh discloses determining, for each object class, a 3D CAD model that best fits a portion of a depth map corresponding to the object class (page 2/par. 28; page 9/par. 97; and page 18/par. 195); Pugh discloses the 3D CAD models determined for the object classes refine the portions of the depth maps for the object classes (page 2/par. 28; page 8/par. 93; page 18/par. 195 and 205); Pugh explains the refined depth maps are used to render the mixed reality with occlusion rendering (page 2/par. 28; page 15/par. 166; and page 18/par. 195 and 205); and it follows Wantland may be accordingly modified with the teachings of Pugh to determine 3D models that best fit its subsets of depth measurements corresponding to its plurality of object types to refine its subsets of depth measurements for mixed reality rendering).
For claim 8, depending on claim 1, Wantland as modified by Pugh, Schmidt and Powers discloses wherein the object type is at least one of planes, people, or static objects in the scene observed over a period of time (Wantland discloses the object type corresponds to static objects in the scene observed over a period of time while its mobile device is in use in real time, running at a frame rate of, e.g., 30 fps (page 2/par. 21; and page 8/par. 91)).
For claim 12, depending on claim 1, Wantland as modified by Pugh, Schmidt and Powers discloses wherein the segmentation masks are generated using a machine learning (ML) based segmentation model (Wantland discloses the segmentation masks are generated using a machine learning segmentation model (page 2/par. 20; and page 9/par. 107)).
For claim 15, Wantland as modified by Pugh, Schmidt and Powers discloses one or more computer-readable non-transitory storage media embodying software that is operable when executed by one or more processors (Wantland discloses memory for storing software for execution by a processor to perform the functions of a computing system (page 9/par. 106-107; and page 10/par. 114-115)) to perform the method of claim 1 (see above as to claim 1).
For claim 16, depending on claim 15, this claim is a combination of the limitations of claim 15 and claim 2. It follows claim 16 is rejected for the same reasons as to claim 15 and claim 2.
For claim 18, Wantland as modified by Pugh, Schmidt and Powers discloses an artificial reality device (Wantland discloses a mobile device as an artificial reality device for presenting augmented reality (page 2/par. 24; and page 7/par. 79)) comprising: one or more sensors (Wantland discloses a camera (page 2/par. 24)); one or more processors (Wantland discloses a processor (page 9/par. 106-107)); and one or more computer-readable non-transitory storage media coupled to the one or more processors and comprising instructions operable when executed by the one or more processors to cause the artificial reality device (Wantland discloses memory coupled to the processor where the memory stores instructions for execution by the processor to perform the functions of the mobile device (page 9/par. 106-107; and page 10/par. 114-115)) to perform the method of claim 1 (see above as to claim 1).
For claim 19, depending on claim 18, this claim is a combination of the limitations of claim 18 and claim 2. It follows claim 19 is rejected for the same reasons as to claim 18 and claim 2.
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wantland in view of Pugh, Schmidt and Powers further in view of Choi et al. (U.S. Patent Application Publication 2013/0201188 A1, hereinafter “Choi”).
For claim 9, depending on claim 1, Wantland as modified by Pugh, Schmidt and Powers discloses wherein the 3D models that are determined for the plurality of object types are generated by one or more components associated with the plurality of object types, and wherein the one or more components generate the 3D models based on tracking object geometry of the plurality of object types over a period of time (Pugh similarly discloses a system and method for generating segmentation masks associated with object classes as object types for the presentation of mixed reality (page 1/par. 21; and page 2/par. 28); Pugh discloses determining, for each object class, a 3D CAD model that best fits a portion of a depth map corresponding to the object class (page 2/par. 28; page 9/par. 97; and page 18/par. 195); Pugh discloses the 3D CAD models determined for the object classes refine the portions of the depth maps for the object classes (page 2/par. 28; page 8/par. 93; page 18/par. 195 and 205); Pugh explains the 3D models are generated by processes as components associated with the object classes where the 3D models are generated based on tracking object geometry of the object classes over a period of time (page 1/par. 20; page 4/par. 43 and 45-46; pages 15-16/par. 172; and page 18/par. 195); and it follows Wantland may be accordingly modified with the teachings of Pugh to determine 3D models that best fit its subsets of depth measurements corresponding to its plurality of object types to refine its subsets of depth measurements for mixed reality rendering).
Wantland as modified by Pugh, Schmidt and Powers does not disclose pre-generated models.
However, these limitations are well-known in the art as disclosed in Choi.
Choi similarly discloses a system and method for combining real images and virtual images for presenting a mixed reality (page 2/par. 15). Choi explains it is known to pre-generate virtual model data to obtain virtual models as pre-generated models (page 2/par. 15). It follows Wantland, Pugh, Schmidt and Powers may be accordingly modified with the teachings of Choi to implement its 3D models as pre-generated models.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Wantland, Pugh, Schmidt and Powers with the teachings of Choi. Choi is analogous art in dealing with a system and method for combining real images and virtual images for presenting a mixed reality (page 2/par. 15). Choi discloses its use of pre-generated models is advantageous in reducing time of processing a virtual space for image generation (page 1/par. 8; and page 2/par. 15). Consequently, a PHOSITA would incorporate the teachings of Choi into Wantland, Pugh, Schmidt and Powers for reducing time of processing a virtual space for image generation. Therefore, claim 9 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
Claim(s) 10, 11 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wantland in view of Pugh, Schmidt and Powers further in view of Zhou et al. (U.S. Patent Application Publication 2020/0357108 A1, hereinafter “Zhou”).
For claim 10, depending on claim 1, Wantland as modified by Pugh, Schmidt and Powers discloses receiving subsequent depth measurements of the scene captured over a period of time (Wantland discloses the system determines subsequent depth maps of the scene corresponding to subsequent images of the scene captured by the camera over a period of time while its mobile device is in use in real time, running at a frame rate of, e.g., 30 fps (page 2/par. 21 and 24)).
Wantland as modified by Pugh, Schmidt and Powers does not disclose using a stabilization filter to stabilize depth measurements.
However, these limitations are well-known in the art as disclosed in Zhou. Zhou similarly discloses a system and method for detecting objects in an image (page 1/par. 6). Zhou explains its system implements a Kalman filter as a stabilization filter to stabilize depth measurements (page 12/par. 176). It follows Wantland, Pugh, Schmidt and Powers may be accordingly modified with the teachings of Zhou to implement a Kalman filter as a stabilization filter to stabilize its subsequent depth measurements captured over its period of time.
A PHOSITA before the effective filing date of the claimed invention would find it obvious to modify Wantland, Pugh, Schmidt and Powers with the teachings of Zhou. Zhou is analogous art in dealing with a system and method for detecting objects in an image (page 1/par. 6). Zhou discloses its use of a Kalman filter is advantageous in stabilizing depth measurements to improve object detection in an image (page 12/par. 176-177). Consequently, a PHOSITA would incorporate the teachings of Zhou into Wantland, Pugh, Schmidt and Powers for stabilizing depth measurements to improve object detection in an image. Therefore, claim 10 is rendered obvious to a PHOSITA before the effective filing date of the claimed invention.
For claim 11, depending on claim 10, Wantland as modified by Pugh, Schmidt, Powers and Zhou discloses wherein the stabilization filter is a Kalman filter (Zhou similarly discloses a system and method for detecting objects in an image (page 1/par. 6); Zhou explains its system implements a Kalman filter as a stabilization filter to stabilize depth measurements (page 12/par. 176); and it follows Wantland, Pugh, Schmidt and Powers may be accordingly modified with the teachings of Zhou to implement a Kalman filter as a stabilization filter to stabilize its subsequent depth measurements captured over its period of time).
For claim 13, depending on claim 1, Wantland as modified by Pugh, Schmidt, Powers and Zhou discloses wherein the one or more sensors comprise a time-of-flight sensor, and the image is an output of the time-of-flight sensor (Wantland discloses a time-of-flight sensor (page 2/par. 28); Zhou similarly discloses a system and method for detecting objects in an image (page 1/par. 6); Zhou explains it is known for a time-of-flight sensor to obtain and output an image (page 3/par. 49); and it follows Wantland, Pugh, Schmidt and Powers may be accordingly modified with the teachings of Zhou to output its image through its time-of-flight sensor to appropriately determine its set of depth measurements).
Allowable Subject Matter
Claim 17 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action, to include all of the limitations of the base claim and any intervening claims, and to address any claim objections raised above in the Detailed Action.
Claims 3-6 and 20 would be allowable if rewritten to include all of the limitations of the base claim and any intervening claims and to address any claim objections raised above in the Detailed Action.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES TSENG whose telephone number is (571)270-3857. The examiner can normally be reached 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLES TSENG/ Primary Examiner, Art Unit 2613