Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-4 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Satavani (US 20250225728) in view of Steinbrucker (Volumetric 3D mapping in real-time on a CPU).
Note: The contents of Satavani cited are present in the provisional application 63/618,609 filed January 8, 2024.
Regarding claim 1, Satavani teaches an apparatus for three-dimensional reconstruction (3DR) of a scene, the apparatus comprising:
At least one memory (Paragraph 20, a memory coupled to the processor);
At least one processor coupled to the at least one memory (Paragraph 20, a memory coupled to the processor) and configured to:
Compare the TSDF value to a previous TSDF value to estimate a vertex difference (Paragraph 26, the geometric mesh is only reconstructed in the event the change between point values of different frames differs, for example, by more than a configured threshold value);
Determine, based on a comparison between the vertex difference and a threshold, whether to generate a mesh based on the TSDF value (Paragraph 26, the geometric mesh is only reconstructed in the event the change between point values of different frames differs, for example, by more than a configured threshold value).
While Satavani fails to disclose the following, Steinbrucker teaches:
Select a plurality of voxel blocks for the scene based on depth data and pose data, wherein the pose data is indicative of a perspective of the depth data (I. Introduction, we assume that the camera poses are known; III. Multi-resolution Data Fusion in an Octree; A single voxel stores the truncated signed distance, the weight, and the color);
Generate a truncated signed distance function (TSDF) value based on the depth data, wherein the TSDF value corresponds to at least one voxel in the plurality of voxel blocks (III. Multi-resolution Data Fusion in an Octree; A single voxel stores the truncated signed distance, the weight, and the color);
Steinbrucker and Satavani are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Satavani by using Steinbrucker and selecting voxel blocks based on depth data and pose data and generating a TSDF value for a voxel. Doing so would allow for using a known method of identifying and storing information about the object to be reconstructed.
Method claim 20 corresponds to apparatus claim 1. Therefore, claim 20 is rejected for the same rationale as above.
Regarding claim 3, the combination of Satavani and Steinbrucker teaches the apparatus of claim 1, wherein the at least one processor is configured to: maintain a previous mesh of the scene in at least one memory without generating the mesh based on the TSDF value in response to the comparison indicating that the vertex difference is less than the threshold, wherein the previous TSDF value is associated with the previous mesh (Satavani, Paragraph 26, the geometric mesh is only reconstructed in the event the change between point values of different frames differs, for example, by more than a configured threshold value).
Regarding claim 4, the combination of Satavani and Steinbrucker teaches the apparatus of claim 1, wherein the at least one processor is configured to: generate the mesh based on the TSDF value in response to the comparison indicating that the vertex difference is greater than the threshold (Satavani, Paragraph 26, the geometric mesh is only reconstructed in the event the change between point values of different frames differs, for example, by more than a configured threshold value);
While the combination as previously presented fails to disclose the following, Steinbrucker further teaches:
Write the mesh into the at least one memory (Fig. 1 caption: We fuse the input images into a multi-resolution octree data structure that stores both the SDF and the triangle mesh).
Steinbrucker and Satavani are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Satavani by using Steinbrucker and writing the mesh into memory. Doing so would allow for storing and easily accessing the previous mesh for updating.
Claims 2, 7-8 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Satavani in view of Steinbrucker as applied to claims 1, 3-4 and 20 and further in view of Xiong (US 20230140170).
Regarding claim 2, the combination of Satavani and Steinbrucker teaches the apparatus of claim 1. While the combination fails to disclose the following, Xiong teaches:
Wherein the previous TSDF value is based on previous depth data and previous pose data, and wherein the previous TSDF value is associated with a previous mesh of the scene (Paragraph 42, when the operation 243 indicates that the difference between the present and past image frames exceeds a similarity threshold (such as due to activity in the scene or a change of pose), a feature mapping stage 230, a disparity mapping stage 260, a depth mapping stage 280, and a three-dimensional reconstruction stage 290 may be performed). Note: Satavani and Steinbrucker teach TSDF, and Xiong teaches determining previous depth and previous pose.
Xiong and the combination of Satavani and Steinbrucker are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani and Steinbrucker by using Xiong and using previous depth data and previous pose data to determine the previous TSDF value. Doing so would allow for easily calculating the previous TSDF value from already stored information.
Regarding claim 7, the combination of Satavani and Steinbrucker teaches the apparatus of claim 1. While the combination fails to disclose the following, Xiong teaches:
Wherein, to compare the TSDF value to the previous TSDF value to identify the vertex difference, the at least one processor is configured to process the TSDF value and the previous TSDF value using a trained machine learning model to identify the vertex difference (Paragraph 43, generating the disparity map using a volumetric neural network technique). Note: Satavani and Steinbrucker teach TSDF, and Xiong teaches determining previous depth and previous pose and using a neural network (machine learning) to evaluate the difference.
Xiong and the combination of Satavani and Steinbrucker are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani and Steinbrucker by using Xiong and using machine learning to identify the difference between the TSDF value and the previous TSDF value. Doing so would allow for using a known way to calculate the difference between the current and previous depth value.
Regarding claim 8, the combination of Satavani, Steinbrucker, and Xiong teaches the apparatus of claim 7. While the combination as presented previously fails to disclose the following, Xiong further teaches:
Generate the mesh based on the TSDF value in response to the comparison indicating that the vertex difference is greater than the threshold (Paragraph 42, when the operation 243 indicates that the difference between the present and past image frames exceeds a similarity threshold (such as due to activity in the scene or a change of pose), a feature mapping stage 230, a disparity mapping stage 260, a depth mapping stage 280, and a three-dimensional reconstruction stage 290 may be performed; Paragraph 47, Examples of three-dimensional reconstructions according to some embodiments of this disclosure include dense surface meshes and incremental meshes);
Determine an actual vertex difference between the mesh and a previous mesh, wherein the previous TSDF value is associated with the previous mesh (Paragraph 78, In some embodiments, generation of a three-dimensional scene reconstruction includes computing a TSDF function of the dense depth map to convert an atomized point cloud of depth data to voxels, or regions associated with depth values, and further refining the voxel representation of the real-world scene as a three-dimensional mesh);
Update the trained machine learning model based on a comparison between the actual vertex difference and the vertex difference (Paragraph 43, generating the disparity map using a volumetric neural network technique).
Xiong and the combination of Satavani and Steinbrucker are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani and Steinbrucker by using Xiong and regenerating the mesh based on the change in TSDF value, determining an actual vertex difference, and updating the machine learning model based on the actual vertex difference. Doing so would allow for using a known way to regenerate the mesh representation based on a change in depth value.
Regarding claim 13, the combination of Satavani and Steinbrucker teaches the apparatus of claim 1. While the combination fails to disclose the following, Xiong teaches:
Wherein the depth data includes a depth map that maps depth values to pixels in an image of the scene (Paragraph 49, where the image data shows pixels of image data in the neighborhood of the existing depth point having similar values in one or more channels of a color space… the contours of the depth map).
Xiong and the combination of Satavani and Steinbrucker are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani and Steinbrucker by using Xiong and mapping depth values to pixels in a depth map. Doing so would allow for understanding the depth of objects in 2D images used to create the 3D reconstruction.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Satavani in view of Steinbrucker as applied to claims 1, 3-4 and 20 and further in view of Ilola (US 20200294271).
Regarding claim 5, the combination of Satavani and Steinbrucker teaches the apparatus of claim 1. While the combination fails to disclose the following, Ilola teaches:
Wherein, to compare the TSDF value to the previous TSDF value to identify the vertex difference, the at least one processor is configured to apply a scaling factor to a difference between the TSDF value and the previous TSDF value to estimate the vertex difference (Paragraph 48, compare the depth values associated with corresponding pixels of the different depth planes in order to identify those pixels for which the difference in depth values satisfies the predefined threshold, such as by exceeding the predefined threshold. In one example embodiment, the differences in the depth values are scaled so as to be represented by 0, 1, 2 or 3 with a difference of 0 representing no difference in depth values between the different depth planes, a difference of 1 representing a rounding error between the original volumetric video data point and the projection thereof, and difference values of 2 and 3 representing more significant differences in depth values that satisfy the predefined threshold). Note: Satavani and Steinbrucker teach TSDF and Ilola teaches using a scaling factor to determine a depth difference.
Ilola and the combination of Satavani and Steinbrucker are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani and Steinbrucker by using Ilola and using a scaling factor to determine a difference in depth. Doing so would allow for easily determining if the change in depth exceeds a predefined threshold.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Satavani in view of Steinbrucker as applied to claims 1, 3-4 and 20 and further in view of Petrovskaya (US 20190080516).
Regarding claim 6, the combination of Satavani and Steinbrucker teaches the apparatus of claim 1. While the combination fails to disclose the following, Petrovskaya teaches:
Wherein, to compare the TSDF value to the previous TSDF value to identify the vertex difference, the at least one processor is configured to apply a linear regression model to a difference between the TSDF value and the previous TSDF value to estimate the vertex difference (Paragraph 340, The system may now construct a large training data set consisting of (d.sub.pix, τ.sub.pix) pairs by recording many different positions of the checker board at different distances away from the camera. Using this data set, for each pixel, the system may run a linear regression to learn parameters; Paragraph 139, a vertex representation may be both the intermediate and final representation. Where the intermediate representation is a TSDF representation, e.g., a final polygonal mesh representation may be created using the marching cubes algorithm).
Petrovskaya and the combination of Satavani and Steinbrucker are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani and Steinbrucker by using Petrovskaya and using linear regression to determine a difference in depth. Doing so would allow for using a known way to evaluate the change in depth.
Claims 9-12 are rejected under 35 U.S.C. 103 as being unpatentable over Satavani in view of Steinbrucker and further in view of Xiong as applied to claims 2, 7-8 and 13 and further in view of Claessen (US 20210082184).
Regarding claim 9, the combination of Satavani, Steinbrucker, and Xiong teaches the apparatus of claim 8. While the combination fails to disclose the following, Claessen teaches:
Wherein the trained machine learning model includes at least a first layer and second layer, wherein the first layer is configured to categorize the at least one voxel into one of a plurality of predetermined voxel configurations to identify a predicted arrangement of at least one surface in the mesh, and wherein the second layer is configured to compare the predicted arrangement of the at least one surface in the mesh to a previous mesh (Paragraph 98, The performance of the trained 3D deep learning network may be validated through the comparison of voxel representation of the predicted root 212 or full tooth 214 and the original (real-world) 3D image data 202, as illustrated below with reference to FIG. 6. In the case where inputs and outputs (target labels) of deep neural network 210 are strictly separated into a root section and a crown section, this validation may be done through comparison of the 3D image data of predicted root and the matching part of the original (real-world) 3D image data of the root).
Claessen and the combination of Satavani, Steinbrucker, and Xiong are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani, Steinbrucker, and Xiong by using Claessen and using a machine learning model to predict a voxel in a mesh and comparing the predicted surface mesh to another mesh. Doing so would allow for using a known way to model the 3D reconstruction and determine accuracy.
Regarding claim 10, the combination of Satavani, Steinbrucker, Xiong, and Claessen teaches the apparatus of claim 9. While the combination as previously presented fails to disclose the following, Claessen further teaches:
Wherein the first layer is one of a set of convolutional neural network (CNN) layers of the trained machine learning model (Paragraph 38, plurality of first 3D convolutional layers are configured to process a first block of voxels from the first voxel representation and wherein the at least one fully connected layer is configured to classify voxels of the first block of voxels into at least one of jaw, teeth and/or nerve voxels).
Claessen and the combination of Satavani, Steinbrucker, and Xiong are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani, Steinbrucker, and Xiong by using Claessen and using convolutional layers to classify voxels. Doing so would allow for using a known way to identify objects in 3D space.
Regarding claim 11, the combination of Satavani, Steinbrucker, Xiong, and Claessen teaches the apparatus of claim 9. While the combination as previously presented fails to disclose the following, Claessen further teaches:
Wherein the second layer is one of a set of convolutional neural network (CNN) layers of the trained machine learning model (Paragraph 38, plurality of first 3D convolutional layers are configured to process a first block of voxels from the first voxel representation and wherein the at least one fully connected layer is configured to classify voxels of the first block of voxels into at least one of jaw, teeth and/or nerve voxels; Paragraph 98, The performance of the trained 3D deep learning network may be validated through the comparison of voxel representation of the predicted root 212 or full tooth 214 and the original (real-world) 3D image data 202, as illustrated below with reference to FIG. 6. In the case where inputs and outputs (target labels) of deep neural network 210 are strictly separated into a root section and a crown section, this validation may be done through comparison of the 3D image data of predicted root and the matching part of the original (real-world) 3D image data of the root).
Claessen and the combination of Satavani, Steinbrucker, and Xiong are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani, Steinbrucker, and Xiong by using Claessen and using convolutional layers to compare the predicted mesh to another mesh. Doing so would allow for using a known way to verify predictions of 3D meshes.
Regarding claim 12, the combination of Satavani, Steinbrucker, Xiong, and Claessen teaches the apparatus of claim 9. While the combination as previously presented fails to disclose the following, Claessen further teaches:
Wherein the second layer is one of a set of fully connected (FC) layers of the trained machine learning model (Paragraph 38, plurality of first 3D convolutional layers are configured to process a first block of voxels from the first voxel representation and wherein the at least one fully connected layer is configured to classify voxels of the first block of voxels into at least one of jaw, teeth and/or nerve voxels; Paragraph 98, The performance of the trained 3D deep learning network may be validated through the comparison of voxel representation of the predicted root 212 or full tooth 214 and the original (real-world) 3D image data 202, as illustrated below with reference to FIG. 6. In the case where inputs and outputs (target labels) of deep neural network 210 are strictly separated into a root section and a crown section, this validation may be done through comparison of the 3D image data of predicted root and the matching part of the original (real-world) 3D image data of the root).
Claessen and the combination of Satavani, Steinbrucker, and Xiong are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani, Steinbrucker, and Xiong by using Claessen and using fully connected layers. Doing so would allow for using a known way to verify predictions of 3D meshes.
Claims 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Satavani in view of Steinbrucker as applied to claims 1, 3-4 and 20 and further in view of Schoenberg (US 20160364907).
Regarding claim 14, the combination of Satavani and Steinbrucker teaches the apparatus of claim 1. While the combination fails to disclose the following, Schoenberg teaches:
Wherein, to generate the TSDF value based on the depth data, the at least one processor is configured to generate the TSDF value based on the depth data and the previous TSDF value (Paragraph 50, As additional depth information is received at the depth camera(s), new SDF values (which may be normalized and/or truncated) are assigned to each voxel. The new SDF values may then be combined with any previous SDF value stored at each respective voxel. The new SDF value may be combined with one or more previous SDF values by averaging).
Schoenberg and the combination of Satavani and Steinbrucker are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani and Steinbrucker by using Schoenberg and generating a new TSDF value based on depth data and a previous TSDF value. Doing so would allow for using a known way to evaluate the change in depth.
Regarding claim 15, the combination of Satavani and Steinbrucker teaches the apparatus of claim 1. While the combination fails to disclose the following, Schoenberg teaches:
Wherein, to generate the TSDF value based on the depth data, the at least one processor is configured to generate the TSDF value based on the depth data and the pose data (Paragraph 33, Depth information for an environment may include a depth image and a 6DOF pose estimate indicating the location and orientation of the depth camera when the depth image was captured; Paragraph 50, As additional depth information is received at the depth camera(s), new SDF values (which may be normalized and/or truncated) are assigned to each voxel. The new SDF values may then be combined with any previous SDF value stored at each respective voxel. The new SDF value may be combined with one or more previous SDF values by averaging).
Schoenberg and the combination of Satavani and Steinbrucker are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani and Steinbrucker by using Schoenberg and generating a new TSDF value based on depth data which includes pose data. Doing so would allow for using a known way to evaluate the change in depth while taking pose into account.
Regarding claim 16, the combination of Satavani and Steinbrucker teaches the apparatus of claim 1. While the combination fails to disclose the following, Schoenberg teaches:
Wherein, to generate the TSDF value based on the depth data, the at least one processor is configured to generate the TSDF value based on the depth data and a previous weight volume associated with the previous TSDF value (Paragraph 50, the average can be a weighted average that uses a weighting function relating to the distance of the associated voxel from the depth camera. The averaged SDF values can then be stored at the current voxel. In an alternative example, two values can be stored at each voxel. A weighted sum of the SDF values can be calculated and stored, and also a sum of the weights calculated and stored).
Schoenberg and the combination of Satavani and Steinbrucker are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani and Steinbrucker by using Schoenberg and generating a new TSDF value based on depth data with weighting. Doing so would allow for using a known way to evaluate the change in depth while taking weights into account.
Regarding claim 17, the combination of Satavani and Steinbrucker teaches the apparatus of claim 1. While the combination fails to disclose the following, Schoenberg teaches:
Generate a weight volume value based on at least one of the depth data, the previous TSDF value, or a previous weight volume value associated with the previous TSDF value (Paragraph 50, the average can be a weighted average that uses a weighting function relating to the distance of the associated voxel from the depth camera. The averaged SDF values can then be stored at the current voxel. In an alternative example, two values can be stored at each voxel. A weighted sum of the SDF values can be calculated and stored, and also a sum of the weights calculated and stored);
Wherein the vertex difference is also based on the weight volume value (Paragraph 50, the average can be a weighted average that uses a weighting function relating to the distance of the associated voxel from the depth camera. The averaged SDF values can then be stored at the current voxel. In an alternative example, two values can be stored at each voxel. A weighted sum of the SDF values can be calculated and stored, and also a sum of the weights calculated and stored).
Schoenberg and the combination of Satavani and Steinbrucker are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani and Steinbrucker by using Schoenberg and generating a weight volume value based on a previous weight associated with a previous TSDF value. Doing so would allow for using a known way to evaluate the change in depth while taking weights into account.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Satavani in view of Steinbrucker as applied to claims 1, 3-4 and 20 and further in view of Mitchell (US 20240005605).
Regarding claim 18, the combination of Satavani and Steinbrucker teaches the apparatus of claim 1. While the combination fails to disclose the following, Mitchell teaches:
Generate a second TSDF value based on the depth data, wherein the TSDF value corresponds to a first corner of the at least one voxel, wherein the second TSDF value corresponds to a second corner of the at least one voxel, wherein the vertex difference is also based on a comparison of the second TSDF value to a previous second TSDF value (Paragraph 62, a voxel grid can be imposed on the input data and then each corner of each voxel can be classified as inside or outside of one or more reference objects within the first volume). Note: Satavani and Steinbrucker teach calculating a TSDF value and comparing changes, and Mitchell teaches measuring different corners of the same voxel.
Mitchell and the combination of Satavani and Steinbrucker are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani and Steinbrucker by using Mitchell and measuring different corners of the same voxel. Doing so would allow for identifying movement of objects by analyzing different parts of the same voxel.
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Satavani in view of Steinbrucker as applied to claims 1, 3-4 and 20 and further in view of Li (US 20250316018).
Note: The contents of Li cited are present in the provisional application 63/631,568 filed April 9, 2024.
Regarding claim 19, the combination of Satavani and Steinbrucker teaches the apparatus of claim 1. While the combination fails to disclose the following, Li teaches:
Wherein, to generate the TSDF value, the at least one processor is configured to process the depth data and the pose data using a trained machine learning model (Paragraph 33, the system and methods described herein may provide a 3D preview (e.g., a view of a voxel model) during object scanning using a process (e.g., machine learning) that extracts 3D object geometry from color (e.g., RGB) images from different camera poses. The 3D geometry extraction from dense color image data is more accurate than using only sparse depth data-based modeling with respect to showing thin/unique structures).
Li and the combination of Satavani and Steinbrucker are both considered to be analogous to the claimed invention because they are in the same field of 3D reconstruction. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Satavani and Steinbrucker by using Li and using machine learning to process the depth data and the pose data. Doing so would allow for using a known way to process data and determine a TSDF value from given inputs.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SNIGDHA SINHA whose telephone number is (571)272-6618. The examiner can normally be reached Mon-Fri. 12pm-8pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SNIGDHA SINHA/Examiner, Art Unit 2619
/JASON CHAN/Supervisory Patent Examiner, Art Unit 2619