DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Claims 1-20 were previously pending and subject to the non-final action mailed 07/29/2025. In the response filed 11/03/2025, claims 1, 6, 8, 13, 15, and 20 were amended and claims 5, 12, and 19 were canceled. Therefore, claims 1-4, 6-11, 13-18, and 20 are currently pending and subject to the final action below.
Response to Arguments
Applicant's arguments, see pages 7-8 of the response filed 11/03/2025, with respect to the rejection of claims 1-20 under 35 U.S.C. 103 have been fully considered but are moot because the arguments do not apply to the new combinations of references being used in the current rejection.
Examiner Notes
Keypoints are points of interest (feature points) and refer to pixels within an image that carry rich texture information, such as edges, corners, and/or other readily identifiable structures.
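Examiner Note: as a purely illustrative sketch of the corner/edge intuition above (the image, threshold, and helper name are hypothetical; this is not any reference's actual detector), a pixel can be flagged as a keypoint when intensity changes sharply in both directions:

```python
# Toy keypoint detector: flags pixels whose neighborhood shows strong
# intensity variation along both axes (a crude corner test). Real
# detectors (Harris, FAST, learned CNN detectors) are far more involved.

def is_keypoint(image, r, c, threshold=100):
    """Return True if pixel (r, c) has strong gradients in both directions."""
    dx = image[r][c + 1] - image[r][c - 1]   # horizontal intensity change
    dy = image[r + 1][c] - image[r - 1][c]   # vertical intensity change
    return dx * dx + dy * dy > threshold

# A 5x5 image with a bright block whose corner sits at (2, 2).
img = [
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
    [0, 0, 9, 9, 9],
]

keypoints = [(r, c) for r in range(1, 4) for c in range(1, 4)
             if is_keypoint(img, r, c)]
print(keypoints)  # only the corner pixel (2, 2) passes the test
```

Only the corner pixel survives, consistent with the idea that keypoints correspond to pixels carrying rich local texture.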
Simultaneous localization and mapping (SLAM) is a method used by autonomous vehicles to build a map of an unknown environment while simultaneously localizing the vehicle within that map.
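Examiner Note: the build-the-map-while-localizing idea can be sketched in one dimension (landmark names and offsets below are hypothetical; real SLAM jointly optimizes poses and landmarks rather than dead reckoning):

```python
# Toy 1D SLAM sketch: the vehicle dead-reckons its own position from
# odometry while adding observed landmarks (sensed as offsets relative
# to the vehicle) into a shared world-frame map.

def slam_step(pose, motion, observations, world_map):
    """Advance the pose by odometry, then map each relative observation."""
    pose = pose + motion                      # localize (dead reckoning)
    for name, rel in observations:
        world_map[name] = pose + rel          # map landmark in world frame
    return pose

world_map = {}
pose = 0.0
pose = slam_step(pose, 2.0, [("tree", 3.0)], world_map)   # tree mapped at 5.0
pose = slam_step(pose, 1.0, [("sign", -1.0)], world_map)  # sign mapped at 2.0
print(pose, world_map)
```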
A pipeline is a series of steps that automates and standardizes the process of building, training, and deploying one or more models.
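Examiner Note: a pipeline in this sense can be sketched as an ordered list of stage functions, each consuming the previous stage's output (the stage names here are hypothetical stand-ins for build/train/deploy steps):

```python
# Toy pipeline sketch: run stages in order, feeding each stage's output
# to the next stage.

def run_pipeline(stages, data):
    for stage in stages:
        data = stage(data)
    return data

# Hypothetical stages standing in for a model pipeline's steps.
normalize = lambda xs: [x / max(xs) for x in xs]
threshold = lambda xs: [1 if x > 0.5 else 0 for x in xs]

result = run_pipeline([normalize, threshold], [2, 4, 8])
print(result)  # normalized to [0.25, 0.5, 1.0], then thresholded
```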
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4, 6-11, 13-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (US PGPUB 20210209797 A1, filed Jan. 5, 2021) in view of Georgios Pavlakos ("Semantic Keypoints," published Mar. 2017, hereinafter "Georgios"), further in view of Akbas (US 11074711 B1, filed Jun. 14, 2019), and further in view of BHARGAVA (US PGPUB 20220222889, filed Jan. 12, 2021).
Regarding independent claim 1, Lee teaches: A computer-implemented method for semantic localization of various objects, the method comprising:
obtaining an image that displays a scene with a first object and a second object; (Lee − [0010] apparatus for determining features of one or more objects in one or more images is provided. [0069] Object detection can be used to detect objects in images, and in some cases various attributes of the detected objects. For instance, object localization is a technique that can be performed to localize an object in a digital image or a video frame of a video sequence capturing a scene or environment. Fig. 2, ref. 201 a plurality of cars in the scene. Examiner notes that first object and second object can be similar objects with similar shape, or different objects with similar or different shapes.)
generating a first set of two-dimensional (2D) keypoints corresponding to the first object during a first pass of a pipeline; (Lee − [0075-0076] The input image is shown as image 207 with the 2D object keypoint detection results generated by the CNNs 1-c 206. For instance, a first cropped image can be input to a first CNN, a second cropped image can be input to a second CNN, and so on. Each of the keypoints is represented in the image 207 by a dot, with dots having different colors being associated with different vehicles. Each cropped image represents an individual object in the input image. Fig. 18-20 are machine learning models.)
generating first object pose data based on the first set of 2D keypoints; (Lee − [0006] Systems and techniques are described herein for determining poses of objects in images using point-based object localization. For example, the systems and techniques described herein provide an efficient way to estimate a six degrees of freedom (6-DoF) object pose and the shape of the object from an image (e.g., a single image) including the object. Examiner notes that the object pose is the first object pose data. [0075] For instance, keypoint-based systems can use a PnP solver and RANSAC to perform 3D object localization. Keypoint-based systems can estimate 6D poses using extracted keypoints (also referred to herein as sample points) and 3D prior models. Keypoints are also referred to as sample points.)
generating camera pose data based on the first object pose data; (Lee − [0077]-[0078] Knowledge of the general camera geometry can be utilized for performing PnP and other techniques. In the pinhole camera model, a scene view is formed by projecting 3D points into the image plane using a perspective transformation, e.g., projection of the 3D point coordinate 302 (and/or other 3D point coordinates on the 3D object 308) onto the image plane 306 of a 2D image. Examiner notes that the camera geometry (viewpoint) is the camera pose data.)
generating a keypoint heatmap based on the camera pose data; (Lee − [0181] FIG. 20 is a diagram illustrating a specific example of a neural network based keypoint detector that can be used by the object detection engine 902, including a convolutional layer to generate predicted heatmaps based on sample points.)
generating first coordinate data of the first object in world coordinates using the first object pose data and the camera pose data; (Lee − [0093] A mapping from the 2D image plane to the 3D real world coordinate system (for example, to identify where an object in a video frame is positioned) can also be accomplished using these equations, when the extrinsic parameters are known. [0094] FIG. 5-FIG. 8D are diagrams illustrating a perspective-n-point (PnP) technique. PnP can be used to estimate the 6-DoF pose parameter (defined by a transformation matrix T), given a set of n 3D points in the world space (or the object coordinate system) and the 2D projection points in the image that correspond to the set of n 3D points. [0154] FIG. 15A-FIG. 15D are images illustrating an example of a keypoint-based vehicle pose estimation. )
generating second coordinate data of the second object in the world coordinates using the second object pose data and the camera pose data; (Lee − [0093-0094] [0154] FIG. 15A-FIG. 15D are images illustrating an example of a keypoint-based vehicle pose estimation. )
tracking the first object based on the first coordinate data; (Lee − [0155] FIG. 16A-FIG. 16D are images illustrating an example of results of keypoint-based vehicle pose estimation using an image from a front-facing camera of a tracking vehicle tracking various target vehicles (as target objects).)
and tracking the second object based on the second coordinate data. (Lee − [0155] FIG. 16A-FIG. 16D are images illustrating an example of results of keypoint-based vehicle pose estimation using an image from a front-facing camera of a tracking vehicle tracking various target vehicles (as target objects).)
Lee continues to teach: generating a second set of 2D keypoints (Lee – Fig. 2, ref. 207; [0006]; [0075]) and a trained machine learning system ([0117] the object detection engine 902 can use a machine learning based object detector, such as using one or more neural networks.) but does not explicitly teach: classifying the first object as being asymmetrical and the second object as being symmetrical;
However, Georgios teaches: classifying the first object as being asymmetrical (Georgios − Fig. 4, classifying a gas canister, which is asymmetrical; the two halves of the gas canister are different (asymmetrical))
and the second object as being symmetrical; (Georgios − Fig. 5, classifying a car, bus, train, and chair as symmetrical; the two halves of the car, bus, train, and/or chair are the same on one side as on the other.)
generating, via a trained machine learning system, (Georgios − [pdf page 1] Abstract: keypoints predicted by a convolutional network (convnet)) a second set of 2D keypoints corresponding to the second object (Georgios − [pdf pages 2-4] Fig. 3, overview of the stacked hourglass architecture for generating a keypoint heatmap. Given detected keypoints in an image, the rotation and translation between the object and camera frames (camera pose data) are estimated, as well as the coefficients of the shape deformation. Values are assigned in the heatmap corresponding to the n keypoints in the image. See Section B, Pose optimization.)
generating second object pose data based on the second set of 2D keypoints; (Georgios − [pdf pages 5-8] The 6-DoF pose was estimated using a CAD model of the object. [pdf page 4] The estimated object poses are shown in the last two columns of Fig. 4.)
Lee and Georgios are analogous art because they are from the same problem-solving area, utilizing digital image processing and determining poses of objects in images using point-based object localization.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Lee and Georgios before him or her, to combine the teachings of Lee and Georgios. The rationale for doing so would have been to provide a model to determine keypoints on non-symmetrical objects as discussed by Georgios (Summary) to improve accuracy of tracking objects in an environment.
Therefore, it would have been obvious to combine Lee and Georgios to obtain the invention as specified in the instant claim(s).
Lee does not explicitly teach: in response to receiving the keypoint heatmap and a cropped image of the second object together as multi-channel input, the second set of 2D keypoints being generated during a second pass of the pipeline;
However, Akbas teaches: in response to receiving the keypoint heatmap and a cropped image of the second object together as multi-channel input, (Akbas − [Col. 11 ll. 30-47] the heatmap from the keypoint subnet 30 are inputs to the pose residual network (PRN) 50. The keypoint heatmaps 38, 39 are cropped to fit the bounding boxes. Examiner Notes: The keypoint heatmap and cropped heatmap images are multi-channel input into the PRN 50.)
the second set of 2D keypoints being generated during a second pass of the pipeline; (Akbas − [Col. 11 ll. 30-47] In the illustrative embodiment, the residuals make irrelevant keypoints disappear, and the pose residual network 50 deletes irrelevant keypoints. For example, with the image depicted in FIG. 8, when the PRN is trying to detect the mother, then the PRN needs to eliminate the baby's keypoints (e.g., in eq. (1), the unrelated keypoints are suppressed; in this case the keypoints of the baby are suppressed). Examiner Notes: Generating a new set of keypoints by suppressing irrelevant keypoints.)
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have combined the teaching of Lee, Georgios and Akbas because they are from the same problem-solving area, utilizing digital image processing and determining poses of objects in images using point-based object localization. The rationale for doing so would have been to provide a model to determine keypoints on asymmetrical and symmetrical objects to improve pose estimation of object symmetries.
Lee teaches generating a keypoint heatmap as output but does not explicitly teach: generating a keypoint heatmap as a prior input by projecting three-dimensional (3D) keypoints of the second object with respect to the image based on the camera pose data;
However, BHARGAVA teaches: generating a keypoint heatmap as a prior input (BHARGAVA − [0076] Fig. 6; the image backbone 610 extracts relevant appearance and geometric features of the object 604 (the image); the image backbone 610 generates a keypoint heatmap 612 of the object 604; the keypoint heatmap 612 is provided to a semantic keypoint predictor 620. Examiner Note: the keypoint heatmap 612 is an input to the semantic keypoint predictor 620;) by projecting three-dimensional (3D) keypoints of the second object with respect to the image based on the camera pose data; (BHARGAVA − [0076-0078] The keypoint heatmap 612 is input to the semantic keypoint predictor 620, which configures a 3D lifting block 640. The 3D lifting block 640 uses structure prior information 642 (keypoint heatmap 2D keypoints) and/or monocular depth information 664 to construct a 3D structured object geometry 650. Examiner Note: the 3D object is the projected 3D keypoints of the object with respect to the image.)
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have combined the teaching of Lee, Georgios, Akbas and BHARGAVA because they are from the same problem-solving area, utilizing digital image processing and determining poses of objects in images. The rationale for doing so would have been to provide an improvement over the conventional annotation methods by using semantic keypoints for auto-labeling different object shapes.
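Examiner Note: the keypoint-heatmap concept relied on throughout the claim 1 mapping (Lee [0181]; BHARGAVA [0076]) can be sketched as placing a Gaussian peak at each 2D keypoint location. This is a toy illustration with hypothetical sizes; in the cited references the heatmaps are predicted by neural networks.

```python
import math

# Toy keypoint heatmap: each 2D keypoint contributes a Gaussian bump,
# and overlapping bumps are combined by taking the maximum response.

def make_heatmap(height, width, keypoints, sigma=1.0):
    heatmap = [[0.0] * width for _ in range(height)]
    for kr, kc in keypoints:
        for r in range(height):
            for c in range(width):
                d2 = (r - kr) ** 2 + (c - kc) ** 2
                heatmap[r][c] = max(heatmap[r][c],
                                    math.exp(-d2 / (2 * sigma ** 2)))
    return heatmap

hm = make_heatmap(5, 5, [(1, 1), (3, 3)])
print(hm[1][1], hm[3][3])  # the responses peak at the keypoint locations
```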
Regarding dependent claim 2, which depends on claim 1, Lee teaches: and the second object is classified as symmetrical with respect to a second texture of the second object in the image and a second rotational axis of the second object. (Lee – Fig. 1A-1D image of vehicle with symmetric shapes. [0006,0075] Fig. 2 [0082] The relationship between a 6D pose vector and a 4×4 transformation matrix can be defined and used to determine the 6D pose of an object in an image. A 3D rotational vector; [0116] In some cases, the object detection engine 902 can determine a classification (referred to as a class) or category of each object detected in an image.)
Lee does not explicitly teach: wherein: the first object is classified as asymmetrical with respect to a first texture of the first object in the image and a first rotational axis of the first object;
However, Georgios teaches: wherein: the first object is classified as asymmetrical with respect to a first texture of the first object in the image and a first rotational axis of the first object; (Georgios − [pdf page 2] Fig. 4, the gas canister is an asymmetrical object classified by the model. Contribution: a deformable shape model to estimate the continuous 6-DoF pose of an object.)
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have combined the teaching of Lee, Georgios, Akbas and BHARGAVA because they are from the same problem-solving area, utilizing digital image processing and determining poses of objects in images. The rationale for doing so would have been to provide an improvement over the conventional annotation methods by using semantic keypoints for auto-labeling different object shapes.
Regarding dependent claim 3, Lee teaches: further comprising: cropping the image to generate another cropped image that includes the first object, wherein the first set of 2D keypoints is generated by a trained machine learning system in response to the another cropped image. (Lee – [0075-0076] For example, the region of the input image 201 inside each bounding box can be cropped to produce individually cropped images 204. [0076] Each of the cropped images 204 is input to a separate CNN, shown in FIG. 2 as CNNs 1-c 206, where c has a value greater than or equal to 0 (where if c=0, a single CNN is present for processing a single detected object) The CNNs 1-c 206 are designed to process the cropped images 204 to detect 2D object keypoints.)
Regarding dependent claim 4, Lee teaches: further comprising: cropping the image to generate the cropped image that includes the second object. (Lee – [0075-0076] For example, the region of the input image 201 inside each bounding box can be cropped to produce individually cropped images 204. [0076] Each of the cropped images 204 is input to a separate CNN, shown in FIG. 2 as CNNs 1-c 206, where c has a value greater than or equal to 0 (where if c=0, a single CNN is present for processing a single detected object) The CNNs 1-c 206 are designed to process the cropped images 204 to detect 2D object keypoints. [0181] FIG. 20 is a diagram illustrating a specific example of a neural network based keypoint detector that can be used by the object detection engine 902. a convolutional layer to generate predicted heatmaps based on sample points)
Regarding dependent claim 6, Lee teaches: further comprising: obtaining a first set of 3D keypoints of the first object from a first 3D model of the first object; obtaining a second set of 3D keypoints of the second object from a second 3D model of the second object; (Lee − [0077] prior information 208 (including a 3D object model and associated 3D keypoints))
generating the first object pose data via a Perspective-n-Point (PnP) process that uses the first set of 2D keypoints and the first set of 3D keypoints; and generating the second object pose data via the PnP process that uses the second set of 2D keypoints and the second set of 3D keypoints. (Lee − [0077] The 2D object keypoint detection results and prior information 208 (including a 3D object model and associated 3D keypoints) are provided as input to an N-Point RANSAC-based pose estimation system 210. The N-Point RANSAC-based pose estimation system 210 can use a PnP solver along with RANSAC to remove outliers. The object localization results from the N-Point RANSAC-based pose estimation system 210 are shown in the input image shown as image 211.)
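Examiner Note: the PnP process cited above inverts the pinhole projection; the forward model (mapping a 3D point in camera coordinates to a 2D keypoint, cf. Lee [0078], [0093]) can be sketched as follows, where the intrinsics f, cx, cy are hypothetical values, not parameters from any reference.

```python
# Toy pinhole projection: maps a 3D point in camera coordinates to 2D
# pixel coordinates. PnP solvers (as in Lee's RANSAC-based system 210)
# estimate the pose whose projections best match the observed 2D
# keypoints; this sketch shows only the forward projection they invert.

def project(point3d, f=100.0, cx=320.0, cy=240.0):
    x, y, z = point3d
    u = f * x / z + cx   # perspective divide, then shift to image center
    v = f * y / z + cy
    return (u, v)

print(project((1.0, 2.0, 10.0)))
```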
Regarding dependent claim 7, Lee teaches: further comprising: optimizing a cost of the scene based on the first object pose data, the second object pose data, and the camera pose data. (Lee – [0097] The pose estimate can be optimized by minimizing the reprojection errors of the inlier sample points. [0101] Various optimization methods can be used, such as the Gauss-Newton method, the Levenberg-Marquardt method, among others)
Regarding independent claim 8, Lee teaches: A system comprising: a camera; (Lee − [0024] In some aspects, the apparatus is, is part of, and/or includes a vehicle or a computing device or component of a vehicle (e.g., an autonomous vehicle), a camera,)
a processor in data communication with the camera, the processor being configured to receive a plurality of images from the camera, the processor being operable to: (Lee − [0208] FIG. 36 illustrates an example computing device architecture 3600 of an example computing device which can implement the various techniques described herein.)
Claim 8 recites similar/same technical features/limitations as claim 1 and is rejected under the same rationale.
Regarding dependent claim 9, Lee teaches: and the second object is classified as symmetrical with respect to a second texture of the second object in the image and a second rotational axis of the second object. (Lee – Fig. 1A-1D image of vehicle with symmetric shapes. [0006,0075] Fig. 2 [0082] The relationship between a 6D pose vector and a 4×4 transformation matrix can be defined and used to determine the 6D pose of an object in an image. A 3D rotational vector; [0116] In some cases, the object detection engine 902 can determine a classification (referred to as a class) or category of each object detected in an image.)
Lee does not explicitly teach: wherein: the first object is classified as asymmetrical with respect to a first texture of the first object in the image and a first rotational axis of the first object;
However, Georgios teaches: wherein: the first object is classified as asymmetrical with respect to a first texture of the first object in the image and a first rotational axis of the first object; (Georgios − [pdf page 2] Fig. 4, the gas canister is an asymmetrical object classified by the model. Contribution: a deformable shape model to estimate the continuous 6-DoF pose of an object.)
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have combined the teaching of Lee, Georgios, Akbas and BHARGAVA because they are from the same problem-solving area, utilizing digital image processing and determining poses of objects in images. The rationale for doing so would have been to provide an improvement over the conventional annotation methods by using semantic keypoints for auto-labeling different object shapes.
Regarding dependent claim 10, Lee teaches: wherein the processor is further operable to: crop the image to generate another cropped image that includes the first object, wherein the first set of 2D keypoints is generated by a trained machine learning system in response to the another cropped image. (Lee – [0075-0076] For example, the region of the input image 201 inside each bounding box can be cropped to produce individually cropped images 204. [0076] Each of the cropped images 204 is input to a separate CNN, shown in FIG. 2 as CNNs 1-c 206, where c has a value greater than or equal to 0 (where if c=0, a single CNN is present for processing a single detected object) The CNNs 1-c 206 are designed to process the cropped images 204 to detect 2D object keypoints.)
Regarding dependent claim 11, Lee teaches: wherein the processor is further operable to crop the image to generate a second cropped image that includes the second object. (Lee – [0075-0076] For example, the region of the input image 201 inside each bounding box can be cropped to produce individually cropped images 204. [0076] Each of the cropped images 204 is input to a separate CNN, shown in FIG. 2 as CNNs 1-c 206, where c has a value greater than or equal to 0 (where if c=0, a single CNN is present for processing a single detected object) The CNNs 1-c 206 are designed to process the cropped images 204 to detect 2D object keypoints. [0181] FIG. 20 is a diagram illustrating a specific example of a neural network based keypoint detector that can be used by the object detection engine 902. a convolutional layer to generate predicted heatmaps based on sample points)
Regarding dependent claim 13, Lee teaches: wherein the processor is further operable to: obtain a first set of 3D keypoints of the first object from a first 3D model of the first object; obtain a second set of 3D keypoints of the second object from a second 3D model of the second object; (Lee − [0077] prior information 208 (including a 3D object model and associated 3D keypoints))
generate the first object pose data via a Perspective-n-Point (PnP) process that uses the first set of 2D keypoints and the first set of 3D keypoints; and generate the second object pose data via the PnP process that uses the second set of 2D keypoints and the second set of 3D keypoints. (Lee − [0077] The 2D object keypoint detection results and prior information 208 (including a 3D object model and associated 3D keypoints) are provided as input to an N-Point RANSAC-based pose estimation system 210. The N-Point RANSAC-based pose estimation system 210 can use a PnP solver along with RANSAC to remove outliers. The object localization results from the N-Point RANSAC-based pose estimation system 210 are shown in the input image shown as image 211.)
Regarding dependent claim 14, Lee teaches: wherein the processor is further operable to: optimize a cost of the scene based on the first object pose data, the second object pose data, and the camera pose data. (Lee – [0097] The pose estimate can be optimized by minimizing the reprojection errors of the inlier sample points. [0101] Various optimization methods can be used, such as the Gauss-Newton method, the Levenberg-Marquardt method, among others)
Independent claim 15 is directed to a non-transitory computer-readable medium. Claim 15 recites similar/same technical features/limitations as claim 1 and is rejected under the same rationale.
Regarding dependent claim 16, Lee teaches: and the second object is classified as symmetrical with respect to a second texture of the second object in the image and a second rotational axis of the second object. (Lee – Fig. 1A-1D image of vehicle with symmetric shapes. [0006,0075] Fig. 2 [0082] The relationship between a 6D pose vector and a 4×4 transformation matrix can be defined and used to determine the 6D pose of an object in an image. A 3D rotational vector; [0116] In some cases, the object detection engine 902 can determine a classification (referred to as a class) or category of each object detected in an image.)
Lee does not explicitly teach: wherein: the first object is classified as asymmetrical with respect to a first texture of the first object in the image and a first rotational axis of the first object;
However, Georgios teaches: wherein: the first object is classified as asymmetrical with respect to a first texture of the first object in the image and a first rotational axis of the first object; (Georgios − [pdf page 2] Fig. 4, the gas canister is an asymmetrical object classified by the model. Contribution: a deformable shape model to estimate the continuous 6-DoF pose of an object.)
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have combined the teaching of Lee, Georgios, Akbas and BHARGAVA because they are from the same problem-solving area, utilizing digital image processing and determining poses of objects in images. The rationale for doing so would have been to provide an improvement over the conventional annotation methods by using semantic keypoints for auto-labeling different object shapes.
Regarding dependent claim 17, Lee teaches: further comprising: cropping the image to generate another cropped image that includes the first object, wherein the first set of 2D keypoints is generated by a trained machine learning system in response to the another cropped image. (Lee – [0075-0076] For example, the region of the input image 201 inside each bounding box can be cropped to produce individually cropped images 204. [0076] Each of the cropped images 204 is input to a separate CNN, shown in FIG. 2 as CNNs 1-c 206, where c has a value greater than or equal to 0 (where if c=0, a single CNN is present for processing a single detected object) The CNNs 1-c 206 are designed to process the cropped images 204 to detect 2D object keypoints.)
Regarding dependent claim 18, Lee teaches: further comprising: cropping the image to generate the cropped image that includes the second object. (Lee – [0075-0076] For example, the region of the input image 201 inside each bounding box can be cropped to produce individually cropped images 204. [0076] Each of the cropped images 204 is input to a separate CNN, shown in FIG. 2 as CNNs 1-c 206, where c has a value greater than or equal to 0 (where if c=0, a single CNN is present for processing a single detected object) The CNNs 1-c 206 are designed to process the cropped images 204 to detect 2D object keypoints. [0181] FIG. 20 is a diagram illustrating a specific example of a neural network based keypoint detector that can be used by the object detection engine 902. a convolutional layer to generate predicted heatmaps based on sample points)
Regarding dependent claim 20, Lee teaches: further comprising: obtaining a first set of 3D keypoints of the first object from a first 3D model of the first object; obtaining a second set of 3D keypoints of the second object from a second 3D model of the second object; (Lee − [0077] prior information 208 (including a 3D object model and associated 3D keypoints))
generating the first object pose data via a Perspective-n-Point (PnP) process that uses the first set of 2D keypoints and the first set of 3D keypoints; and generating the second object pose data via the PnP process that uses the second set of 2D keypoints and the second set of 3D keypoints. (Lee − [0077] The 2D object keypoint detection results and prior information 208 (including a 3D object model and associated 3D keypoints) are provided as input to an N-Point RANSAC-based pose estimation system 210. The N-Point RANSAC-based pose estimation system 210 can use a PnP solver along with RANSAC to remove outliers. The object localization results from the N-Point RANSAC-based pose estimation system 210 are shown in the input image shown as image 211.)
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARL E BARNES JR whose telephone number is (571)270-3395. The examiner can normally be reached Monday-Friday 9am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CARL E BARNES JR/Examiner, Art Unit 2178
/STEPHEN S HONG/Supervisory Patent Examiner, Art Unit 2178