DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
2. Claims 1-7 are objected to because of the following informalities: Claim 1 recites “a generative adversarial network” in both line 2 and line 5. Line 5 should instead recite “the generative adversarial network” to maintain proper antecedent basis. Appropriate correction is required.
Terminal Disclaimer
3. The Terminal Disclaimer filed on 2/25/2026 has been received but has not yet been approved as of this Final Office Action. Therefore, the Double Patenting rejection below is maintained until that Terminal Disclaimer is approved.
Double Patenting
4. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
5. Claims 1-3, 8-12, and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 4, and 7-10 of U.S. Patent No. 11,995,805 B2 (patent 805) in view of Zhang et al. (US Patent Application Publication No. 2018/0225866 A1), which discloses displaying a modified 3D point cloud on the screen of a user device. Claim 15 is unpatentable over claims 13 and 17 of patent 805. Although the claims at issue are not identical, they are not patentably distinct from each other because the present claims are broader.
6. The following table shows the correspondence between the claims of the present application and the claims of patent 805.
Claims of present application | Claims of patent 805
1 | 1 in view of Zhang
2 | 2
3 | 4
8 | 7 in view of Zhang
9 | 8
10 | 9
11 | 10
12 | 10
15 | 13 and 17
7. The following table shows the correspondence between the limitations of claim 1 of the present application and the limitations of claim 1 of patent 805 in view of Zhang.
Claim 1 of present application:
1. A non-transitory computer readable storage medium having stored thereon instructions that, when executed by one or more processors, cause a computer to:
obtain a three-dimensional point cloud having one or more gaps;
initialize a generative adversarial network using stored weights;
impute one or both of (i) RGB colorspace data, and (ii) elevation data into the gaps of the three-dimensional point cloud by analyzing the three-dimensional point cloud using the initialized generative adversarial network;
and display the three-dimensional point cloud including the imputed data in a display device of a user.

Claim 1 of patent 805 in view of Zhang:
1. A non-transitory computer readable storage medium having stored thereon instructions that, when executed by one or more processors, cause a computer to:
generate a loss value by processing one or more three-dimensional regions and at least one three-dimensional point cloud;
update one or more weights of a generative adversarial network by backpropagating the loss value;
store the updated weights of the generative adversarial network on a non-transitory computer readable storage medium;
obtain a three-dimensional point cloud having one or more gaps;
initialize the generative adversarial network using the stored weights; and
impute one or both of (i) RGB data, and (ii) elevation data into the gaps of the three-dimensional point cloud by analyzing the three-dimensional point cloud using the initialized generative adversarial network.
The display limitation is taught by Zhang. Zhang at paragraph [0037] discloses “The registration engine 140 generates and outputs a merged, fused 3D point cloud by performing the registration process (216). Further details of the registration process 216 according to some implementations are described below. The merged, fused 3D point cloud can be provided, for example, to a display device 120 that includes a graphical user interface. The merged, fused 3D point cloud thus can be displayed on a viewing screen of the display device 120. A user can rotate the merged, fused 3D point cloud displayed on the screen using, for example, a cursor, so that different perspectives of the scene 122 can be viewed on the display screen as the 3D point cloud is rotated. Thus, in response to user input, the point cloud on the display screen is rotated.”
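For orientation only, and not as evidence or claim interpretation, the following sketch traces the sequence of operations the compared claims recite: generating a loss value, updating the weights of a generative adversarial network by backpropagating that loss, storing the updated weights, and later initializing the network from the stored weights to impute data into the gaps of a point cloud. The sketch assumes PyTorch, and every module, tensor, and file name is hypothetical.

    # Illustrative sketch only; all names are hypothetical.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        # Toy stand-in for the generator of a GAN that fills point-cloud gaps.
        def __init__(self, channels=6):  # x, y, z (elevation) and R, G, B
            super().__init__()
            self.net = nn.Sequential(nn.Linear(channels, 64), nn.ReLU(),
                                     nn.Linear(64, channels))

        def forward(self, points):
            return self.net(points)

    gen = Generator()
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

    # Training (as recited in patent 805): generate a loss value from
    # extracted 3D regions and a point cloud, then backpropagate it.
    regions = torch.randn(128, 6)          # extracted three-dimensional regions
    cloud_with_gaps = torch.randn(128, 6)  # point cloud with the regions removed
    loss = nn.functional.mse_loss(gen(cloud_with_gaps), regions)
    opt.zero_grad()
    loss.backward()                        # update weights by backpropagation
    opt.step()
    torch.save(gen.state_dict(), "gan_weights.pt")  # store the updated weights

    # Imputation (as recited in present claim 1): initialize the network from
    # the stored weights, then impute data into a newly obtained gapped cloud.
    gen.load_state_dict(torch.load("gan_weights.pt"))
    new_cloud = torch.randn(256, 6)        # obtained point cloud having gaps
    imputed = gen(new_cloud)               # candidate RGB/elevation values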
8. Claims 1-3, 8-10, and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4, 8-11, 15, and 16 of U.S. Patent No. 11,508,042 B2 (patent 042) in view of Zhang, which discloses displaying a modified 3D point cloud on the screen of a user device. Although the claims at issue are not identical, they are not patentably distinct from each other because the present claims are broader.
9. The following table shows the correspondence between the claims of the present application and those of patent 042.
Claims of present application | Claims of patent 042
1 | 1 and 2 in view of Zhang
2 | 3
3 | 4
8 | 8 and 9 in view of Zhang
9 | 10
10 | 11
15 | 15 and 16 in view of Zhang
10. The following table shows the correspondence between the limitations of claim 1 of the present application and the limitations of claims 1 and 2 of patent 042 in view of Zhang.
Claim 1 of present application:
1. A non-transitory computer readable storage medium having stored thereon instructions that, when executed by one or more processors, cause a computer to: obtain a three-dimensional point cloud having one or more gaps; initialize a generative adversarial network using stored weights; impute one or both of (i) RGB colorspace data, and (ii) elevation data into the gaps of the three-dimensional point cloud by analyzing the three-dimensional point cloud using the initialized generative adversarial network;
and display the three-dimensional point cloud including the imputed data in a display device of a user.

Claims 1 and 2 of patent 042 in view of Zhang:
1. A non-transitory computer readable storage medium having stored thereon computer instructions that, when executed by one or more processors, cause the one or more processors to:
obtain one or more training three-dimensional point clouds;
extract one or more three-dimensional regions from each training three-dimensional point cloud, wherein extracting the one or more three-dimensional regions from each training three-dimensional point cloud includes creating one or more gaps in each three-dimensional point cloud corresponding to each of the one or more extracted three-dimensional regions;
train the generative adversarial network by:
analyzing the extracted three-dimensional regions and each three-dimensional point cloud including the respective one or more gaps, wherein the analyzing includes generating a loss value, and
updating one or more weights of the generative adversarial network by backpropagating the loss value throughout the generative adversarial network; and
store the updated weights of the generative adversarial network on the non-transitory computer readable storage medium as parameters for initializing the generative adversarial network.
2. The non-transitory computer readable storage medium of claim 1, having stored thereon further instructions to:
obtain a three-dimensional point cloud having one or more gaps;
initialize the generative adversarial network using the stored weights; and
impute one or both of (i) RGB data, and (ii) elevation data into the gaps of the three-dimensional point cloud by analyzing the three-dimensional point cloud using the initialized generative adversarial network.
The display limitation is taught by Zhang. Zhang at paragraph [0037] discloses “The registration engine 140 generates and outputs a merged, fused 3D point cloud by performing the registration process (216). Further details of the registration process 216 according to some implementations are described below. The merged, fused 3D point cloud can be provided, for example, to a display device 120 that includes a graphical user interface. The merged, fused 3D point cloud thus can be displayed on a viewing screen of the display device 120. A user can rotate the merged, fused 3D point cloud displayed on the screen using, for example, a cursor, so that different perspectives of the scene 122 can be viewed on the display screen as the 3D point cloud is rotated. Thus, in response to user input, the point cloud on the display screen is rotated.”
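As a purely illustrative aside (not evidence of record), the gap-creation step recited in claim 1 of patent 042, extracting a three-dimensional region so that a corresponding gap is left in the training cloud, can be pictured in a few lines of NumPy; the array names and the box bounds below are hypothetical.

    # Illustrative only: extract a 3D region from a point cloud, leaving a
    # corresponding gap that a GAN can be trained to fill. Synthetic data.
    import numpy as np

    cloud = np.random.rand(1000, 6)     # x, y, z, R, G, B per point
    lo = np.array([0.4, 0.4, 0.0])      # hypothetical box bounds defining
    hi = np.array([0.6, 0.6, 1.0])      # the region to extract
    inside = np.all((cloud[:, :3] >= lo) & (cloud[:, :3] <= hi), axis=1)

    region = cloud[inside]              # extracted region (training target)
    gapped = cloud[~inside]             # point cloud with the created gap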
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
11. Claims 1-2, 8-9, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Vernon R. Goodman (US Patent Application Publication No. 2017/0076456 A1) in view of Gavriil et al. (“Void Filling of Digital Elevation Models with Deep Generative Models”), in view of Lang et al. (US Patent Application Publication No. 2019/0258953 A1), and further in view of Lim et al. (US Patent Application Publication No. 2018/0341836 A1).
12. Regarding Claim 1 (Currently amended), Goodman discloses A non-transitory computer readable storage medium having stored thereon instructions for imputing data that, when executed by one or more processors, cause a computer to: (paragraph [0060] reciting “Certain embodiments are described herein as including logic or a number of components, modules, processors, or mechanisms. Modules may constitute either software modules (e.g., code embodied (1) on a non-transitory machine-readable medium or (2) in a transmission signal) or hardware-implemented modules. …”)
obtain a three-dimensional point cloud having one or more gaps; (paragraph [0017] reciting “DEMs can include holes and/or incorrectly interpolated points. This is true of a single aspect DEM created by a passive 3D point cloud generator using multiple images from differing aspects. Embodiments discussed herein can help correct the holes and/or incorrectly interpolated points in the 3D point cloud. In one or more embodiments, edges, foliage, and shadows present in the PAN image can be used to decide if and how much a point altitude of the DEM should be adjusted. Embodiments discussed herein describe the altitude (height) correction using multiple adaptive techniques, such as to fill in “holes” in 3D point clouds from data from a single view (a single PAN/MS image).” The DEM 3D point cloud has holes/gaps.)
impute both (i) RGB colorspace data and (ii) elevation data into the gaps of the three-dimensional point cloud (paragraph [0017] reciting “… In one or more embodiments, edges, foliage, and shadows present in the PAN image can be used to decide if and how much a point altitude of the DEM should be adjusted. Embodiments discussed herein describe the altitude (height) correction using multiple adaptive techniques, such as to fill in “holes” in 3D point clouds from data from a single view (a single PAN/MS image).”;
paragraph [0018] reciting “Embodiments can provide an output filtered DEM file (e.g., an image file) that includes filled in and/or corrected heights. Additionally or alternatively, embodiments can provide a binary point format (BPF) point cloud file created at a specified or default resolution for each input file. This BPF file is in pixel (line, sample) space and is included for comparison purposes. The projection to world-space can be performed after a DEM adjustment process is performed.”
The DEM 3D point cloud’s voids are filled in with corrected heights (elevation data). Imputing means filling in missing data, and filling the voids of a DEM 3D point cloud corresponds to imputing altitude (height) data into the 3D point cloud of the DEM.) and display the three-dimensional point cloud including the imputed data in a display device of a user. (paragraph [0088] reciting “In Example 11 a method for filtering digital elevation map (DEM) data includes ingesting digital elevation map (DEM) data and intensity data from a panchromatic (PAN) or multi-spectral (MS) image, filling in voids in the ingested DEM data using local interpolation to create interpolated DEM data, creating a shadow map based on the received intensity data, modifying, using the created shadow map, a height of one or more pixels in the interpolated DEM data to create modified DEM data, and providing signals to a display that cause an image to be displayed based on the modified DEM data.” The interpolated DEM 3D point cloud is displayed on a user device’s display screen.)
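Before turning to Gavriil, and purely as an illustration rather than evidence, the local-interpolation void filling that Goodman’s Example 11 describes can be sketched with SciPy; the DEM array and the hole location below are synthetic stand-ins.

    # Illustrative only: fill DEM voids by local interpolation, in the
    # spirit of Goodman's Example 11. Assumes NumPy and SciPy; synthetic data.
    import numpy as np
    from scipy.interpolate import griddata

    dem = np.random.rand(64, 64)        # synthetic DEM heights
    dem[20:25, 30:40] = np.nan          # a "hole" of missing heights

    rows, cols = np.indices(dem.shape)
    known = ~np.isnan(dem)              # pixels with valid heights
    filled = griddata(
        points=np.column_stack([rows[known], cols[known]]),
        values=dem[known],
        xi=(rows, cols),
        method="linear",                # interpolate from neighboring points
    )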
While not explicitly disclosed by Goodman, Gavriil discloses using a generative adversarial network (Abstract reciting “… In this paper we consider a state-of-the-art machine learning model for image inpainting, namely a Wasserstein Generative Adversarial Network based on a fully convolutional architecture with a contextual attention mechanism. We show that this model can successfully be transferred to the setting of digital elevation models (DEMs) for the purpose of generating semantically plausible data for filling voids. …”;
page 3, section C. Model Architecture reciting “The proposed DEM void filling generative model G is an adaptation of the generative image inpainting model presented in [13]; see Figure 2. This model demonstrates promising results for texture-like images, which we …”)
by analyzing the three-dimensional point cloud using the initialized generative adversarial network; (Abstract reciting “… In this paper we consider a state-of-the-art machine learning model for image inpainting, namely a Wasserstein Generative Adversarial Network based on a fully convolutional architecture with a contextual attention mechanism. We show that this model can successfully be transferred to the setting of digital elevation models (DEMs) for the purpose of generating semantically plausible data for filling voids. …”;
page 3, section C. Model Architecture reciting “The proposed DEM void filling generative model G is an adaptation of the generative image inpainting model presented in [13]; see Figure 2. This model demonstrates promising results for texture-like images, which we …”;
page 2, III. Methodology reciting “We will consider preprocessed digital elevation/surface models in GeoTIFF format, in which the data forms a grid with a single height value for every position (i, j). Let D = (d_p) ∈ R^(m×n) be a partial digital elevation model, where p is an abbreviation for pixel referring to the coordinates (i, j) of a point on the DEM grid and d_p is the corresponding height value. Partial means that some pixel values are considered void. A binary matrix M = (m_p) ∈ {0, 1}^(m×n) acts as a mask representing the void regions of D. We refer to pixels p for which m_p = 0 as known, and unknown otherwise. …”
Therefore, by using a GAN (generative adversarial network), the DEM’s voids are filled, and the filling requires generating pixels with height values, which correspond to elevation data.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman with Gavriil so that the generative adversarial network is used to perform the void filling disclosed in Goodman. This is an obviously beneficial modification since Goodman pertains to DEM 3D point cloud void filling and Gavriil provides a GAN to perform such a task, which expedites and facilitates the void filling process through machine learning.
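To make the combination concrete, again only as an illustration, GAN-based DEM inpainting of the kind Gavriil formulates composes the generator’s output with the original data through the void mask M; the function and variable names below are hypothetical.

    # Illustrative only: masked void filling, where D is the partial DEM and
    # M marks void pixels with 1. Known heights are kept; voids are generated.
    import numpy as np

    def fill_voids(D, M, generator):
        G = generator(D * (1 - M))      # generator sees only known pixels
        return M * G + (1 - M) * D      # generated voids + original data

    # Toy usage with an identity "generator" standing in for a trained model.
    D = np.random.rand(8, 8)
    M = np.zeros((8, 8))
    M[2:4, 2:4] = 1.0
    out = fill_voids(D, M, generator=lambda x: x)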
While the combination of Goodman and Gavriil does not explicitly disclose, Lang discloses initialize a generative adversarial network using stored weights; (paragraph [0289] reciting “Furthermore, in other aspects of the invention, the system has for example both an attack tree based neural net attacker and a defender tree based neural net defender. Both interact with the environment as actors. In an example, the present invention uses Generative Adversarial Networks (GANs). In an example, the present invention uses multi-agent reinforcement learning. These machine learning techniques are known to those skilled in the art of machine learning. For example, one side's neural net weights are fixed while the other side is trained. Then the other side's neural net weights are fixed while the one side is trained. Over time both sides learn from each other and become more successful and refined, thus creating a self-learning system.” Thus, a single GAN starts out with stored weights which are then trained against another GAN in order for the weights to gain self-learning capabilities.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman and Gavriil with Lang so that the generative adversarial network has stored weights which are then trained through adversarial training. Gavriil discloses training a GAN, and a trainable GAN necessarily has weights that can be stored and used to initialize it to perform the void filling of Gavriil.
While the combination of Goodman, Gavriil, and Lang does not explicitly disclose, Lim discloses impute both (i) (paragraph [0015] reciting “…Typical GANs are used with 2D images, but the GAN described herein is used with 3D point clouds. For example, the generator is configured to generate 3D data points and interpolate (meaning to map, insert, add, or fill in) the generated data points into a received low resolution point cloud to produce a super-resolved point cloud. …”) RGB colorspace data (paragraph [0070] reciting “Optionally, the one or more characteristics of the data points in the low resolution point cloud on which the generator of the GAN is configured to generate the generated data points includes one or more of 3D position coordinates, intensities, colors, or relative positions of the data points.” Therefore, the holes or gaps of the point cloud can be filled using a GAN to generate points that include color data.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman, Gavriil, and Lang with Lim so that the GAN is further used to generate color data for the interpolated points in the point cloud of Goodman, Gavriil, and Lang. This is an obviously beneficial modification since filling gaps with interpolated data including color data results in a more accurate DEM with proper elevation and color data points.
13. Regarding Claim 2 (Original), Goodman further discloses The non-transitory computer readable storage medium of claim 1, having stored thereon further instructions that, when executed by one or more processors, cause a computer to: store the three-dimensional point cloud including the imputed data on the computer readable storage medium. (paragraph [0093] reciting “In Example 16 a system for filtering digital elevation map (DEM) data includes an ingest processor to ingest digital elevation map (DEM) data and intensity data from a panchromatic (PAN) or multi-spectral (MS) image, a local interpolation processor to fill in voids in the ingested DEM data using local interpolation to create interpolated DEM data, a shadow map processor to create a shadow map based on the received intensity data, a height correction processor to modify, using the created shadow map, a height of one or more pixels in the interpolated DEM data to create modified DEM data, and a memory to store the modified DEM data.”)
14. Regarding Claim 8 (Currently amended), Goodman discloses A computer-implemented method (paragraph [0014] reciting “FIG. 10 illustrates, by way of example, a logical block diagram of an embodiment of a machine within which instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.”) for imputing data using a generative adversarial network, the computer-implemented method comprising:
obtaining a three-dimensional point cloud having one or more gaps; (paragraph [0017] reciting “DEMs can include holes and/or incorrectly interpolated points. This is true of a single aspect DEM created by a passive 3D point cloud generator using multiple images from differing aspects. Embodiments discussed herein can help correct the holes and/or incorrectly interpolated points in the 3D point cloud. In one or more embodiments, edges, foliage, and shadows present in the PAN image can be used to decide if and how much a point altitude of the DEM should be adjusted. Embodiments discussed herein describe the altitude (height) correction using multiple adaptive techniques, such as to fill in “holes” in 3D point clouds from data from a single view (a single PAN/MS image).” The DEM 3D point cloud has holes/gaps.)
imputing both (i) RGB colorspace data and (ii) elevation data into the gaps of the three-dimensional point cloud (paragraph [0017] reciting “… In one or more embodiments, edges, foliage, and shadows present in the PAN image can be used to decide if and how much a point altitude of the DEM should be adjusted. Embodiments discussed herein describe the altitude (height) correction using multiple adaptive techniques, such as to fill in “holes” in 3D point clouds from data from a single view (a single PAN/MS image).”;
paragraph [0018] reciting “Embodiments can provide an output filtered DEM file (e.g., an image file) that includes filled in and/or corrected heights. Additionally or alternatively, embodiments can provide a binary point format (BPF) point cloud file created at a specified or default resolution for each input file. This BPF file is in pixel (line, sample) space and is included for comparison purposes. The projection to world-space can be performed after a DEM adjustment process is performed.”
The DEM 3D point cloud’s voids are filled in with corrected heights (elevation data). Imputing means filling in missing data, and filling the voids of a DEM 3D point cloud corresponds to imputing altitude (height) data into the 3D point cloud of the DEM.) and displaying the three-dimensional point cloud including the imputed data in a display device of a user. (paragraph [0088] reciting “In Example 11 a method for filtering digital elevation map (DEM) data includes ingesting digital elevation map (DEM) data and intensity data from a panchromatic (PAN) or multi-spectral (MS) image, filling in voids in the ingested DEM data using local interpolation to create interpolated DEM data, creating a shadow map based on the received intensity data, modifying, using the created shadow map, a height of one or more pixels in the interpolated DEM data to create modified DEM data, and providing signals to a display that cause an image to be displayed based on the modified DEM data.” The interpolated DEM 3D point cloud is displayed on a user device’s display screen.)
While not explicitly disclosed by Goodman, Gavriil discloses by analyzing the three-dimensional point cloud using the initialized generative adversarial network; (Abstract reciting “… In this paper we consider a state-of-the-art machine learning model for image inpainting, namely a Wasserstein Generative Adversarial Network based on a fully convolutional architecture with a contextual attention mechanism. We show that this model can successfully be transferred to the setting of digital elevation models (DEMs) for the purpose of generating semantically plausible data for filling voids. …”;
page 3, section C. Model Architecture reciting “The proposed DEM void filling generative model G is an adaptation of the generative image inpainting model presented in [13]; see Figure 2. This model demonstrates promising results for texture-like images, which we …”;
page 2, III. Methodology reciting “We will consider preprocessed digital elevation/surface models in GeoTIFF format, in which the data forms a grid with a single height value for every position (i, j). Let D = (d_p) ∈ R^(m×n) be a partial digital elevation model, where p is an abbreviation for pixel referring to the coordinates (i, j) of a point on the DEM grid and d_p is the corresponding height value. Partial means that some pixel values are considered void. A binary matrix M = (m_p) ∈ {0, 1}^(m×n) acts as a mask representing the void regions of D. We refer to pixels p for which m_p = 0 as known, and unknown otherwise. …”
Therefore, by using a GAN (generative adversarial network), the DEM’s voids are filled, and the filling requires generating pixels with height values, which correspond to elevation data.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman with Gavriil so that the generative adversarial network is used to perform the void filling disclosed in Goodman. This is an obviously beneficial modification since Goodman pertains to DEM 3D point cloud void filling and Gavriil provides a GAN to perform such a task, which expedites and facilitates the void filling process through machine learning.
While the combination of Goodman and Gavriil does not explicitly disclose, Lang discloses initializing the generative adversarial network using stored weights; (paragraph [0289] reciting “Furthermore, in other aspects of the invention, the system has for example both an attack tree based neural net attacker and a defender tree based neural net defender. Both interact with the environment as actors. In an example, the present invention uses Generative Adversarial Networks (GANs). In an example, the present invention uses multi-agent reinforcement learning. These machine learning techniques are known to those skilled in the art of machine learning. For example, one side's neural net weights are fixed while the other side is trained. Then the other side's neural net weights are fixed while the one side is trained. Over time both sides learn from each other and become more successful and refined, thus creating a self-learning system.” Thus, a single GAN starts out with stored weights which are then trained against another GAN in order for the weights to gain self-learning capabilities.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman and Gavriil with Lang so that the generative adversarial network has stored weights which are then trained through adversarial training. Gavriil discloses training a GAN, and a trainable GAN necessarily has weights that can be stored and used to initialize it to perform the void filling of Gavriil.
While the combination of Goodman, Gavriil, and Lang does not explicitly disclose, Lim discloses imputing both (i) (paragraph [0015] reciting “…Typical GANs are used with 2D images, but the GAN described herein is used with 3D point clouds. For example, the generator is configured to generate 3D data points and interpolate (meaning to map, insert, add, or fill in) the generated data points into a received low resolution point cloud to produce a super-resolved point cloud. …”) RGB colorspace data (paragraph [0070] reciting “Optionally, the one or more characteristics of the data points in the low resolution point cloud on which the generator of the GAN is configured to generate the generated data points includes one or more of 3D position coordinates, intensities, colors, or relative positions of the data points.” Therefore, the holes or gaps of the point cloud can be filled using a GAN to generate points that include color data.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman, Gavriil, and Lang with Lim so that the GAN is further used to generate color data for the interpolated points in the point cloud of Goodman, Gavriil, and Lang. This is an obviously beneficial modification since filling gaps with interpolated data including color data results in a more accurate DEM with proper elevation and color data points.
15. Regarding Claim 9 (Original), Goodman further discloses The computer-implemented method of claim 8, further comprising: storing the three-dimensional point cloud including the imputed data on a computer readable storage medium. (paragraph [0093] reciting “In Example 16 a system for filtering digital elevation map (DEM) data includes an ingest processor to ingest digital elevation map (DEM) data and intensity data from a panchromatic (PAN) or multi-spectral (MS) image, a local interpolation processor to fill in voids in the ingested DEM data using local interpolation to create interpolated DEM data, a shadow map processor to create a shadow map based on the received intensity data, a height correction processor to modify, using the created shadow map, a height of one or more pixels in the interpolated DEM data to create modified DEM data, and a memory to store the modified DEM data.”)
16. Regarding Claim 15 (Currently amended), Goodman discloses A computing system (Abstract reciting “Discussed herein are systems, devices, and methods for filtering digital elevation map (DEM) data. …”), the system comprising: one or more processors; and a memory storing instructions that, when executed by the one or more processors, cause the computing system to: (paragraph [0060] reciting “Certain embodiments are described herein as including logic or a number of components, modules, processors, or mechanisms. Modules may constitute either software modules (e.g., code embodied (1) on a non-transitory machine-readable medium or (2) in a transmission signal) or hardware-implemented modules. …”)
obtain a three-dimensional point cloud having one or more gaps; (paragraph [0017] reciting “DEMs can include holes and/or incorrectly interpolated points. This is true of a single aspect DEM created by a passive 3D point cloud generator using multiple images from differing aspects. Embodiments discussed herein can help correct the holes and/or incorrectly interpolated points in the 3D point cloud. In one or more embodiments, edges, foliage, and shadows present in the PAN image can be used to decide if and how much a point altitude of the DEM should be adjusted. Embodiments discussed herein describe the altitude (height) correction using multiple adaptive techniques, such as to fill in “holes” in 3D point clouds from data from a single view (a single PAN/MS image).” The DEM 3D point cloud has holes/gaps.)
and impute both (i) RGB colorspace data and (ii) elevation data into the gaps of the three-dimensional point cloud (paragraph [0017] reciting “… In one or more embodiments, edges, foliage, and shadows present in the PAN image can be used to decide if and how much a point altitude of the DEM should be adjusted. Embodiments discussed herein describe the altitude (height) correction using multiple adaptive techniques, such as to fill in “holes” in 3D point clouds from data from a single view (a single PAN/MS image).”;
paragraph [0018] reciting “Embodiments can provide an output filtered DEM file (e.g., an image file) that includes filled in and/or corrected heights. Additionally or alternatively, embodiments can provide a binary point format (BPF) point cloud file created at a specified or default resolution for each input file. This BPF file is in pixel (line, sample) space and is included for comparison purposes. The projection to world-space can be performed after a DEM adjustment process is performed.”
The DEM 3D point cloud’s voids are filled in with corrected heights (elevation data). Imputing means filling in missing data, and filling the voids of a DEM 3D point cloud corresponds to imputing altitude (height) data into the 3D point cloud of the DEM.) and display the three-dimensional point cloud including the imputed data in a display device of a user. (paragraph [0088] reciting “In Example 11 a method for filtering digital elevation map (DEM) data includes ingesting digital elevation map (DEM) data and intensity data from a panchromatic (PAN) or multi-spectral (MS) image, filling in voids in the ingested DEM data using local interpolation to create interpolated DEM data, creating a shadow map based on the received intensity data, modifying, using the created shadow map, a height of one or more pixels in the interpolated DEM data to create modified DEM data, and providing signals to a display that cause an image to be displayed based on the modified DEM data.” The interpolated DEM 3D point cloud is displayed on a user device’s display screen.)
While not explicitly disclosed by Goodman, Gavriil discloses for imputing data using a generative adversarial network, (Abstract reciting “… In this paper we consider a state-of-the-art machine learning model for image inpainting, namely a Wasserstein Generative Adversarial Network based on a fully convolutional architecture with a contextual attention mechanism. We show that this model can successfully be transferred to the setting of digital elevation models (DEMs) for the purpose of generating semantically plausible data for filling voids. …”;
page 3, section C. Model Architecture reciting “The proposed DEM void filling generative model G is an adaptation of the generative image inpainting model presented in [13]; see Figure 2. This model demonstrates promising results for texture-like images, which we …”;
page 2, III. Methodology reciting “We will consider preprocessed digital elevation/surface models in GeoTIFF format, in which the data forms a grid with a single height value for every position (i, j). Let D = (d_p) ∈ R^(m×n) be a partial digital elevation model, where p is an abbreviation for pixel referring to the coordinates (i, j) of a point on the DEM grid and d_p is the corresponding height value. Partial means that some pixel values are considered void. A binary matrix M = (m_p) ∈ {0, 1}^(m×n) acts as a mask representing the void regions of D. We refer to pixels p for which m_p = 0 as known, and unknown otherwise. …” Therefore, by using a GAN (generative adversarial network), the DEM’s voids are filled, and the filling requires generating pixels with height values, which correspond to elevation data.)
by analyzing the three-dimensional point cloud using the initialized generative adversarial network; (Abstract reciting “… In this paper we consider a state-of-the-art machine learning model for image inpainting, namely a Wasserstein Generative Adversarial Network based on a fully convolutional architecture with a contextual attention mechanism. We show that this model can successfully be transferred to the setting of digital elevation models (DEMs) for the purpose of generating semantically plausible data for filling voids. …”;
page 3, section C. Model Architecture reciting “The proposed DEM void filling generative model G is an adaptation of the generative image inpainting model presented in [13]; see Figure 2. This model demonstrates promising results for texture-like images, which we …”;
page 2, III. Methodology reciting “We will consider preprocessed digital elevation/surface models in GeoTIFF format, in which the data forms a grid with a single height value for every position (i, j). Let D = (d_p) ∈ R^(m×n) be a partial digital elevation model, where p is an abbreviation for pixel referring to the coordinates (i, j) of a point on the DEM grid and d_p is the corresponding height value. Partial means that some pixel values are considered void. A binary matrix M = (m_p) ∈ {0, 1}^(m×n) acts as a mask representing the void regions of D. We refer to pixels p for which m_p = 0 as known, and unknown otherwise. …” Therefore, by using a GAN (generative adversarial network), the DEM’s voids are filled, and the filling requires generating pixels with height values, which correspond to elevation data.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman with Gavriil so that the generative adversarial network is used to perform the void filling disclosed in Goodman. This is an obviously beneficial modification since Goodman pertains to DEM 3D point cloud void filling and Gavriil provides a GAN to perform such a task, which expedites and facilitates the void filling process through machine learning.
While the combination of Goodman and Gavriil does not explicitly disclose, Lang discloses initialize the generative adversarial network using stored weights; (paragraph [0289] reciting “Furthermore, in other aspects of the invention, the system has for example both an attack tree based neural net attacker and a defender tree based neural net defender. Both interact with the environment as actors. In an example, the present invention uses Generative Adversarial Networks (GANs). In an example, the present invention uses multi-agent reinforcement learning. These machine learning techniques are known to those skilled in the art of machine learning. For example, one side's neural net weights are fixed while the other side is trained. Then the other side's neural net weights are fixed while the one side is trained. Over time both sides learn from each other and become more successful and refined, thus creating a self-learning system.” Thus, a single GAN starts out with stored weights which are then trained against another GAN in order for the weights to gain self-learning capabilities.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman and Gavriil with Lang so that the generative adversarial network has stored weights which are then trained through adversarial training. Gavriil discloses training a GAN, and a trainable GAN necessarily has weights that can be stored and used to initialize it to perform the void filling of Gavriil.
While the combination of Goodman, Gavriil, and Lang does not explicitly disclose, Lim discloses impute both (i) (paragraph [0015] reciting “…Typical GANs are used with 2D images, but the GAN described herein is used with 3D point clouds. For example, the generator is configured to generate 3D data points and interpolate (meaning to map, insert, add, or fill in) the generated data points into a received low resolution point cloud to produce a super-resolved point cloud. …”) RGB colorspace data (paragraph [0070] reciting “Optionally, the one or more characteristics of the data points in the low resolution point cloud on which the generator of the GAN is configured to generate the generated data points includes one or more of 3D position coordinates, intensities, colors, or relative positions of the data points.” Therefore, the holes or gaps of the point cloud can be filled using a GAN to generate points that include color data.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman, Gavriil, and Lang with Lim so that the GAN is further used to generate color data for the interpolated points in the point cloud of Goodman, Gavriil, and Lang. This is an obviously beneficial modification since filling gaps with interpolated data including color data results in a more accurate DEM with proper elevation and color data points.
17. Claims 3, 6-7, 10, 13-14, 16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Goodman in view of Gavriil in view of Lang in view of Lim and further in view of Lukac et al. (US Patent Application Publication No. 2020/0105059 A1).
18. Regarding Claim 3 (Original), while the combination of Goodman, Gavriil, Lang, and Lim does not explicitly disclose, Lukac discloses The non-transitory computer readable storage medium of claim 1, having stored thereon further instructions that, when executed by one or more processors, cause a computer to: generate the three-dimensional point cloud using a structure-from-motion technique. (paragraph [0054] reciting “Generally, structure from motion techniques refer to processes for reconstructing a 3D structure from its projections into a collection of photographs or images taken from different viewpoints (e.g., different camera poses). Various different visual features can be tracked, such as corner points (e.g., edges with gradients in multiple directions). These visual features are tracked from one photograph to another, and their trajectories over time are used to determine a 3D reconstruction of the portion of the environment captured by the photographs.”;
paragraph [0057] reciting “… The output of the reconstruction stage are the camera pose estimates for the photographs and the reconstructed scene structure as a set of scene points. This set of scene points is also referred to as a 3D point cloud reconstruction.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman, Gavriil, Lang, and Lim with Lukac so that the point cloud of the DEM is generated using structure from motion. This is an obviously beneficial modification since Goodman explicitly calls for a 3D point cloud to generate the DEM and Lukac provides a method of acquiring such a 3D point cloud.
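As a hypothetical illustration of the structure-from-motion technique Lukac describes (not evidence of record), a minimal two-view reconstruction with OpenCV tracks visual features across two photographs, recovers the relative camera pose, and triangulates the tracked features into a 3D point cloud; the image and intrinsics inputs are placeholders.

    # Illustrative only: minimal two-view structure from motion with OpenCV.
    # img1/img2 are grayscale photographs and K is the camera intrinsics.
    import cv2
    import numpy as np

    def two_view_point_cloud(img1, img2, K):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)  # visual features, image 1
        k2, d2 = sift.detectAndCompute(img2, None)  # visual features, image 2
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
        p1 = np.float32([k1[m.queryIdx].pt for m in matches])
        p2 = np.float32([k2[m.trainIdx].pt for m in matches])

        E, _ = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, p1, p2, K)  # relative camera pose

        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        pts = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
        return (pts[:3] / pts[3]).T                 # N x 3 scene points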
19. Regarding Claim 6 (Original), Goodman further discloses The non-transitory computer readable storage medium of claim 3, wherein the three-dimensional point cloud comprises a plurality of points and each point comprises a three-dimensional coordinate value and an RGB color value. (paragraph [0022] reciting “The operation 102 can include reading the data from the PAN/MS image. The data can include image type (MS/PAN), resolution (number of pixels or density of pixels), intensity, position (X, Y, and/or Z), and/or color data. The data can be stored locally in a cache or other memory. The data can be translated into a format that makes it easier to retrieve specific data, such as for processing using the method 100.”)
20. Regarding Claim 7 (Original), Lukac further discloses The non-transitory computer readable storage medium of claim 6, wherein each point further comprises GPS position data. (paragraph [0017] reciting “… Structure from motion reconstruction is first performed with bundle adjustment without using GPS locations of the photographs, resulting in a 3D point cloud with camera parameters relative to each other. Then, robust estimation of similarity transformation into the world coordinates is applied using, for example, a least median of squares algorithm. The transformation is estimated between relative camera positions and their corresponding positions in real world coordinates (e.g., known from GPS metadata associated with the photographs). …” It would have been obvious to attach GPS location metadata to the points of the 3D point cloud because this allows the user to comprehend the location of the object in the 3D point cloud.)
21. Regarding Claim 10 (Original), while the combination of Goodman, Gavriil, Lang, and Lim does not explicitly disclose, Lukac discloses The computer-implemented method of claim 8, wherein obtaining the three-dimensional point cloud having the one or more gaps includes generating the three-dimensional point cloud using a structure-from-motion technique. (paragraph [0054] reciting “Generally, structure from motion techniques refer to processes for reconstructing a 3D structure from its projections into a collection of photographs or images taken from different viewpoints (e.g., different camera poses). Various different visual features can be tracked, such as corner points (e.g., edges with gradients in multiple directions). These visual features are tracked from one photograph to another, and their trajectories over time are used to determine a 3D reconstruction of the portion of the environment captured by the photographs.”;
paragraph [0057] reciting “… The output of the reconstruction stage are the camera pose estimates for the photographs and the reconstructed scene structure as a set of scene points. This set of scene points is also referred to as a 3D point cloud reconstruction.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman, Gavriil, Lang, and Lim with Lukac so that the point cloud of the DEM is generated using structure from motion. This is an obviously beneficial modification since Goodman explicitly calls for a 3D point cloud to generate the DEM and Lukac provides a method of acquiring such a 3D point cloud.
22. Regarding Claim 13 (Original), Goodman further discloses The computer-implemented method of claim 8, wherein the three-dimensional point cloud comprises a plurality of points and each point comprises a three-dimensional coordinate value and an RGB color value. (paragraph [0022] reciting “The operation 102 can include reading the data from the PAN/MS image. The data can include image type (MS/PAN), resolution (number of pixels or density of pixels), intensity, position (X, Y, and/or Z), and/or color data. The data can be stored locally in a cache or other memory. The data can be translated into a format that makes it easier to retrieve specific data, such as for processing using the method 100.”)
23. Regarding Claim 14 (Original), Lukac further discloses The computer-implemented method of claim 13, wherein each point further comprises GPS position data. (paragraph [0017] reciting “… Structure from motion reconstruction is first performed with bundle adjustment without using GPS locations of the photographs, resulting in a 3D point cloud with camera parameters relative to each other. Then, robust estimation of similarity transformation into the world coordinates is applied using, for example, a least median of squares algorithm. The transformation is estimated between relative camera positions and their corresponding positions in real world coordinates (e.g., known from GPS metadata associated with the photographs). …” It would have been obvious to attach GPS location metadata to the points of the 3D point cloud because this allows the user to comprehend the location of the object in the 3D point cloud.)
24. Regarding Claim 16 (Original), while the combination of Goodman, Gavriil, Lang, and Lim does not explicitly disclose, Lukac discloses The computing system of claim 15, wherein obtaining the three-dimensional point cloud having the one or more gaps includes generating the three-dimensional point cloud using a structure-from-motion technique. (paragraph [0054] reciting “Generally, structure from motion techniques refer to processes for reconstructing a 3D structure from its projections into a collection of photographs or images taken from different viewpoints (e.g., different camera poses). Various different visual features can be tracked, such as corner points (e.g., edges with gradients in multiple directions). These visual features are tracked from one photograph to another, and their trajectories over time are used to determine a 3D reconstruction of the portion of the environment captured by the photographs.”;
paragraph [0057] reciting “… The output of the reconstruction stage are the camera pose estimates for the photographs and the reconstructed scene structure as a set of scene points. This set of scene points is also referred to as a 3D point cloud reconstruction.”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman, Gavriil, Lang, and Lim with Lukac so that the point cloud of the DEM is generated using structure from motion. This is an obviously beneficial modification since Goodman explicitly calls for a 3D point cloud to generate the DEM and Lukac provides a method of acquiring such a 3D point cloud.
25. Regarding Claim 19 (Original), Goodman further discloses The computing system of claim 16, wherein the memory stores further instructions that, when executed by the one or more processors, cause the system to: store the three-dimensional point cloud including the imputed data on a computer readable storage medium. (paragraph [0093] reciting “In Example 16 a system for filtering digital elevation map (DEM) data includes an ingest processor to ingest digital elevation map (DEM) data and intensity data from a panchromatic (PAN) or multi-spectral (MS) image, a local interpolation processor to fill in voids in the ingested DEM data using local interpolation to create interpolated DEM data, a shadow map processor to create a shadow map based on the received intensity data, a height correction processor to modify, using the created shadow map, a height of one or more pixels in the interpolated DEM data to create modified DEM data, and a memory to store the modified DEM data.”)
26. Regarding Claim 20 (Original), Goodman further discloses The computing system of claim 16, wherein the three-dimensional point cloud comprises a plurality of points and each point comprises a three-dimensional coordinate value and an RGB color value. (paragraph [0022] reciting “The operation 102 can include reading the data from the PAN/MS image. The data can include image type (MS/PAN), resolution (number of pixels or density of pixels), intensity, position (X, Y, and/or Z), and/or color data. The data can be stored locally in a cache or other memory. The data can be translated into a format that makes it easier to retrieve specific data, such as for processing using the method 100.”)
27. Claims 4, 11, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Goodman in view of Gavriil in view of Lang in view of Lim in view of Lukac and further in view of Chon et al. (US Patent Application Publication No. 2015/0199839 A1).
28. Regarding Claim 4 (Original), while the combination of Goodman, Gavriil, Lang, Lim, and Lukac does not explicitly disclose, Chon discloses The non-transitory computer readable storage medium of claim 3, wherein the gaps comprise implicit gaps. (paragraph [0024] reciting “… The input point cloud may be "noisy" or include data points that do not reflect objects in the images but are artifacts of the data collection.” Noisy points are artifacts of data collection, so they constitute implicit gaps created during collection of the 3D point cloud.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman, Gavriil, Lang, Lim, and Lukac with Chon so that the obtained point cloud includes implicit gaps arising from noise artifacts of the data collection. Such artifacts are an expected outcome of capturing images with digital image capturing devices, and such holes ought to be filled using the teachings of Goodman as modified by Gavriil. This is obviously beneficial since all gaps, whether explicit or implicit, must be filled to create a correct and accurate 3D point cloud.
29. Regarding Claim 11 (Original), while the combination of Goodman, Gavriil, Lang, Lim, and Lukac does not explicitly disclose, Chon discloses The computer-implemented method of claim 10, wherein the gaps comprise implicit gaps. (paragraph [0024] reciting “… The input point cloud may be "noisy" or include data points that do not reflect objects in the images but are artifacts of the data collection.” Noisy points are artifacts of data collection, so they constitute implicit gaps created during collection of the 3D point cloud.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman, Gavriil, Lang, Lim, and Lukac with Chon so that the obtained point cloud includes implicit gaps arising from noise artifacts of the data collection. Such artifacts are an expected outcome of capturing images with digital image capturing devices, and such holes ought to be filled using the teachings of Goodman as modified by Gavriil. This is obviously beneficial since all gaps, whether explicit or implicit, must be filled to create a correct and accurate 3D point cloud.
30. Regarding Claim 17 (Original), while the combination of Goodman, Gavriil, Lang, Lim, and Lukac does not explicitly disclose, Chon discloses The computing system of claim 16, wherein the gaps comprise implicit gaps. (paragraph [0024] reciting “… The input point cloud may be "noisy" or include data points that do not reflect objects in the images but are artifacts of the data collection.” Noisy points are artifacts of data collection, so they constitute implicit gaps created during collection of the 3D point cloud.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman, Gavriil, Lang, Lim, and Lukac with Chon so that the obtained point cloud includes implicit gaps arising from noise artifacts of the data collection. Such artifacts are an expected outcome of capturing images with digital image capturing devices, and such holes ought to be filled using the teachings of Goodman as modified by Gavriil. This is obviously beneficial since all gaps, whether explicit or implicit, must be filled to create a correct and accurate 3D point cloud.
31. Claims 5, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Goodman in view of Gavriil in view of Lang in view of Lim in view of Lukac and further in view of Huang et al. (US Patent Application Publication No. 2017/0053438 A1).
32. Regarding Claim 5 (Original), while the combination of Goodman, Gavriil, Lang, Lim, and Lukac does not explicitly disclose, Huang discloses The non-transitory computer readable storage medium of claim 3, wherein the gaps comprise explicit gaps. (paragraph [0005] reciting “Second, quality of existing point clouds will not be analyzed. It is well known that the point clouds have been scanned is generally accompanied with massive noise, outliers and holes under the influence of accuracy of a scanner, disturbance from surrounding, and shield as well as material of the object to be scanned. The conventional automatic scanning technique does not process and analyze inherent flaws of the point cloud data, and only simply uses the extent of point clouds covering the source model as guidance, and therefore it is for sure to have a considerable difference between the automatically scanned point cloud model and the real model.” Flaws in point clouds caused by the scanning technique correspond to explicit gaps.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman, Gavriil, Lang, Lim, and Lukac with Huang so that holes caused by the automated scanning process are filled by the teachings of Goodman and Gavriil. This is obviously beneficial since all gaps, whether explicit or implicit, must be filled to create a correct and accurate 3D point cloud.
33. Regarding Claim 12 (Original), while the combination of Goodman, Gavriil, Lang, Lim, and Lukac does not explicitly disclose, Huang discloses The computer-implemented method of claim 10, wherein the gaps comprise explicit gaps. (paragraph [0005] reciting “Second, quality of existing point clouds will not be analyzed. It is well known that the point clouds have been scanned is generally accompanied with massive noise, outliers and holes under the influence of accuracy of a scanner, disturbance from surrounding, and shield as well as material of the object to be scanned. The conventional automatic scanning technique does not process and analyze inherent flaws of the point cloud data, and only simply uses the extent of point clouds covering the source model as guidance, and therefore it is for sure to have a considerable difference between the automatically scanned point cloud model and the real model.” Flaws in point clouds caused by the scanning technique correspond to explicit gaps.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman, Gavriil, Lang, Lim, and Lukac with Huang so that holes caused by the automated scanning process are filled by the teachings of Goodman and Gavriil. This is obviously beneficial since all gaps, whether explicit or implicit, must be filled to create a correct and accurate 3D point cloud.
34. Regarding Claim 18 (Original), while the combination of Goodman, Gavriil, Lang, Lim, and Lukac does not explicitly disclose, Huang discloses The computing system of claim 16, wherein the gaps comprise explicit gaps. (paragraph [0005] reciting “Second, quality of existing point clouds will not be analyzed. It is well known that the point clouds have been scanned is generally accompanied with massive noise, outliers and holes under the influence of accuracy of a scanner, disturbance from surrounding, and shield as well as material of the object to be scanned. The conventional automatic scanning technique does not process and analyze inherent flaws of the point cloud data, and only simply uses the extent of point clouds covering the source model as guidance, and therefore it is for sure to have a considerable difference between the automatically scanned point cloud model and the real model.” Flaws in point clouds caused by the scanning technique correspond to explicit gaps.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Goodman, Gavriil, Lang, Lim, and Lukac with Huang so that holes caused by the automated scanning process are filled by the teachings of Goodman and Gavriil. This is obviously beneficial since all gaps, whether explicit or implicit, must be filled to create a correct and accurate 3D point cloud.
Response to Arguments
35. Applicant’s arguments, see Remarks, filed on 2/25/2026, with respect to the rejections of claims 1-20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Lim. Lim discloses using a GAN to interpolate data, including color data, into the gaps in the point cloud. Therefore, the amended claims 1, 8, and 15 remain rejected.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
CONTACT
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK S CHEN whose telephone number is (571)270-7993. The examiner can normally be reached Mon - Fri 8-11:30 and 1:30-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FRANK S CHEN/Primary Examiner, Art Unit 2611