Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy was filed on 22 September 2025.
Response to Amendment
The amendment filed 12 September 2025 has been entered.
The amendment to the specification has been acknowledged.
The amendments to claims 1, 4, 5, and 7 have been acknowledged.
Response to Arguments
Applicant’s arguments, see page 8, section “Specification Objection”, filed 12 September 2025, with respect to the objection to the abstract have been fully considered and are persuasive. The objection to the specification has been withdrawn.
Applicant’s arguments, see pages 9 and 10, section “Claim Rejections 35 U.S.C. § 102 & § 103”, filed 12 September 2025, with respect to the rejection of claims 1, 5, and 7 have been fully considered but are not persuasive.
On page 9 of the response filed 12 September 2025, the applicant alleges that Qian et al. (U.S. Patent Publication No. 2017/0200309 A1, hereinafter “Qian”) merely performs corner detection directly on the original satellite image. This corner detection process, as described by the applicant, does not generate any new diagram or representation, but rather identifies features within the existing satellite image. The applicant further alleges that the newly amended claim limitation of “generate a pseudo-projection diagram that represents geometric features of a subject in a satellite image, independent of imaging wavelength, exposure time, and other characteristics specific to the artificial satellite” defines a specific technical operation that creates a new representation with particular properties. The examiner respectfully disagrees.
With respect to the applicant’s newly amended limitation above, it is understood by the examiner that the pseudo-projection diagram is a 2D representation of a 3D object within a satellite image. This diagram consists of geometric points (see Figures 3-5 of the applicant’s specification) that are further mapped to a previously generated projection diagram. The pseudo-projection diagram is generated independently of the imaging wavelength, exposure time, and other characteristics specific to the artificial satellite, but is still generated from the satellite image (see ¶ 0038 of the applicant’s specification).
Qian states in ¶ 0039 that in step S202, image analysis is performed on the satellite image to detect corners of objects within the satellite image. It is further stated that edge detection may be performed to determine certain boundaries. These corners are provided with an associated geolocation based on the satellite image metadata, which is then used in the following step of associating these corners with the corners detected in the 3D surface (mesh) model.
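For illustration only, corner and edge detection of the general kind described in ¶ 0039 of Qian may be sketched as follows; this is an assumed OpenCV-based sketch with a placeholder image path and parameter values, and is not drawn from the reference itself.

    # Illustrative sketch: corner and edge detection on a satellite image.
    # The image path and parameter values are assumptions, not from Qian.
    import cv2
    import numpy as np

    # Load a satellite image tile in grayscale (placeholder path).
    image = cv2.imread("satellite_tile.png", cv2.IMREAD_GRAYSCALE)

    # Detect corners of objects (e.g., building corners) within the image.
    corners = cv2.goodFeaturesToTrack(image, maxCorners=500,
                                      qualityLevel=0.01, minDistance=10)
    corners = corners.reshape(-1, 2)  # (N, 2) array of (x, y) pixel locations

    # Edge detection to determine boundaries of objects in the image.
    edges = cv2.Canny(image, threshold1=100, threshold2=200)

    # Each detected corner could then be tagged with a geolocation derived
    # from the satellite image metadata (e.g., an RPC model), as Qian describes.
    print(f"Detected {len(corners)} corners; edge map shape {edges.shape}")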
The collection of corners within the satellite image is understood by the examiner to read on the claimed “pseudo-projection diagram” under the broadest reasonable interpretation, as it constitutes the geometric bounds of buildings determined from the satellite image. While the applicant states on page 10 of the response that Qian does not contemplate or address scenarios where the satellite image and the 3D model projection have different imaging characteristics that would prevent direct correlation, this is not a specific limitation embodied in the claims. As written, the examiner maintains that Qian continues to read on the previously presented and newly amended claim limitations of the applicant’s disclosure.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 5, and 7 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Qian et al. (U.S. Patent Publication No. 2017/0200309 A1, hereinafter “Qian”).
Regarding claim 1, Qian teaches a position association system comprising:
a memory configured to store instructions (¶ 0023: A "computer-readable medium" refers to any storage device used for storing data accessible by a computer. Examples of a computer-readable medium may include: a magnetic hard disk; a floppy disk; an optical disk, such as a CD-ROM and a DVD; a magnetic tape; a flash removable memory; a memory chip; and/or other types of media that can store machine-readable instructions thereon.); and
a processor configured to execute the instructions (¶ 0021: Examples of a computer include: a stationary and/or portable computer; a computer having a single processor, multiple processors, or multi-core processors) to:
generate a projection diagram, which is a diagram obtained by projecting a three-dimensional shape of an object onto a two-dimensional plane along direction of a line of sight of a sensor of an artificial satellite (¶ 0027: In step 100, an overhead image is obtained of a cityscape. In the examples provided herein, the overhead image of the cityscape is a satellite image obtained by a satellite…), based on three-dimensional data which is data representing the three-dimensional shape of the object and posture information indicating posture of the sensor when the sensor images the object (¶ 0034: In step S106, the 3D surface (mesh) model is used to obtain a 2D image view of the 3D surface (mesh) model from a location corresponding to the satellite image location (the geolocation of the satellite when taking the picture of the cityscape). Conventional transformation functions may be used to perform a translation of the 3D surface (mesh) model to represent a viewpoint of the 3D model from the satellite image location.; ¶ 0038: In step S200, for those portions of the 3D surface (mesh) model that are visible from the satellite image location, corners within the 3D surface (mesh) model are detected.;);
generate a pseudo-projection diagram that represents geometric features of a subject in a satellite image, independent of imaging wavelength, exposure time, and other characteristics specific to the artificial satellite (¶ 0039: In step S202, image analysis is performed on the satellite image to detect corners of objects (e.g., buildings) within the satellite image. For example, edge detection may be performed to determine certain boundaries… Corners detected by image analysis of the satellite image are provided with an associated geolocation based on the satellite image metadata (providing a known geolocation of the satellite when the picture was taken, as well as the angle at which the picture was taken) as well as RPC model data.);
associate points in the projection diagram with points in the pseudo-projection diagram (Figure 2; ¶ 0040: In step S204, a correlation analysis is performed to match corners detected in the 3D surface (mesh) model with corners detected in the satellite image.);
derive a mapping that associates the point in the pseudo-projection diagram with points in the three-dimensional shape represented by the three-dimensional data, based on a result of association between the points in the projection diagram and the points in the pseudo-projection diagram (¶ 0041: In step S206, deviations between the 2D image view of the 3D surface (mesh) model and the satellite image in offset (amount and direction), scaling and/or warping may be used to provide adjusted RPC model variables (or other adjust other metadata associated with the satellite image(s)) to better represent the effects of the satellite camera on the image and thus obtain better geolocation information from all satellite images taken by the satellite image camera. As LiDAR geolocations (and thus the 3D surface (mesh) model geolocations) data are typically more accurate than calculated satellite geolocations, deviations between the 3D surface (mesh) model geolocations and the calculated satellite geolocations may safely be assumed to be an error attributable to the calculated geolocations from satellite images and metadata. This adjusted metadata may be used in future calculations to determine more accurate locations of features in all images taken by such this satellite camera.; ¶ 0046: In step 304, the portion of the image for the particular facade is associated with the surface of the corresponding facade in the 3D surface (mesh) model. For example, each tile or polygon of a mesh representing the surface of the facade in the 3D surface (mesh) model may adopt the corresponding piece of the portion of the image of the facade as its texture.); and
derive an association relation between points of object in the satellite image and the points in the three-dimensional shape represented by the three-dimensional data, based on the mapping (¶ 0043: After this alignment, the detected corners (or most of the detected corners) within the 2D image view of the 3D surface (mesh) model and the satellite image may be aligned to overlap with respective corresponding corners of the satellite image. Thus, corners within the satellite image (which may be assigned a pixel location) may be assigned an accurate 3D geolocation in step S208 by adopting the corresponding geolocation of the overlapping corner of the 3D surface (mesh) model.).
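For illustration only, the correlation described in ¶¶ 0040-0043 of Qian, in which corners projected from the 3D surface (mesh) model are matched against corners detected in the satellite image, may be sketched as follows; the pinhole-style projection matrix and nearest-neighbor matching rule shown are assumptions for illustration, not the exact method of the reference.

    # Illustrative sketch: project 3D model corners into the image plane and
    # match them against corners detected in the satellite image.
    # The projection matrix and matching threshold are assumptions, not from Qian.
    import numpy as np

    def project_points(points_3d, projection_matrix):
        """Project Nx3 world points to Nx2 image points with a 3x4 camera matrix."""
        homogeneous = np.hstack([points_3d, np.ones((len(points_3d), 1))])
        image_h = homogeneous @ projection_matrix.T      # (N, 3) homogeneous image points
        return image_h[:, :2] / image_h[:, 2:3]          # divide out the depth term

    def match_corners(projected, detected, max_dist=5.0):
        """Greedy nearest-neighbor match between projected and detected corners."""
        matches = []
        for i, p in enumerate(projected):
            dists = np.linalg.norm(detected - p, axis=1)
            j = int(np.argmin(dists))
            if dists[j] <= max_dist:
                matches.append((i, j))                   # (model corner idx, image corner idx)
        return matches

    # Placeholder data: corners of the 3D surface model and corners found in the image.
    model_corners_3d = np.random.rand(20, 3) * 100.0
    camera = np.hstack([np.eye(3), [[0.0], [0.0], [200.0]]])   # toy 3x4 projection matrix
    image_corners = project_points(model_corners_3d, camera) + np.random.randn(20, 2)

    pairs = match_corners(project_points(model_corners_3d, camera), image_corners)
    print(f"Matched {len(pairs)} of {len(model_corners_3d)} model corners to image corners.")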
Regarding claim 5, claim 5 has been analyzed with regard to claim 1 and is rejected for the same reasons as set forth above.
Regarding claim 7, claim 7 has been analyzed with regard to claim 1 and is rejected for the same reasons as set forth above, as well as in accordance with Qian’s further teaching of:
A non-transitory computer-readable recording medium in which a position association program is recorded, wherein the position association program causes a computer to execute (¶ 0060: The computer, a computer system and/or network may be configured with non-transitory computer readable media to cause performance of the methods described herein.):
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-4, 6, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Qian et al. (U.S. Patent Publication No. 2017/0200309 A1, hereinafter “Qian”) in view of Son et al. (U.S. Patent Publication No. 2022/014817 A1, hereinafter “Son”).
Regarding claim 2, Qian teaches the position association system according to claim 1.
Additionally, Qian teaches wherein the processor generates the pseudo-projection diagram, as set forth in the rejection of claim 1 above.
Qian does not explicitly teach that the pseudo-projection diagram is generated by performing an image-to-image translation.
However, Son does teach performing an image-to-image translation (¶ 0025: The method may include generating a synthesized image by applying the input data to the trained image generation neural network, and outputting the synthesized image.; ¶ 0054: The input image x 101 unprojected to the 3D space may be projected back to an arbitrary viewpoint and may be transformed into an image T(x,d) 107 of a new view. A training apparatus may transform the input image x 101 into the image T(x,d) 107 of the new view based on transformation information T in consideration of 3D. The above-described transformation may be referred to as "3D transformation" and may be expressed by T(x,d).; However, most of the conditional information x 101, for example, semantic segmentation, an edge, or a skeleton, used as an input in a large number of image-to-image transformation may correspond to a form in which structural information of an input image is abstracted.).
Qian and Son are considered to be analogous art, as both pertain to 3D image transformations. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the system of using satellite imagery to enhance a 3D surface model (as taught by Qian) with the computing method and apparatus with image generation (as taught by Son). The motivation for this combination of references is that the system of Son performs training in which the image generation neural network may generate a fake image with a striking similarity to the real image, such that it is difficult to distinguish between a real image and a fake image, which improves the discrimination ability of the image discrimination neural network (see ¶ 0050).
This motivation for the combination of Qian and Son is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. See MPEP 2141(III).
Regarding claim 3, the Qian and Son combination teaches the position association system according to claim 2.
Additionally, Son teaches wherein the processor performs the image-to-image translation using an image-to-image translation model based on deep learning (¶ 0050: The image discrimination neural network 130 may aim to discriminate a real image of the training data from a fake image, for example, the first synthesized image G(x) 120, which is generated by the image generation neural network 110.).
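For illustration only, an image-to-image translation model based on deep learning of the general kind discussed in Son (a generator producing synthesized images, trained against a discriminator that distinguishes real images from fakes, ¶ 0050) may be sketched as follows; the layer choices and sizes are assumptions for illustration and do not represent Son’s architecture.

    # Illustrative sketch: a minimal generator/discriminator pair for
    # image-to-image translation. Layer sizes are assumptions, not from Son.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Maps an input image to a synthesized (translated) image."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 3, kernel_size=3, padding=1), nn.Tanh(),
            )
        def forward(self, x):
            return self.net(x)

    class Discriminator(nn.Module):
        """Scores whether an image is real or synthesized by the generator."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
            )
        def forward(self, x):
            return self.net(x)

    generator, discriminator = Generator(), Discriminator()
    satellite_batch = torch.randn(2, 3, 64, 64)       # placeholder input images
    synthesized = generator(satellite_batch)          # translated representation
    real_score = discriminator(satellite_batch)       # discriminator applied to real images
    fake_score = discriminator(synthesized.detach())  # discriminator applied to synthesized images
    print(synthesized.shape, real_score.shape, fake_score.shape)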
Regarding claim 4, the Qian and Son combination teaches the position association system according to claim 3.
Additionally, Son teaches wherein the processor captures the geometric features of the subject in the satellite image, independent of imaging wavelength, exposure time, and other characteristics specific to the artificial satellite, by using the image-to-image translation (¶ 0025: The method may include generating a synthesized image by applying the input data to the trained image generation neural network, and outputting the synthesized image.; ¶ 0052: By defining the concept of three-dimensional (3D) geometry consistency, an image may be transformed so that geometry in the image may be preserved. The term "geometry consistency" used herein may be construed to mean that geometric information before and after transformation of an image remain unchanged. The geometric information may include, for example, structure information such as semantic information, edge information, and skeleton information, but is not necessarily limited thereto. The edge information may correspond to two-dimensional (2D) appearance information of an object included in an input image, and the skeleton information may correspond to 3D pose information of an object included in an input image.).
Additionally, Qian teaches satellite images (¶ 0027: In step 100, an overhead image is obtained of a cityscape. In the examples provided herein, the overhead image of the cityscape is a satellite image obtained by a satellite…).
Regarding claim 6, claim 6 has been analyzed with regard to claim 2 and is rejected for the same reasons of obviousness as used above.
Regarding claim 8, claim 8 has been analyzed with regard to claim 2 and is rejected for the same reasons of obviousness as used above.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW JONES whose telephone number is (703)756-4573. The examiner can normally be reached Monday through Friday, 8:00-5:00 EST, off every other Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW B. JONES/Examiner, Art Unit 2667
/MATTHEW C BELLA/Supervisory Patent Examiner, Art Unit 2667