Prosecution Insights
Last updated: April 19, 2026
Application No. 18/981,167

SYSTEM AND METHOD FOR PROVIDING IMPROVED GEOCODED REFERENCE DATA TO A 3D MAP REPRESENTATION

Non-Final Office Action — grounds: §103, nonstatutory double patenting (DP)
Filed: Dec 13, 2024
Examiner: HANSELL JR., RICHARD A
Art Unit: 2486
Tech Center: 2400 — Computer Networks
Assignee: VANTOR SWEDEN AB
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% — above average (368 granted / 487 resolved; +17.6% vs TC avg)
Interview Lift: +28.1% for resolved cases with interview (strong)
Typical Timeline: 2y 10m avg prosecution; 45 currently pending
Career History: 532 total applications across all art units
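The headline allow rate follows directly from the career counts above. A quick arithmetic check (a sketch only — the dashboard's exact rounding and methodology are not documented here):

```python
# Career allow rate from the examiner's resolved-case counts listed above.
granted = 368
resolved = 487

allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 75.6%, which the dashboard rounds to 76%
```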

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 52.1% (+12.1% vs TC avg)
§102: 10.3% (-29.7% vs TC avg)
§112: 18.0% (-22.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 487 resolved cases.
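The per-statute deltas can be cross-checked the same way: each "vs TC avg" figure is the examiner's rate minus the Tech Center average estimate. A sketch (how the dashboard actually computes its baseline is an assumption); notably, all four pairs above are consistent with a single implied baseline near 40%:

```python
# Examiner's per-statute rates and reported deltas, as listed above (in %).
rates = {"101": 3.2, "103": 52.1, "102": 10.3, "112": 18.0}
deltas = {"101": -36.8, "103": 12.1, "102": -29.7, "112": -22.0}

# Implied Tech Center average per statute: rate minus reported delta.
implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```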

Office Action

Grounds of rejection: §103; nonstatutory double patenting (DP)
DETAILED ACTION

1. The communication is in response to the application received 12/13/2024, wherein claims 1-20 are pending and are examined as follows. This is a continuation of 17/662,856 (now U.S. Patent No. 12,196,552), which is a continuation of 17/164,013 (now U.S. Patent No. 11,747,141).

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

3. The information disclosure statements (IDS) were submitted on 12/13/2024. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Priority

4. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Double Patenting

5. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 8, 9, 10, 13, and 17 of U.S. Patent No. 12,196,552 B2 in view of Isaksson et al. US 2015/0363972 A1, hereinafter referred to as 552 and Isaksson, respectively. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the uncertainty measure of Isaksson with the disclosed system and method of 552 in order to allow a user of a 3D model to evaluate whether the model fulfills the requirements for a specific application or whether there are parts of the model that cannot be used to model the reality of said specific application (e.g. ¶0006). The claim mapping between claim sets is shown in Table 1 below for reference.

Claims 1-6, 8-11, 13-18, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3, 6-8, 12, and 13 of U.S. Patent No. 11,747,141 B2, hereinafter referred to as 141, in view of Isaksson. The motivation for combining Isaksson with the disclosure of 141 is the same as that presented above with respect to 552. The claim mapping between claim sets is shown in Table 2 below for reference.

Claim 6 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 3 of 141, in view of Haglund et al. US 2015/0243047 A1, hereinafter referred to as Haglund. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the method of Haglund for identifying differences between a 3D model of an environment and the environment in a 2D image (e.g. ¶0108) in a manner that can be performed in a highly automated way, so as to reduce the amount of time and/or workload an operator has to spend for identifying said differences (e.g. ¶0011). The claim mapping between claim sets is shown in Table 2 below for reference.

Table 1

**Note: The items below that are BOLD/UNDERLINED in the Instant Application/Co-pending Application, respectively, indicate differences in the claim limitation.

Instant Application 18/981,167 vs. U.S. Patent No. 12,196,552 B2 (17/662,856)

Claim 1: A system arranged to provide improved geocoded reference data to a 3D map representation, said system comprising: a storage memory including a 3D map representation, said 3D map representation comprising, for each of at least one geographical area, a textured 3D representation provided with geocoded reference data and formed based on imagery provided for that geographical area; and a processor coupled in communication with the storage memory and configured to, for a new image captured by an imaging device: determine that the new image belongs to one of the at least one geographical area; perform registration of the new image to the 3D map representation; determine corresponding points in the new image and the 3D map representation; determine displacement data for a plurality of 3D positions in the 3D map representation based on uncertainty data associated with the imaging device and the determined corresponding points, the uncertainty data relating to the new image; and modify the 3D map representation with the determined displacement data for the plurality of 3D positions, thereby improving the geocoded reference data in the 3D map representation without re-calculating the 3D map representation.

Note: See Isaksson below for support regarding "based on uncertainty data associated with the imaging device and the determined corresponding points…".
Claim 1 of 552 A system (200) arranged to provide improved geocoded reference data to a 3D map representation, said system (200) comprising: a storage (201) having stored thereupon a 3D map representation, said 3D map representation comprising for each of at least one geographical area, a textured 3D representation provided with geocoded reference data and formed based on imagery provided for that geographical area, said imagery being associated with information relating to at least one imaging device which has captured the imaging, said information comprising intrinsic and extrinsic parameters of said at least one imaging device, and a processor (208) configured to: receive at least one new image associated with information related to an imaging device which has captured the new image, said information comprising intrinsic and extrinsic parameters of the imaging device, determine that the new image belongs to at least one of the at least one geographical areas, perform registration of the new image to the 3D map representation, determine corresponding points in the new image and the 3D map representation, and determine displacement data for a plurality of 3D positions in the 3D map representation based only on the determined corresponding points in the new image and the 3D map representation, wherein: the determination of displacement data for a plurality of 3D positions in the 3D map representation for a plurality of 3D positions in the 3D map representation comprises weighting the influence from the new image against the influence from the 3D map representation and the weighting of the influence from the new image against the influence from the 3D map representation is based on at least one of: the specification of the imaging device(s) used, or reliability of relevant intrinsic and/or extrinsic parameters of the respective imaging device(s) used for the 3D map representation and/or the new image. 
Claim 2: The system according to claim 1, wherein the imagery includes an image set; and wherein the processor is configured, in determining corresponding points in the new image and the 3D map representation, to perform bundle adjustments between the new image and each image of the image set at least partly overlapping with the new image.

See claim 13 of 552: "and determining corresponding points in the new image and the 3D map representation comprises performing bundle adjustments between the new image and each image of the image set at least partly overlapping with the new image."

Claim 3: The system according to claim 1, wherein the processor is configured, in determining the displacement data, to weight influence from the new image against influence from the 3D map representation; and wherein the influence from the new image is based on the uncertainty data relating to the new image.

Note: Please refer to performing 'bundle adjustments' in claim 13 of 552 regarding "to weight influence from the new image against influence from the 3D map representation; and wherein the influence from the new image is based on the uncertainty data relating to the new image."

Claim 1 of 552: "…the determination of displacement data for a plurality of 3D positions in the 3D map representation for a plurality of 3D positions in the 3D map representation comprises weighting the influence from the new image against the influence from the 3D map representation and the weighting of the influence from the new image against the influence from the 3D map representation is based on at least one of: the specification of the imaging device(s) used, or reliability of relevant intrinsic and/or extrinsic parameters of the respective imaging device(s) used for the 3D map representation and/or the new image."

Claim 13 of 552: "and determining corresponding points in the new image and the 3D map representation comprises performing bundle adjustments between the new image and each image of the image set at least partly overlapping with the new image."

Claim 4: The system according to claim 3, wherein the uncertainty data depends on: a specification of the imaging device; and/or a reliability of relevant intrinsic and/or extrinsic parameters of the imaging device.

See claim 1 of 552: "and the weighting of the influence from the new image against the influence from the 3D map representation is based on at least one of: the specification of the imaging device(s) used, or reliability of relevant intrinsic and/or extrinsic parameters of the respective imaging device(s) used for the 3D map representation and/or the new image."

Claim 5: The system according to claim 1, wherein the uncertainty data is further related to the 3D map representation.

Not in 552. **See Isaksson below for support.

Claim 6: The system according to claim 1, wherein the processor is configured, in determining the displacement data, to weight influence from the new image against influence from the 3D map representation; and wherein the influence from the new image is based on the uncertainty data relating to the new image; and wherein the influence of the 3D map representation is based on the uncertainty data relating to the 3D map representation.

Note: Please refer to performing 'bundle adjustments' in claim 13 of 552 regarding the foregoing limitations.

See claim 1 of 552: "wherein: the determination of displacement data for a plurality of 3D positions in the 3D map representation for a plurality of 3D positions in the 3D map representation comprises weighting the influence from the new image against the influence from the 3D map representation and the weighting of the influence from the new image against the influence from the 3D map representation is based on at least one of: the specification of the imaging device(s) used, or reliability of relevant intrinsic and/or extrinsic parameters of the respective imaging device(s) used for the 3D map representation and/or the new image"

Claim 13 of 552: "and determining corresponding points in the new image and the 3D map representation comprises performing bundle adjustments between the new image and each image of the image set at least partly overlapping with the new image."

Claim 7: The system according to claim 6, wherein the uncertainty data relating to the 3D map representation depends on: a specification of other imaging device(s) used to capture the imagery; and/or a reliability of relevant intrinsic and/or extrinsic parameters of the other imaging device used to capture the imagery.

Claim 1 of 552: "and the weighting of the influence from the new image against the influence from the 3D map representation is based on at least one of: the specification of the imaging device(s) used, or reliability of relevant intrinsic and/or extrinsic parameters of the respective imaging device(s) used for the 3D map representation and/or the new image."

Claim 8: The system according to claim 6, wherein the uncertainty data is based on a number of images used in the 3D map representation for modelling an area covered by the new image.

Not in 552. **See Isaksson below for support.

Claim 9: The system according to claim 7, wherein the processor is configured to calculate updated 3D representation uncertainty data based on the displacement data; and wherein the updated 3D representation uncertainty data is stored as part of the 3D map representation.

Claim 10 of 552: "determining (160) displacement data for a plurality of 3D positions in the 3D map representation based only on the determined corresponding points in the new image and the 3D map representation, and calculating updated 3D representation uncertainty data based on the displacement data, wherein the updated 3D representation uncertainty data may be stored as part of the 3D map representation."

Claim 10: The system according to claim 1, wherein the imagery includes satellite image(s) captured from one or a plurality of satellites.

Claim 8 of 552: The system according to claim 1, wherein the images of the image set and/or new image(s) comprise satellite images captured from one or a plurality of satellites.

Claim 11: The system according to claim 1, wherein the 3D map representation includes geocoded reference data and a textured, georeferenced mesh.

Claim 9 of 552: The system according to claim 1, wherein the 3D representation provided with geocoded reference data and comprises a textured, georeferenced mesh.
Claim 12: The system according to claim 1, wherein when the 3D map representation includes a textured 3D representation for a plurality of geographical areas, which includes the at least one geographical area; and wherein the processor is configured to: determine whether the new image belongs to at least two geographical areas and when it has been determined that the image belongs to at least two geographical areas and thus forms a bridge between said two geographical areas, and determine displacement data for a plurality of 3D positions in the textured 3D representations belonging to the two geographical areas based on the determined corresponding points in the new image and the textured 3D representations belonging to the two geographical areas.

Claim 17 of 552: "determine whether the new image belongs to at least two geographical areas and when it has been determined that the image belongs to at least two geographical areas and thus forms a bridge between said two geographical areas, and determine displacement data for a plurality of 3D positions in the textured 3D representations belonging to the two geographical areas based on the determined corresponding points in the new image and the textured 3D representations belonging to the two geographical areas."

Claim 13: Similar to claim 1 above. See claim 1 of 552.
Claim 14: Similar to claim 2 above. See claim 13 of 552.
Claim 15: Similar to claim 3 above. See claims 1 and 13 of 552.
Claim 16: Similar to claim 4 above. See claim 1 of 552.
Claim 17: Similar to claim 5 above. Not in 552 (see Isaksson).
Claim 18: Similar to claim 6 above. See claims 1 and 13 of 552.
Claim 19: Similar to claim 7 above. See claim 1 of 552.
Claim 20: Similar to claim 10 above. See claim 8 of 552.

Obviousness rationale: Regarding claim 1, patented claim 1 of 552 discloses most of the featured limitations, with the exception of "determine displacement data for a plurality of 3D positions in the 3D map representation based on uncertainty data associated with the imaging device and the determined corresponding points, the uncertainty data relating to the new image; and modify the 3D map representation with the determined displacement data for the plurality of 3D positions, thereby improving the geocoded reference data in the 3D map representation without re-calculating the 3D map representation." However, Isaksson from the same or similar field of endeavor is found to teach and/or suggest these features, i.e. "determine displacement data for a plurality of 3D positions in the 3D map representation based on uncertainty data associated with the imaging device [See ¶0055-¶0057 and ¶0061 with respect to uncertainty in the imaging device] and the determined corresponding points [¶0061-¶0062], the uncertainty data relating to the new image [See ¶0061 for comparing an image I2 (construed as a new image) with an estimated image I2* based on another image I1, where uncertainty can be determined via said comparison; hence, there must be a relationship with the new image]; and modify the 3D map representation with the determined displacement data for the plurality of 3D positions, thereby improving the geocoded reference data in the 3D map representation without re-calculating the 3D map representation." [See e.g. ¶0006 and ¶0061. After identifying 'parts' of the model that are not reliable, the measurements can be updated, from which the quality of said model can be validated. Updating said measurements for parts of said model suggests the entire 3D model does not need to be re-calculated.] It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the uncertainty measure of Isaksson for allowing a user of a 3D model to evaluate whether the model fulfills the requirements for a specific application or whether there are parts of the model that cannot be used to model the reality of said specific application (e.g. ¶0006).
Regarding claim 5, the claims of 552 do not appear to provide support. Isaksson however describes mesh uncertainties in ¶0034-¶0035. The motivation for introducing the work of Isaksson is the same as that presented for claim 1 above.

Regarding claim 8, the claims of 552 do not appear to provide support. Isaksson however is found to address these features. Please refer to ¶0073-¶0074 with respect to determining the mesh uncertainty based on the number of images available for the certain area or point. The motivation for introducing the work of Isaksson is the same as that presented for claim 1 above.

Table 2

**Note: The items below that are BOLD/UNDERLINED in the Instant Application/Co-pending Application, respectively, indicate differences in the claim limitation.

Instant Application 18/981,167 vs. U.S. Patent No. 11,747,141 B2 (17/164,013)

Claim 1: A system arranged to provide improved geocoded reference data to a 3D map representation, said system comprising: a storage memory including a 3D map representation, said 3D map representation comprising, for each of at least one geographical area, a textured 3D representation provided with geocoded reference data and formed based on imagery provided for that geographical area; and a processor coupled in communication with the storage memory and configured to, for a new image captured by an imaging device: determine that the new image belongs to one of the at least one geographical area; perform registration of the new image to the 3D map representation; determine corresponding points in the new image and the 3D map representation; determine displacement data for a plurality of 3D positions in the 3D map representation based on uncertainty data associated with the imaging device and the determined corresponding points, the uncertainty data relating to the new image; and modify the 3D map representation with the determined displacement data for the plurality of 3D positions, thereby improving the geocoded reference data in the 3D map representation without re-calculating the 3D map representation.

Note: See Isaksson below for support regarding "based on uncertainty data associated with the imaging device and the determined corresponding points…".

Claim 1 of 141: A system arranged to provide improved geocoded reference data to a 3D map representation, said system comprising: a storage having stored thereupon a 3D map representation, said 3D map representation comprising for each of at least one geographical area, a textured 3D representation provided with geocoded reference data and formed based on imagery provided for that geographical area, said imagery being associated with information relating to at least one imaging device which has captured the imaging, said information comprising intrinsic and extrinsic parameters of said at least one imaging device, and a processor configured to: receive at least one new image associated with information related to an imaging device which has captured the new image, said information comprising intrinsic and extrinsic parameters of the imaging device, determine that the new image belongs to at least one of the at least one geographical areas, perform registration of the new image to the 3D map representation, determine corresponding points in the new image and the 3D map representation, determine displacement data for a plurality of 3D positions in the 3D map representation based on the determined corresponding points in the new image and the 3D map representation; and when the 3D map representation comprises a textured 3D representation for a plurality of geographical areas: determine whether the new image belongs to at least two non-overlapping geographical areas; and when it has been determined that the image belongs to the at least two non-overlapping geographical areas, determine displacement data for a plurality of 3D positions in the textured 3D representations belonging to the at least two non-overlapping geographical areas based solely upon the determined corresponding points in the new image and the textured 3D representations belonging to the at least two non-overlapping geographical areas.

Claim 2: The system according to claim 1, wherein the imagery includes an image set; and wherein the processor is configured, in determining corresponding points in the new image and the 3D map representation, to perform bundle adjustments between the new image and each image of the image set at least partly overlapping with the new image.

Claim 6 of 141: The system according to claim 1, wherein: the imagery comprises an image set comprising at least partly overlapping images belonging to the geographical area, each image of the image set being associated with the information relating to the imaging device which has captured the image, said information comprising intrinsic and extrinsic parameters of the imaging device, and determining corresponding points in the new image and the 3D map representation comprises performing bundle adjustments between the new image and each image of the image set at least partly overlapping with the new image.

Claim 3: The system according to claim 1, wherein the processor is configured, in determining the displacement data, to weight influence from the new image against influence from the 3D map representation; and wherein the influence from the new image is based on the uncertainty data relating to the new image.

Claim 7 of 141: The system according to claim 1, wherein the 3D map representation further comprises 3D representation uncertainty data for a plurality of 3D positions in the 3D representation, said uncertainty data defining an uncertainty distance and direction, wherein the determination of displacement data for a plurality of 3D positions in the 3D map representation comprises weighting the influence from the new image against the influence from the 3D map representation based on the uncertainty data.
Claim 4: The system according to claim 3, wherein the uncertainty data depends on: a specification of the imaging device; and/or a reliability of relevant intrinsic and/or extrinsic parameters of the imaging device.

Not in 141. **See Isaksson for support.

Claim 5: The system according to claim 1, wherein the uncertainty data is further related to the 3D map representation.

Not in 141. **See Isaksson for support.

Claim 6: The system according to claim 1, wherein the processor is configured, in determining the displacement data, to weight influence from the new image against influence from the 3D map representation; and wherein the influence from the new image is based on the uncertainty data relating to the new image; and wherein the influence of the 3D map representation is based on the uncertainty data relating to the 3D map representation.

**See Haglund for support.

Claim 3 of 141: The system according to claim 2, wherein the determination of displacement data for a plurality of 3D positions in the 3D map representation for a plurality of 3D positions in the 3D map representation comprises weighting the influence from the new image against the influence from the 3D map representation.

Claim 7: The system according to claim 6, wherein the uncertainty data relating to the 3D map representation depends on: a specification of other imaging device(s) used to capture the imagery; and/or a reliability of relevant intrinsic and/or extrinsic parameters of the other imaging device used to capture the imagery.

Not in 141.

Claim 8: The system according to claim 6, wherein the uncertainty data is based on a number of images used in the 3D map representation for modelling an area covered by the new image.

Not in 141. **See Isaksson for support.

Claim 9: The system according to claim 7, wherein the processor is configured to calculate updated 3D representation uncertainty data based on the displacement data; and wherein the updated 3D representation uncertainty data is stored as part of the 3D map representation.

Claim 8 of 141: The system according to claim 7, wherein the processor (208) is configured to calculate updated 3D representation uncertainty data based on the displacement data, wherein the updated 3D representation uncertainty data may be stored as part of the 3D map representation.

Claim 10: The system according to claim 1, wherein the imagery includes satellite image(s) captured from one or a plurality of satellites.

Claim 12 of 141: The system according to claim 1, wherein the images of the image set and/or new image(s) comprise satellite images captured from one or a plurality of satellites.

Claim 11: The system according to claim 1, wherein the 3D map representation includes geocoded reference data and a textured, georeferenced mesh.

Claim 13 of 141: The system according to claim 1, wherein the 3D representation provided with geocoded reference data and comprises a textured, georeferenced mesh.

Claim 12: The system according to claim 1, wherein when the 3D map representation includes a textured 3D representation for a plurality of geographical areas, which includes the at least one geographical area; and wherein the processor is configured to: determine whether the new image belongs to at least two geographical areas and when it has been determined that the image belongs to at least two geographical areas and thus forms a bridge between said two geographical areas, and determine displacement data for a plurality of 3D positions in the textured 3D representations belonging to the two geographical areas based on the determined corresponding points in the new image and the textured 3D representations belonging to the two geographical areas.
Not in 141 Claim 13 Similar to claim 1 above See claim 1 of 141 Claim 14 Similar to claim 2 above See claim 6 of 141 Claim 15 Similar to claim 3 above See claim 7 of 141 Claim 16 Similar to claim 4 above Not in 141 ** See Isaksson for support Claim 17 Similar to claim 5 above Not in 141 ** See Isaksson for support Claim 18 Similar to claim 6 above ** See claim 3 of 131 and Isaksson for support Claim 19 Similar to claim 7 above Not in 141 Claim 20 Similar to claim 10 above See claim 12 of 141 Obviousness rationale: Regarding claim 1, patented claim 1 of 141 discloses most of the featured limitations, with the exception of “determine displacement data for a plurality of 3D positions in the 3D map representation based on uncertainty data associated with the imaging device and the determined corresponding points, the uncertainty data relating to the new image; and modify the 3D map representation with the determined displacement data for the plurality of 3D positions, thereby improving the geocoded reference data in the 3D map representation without re-calculating the 3D map representation.” However, Isaksson from the same or similar field of endeavor is found to teach and/or suggest these features. Please refer to the obviousness rationale for claim 1 corresponding to table 1 above. The motivation for introducing the work of Isaksson is the same as that presented above. Regarding claim 4, the claims of 141 do not appear to provide support. Isaksson however describes uncertainty in the imaging device (e.g. uncertainty in position and direction of the optical axis of the at least one camera, its field of view, etc.) in for e.g. ¶0055-¶0057 and ¶0061. The motivation for introducing the work of Isaksson is the same as that presented for claim 1 above. Regarding claim 5, the claims of 141 do not appear to provide support. Isaksson however describes geometrical information uncertainties and texture information uncertainties of the 3D model at a given point or part (e.g. 
¶0082-¶0084). The motivation for introducing the work of Isaksson is the same as that presented for claim 1 above. Regarding claim 6, the claims of 141 do not appear to provide support for “and wherein the influence from the new image is based on the uncertainty data relating to the new image; and wherein the influence of the 3D map representation is based on the uncertainty data relating to the 3D map representation.” Haglund however discloses (¶0114) identified differences between a 3D model and a 2D image (construed as a ‘new’ image) that are based on uncertainties. Hence, this relates to the uncertainty data of the 2D image. Since identifying differences between the 3D model and the 2D image is performed based on the uncertainties associated with each set of data, Haglund’s methods are deemed relevant. It would have therefore been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the method of Haglund for identifying differences between a 3D model of an environment and the environment in a 2D image (e.g. ¶0108) in a manner that can be performed in a highly automated way, so as to reduce the amount of time and/or workload an operator has to spend for identifying said differences (e.g. ¶0011). Regarding claim 8, the claims of 141 do not appear to provide support. Isaksson however describes determining the mesh uncertainty based on the number of images available for the certain area or point in for e.g. ¶0073-¶0074. The motivation for introducing the work of Isaksson is the same as that presented for claim 1 above. Claim Rejections - 35 USC § 103 6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-6, 8, 10-11, 13-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Haglund et al. (US 2015/0243047 A1) in view of Isaksson et al. (US 2015/0363972 A1), and further in view of Olofsson (US 2015/0078652 A1), hereinafter referred to as Haglund, Isaksson, and Olofsson, respectively.

Regarding claim 1, given the broadest reasonable interpretation (BRI) of the following limitations, Haglund teaches and/or suggests “A system arranged to provide improved geocoded reference data to a 3D map representation [Examples of 3D models (3D map representation) are described in, e.g., ¶0073-¶0075, where each point/part comprises geometrical and texture information. For “improved” geocoded reference data, see Isaksson], said system comprising: a storage memory including a 3D map representation [Said 3D model is stored in memory (¶0100-¶0101)], said 3D map representation comprising, for each of at least one geographical area [¶0007-¶0009 disclose a need for improved detection of changes in the environment, such as natural variations (e.g., glacial movements, flooding), etc., which are associated with geographical areas], a textured 3D representation provided with geocoded reference data [Abstract and ¶0074. Points in said 3D model comprise texture and geometrical information, where the geometrical information comprises coordinate information in a 3D coordinate system (e.g., a geo-referenced coordinate system)] and formed based on imagery provided for that geographical area [Said 3D model is based on imagery (e.g., ¶0075)]; and a processor [Note processing element 650 in ¶0102 with reference to fig. 6] coupled in communication with the storage memory [See coupled memory element 654, also in fig. 6] and configured to, for a new image captured by an imaging device [A ‘new’ image corresponds to, e.g., ¶0108 and fig.
7, where a 2D image of the environment is taken at another time compared to those used in the 3D model]: determine that the new image belongs to one of the at least one geographical area [¶0108 shows the 2D image captured at another time is of the same scene as that of the 3D model. As such, point correspondences can be obtained as illustrated in fig. 7]; perform registration of the new image to the 3D map representation [See ¶0108-¶0112 with regard to matching corresponding points between the 3D model and the 2D image referenced above. Matching, which is integral to the registration process (e.g., ¶0014 of US 2021/0349922 A1), may involve, e.g., global matching, bundle adjustments, etc.]; determine corresponding points in the new image and the 3D map representation [Same citations as above]; determine displacement data for a plurality of 3D positions in the 3D map representation based on uncertainty data associated with the imaging device [Haglund does not identify displacement data based on uncertainty data associated with the imaging device. See Isaksson below for support] and the determined corresponding points [Haglund describes difference values (i.e., displacement data) for corresponding points of the 3D model and the 2D image based on geometrical information uncertainty and/or texture information uncertainty (e.g., ¶0112-¶0114 and fig. 7). Also see Isaksson below for support], the uncertainty data relating to the new image [¶0016-¶0017 of Haglund, for example, show the uncertainty data is based on a number of measurements on which the model is based. Since the “new image” can correspond to an image taken at another time (e.g., ¶0108 and fig. 7, as noted above), said uncertainty data can correspond with this image. Also see Isaksson below for support]; and modify the 3D map representation with the determined displacement data for the plurality of 3D positions [Haglund, however, does not appear to address modifying the 3D map representation.
Please see Isaksson (¶0006) and Olofsson (¶0046) below for corresponding support], thereby improving the geocoded reference data in the 3D map representation without re-calculating the 3D map representation.” [Since Haglund does not appear to address this feature, please refer to Isaksson and Olofsson below for support]

Although Haglund’s work is deemed relevant, Haglund does not seem to refer to “improved” geocoded reference data, i.e., “A system arranged to provide improved geocoded reference data to a 3D map representation”. Isaksson, on the other hand, from the same or similar field of endeavor, is relied on to address this feature. In particular, in the context of a geo-referenced model of an environment (e.g., ¶0001), Isaksson discloses uncertainty measures to facilitate determining the reliability of the model for a specific application, i.e., to further improve the modelling. [See, e.g., ¶0004-¶0006 in Isaksson].

Haglund also does not appear to “determine displacement data for a plurality of 3D positions in the 3D map representation based on uncertainty data associated with the imaging device”. However, Isaksson is found to teach and/or suggest these features. [Please see, e.g., ¶0055-¶0057 and ¶0061 with respect to uncertainty in the imaging device. Isaksson also describes uncertainty data associated with the determined corresponding points (e.g., ¶0061-¶0062)]. Isaksson also teaches and/or suggests “the uncertainty data relating to the new image” [See, e.g., ¶0061 with respect to comparing an image I2 (construed as a new image) with an estimated image I2* based on another image I1. Uncertainty can be determined via said comparison; hence, there must be a relationship with the new image].
Lastly, Haglund does not appear to address “and modify the 3D map representation with the determined displacement data for the plurality of 3D positions, thereby improving the geocoded reference data in the 3D map representation without re-calculating the 3D map representation.” However, Isaksson is found to suggest these features. [See, e.g., ¶0006 with further reference to ¶0061. After identifying ‘parts’ of the model that are not reliable (i.e., contain errors), the measurements can be updated, from which the quality of said model can be validated. Updating measurements for parts of said model suggests the entire 3D model does not need to be re-calculated. Also note the work of Olofsson (below) for further support]

Given the teachings of Isaksson, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the methods of Haglund, which improve the process for distinguishing changes in an environment from measurement uncertainties with reduced time and/or workload (¶0009, ¶0011, and ¶0013), to add the uncertainty measures of Isaksson, as above, that allow a user of a 3D model to evaluate whether the model fulfills the requirements for a specific application or whether there are parts of the model that cannot be used to model the reality of said specific application (e.g., ¶0006).

Although collectively the teachings of Haglund and Isaksson are deemed relevant for the reasons given above, the work of Olofsson, from the same or similar field of endeavor, is brought in to further teach and/or suggest the aforementioned features. [Olofsson’s teachings allow for improved geo-positioning of an image (¶0003). As such, please see, e.g., ¶0013 and ¶0046 with respect to updating the 3D model based on additional information, where said updating can be construed as a means for improving said model.
‘Updating’ a model suggests making smaller changes, as opposed to ‘re-building/re-calculating’ said model]

Given the teachings of Olofsson, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the methods of both Haglund and Isaksson in relation to 3D models of an environment, to add the method and system of Olofsson as above to provide a means for improved geo-positioning of an image (e.g., ¶0003).

Regarding claim 2, Haglund, Isaksson, and Olofsson teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Haglund further teaches and/or suggests “wherein the imagery includes an image set [See ¶0089 regarding captured images]; and wherein the processor is configured, in determining corresponding points in the new image and the 3D map representation [Please refer to the disclosed matching step (e.g., ¶0087-¶0089), which involves matching corresponding points between data], to perform bundle adjustments between the new image and each image of the image set at least partly overlapping with the new image.” [Please refer to ¶0089 and ¶0111 regarding bundle adjustments. As to overlapping images, see ¶0066. Additional support for the foregoing may also be found in Isaksson (e.g., ¶0040, ¶0042, and ¶0044)]

Regarding claim 3, Haglund, Isaksson, and Olofsson teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Haglund further teaches and/or suggests “wherein the processor is configured, in determining the displacement data, [Determining ‘difference’ values when matching corresponding points of data (e.g., ¶0051-¶0053 and figs. 7-8)] to weight influence from the new image against influence from the 3D map representation [See, e.g., ¶0108-¶0114 with respect to performing bundle adjustments, where weighting is understood to be an integral part.
Also note ¶0043-¶0044 and ¶0060 of Isaksson, where high/low quality is construed to mean low/high uncertainty, respectively]; and wherein the influence from the new image is based on the uncertainty data relating to the new image.” [Same citations and rationale as above].

Regarding claim 4, Haglund, Isaksson, and Olofsson teach all the limitations of claim 3, and are analyzed as previously discussed with respect to that claim. Haglund, however, does not address the features of claim 4. Isaksson, on the other hand, from the same or similar field of endeavor, is brought in to teach and/or suggest “wherein the uncertainty data depends on: a specification of the imaging device; and/or a reliability of relevant intrinsic and/or extrinsic parameters of the imaging device.” [Please see, e.g., ¶0055-¶0057 and ¶0061 of Isaksson with respect to uncertainty in the imaging device (e.g., uncertainty in the position and direction of the optical axis of the at least one camera, its field of view, etc.)] The motivation for combining Haglund and Isaksson has been discussed in connection with claim 1, above.

Regarding claim 5, Haglund, Isaksson, and Olofsson teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Haglund further teaches and/or suggests “wherein the uncertainty data is further related to the 3D map representation.” [Please refer to the geometrical information uncertainties and texture information uncertainties of the 3D model at a given point or part (e.g., ¶0082-¶0084)]

Regarding claim 6, Haglund, Isaksson, and Olofsson teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Haglund further teaches and/or suggests “wherein the processor is configured, in determining the displacement data [See difference values in ¶0108-¶0114], to weight influence from the new image against influence from the 3D map representation [See, e.g.,
¶0108-¶0114, which identify differences between points/parts of the 3D model of an environment and the environment reproduced at another time (the 2D image) based on geometrical information uncertainty and/or texture information uncertainty. Such comparisons are construed as “to weight influence” of the 2D and 3D data]; and wherein the influence from the new image is based on the uncertainty data relating to the new image [¶0114 of Haglund shows the identified differences between the 3D model and the 2D image (construed as a ‘new’ image) are based on uncertainties. Hence, this relates to the uncertainty data of the 2D image]; and wherein the influence of the 3D map representation is based on the uncertainty data relating to the 3D map representation. [The above citation shows this relates to said 3D model as well. In other words, identifying differences between the 3D model and the 2D image is performed based on the uncertainties associated with each set of data]

Regarding claim 8, Haglund, Isaksson, and Olofsson teach all the limitations of claim 6, and are analyzed as previously discussed with respect to that claim. Haglund, however, does not address the features of claim 8. Isaksson, on the other hand, from the same or similar field of endeavor, is brought in to teach and/or suggest “wherein the uncertainty data is based on a number of images used in the 3D map representation for modelling an area covered by the new image.” [See, e.g., ¶0073-¶0074 with respect to determining the mesh uncertainty based on the number of images available for the certain area or point] The motivation for combining Haglund and Isaksson has been discussed in connection with claim 1, above.

Regarding claim 10, Haglund, Isaksson, and Olofsson teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Haglund, however, does not address the features of claim 10.
Isaksson, on the other hand, from the same or similar field of endeavor, is brought in to teach and/or suggest “wherein the imagery includes satellite image(s) captured from one or a plurality of satellites.” [See, e.g., ¶0037, i.e., a satellite]

Regarding claim 11, Haglund, Isaksson, and Olofsson teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Haglund further teaches and/or suggests “wherein the 3D map representation includes geocoded reference data and a textured, georeferenced mesh.” [Haglund’s 3D model(s) comprise geometrical information and texture information (abstract; ¶0074 and ¶0076). Also please see ¶0008-¶0009 of Isaksson for support]

Regarding claim 13, claim 13 is rejected under the same art and evidentiary limitations as determined for the system of claim 1.
Regarding claim 14, claim 14 is rejected under the same art and evidentiary limitations as determined for the system of claim 2.
Regarding claim 15, claim 15 is rejected under the same art and evidentiary limitations as determined for the system of claim 3.
Regarding claim 16, claim 16 is rejected under the same art and evidentiary limitations as determined for the system of claim 4.
Regarding claim 17, claim 17 is rejected under the same art and evidentiary limitations as determined for the system of claim 5.
Regarding claim 18, claim 18 is rejected under the same art and evidentiary limitations as determined for the system of claim 6.
Regarding claim 20, claim 20 is rejected under the same art and evidentiary limitations as determined for the system of claim 10.

Allowable Subject Matter

7. Claims 7, 9, 12, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. In light of the specification, the Examiner finds the claimed invention to be patentably distinct from the prior art of record.
The prior art of record, taken individually or in combination, fails to explicitly teach or render obvious, within the context of the respective independent claims, the following limitations:

7. The system according to claim 6, wherein the uncertainty data relating to the 3D map representation depends on: a specification of other imaging device(s) used to capture the imagery; and/or a reliability of relevant intrinsic and/or extrinsic parameters of the other imaging device used to capture the imagery.

9. The system according to claim 7, wherein the processor is configured to calculate updated 3D representation uncertainty data based on the displacement data; and wherein the updated 3D representation uncertainty data is stored as part of the 3D map representation.

12. The system according to claim 1, wherein when the 3D map representation includes a textured 3D representation for a plurality of geographical areas, which includes the at least one geographical area; and wherein the processor is configured to: determine whether the new image belongs to at least two geographical areas and when it has been determined that the image belongs to at least two geographical areas and thus forms a bridge between said two geographical areas, and determine displacement data for a plurality of 3D positions in the textured 3D representations belonging to the two geographical areas based on the determined corresponding points in the new image and the textured 3D representations belonging to the two geographical areas.

19. The computer-implemented method according to claim 18, wherein the uncertainty data relating to the 3D map representation depends on: a specification of other imaging device(s) used to capture the imagery; and/or a reliability of relevant intrinsic and/or extrinsic parameters of the other imaging device used to capture the imagery.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Please see the PTO-892 for other relevant art. See, e.g., the work of Gros (US 2020/0134847 A1) regarding structure depth-aware weighting in bundle adjustment (e.g., abstract).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD A HANSELL JR., whose telephone number is (571) 270-0615. The examiner can normally be reached Mon - Fri, 10 am - 7 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie Atala, can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RICHARD A HANSELL JR./
Primary Examiner, Art Unit 2486
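Background note on the disputed technique: the limitation the examiner maps to Isaksson and Olofsson (weighting the new image's influence against the 3D map representation's influence based on their respective uncertainties, then applying per-point displacement data without re-calculating the whole model) is in the spirit of classic inverse-variance fusion. The sketch below is purely illustrative; the function name, the scalar per-point variance inputs, and the specific fusion rule are assumptions of this note, not taken from the application or from any cited reference.

```python
def fuse_point(p_map, p_img, var_map, var_img):
    """Inverse-variance fusion of one 3D point (illustrative sketch only).

    p_map: point from the 3D map representation, e.g. (x, y, z)
    p_img: corresponding point derived from the new image
    var_map, var_img: scalar uncertainty (variance) of each source
    """
    # Weight each source by the inverse of its uncertainty: a
    # low-uncertainty new image pulls the map point strongly, and vice versa.
    w_map, w_img = 1.0 / var_map, 1.0 / var_img
    fused = tuple((w_map * m + w_img * i) / (w_map + w_img)
                  for m, i in zip(p_map, p_img))
    # "Displacement data": how far the fused estimate moves the map point,
    # so the map can be updated in place rather than rebuilt.
    displacement = tuple(f - m for f, m in zip(fused, p_map))
    return fused, displacement
```

With equal variances the fused point is the midpoint of the two inputs; as the new image's variance shrinks, the update pulls the map point toward the image-derived position, which matches the weighting behavior the rejection attributes to the combined references.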

Prosecution Timeline

Dec 13, 2024
Application Filed
Mar 06, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604042
LAYER INFORMATION SIGNALING-BASED IMAGE CODING DEVICE AND METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12604096
ADAPTIVE BORESCOPE INSPECTION
2y 5m to grant Granted Apr 14, 2026
Patent 12587660
METHOD FOR DECODING IMAGE ON BASIS OF IMAGE INFORMATION INCLUDING OLS DPB PARAMETER INDEX, AND APPARATUS THEREFOR
2y 5m to grant Granted Mar 24, 2026
Patent 12587667
SYSTEMS AND METHODS FOR SIGNALING TEXT DESCRIPTION INFORMATION IN VIDEO CODING
2y 5m to grant Granted Mar 24, 2026
Patent 12579871
CAMERA DETECTION OF OBJECT MOVEMENT WITH CO-OCCURRENCE
2y 5m to grant Granted Mar 17, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+28.1%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 487 resolved cases by this examiner. Grant probability derived from career allow rate.
