DETAILED ACTION
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 21-39 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-2, 4-5, 7-12, and 14-18 of US Patent # 12056817 B2.
Table 1 illustrates the conflicting claim pairs:

Present Application     US Patent # 12056817
21                      1
22-26                   2, 4-5, 7
27-33                   15-18
34-39                   8-12, 14
Table 2 illustrates the conflicting claim pair with mapping, element by element, with the differences shown in bold.

Claim 21 of present App. vs. claim 1 of US 12056817 B2:

[App. claim 21] A computer-implemented method comprising:
[Pat. claim 1] A computer-implemented method comprising:

[App. claim 21] generating a three-dimensional (3D) model from a plurality of images, the 3D model including separately tagged one or more estimated view locations to distinguish the one or more estimated view locations from model points of the 3D model;
[Pat. claim 1] obtaining a plurality of images; generating a three-dimensional (3D) model from the plurality of images, and wherein the first registered 3D model includes separately tagged estimated view locations to distinguish them from model points of the first registered 3D model;

[App. claim 21] downsampling one or more model points corresponding to the 3D model to generate a downsampled 3D model;
[Pat. claim 1] downsampling model points in the transformed 3D model to generate a downsampled 3D model;

[App. claim 21] determining an adjustment to the one or more model points in the 3D model based on the downsampled 3D model and a reference 3D model;
[Pat. claim 1] determining a first adjustment to model points in the transformed 3D model based on the downsampled 3D model and a reference 3D model;

[App. claim 21] and registering the 3D model to a geographic coordinate system as a first registered 3D model based on the determined adjustment and the one or more estimated view locations.
[Pat. claim 1] and registering the 3D model to a geographic coordinate system as a first registered 3D model using the one or more estimated view locations, wherein the registering the 3D model comprises: applying a transform to the 3D model to transform the 3D model from a first coordinate system to the geographic coordinate system; and … applying the first adjustment to the transformed 3D model to generate the first registered 3D model.
As seen from the table, all elements of claim 21 of the application are anticipated by claim 1 of US Patent # 12056817, with slight language variation claiming the same features at a broader scope by eliminating the patent claim's steps of applying a transform to the 3D model and further processing the transformed model in the registration steps. Similarly, elements of claims 22-26 of the application are anticipated by claims 2, 4-5, and 7 of US Patent # 12056817, as shown in Table 1.

In addition, elements of claims 27-33 of the application are anticipated by claims 15-18 of US Patent # 12056817, as shown in Table 1.

Claims 34-39 recite limitations similar in scope to those of claims 21-26 and are therefore rejected under the same rationale. Additionally, claim 8 of US Patent # 12056817 recites "A system comprising: a processing device; and a memory coupled to the processing device."
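For orientation only, the registration pipeline common to the mapped claims (generate a 3D model with separately tagged estimated view locations, downsample the model points, determine an adjustment against a reference 3D model, and register the model to a geographic coordinate system) can be sketched as follows. This is a minimal illustration: the function names, placeholder geometry, and centroid-based translation adjustment are assumptions of the sketch, not the applicant's or the patentee's actual implementation.

```python
# Hypothetical sketch of the claimed pipeline (assumed names and logic;
# not the applicant's or patentee's implementation).
import numpy as np

def generate_model(images):
    """Stand-in for model generation from images: returns model points plus
    separately tagged estimated view locations (kept under a distinct key
    so they remain distinguishable from model points)."""
    model_points = np.random.rand(1000, 3)           # placeholder geometry
    view_locations = np.random.rand(len(images), 3)  # placeholder view estimates
    return {"points": model_points, "views": view_locations}

def downsample(points, voxel=0.05):
    """Voxel-grid downsampling: keep one representative point per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[idx]

def determine_adjustment(down_points, reference_points):
    """Toy adjustment: a translation aligning the two centroids."""
    return reference_points.mean(axis=0) - down_points.mean(axis=0)

def register(model, reference_points):
    """Register the model to the reference's (geographic) coordinate system
    using the determined adjustment; tagged view locations move with it."""
    adjustment = determine_adjustment(downsample(model["points"]), reference_points)
    return {"points": model["points"] + adjustment,
            "views": model["views"] + adjustment}
```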
Claim 40 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of US Patent # 12056817 B2, and further in view of Salomonsson et al. (US 20170169605 A1).
Regarding claim 40, claim 1 of US Patent # 12056817 is silent regarding:
wherein the set of operations further comprises: culling one or more extraneous model points from the first registered 3D model.
However, Salomonsson teaches (Figs. 3-4 and 7; [0007], [0049], [0054], [0055]) removing unreliable/redundant data points to produce a reliable merged model.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in claim 1 of US Patent # 12056817 a system and method wherein the set of operations further comprises culling one or more extraneous model points from the first registered 3D model, as suggested by Salomonsson, in order to produce a reliable merged model, thereby increasing system effectiveness and user experience.
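For illustration only, culling extraneous or unreliable points is commonly done with a statistical outlier test; the sketch below uses a k-nearest-neighbor distance criterion as an assumed stand-in, not Salomonsson's disclosed method.

```python
import numpy as np

def cull_extraneous(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors is
    unusually large (assumed criterion; O(n^2), adequate for a sketch)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)                         # column 0 is the self-distance (0)
    mean_knn = d[:, 1:k + 1].mean(axis=1)
    keep = mean_knn < mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]
```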
Claims 21 and 34 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 2 and 9 of US Patent # 11682170 B2.
Table 3 illustrates the conflicting claim pairs:

Present Application     US Patent # 11682170
21                      2
34                      9
Table 4 illustrates the conflicting claim pair with mapping, element by element, with the differences shown in bold.

Claim 21 of present App. vs. claim 2 of US 11682170 B2:

[App. claim 21] A computer-implemented method comprising:
[Pat. claim 2] A computer-implemented method comprising:

[App. claim 21] generating a three-dimensional (3D) model from a plurality of images, the 3D model including separately tagged one or more estimated view locations to distinguish the one or more estimated view locations from model points of the 3D model;
[Pat. claim 2] obtaining a plurality of images; generating a three-dimensional (3D) model from the plurality of images, and wherein the first registered 3D model includes separately tagged estimated view locations to distinguish them from model points of the first registered 3D model;

[App. claim 21] downsampling one or more model points corresponding to the 3D model to generate a downsampled 3D model;
[Pat. claim 2] downsampling model points in the transformed 3D model to generate a downsampled 3D model;

[App. claim 21] determining an adjustment to the one or more model points in the 3D model based on the downsampled 3D model and a reference 3D model;
[Pat. claim 2] determining a first adjustment to model points in the transformed 3D model based on the downsampled 3D model and a reference 3D model;

[App. claim 21] and registering the 3D model to a geographic coordinate system as a first registered 3D model based on the determined adjustment and the one or more estimated view locations.
[Pat. claim 2] registering the 3D model to a geographic coordinate system as a first registered 3D model using the one or more estimated view locations; and applying the first adjustment to the transformed 3D model to generate the first registered 3D model.
As seen from the table, all elements of claim 21 of the application are anticipated by claim 2 of US Patent # 11682170, with slight language variation claiming the same features at a broader scope by eliminating the patent claim's steps of applying a transform to the 3D model and further processing the transformed model in the registration steps.

Claim 34 recites limitations similar in scope to those of claim 21 and is therefore rejected under the same rationale. Additionally, claim 9 of US Patent # 11682170 recites "A system comprising: a processing device; and a memory coupled to the processing device."
Claim 27 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of US Patent # 11682170 B2, and further in view of Barajas Hernandez et al.
Regarding claim 27, claim 1 of US Patent # 11682170 teaches: "A computer-implemented method comprising: obtaining a plurality of images; generating a three-dimensional (3D) model from the plurality of images, the one or more 3D models including separately tagged one or more estimated view locations to distinguish the one or more estimated view locations from model points of the one or more 3D models; registering the 3D model to a geographic coordinate system as a registered 3D model" (see claim 1).
Claim 1 of US Patent # 11682170 is silent regarding:
dividing a region of the one or more registered three-dimensional (3D) models into a plurality of volumes to generate a divided geographic coordinate system, wherein each volume of the plurality of volumes has a volume identifier; for each volume in the divided geographic coordinate system: identifying a subset of overlapping 3D models that include points within a respective volume; selecting one or more 3D models in the subset of overlapping 3D models based on one or more criteria; and generating a merged point cloud for the respective volume based on the one or more selected 3D models; and combining the merged point cloud for each volume to generate a merged point cloud for the geographic coordinate system region.
However, Barajas Hernandez teaches dividing a region of the one or more registered three-dimensional (3D) models into a plurality of volumes to generate a divided geographic coordinate system, wherein each volume of the plurality of volumes has a volume identifier (Figs. 2B, 3A-3B; [0097]); and, for each volume in the divided geographic coordinate system: identifying a subset of overlapping 3D models that include points within a respective volume; selecting one or more 3D models in the subset of overlapping 3D models based on one or more criteria; generating a merged point cloud for the respective volume based on the one or more selected 3D models; and combining the merged point cloud for each volume to generate a merged point cloud for the geographic coordinate system region (Fig. 3B; [0131], [0134], [0138], [0141]). In Barajas Hernandez, several 3D clusters are generated from an initial georeferenced 3D model and are further processed and merged to generate the final full 3D representation, utilizing parallel processing to accelerate 3D model generation from a very large number of images ([0022]-[0023]).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in claim 1 of US Patent # 11682170 a system and method of dividing a region of the one or more registered 3D models into a plurality of volumes to generate a divided geographic coordinate system, wherein each volume of the plurality of volumes has a volume identifier; for each volume in the divided geographic coordinate system: identifying a subset of overlapping 3D models that include points within a respective volume; selecting one or more 3D models in the subset of overlapping 3D models based on one or more criteria; and generating a merged point cloud for the respective volume based on the one or more selected 3D models; and combining the merged point cloud for each volume to generate a merged point cloud for the geographic coordinate system region, as suggested by Barajas Hernandez, in order to accelerate final 3D model generation by effectively merging the different model portions from multiple 3D models/submodels into a complete, high-quality georeferenced 3D scene/site model, thereby increasing system effectiveness and user experience.
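For illustration, the volume-wise divide/select/merge flow recited above can be sketched generically as follows; the volume size, the point-count selection criterion, and all names are assumptions of this sketch, not Barajas Hernandez's disclosed implementation.

```python
import numpy as np

def merge_registered_models(models, volume_size=10.0, keep_per_volume=2):
    """Bucket each registered model's points into volumes (keyed by a volume
    identifier), select overlapping models per volume by an assumed
    point-count criterion, then combine the per-volume clouds."""
    buckets = {}  # volume identifier -> list of (model index, points inside)
    for m, pts in enumerate(models):
        keys = np.floor(pts / volume_size).astype(np.int64)
        for vid in {tuple(k) for k in keys}:
            inside = pts[(keys == np.array(vid)).all(axis=1)]
            buckets.setdefault(vid, []).append((m, inside))
    merged = []
    for vid, entries in buckets.items():
        entries.sort(key=lambda e: len(e[1]), reverse=True)  # densest first
        merged.extend(pts for _, pts in entries[:keep_per_volume])
    return np.vstack(merged)  # merged point cloud for the whole region
```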
Claims 21 and 34 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 2 and 9 of US Patent # 11182954 B2, in view of Chen et al. (US 20170330375 A1).
Table 5 illustrates the conflicting claim pairs:

Present Application     US Patent # 11182954
21                      2
34                      9
Table 6 illustrates the conflicting claim pair with mapping, element by element, with the differences shown in bold.

Claim 21 of present App. vs. claim 2 of US 11182954 B2:

[App. claim 21] A computer-implemented method comprising:
[Pat. claim 2] A computer-implemented method comprising:

[App. claim 21] generating a three-dimensional (3D) model from a plurality of images, the 3D model including separately tagged one or more estimated view locations to distinguish the one or more estimated view locations from model points of the 3D model;
[Pat. claim 2] obtaining a plurality of images; generating a three-dimensional (3D) model from the plurality of images, and wherein the first registered 3D model includes separately tagged estimated view locations to distinguish them from model points of the first registered 3D model;

[App. claim 21] downsampling model points in the transformed 3D model to generate a downsampled 3D model; determining an adjustment to the one or more model points in the 3D model based on the downsampled 3D model and a reference 3D model;
[Pat. claim 2] (no corresponding limitation; see the discussion below)

[App. claim 21] and registering the 3D model to a geographic coordinate system as a first registered 3D model based on the determined adjustment and the one or more estimated view locations.
[Pat. claim 2] registering the 3D model to a geographic coordinate system as a first registered 3D model using the one or more estimated view locations; … and adjusting a location of the transformed 3D model in the geographic coordinate system based on a reference 3D model to generate the first registered 3D model.
As seen from the table, all elements of claim 21 of the application are anticipated by claim 2 of US Patent # 11182954, except for: downsampling model points in the transformed 3D model to generate a downsampled 3D model; and determining the first adjustment to model points in the transformed 3D model based on the downsampled 3D model. However, Chen teaches in [0074] establishing correspondences between sparse points.

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in claim 2 of US Patent # 11182954 a system and method of downsampling model points in the transformed 3D model to generate a downsampled 3D model, and determining the first adjustment to model points in the transformed 3D model based on the downsampled 3D model, as suggested by Chen, as this does not change the overall operation of the system and could be used to effectively register the model, thereby ensuring system effectiveness and user experience.
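For illustration, a correspondence-based adjustment over downsampled (sparse) points, in the generic spirit of the correspondence Chen's [0074] describes, might look like the following; the nearest-neighbor pairing and the translation-only output are assumptions of this sketch, not Chen's disclosed method.

```python
import numpy as np

def correspondence_adjustment(down_points, reference_points):
    """Pair each downsampled point with its nearest reference point and
    return the mean offset as a translation-only adjustment (assumed logic)."""
    d = np.linalg.norm(down_points[:, None, :] - reference_points[None, :, :],
                       axis=-1)
    nearest = reference_points[d.argmin(axis=1)]
    return (nearest - down_points).mean(axis=0)
```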
Claim 34 recites limitations similar in scope to those of claim 21 and is therefore rejected under the same rationale. Additionally, claim 9 of US Patent # 11182954 recites "A system comprising: a processing device; and a memory coupled to the processing device."

Claim 27 is rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claim 1 of US Patent # 11182954, and further in view of Barajas Hernandez et al.

Regarding claim 27, claim 1 of US Patent # 11182954 teaches: "A computer-implemented method comprising: obtaining a plurality of images; generating a three-dimensional (3D) model from the plurality of images, the one or more 3D models including separately tagged one or more estimated view locations to distinguish the one or more estimated view locations from model points of the one or more 3D models; registering the 3D model to a geographic coordinate system as a registered 3D model" (see claim 1).

Claim 1 of US Patent # 11182954 is silent regarding:
dividing a region of the one or more registered three-dimensional (3D) models into a plurality of volumes to generate a divided geographic coordinate system, wherein each volume of the plurality of volumes has a volume identifier; for each volume in the divided geographic coordinate system: identifying a subset of overlapping 3D models that include points within a respective volume; selecting one or more 3D models in the subset of overlapping 3D models based on one or more criteria; and generating a merged point cloud for the respective volume based on the one or more selected 3D models; and combining the merged point cloud for each volume to generate a merged point cloud for the geographic coordinate system region.
However, Barajas Hernandez teaches dividing a region of the one or more registered three-dimensional (3D) models into a plurality of volumes to generate a divided geographic coordinate system, wherein each volume of the plurality of volumes has a volume identifier (Figs. 2B, 3A-3B; [0097]); and, for each volume in the divided geographic coordinate system: identifying a subset of overlapping 3D models that include points within a respective volume; selecting one or more 3D models in the subset of overlapping 3D models based on one or more criteria; generating a merged point cloud for the respective volume based on the one or more selected 3D models; and combining the merged point cloud for each volume to generate a merged point cloud for the geographic coordinate system region (Fig. 3B; [0131], [0134], [0138], [0141]). In Barajas Hernandez, several 3D clusters are generated from an initial georeferenced 3D model and are further processed and merged to generate the final full 3D representation, utilizing parallel processing to accelerate 3D model generation from a very large number of images ([0022]-[0023]).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in claim 1 of US Patent # 11182954 a system and method of dividing a region of the one or more registered 3D models into a plurality of volumes to generate a divided geographic coordinate system, wherein each volume of the plurality of volumes has a volume identifier; for each volume in the divided geographic coordinate system: identifying a subset of overlapping 3D models that include points within a respective volume; selecting one or more 3D models in the subset of overlapping 3D models based on one or more criteria; and generating a merged point cloud for the respective volume based on the one or more selected 3D models; and combining the merged point cloud for each volume to generate a merged point cloud for the geographic coordinate system region, as suggested by Barajas Hernandez, in order to accelerate final 3D model generation by effectively merging the different model portions from multiple 3D models/submodels into a complete, high-quality georeferenced 3D scene/site model, thereby increasing system effectiveness and user experience (see the sketch following the parallel rejection above).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SULTANA MARCIA ZALALEE, whose telephone number is (571) 270-1411. The examiner can normally be reached Monday-Friday, 8:00 am-4:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Sultana M Zalalee/Primary Examiner, Art Unit 2614