DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/30/2026 has been entered.
Response to Amendment
Received 03/30/2026
Claims 1, 5-14, and 18-30 are pending.
Claims 26 and 30 have been amended.
Claims 2-4 and 15-17 have been cancelled.
The rejections of claims 26-30 under 35 U.S.C. § 103 have been fully considered in view of the amendments received on 03/30/2026 and are fully addressed in the prior art rejection below.
Claim(s) 1, 5-14, and 18-25 are allowed.
Response to Arguments
Received 03/30/2026
Regarding independent claim(s) 26 and 30:
Applicant’s arguments (Remarks, Page 10: ¶ 1-5), filed 03/30/2026, with respect to the rejection of claim 26 under 35 U.S.C. § 103 have been fully considered and are persuasive. See also the Advisory Action of 03/09/2026. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection, necessitated by Applicant's amendments, is made in view of Troy et al. (US PGPUB No. 20170243399 A1), in view of Scott et al. (US PGPUB No. 20230073587 A1), and further in view of Dann et al. (US PGPUB No. 20120300984 A1).
Applicant’s arguments (Remarks, Page 11: ¶ 2-5), filed 03/30/2026, with respect to the rejection of claim 30 under 35 U.S.C. § 103 have been fully considered and are persuasive. See also the Advisory Action of 03/09/2026. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection, necessitated by Applicant's amendments, is made in view of Scott, in view of Chen et al. (US PGPUB No. 20240420454 A1), and further in view of Dann.
Regarding dependent claim(s) 27-29:
Applicant’s arguments (Remarks, Page 10: ¶ 6), filed 03/30/2026, with respect to the rejections of claims 27-29 under 35 U.S.C. § 103 have been fully considered and are persuasive due to their dependency upon claim 26. Therefore, the rejections have been withdrawn. However, upon further consideration, a new ground of rejection, necessitated by Applicant's amendments, is made in view of the prior art as mentioned above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 26-29 are rejected under 35 U.S.C. 103 as being unpatentable over Troy et al., US PGPUB No. 20170243399 A1, hereinafter Troy, in view of Scott et al., US PGPUB No. 20230073587 A1, hereinafter Scott, and further in view of Dann et al., US PGPUB No. 20120300984 A1, hereinafter Dann.
Regarding claim 26, Troy discloses a method for retroactively auditing a condition of a large-scale manufacturing product (Troy; a method for retroactively auditing (corresponding to a 3D visualization application accessing a 3D model) a condition of a large-scale manufacturing product [¶ 0024 and ¶ 0067-0068]), the method comprising:
capturing one or more camera images of the large-scale manufacturing product, each at a respective first point in perspective, each camera image depicting a respective captured surface region of the large-scale manufacturing product (Troy; capturing one or more camera images of the large-scale manufacturing product each camera image depicting a respective captured surface region of the large-scale manufacturing product each at a respective 1st point of view / perspective [¶ 0027 and ¶ 0031-0032]);
providing a three-dimensional computer model of the large-scale manufacturing product independent from the one or more camera images (Troy; providing a 3D computer model of the large-scale manufacturing product independent from the one or more camera images [¶ 0033 and ¶ 0036]; wherein, the 3D model is from a DB in relation with the 3D visualization application is independent for the data/images received from the digital camera [¶ 0062]);
an image-overlaid model comprising the three-dimensional computer model of the large-scale manufacturing product and each of the one or more camera images overlaid onto the three-dimensional computer model in a respective position, size, and orientation corresponding to the respective captured surface region (Troy; an image-overlaid model comprising the 3D computer model of the large-scale manufacturing product and each of the one or more camera images overlaid onto the 3D computer model in a respective position [¶ 0031-0033, ¶ 0037, and ¶ 0039]; moreover, an overlay according to a viewpoint [¶ 0034-0035]);
at a second point in perspective after each respective first point in perspective, determining a region of interest of the large scale manufacturing product (Troy; determining a ROI of the large scale manufacturing product at a 2nd point of view / perspective after each respective first point of view / perspective [¶ 0036-0037 and ¶ 0052]; wherein, interaction with a model correlates to the viewpoint being manipulated [¶ 0060]; moreover, the computer viewpoint and 3D models displayed in the virtual environment produce an approximate viewpoint based on a photograph of the target object [¶ 0033-0034]); and
displaying a view of the image-overlaid model that includes the captured camera images depicting a captured surface region including said determined region of interest of the large-scale manufacturing product (Troy; displaying a view of the image-overlaid model that includes the captured camera images depicting a captured surface region including said determined ROI of the large-scale manufacturing product [¶ 0060-0063 and ¶ 0065]; moreover, manipulating the viewpoint of a 3D model [¶ 0060]).
Troy fails to disclose captured surface regions in a condition free of defects at the first point in time;
generating an image-overlaid model comprising the three-dimensional computer model;
storing the image-overlaid model;
at a second point in time after each respective first point in time; and
in response to identifying the defect, retrieving the stored image-overlaid model and displaying a view of the image-overlaid model that includes the captured camera images depicting a captured surface region including said determined region of interest of the large-scale manufacturing product, thereby enabling verification that the region of interest was in the condition free of defects at the first point in time.
However, Scott teaches capturing one or more camera images of the large-scale manufacturing product, each at a respective first point in time (Scott; capturing one or more camera images of the large-scale manufacturing product each at a respective 1st point in time [¶ 0035-0036 and ¶ 0040]), each camera image depicting a respective captured surface region of the large-scale manufacturing product (Scott; each camera image depicting a respective captured surface region of the large-scale manufacturing product [¶ 0018-0019 and ¶ 0035-0036]), wherein the one or more camera images document the respective captured surface regions in a condition free of defects at the first point in time (Scott; the one or more camera images document the respective captured surface regions in a condition implicitly free of defects (given that the stored baseline image is utilized for comparisons) at the 1st point in time (i.e. a scan at a different time associated with a history) [¶ 0037-0038 and ¶ 0040]; moreover, comparing the 3D image record with one or more of a stored design model for the object or a stored baseline image record of the object [¶ 0049]; such that, generating findings data based on the comparing, where the findings data is indicative of a discrepancy identified between the 3D image record and the one or more of the stored design model for the object or the stored baseline image record of the object [¶ 0050, ¶ 0052, and ¶ 0066]);
providing a three-dimensional computer model of the large-scale manufacturing product independent from the one or more camera images (Scott; providing a 3D computer model of the large-scale manufacturing product independent from the one or more camera images [¶ 0035-0037]; wherein, design models or baseline scans are different than currently captured data [¶ 0017-0018]);
generating an image model comprising the three-dimensional computer model of the large-scale manufacturing product and each of the one or more camera images form the three-dimensional computer model (Scott; generating an image model comprising the 3D computer model of the large-scale manufacturing product and each of the one or more camera images form the 3D computer model [¶ 0015, ¶ 0024, and ¶ 0035-0037]) in a respective position, size, and orientation corresponding to the respective captured surface region (Scott; in an implicit respective position, size, and orientation (given alignment techniques) corresponding to the respective captured surface region [¶ 0017 and ¶ 0021-0022]);
storing the image model (Scott; storing the image model [¶ 0037-0038 and ¶ 0040]; additionally, generating 3D image records [¶ 0017 and ¶ 0021]);
at a second point in time after each respective first point in time, identifying a defect in determining a region of interest of the large scale manufacturing product (Scott; identifying a defect in determining a ROI (i.e. compared scan surface) of the large scale manufacturing product at an implicit 2nd point in time after each respective 1st point in time (given scanned history and lifecycle) [¶ 0036-0038 and ¶ 0040]; moreover, changes over time [id.]; in other words, a 1st point in time corresponds to a recorded baseline image model and a 2nd point in time corresponds to an updated scan image model [¶ 0015]; wherein, image capturing of an object occurs at any stage of manufacturing [¶ 0018]); and
in response to identifying the defect, retrieving the stored image model and displaying a view of the image model that includes the captured camera images depicting a captured surface region including said determined region of interest of the large-scale manufacturing product (Scott; retrieving the stored image model and displaying a view of the image model that includes the captured camera images depicting a captured surface region including said determined ROI (i.e. compared scan surface) of the large-scale manufacturing product in response to identifying the defect [¶ 0036-0038]; moreover, information for the 3D image record is stored along with the 3D image record and used to track the object during manufacturing or object lifecycle [¶ 0040]; wherein, a repository is also operable to store a plurality of sets of findings data, each set of findings data corresponding to one or more of the plurality of 3D image records for the object, and the findings data indicates any discrepancies identified between the 3D image record and the stored design model for the object or the stored baseline image record of the object used for the comparison [id.]; additionally, the 3D image records for an object can provide a history for the object, including changes over time, etc., and be used to identify and quantify changes, defects, and/or anomalies [id.]), thereby enabling verification of the region of interest condition at the first point in time (Scott; thereby enabling verification of the ROI (i.e. compared scan surface) condition at the first point in time (corresponding to a baseline and/or tracked history) [¶ 0037-0038 and ¶ 0040]).
Troy and Scott are considered to be analogous art because both pertain to generating and/or managing data in relation with sensor data and/or captured images, wherein one or more computerized units are utilized in order to produce a visualization effect.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Troy, to incorporate capturing one or more camera images of the large-scale manufacturing product, each at a respective first point in time, each camera image depicting a respective captured surface region of the large-scale manufacturing product, wherein the one or more camera images document the respective captured surface regions in a condition free of defects at the first point in time; providing a three-dimensional computer model of the large-scale manufacturing product independent from the one or more camera images; generating an image model comprising the three-dimensional computer model of the large-scale manufacturing product and each of the one or more camera images form the three-dimensional computer model in a respective position, size, and orientation corresponding to the respective captured surface region; storing the image model; at a second point in time after each respective first point in time, identifying a defect in determining a region of interest of the large scale manufacturing product; and in response to identifying the defect, retrieving the stored image model and displaying a view of the image model that includes the captured camera images depicting a captured surface region including said determined region of interest of the large-scale manufacturing product, thereby enabling verification of the region of interest condition at the first point in time (as taught by Scott), in order to provide improved automated and consistent image capture and comparison to design specifications or baseline image recordings (Scott; [¶ 0002-0004 and ¶ 0014-0015]).
Troy as modified by Scott fails to disclose generating an image-overlaid model comprising the three-dimensional computer model of the large-scale manufacturing product and each of the one or more camera images overlaid onto the three-dimensional computer model; and
enabling verification that the region of interest was in the condition free of defects at the first point in time.
However, Dann teaches generating an image-overlaid model comprising the three-dimensional computer model of the large-scale manufacturing product and each of the one or more camera images overlaid onto the three-dimensional computer model in a respective position, size, and orientation corresponding to the respective captured surface region (Dann; generating an image-overlaid model comprising the 3D computer model of the large-scale manufacturing product and each of the one or more camera images overlaid onto the 3D computer model in an implicit respective position, size, and orientation (given alignment techniques) corresponding to the respective captured surface region [¶ 0011-0015]; moreover, 3D virtual design model of an object [¶ 0006 and ¶ 0018]);
storing the image-overlaid model (Dann; storing the image-overlaid model [¶ 0035-0037 and ¶ 0044]); and
in response to identifying the defect, retrieving the stored image-overlaid model and displaying a view of the image-overlaid model that includes the captured camera images depicting a captured surface region including said determined region of interest of the large-scale manufacturing product (Dann; retrieving the stored image model and displaying a view of the image model that includes the captured camera images depicting a captured surface region including said determined ROI of the large-scale manufacturing product in response to identifying the defect [¶ 0036-0037 and ¶ 0040-0042]; moreover, with the damage layer and the real image layer laid over the model generated image [id. and ¶ 0046]), thereby enabling verification that the region of interest was in the condition free of defects at the first point in time (Dann; thereby enabling verification that the region of interest was in the condition free of defects at the 1st point in time (corresponding to a recorded history) [¶ 0044 and ¶ 0046-0047]; wherein a condition of being defect free is desirable given the recorded history [¶ 0041-0044], such that information may include one or more of the location, history, repair status, severity, etc. of each recorded damage [¶ 0037] via information stored within a DB [¶ 0045-0047]).
Troy in view of Scott and Dann are considered to be analogous art because they pertain to generating and/or managing data in relation with sensor data and/or captured images, wherein one or more computerized units are utilized in order to produce a visualization effect.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Troy as modified by Scott, to incorporate generating an image-overlaid model comprising the three-dimensional computer model of the large-scale manufacturing product and each of the one or more camera images overlaid onto the three-dimensional computer model in a respective position, size, and orientation corresponding to the respective captured surface region; storing the image-overlaid model; and in response to identifying the defect, retrieving the stored image-overlaid model and displaying a view of the image-overlaid model that includes the captured camera images depicting a captured surface region including said determined region of interest of the large-scale manufacturing product, thereby enabling verification that the region of interest was in the condition free of defects at the first point in time (as taught by Dann), in order to provide quick and accurate identification of one or more points of interest in combination with a reduction in operational disruptions (Dann; [¶ 0006-0007, ¶ 0021, and ¶ 0025]).
Regarding claim 27, Troy in view of Scott and Dann further discloses the method of claim 26, wherein the large-scale manufacturing product is an airframe (Troy; the large-scale manufacturing product is an airframe/airplane [¶ 0024 and ¶ 0027]).
Scott further teaches the large-scale manufacturing product is an airframe (Scott; the large-scale manufacturing product is airframe (aircraft fuselage) [¶ 0018-0019]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Troy as modified by Scott and Dann, to incorporate that the large-scale manufacturing product is an airframe (as taught by Scott), in order to provide improved automated and consistent image capture and comparison to design specifications or baseline image recordings (Scott; [¶ 0002-0004 and ¶ 0014-0015]).
Regarding claim 28, Troy in view of Scott and Dann further discloses the method of claim 26, wherein the large-scale manufacturing product is a fuselage (Scott; the large-scale manufacturing product is a fuselage [¶ 0018-0019]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Troy as modified by Scott and Dann, to incorporate that the large-scale manufacturing product is a fuselage (as taught by Scott), in order to provide improved automated and consistent image capture and comparison to design specifications or baseline image recordings (Scott; [¶ 0002-0004 and ¶ 0014-0015]).
Regarding claim 29, Troy in view of Scott and Dann further discloses the method of claim 26, wherein the large-scale manufacturing product is a fuselage interior (Scott; the large-scale manufacturing product is a fuselage interior [¶ 0018 and ¶ 0028], as illustrated within Fig. 1D).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Troy as modified by Scott and Dann, to incorporate that the large-scale manufacturing product is a fuselage interior (as taught by Scott), in order to provide improved automated and consistent image capture and comparison to design specifications or baseline image recordings (Scott; [¶ 0002-0004 and ¶ 0014-0015]).
Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over Scott, in view of Chen et al., US PGPUB No. 20240420454 A1, hereinafter Chen, and further in view of Dann.
Regarding claim 30, Scott discloses a method of identifying foreign object debris (FOD) on a type of large-scale manufacturing product (Scott; a method of identifying FOD (i.e. discrepancies) on a type of large-scale manufacturing product [¶ 0014-0015 and ¶ 0017-0018]; moreover, identifying discrepancies [¶ 0038, ¶ 0040, ¶ 0050, and ¶ 0052]), the method comprising:
capturing a plurality of camera images of the respective previous unit (Scott; the method [as addressed above] comprises capturing a plurality of camera images of the respective previous unit (corresponding to stored image data of an object) [¶ 0035-0038]; moreover, information for the 3D image record is stored along with the 3D image record and used to track the object during manufacturing or object lifecycle [¶ 0040]; wherein, a repository is also operable to store a plurality of sets of findings data, each set of findings data corresponding to one or more of the plurality of 3D image records for the object, and the findings data indicates any discrepancies identified between the 3D image record and the stored design model for the object or the stored baseline image record of the object used for the comparison [id.]; additionally, the 3D image records for an object can provide a history for the object, including changes over time, etc., and be used to identify and quantify changes, defects, and/or anomalies [id.]), each camera image depicting a respective captured surface region of the respective previous unit (Scott; each camera image depicting a respective captured surface region of the respective previous unit [¶ 0037-0038 and ¶ 0040]; moreover, comparing the 3D image record with one or more of a stored design model for the object or a stored baseline image record of the object [¶ 0049] associated with captured images [¶ 0035-0036]; such that, generating findings data based on the comparing, where the findings data is indicative of a discrepancy identified between the 3D image record and the one or more of the stored design model for the object or the stored baseline image record of the object [¶ 0050, ¶ 0052, and ¶ 0066]); and
generating a respective image-overlaid model by overlaying each of the plurality of camera images onto the three-dimensional computer model in a respective position, size, and orientation corresponding to the respective captured surface region (Scott; the method [as addressed above] comprises generating an image model comprising the 3D computer model of the large-scale manufacturing product and each of the one or more camera images form the 3D computer model [¶ 0015, ¶ 0024, and ¶ 0035-0037] in an implicit respective position, size, and orientation (given alignment techniques) corresponding to the respective captured surface region [¶ 0017 and ¶ 0021-0022]);
training a machine learning model for FOD-identification based on the plurality of image models of previous units of large-scale manufacturing product (Scott; the method [as addressed above] comprises training a ML model [¶ 0017 and ¶ 0060] for FOD-identification (i.e. identifying deficiencies) based on a plurality of image models of previous units (i.e. recorded/stored image data/models) of large-scale manufacturing products [¶ 0037-0038, ¶ 0040, ¶ 0049-0050, and ¶ 0052]; wherein, an image model is implicit (given image alignment and/or stitching in the process of forming the model) [¶ 0021-0022, ¶ 0036-0037, and ¶ 0048-0049]);
capturing new camera images of a new unit of the large-scale manufacturing product (Scott; the method [as addressed above] comprises capturing new camera images of a new unit of the large-scale manufacturing product [¶ 0034-0036 and ¶ 0040]; wherein, several sensors provide image data corresponding to one or more images correlated to 3D imaging [¶ 0047-0049 and ¶ 0052]; wherein new or updated images are obtained at different stages in a lifecycle [¶ 0015-0018, ¶ 0035-0036, and ¶ 0040]), each new camera image depicting a respective captured surface region of the new unit (Scott; each new/updated camera image depicting a respective captured surface region of the new/updated unit [¶ 0015-0017]; moreover, image capturing scenarios [¶ 0018] in relation with 3D imaging involving multiple images [¶ 0015-0017 and ¶ 0020-0021] in relation with stages of a manufacture lifecycle [¶ 0037-0038, ¶ 0040, ¶ 0049, and ¶ 0052]);
generating a new image model comprising the three-dimensional computer model and each of the new camera images form the three-dimensional computer model in a respective position, size, and orientation corresponding to the respective captured surface region (Scott; the method [as addressed above] comprises generating a new/updated image model comprising the 3D computer model [¶ 0036-0037 and ¶ 0048-0049] and each of the new/updated camera images form the 3D computer model in a respective position, size, and orientation (given image alignment and/or stitching related to the process of forming the model) corresponding to the respective captured surface region [¶ 0021-0022]; moreover, images in relation with CAD/CAM model(s) [¶ 0037 and ¶ 0049]; and moreover, converting captured image into a 3D image record (i.e. model) [¶ 0015, ¶ 0040, and ¶ 0052]); and
using the machine learning model for FOD-identification to identify FOD based on the new camera images (Scott; using the ML model [as addressed above] for FOD-identification (i.e. identifying deficiencies) to identify FOD based on the new/updated camera images [¶ 0037-0038, ¶ 0040, ¶ 0049-0050, and ¶ 0052]; wherein, inspection can be used at any point in relation with identifying deficiencies [¶ 0015 and ¶ 0017]).
Scott fails to disclose for each of a plurality of previous units of said type of large-scale manufacturing product:
a computer model of the type of large-scale manufacturing product; and
generating a respective image-overlaid model by overlaying each of the plurality of camera images onto the three-dimensional computer model in a respective position, size, and orientation corresponding to the respective captured surface region;
training of a type of model; and
wherein each image-overlaid model comprises camera images of the respective unit overlaid on the three-dimensional model of the type of large-scale manufacturing product.
However, Chen teaches for each of a plurality of previous units of said type of large-scale manufacturing product (Chen; for each of a plurality of previous units (i.e. training data of an object) of said type of large-scale manufacturing product [¶ 0057-0060], as illustrated within Fig. 3):
capturing a plurality of camera images of the respective previous unit (Chen; capturing a plurality of camera images (via one or more detectors) of the respective previous unit (i.e. training data of an object) [¶ 0057-0060]; wherein, camera images corresponds to image dataset [¶ 0035-0036]; additionally, ground truth data [¶ 0049 and ¶ 0061]), each camera image depicting a respective captured surface region of the respective previous unit (Chen; each camera image depicting a respective captured surface region [¶ 0033 and ¶ 0035-0036] of the respective previous unit (i.e. training data of an object) [¶ 0059-0060]; wherein, Fig. 1 illustrates, a capture surface region (or ROI) [¶ 0039-0040] associated with one or more objects of interest [¶ 0038, ¶ 0044-0045, and ¶ 0051-0052]);
training a machine learning model for FOD-identification based on a plurality of models of previous units of said type of large-scale manufacturing products (Chen; training a machine learning model for FOD-identification based on a plurality of models of previous units of said type of large-scale manufacturing products [¶ 0057-0058, ¶ 0060, and ¶ 0062]);
providing a computer model of the type of large-scale manufacturing product independent from the camera images (Chen; providing a computer model of the type of large-scale manufacturing product independent from the camera images [¶ 0040 and ¶ 0062]; moreover, different configurations or classes of an object [¶ 0043-0044 and ¶ 0051-0052] associated with a trained detector [¶ 0054-0056]); and
using the machine learning model for FOD-identification to identify FOD based on the new camera images (Chen; using the machine learning model for FOD-identification to identify FOD based on the implicit new camera images (given the training of the detector to be used in a non-training environment) [¶ 0002-0003, ¶ 0060, and ¶ 0062]).
Scott and Chen are considered to be analogous art because both pertain to generating and/or managing data in relation with sensor data and/or captured images, wherein one or more computerized units are utilized in order to produce a visualization effect.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Scott, to incorporate, for each of a plurality of previous units of said type of large-scale manufacturing product: capturing a plurality of camera images of the respective previous unit, each camera image depicting a respective captured surface region of the respective previous unit; training a machine learning model for FOD-identification based on a plurality of models of previous units of said type of large-scale manufacturing products; providing a computer model of the type of large-scale manufacturing product independent from the camera images; and using the machine learning model for FOD-identification to identify FOD based on the new camera images (as taught by Chen), in order to provide improved machine learning predictions using training techniques (Chen; [¶ 0002-0004]).
Scott as modified by Chen fails to disclose generating a respective image-overlaid model by overlaying each of the plurality of camera images onto the three-dimensional computer model.
However, Dann teaches generating a respective image-overlaid model by overlaying each of the plurality of camera images onto the three-dimensional computer model in a respective position, size, and orientation corresponding to the respective captured surface region (Dann; generating a respective image-overlaid model by overlaying each of the plurality of camera images onto the 3D computer model in an implicit respective position, size, and orientation (given alignment techniques) corresponding to the respective captured surface region [¶ 0011-0015]; moreover, 3D virtual design model of an object [¶ 0006 and ¶ 0018]).
Scott in view of Chen and Dann are considered to be analogous art because they pertain to generating and/or managing data in relation with sensor data and/or captured images, wherein one or more computerized units are utilized in order to produce a visualization effect.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Scott as modified by Chen, to incorporate generating a respective image-overlaid model by overlaying each of the plurality of camera images onto the three-dimensional computer model in a respective position, size, and orientation corresponding to the respective captured surface region (as taught by Dann), in order to provide quick and accurate identification of one or more points of interest in combination with a reduction in operational disruptions (Dann; [¶ 0006-0007, ¶ 0021, and ¶ 0025]).
Allowable Subject Matter
Claims 1, 5-14, and 18-25 are allowed.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to the PTO-892, Notice of References Cited, for a listing of analogous art.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Charles Lloyd Beard whose telephone number is (571) 272-5735. The examiner can normally be reached Monday - Friday, 8:00 AM - 5:00 PM, alternate Fridays EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
CHARLES LLOYD BEARD
Primary Examiner
Art Unit 2611
/CHARLES L BEARD/ Primary Examiner, Art Unit 2611