Prosecution Insights
Last updated: April 19, 2026
Application No. 18/563,961

METHOD FOR RECORDING INSPECTION DATA

Non-Final OA (§101, §103)
Filed
Nov 24, 2023
Examiner
ANDERSON, MICHAEL W
Art Unit
3693
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Proceq SA
OA Round
3 (Non-Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 2m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 44% (grants 93 of 213 resolved cases; -8.3% vs TC avg)
Interview Lift: +53.0% (allow rate with vs without an interview, based on resolved cases with an interview)
Avg Prosecution: 4y 2m typical timeline (7 applications currently pending)
Total Applications: 220 career filings, across all art units
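These tiles are simple ratios over the examiner's disposed cases. Below is a minimal sketch of the arithmetic in Python, using only the figures displayed above; treating 44% as the allow rate for cases without an interview is an assumption read off the with/without comparison, since the record-level data is not shown.

```python
# Minimal sketch of the examiner-metric arithmetic using the displayed figures.
# Assumption: the interview lift compares the allow rate of resolved cases
# with an interview (97%) against those without (taken as the displayed 44%).

granted, resolved = 93, 213
career_allow_rate = granted / resolved          # 0.4366... -> shown as 44%

with_interview = 0.97                           # allow rate, cases with interview
without_interview = 0.44                        # allow rate, cases without interview
interview_lift = with_interview - without_interview  # -> shown as +53.0%

print(f"Career allow rate: {career_allow_rate:.0%}")   # 44%
print(f"Interview lift: {interview_lift:+.1%}")        # +53.0%
```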

Statute-Specific Performance

§101: 32.8% (-7.2% vs TC avg)
§103: 32.5% (-7.5% vs TC avg)
§102: 7.0% (-33.0% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)
Compared against a Tech Center average estimate • Based on career data from 213 resolved cases
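The delta next to each statute is just the examiner's statute-specific rate minus the Tech Center average. A short sketch of that comparison follows; the TC averages used below are back-solved from the displayed deltas (roughly 40% for each statute) and are an assumption for illustration, not sourced data.

```python
# Per-statute comparison: examiner rate minus TC average yields the delta.
# The TC averages are back-solved from the deltas shown above (all ~40%);
# they are assumptions for illustration, not independently sourced figures.

examiner_rate = {"§101": 0.328, "§103": 0.325, "§102": 0.070, "§112": 0.181}
tc_average = {"§101": 0.400, "§103": 0.400, "§102": 0.400, "§112": 0.400}

for statute, rate in examiner_rate.items():
    delta = rate - tc_average[statute]
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```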

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

This office action is in response to the amendments filed by the applicant on November 28, 2025. Claims 1 and 29-31 have been amended and are hereby entered. Claims 19 and 27 have been cancelled by applicant. Claims 1-18, 20-26, and 28-33 are pending and have been examined. This action is made Non-Final.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on November 28, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

Applicant's arguments filed November 28, 2025 have been fully considered but they are not persuasive. With regard to the limitations of claims 1-18, 20-26, and 28-33, Applicant argues that the claims are patent eligible under 35 U.S.C. 101 because they meet the analysis set forth by the Supreme Court. The examiner respectfully disagrees. The claims were analyzed using the MPEP guidelines and are still considered ineligible under 35 U.S.C. 101, because they do not meet the requirements of the test set forth by the Supreme Court. Step 1 is met because the claims are directed towards one of the four statutory categories. Step 2A, Prong 1 evaluates whether the claims recite a judicial exception; Step 2A, Prong 2 evaluates whether the claims recite additional elements that integrate the exception into a practical application; and Step 2B evaluates whether the additional elements amount to significantly more. Whether the claim as a whole is directed to an abstract idea is the ultimate conclusion under both parts of the analysis. The claims are directed towards the abstract idea of a computer system that recites the steps of “…receiving…the sequence of images…, …generating…an estimate of the camera path…, obtaining…a first position…, …receiving…a first user input indicative of a first position…, …saving the model…, …summarizing the transaction attribute variables…, …obtaining a second position…, …calculating a first transformation…based upon the first position and second position…at an inspection position on the camera path…receiving inspection data…, …generating…report…”, which, as drafted and under its broadest reasonable interpretation, is a process of inspecting at various locations over time and comparing data, i.e., an observation, evaluation, judgment, and/or opinion performed in the human mind, and thus a mental process. Accordingly, the claim(s) recite an abstract idea. The claim as a whole is no more than a drafting effort designed to monopolize the exception. The additional limitations, taken individually and in combination, are not sufficient to amount to significantly more than the judicial exception because the claims do not provide improvements to another technology or technical field or to the functioning of the computer itself, and do not provide meaningful limitations beyond generally linking the use of an abstract idea to a particular technological environment. Accordingly, the claim(s) recite an abstract idea. On page 13, the applicant argues that because of a PTAB decision the claims should be eligible. The examiner respectfully disagrees. A PTAB decision is fact specific to the case being decided.
The term “generating an inspection report” and the following limitations are merely an output of what is done in the mind. See Electric Power Group. Accordingly, these limitations do not impose any meaningful limits on practicing the abstract idea.

In response to applicant's argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007).

The applicant argues that Bandyopadhyay does not teach the limitation of receiving a first user input. The examiner respectfully disagrees. Fleischman teaches “the camera path module 132 uses a SLAM (simultaneous localization and mapping) algorithm to simultaneously (1) determine an estimate of the camera path by inferring the location and orientation of the 360-degree camera 112 …(2) model the environment using direct methods or using landmark features (such as oriented FAST and rotated BRIEF (ORB), scale-invariant feature transform (SIFT), speeded up robust features (SURF), etc.) extracted from the sequence of images.” See at least paragraph 0037; “a floorplan is a to-scale, two-dimensional (2D) diagrammatic representation of an environment” [0038]; “the presence of a feature between two nodes can be detected manually (e.g., by user input)” [0090]; “the starting point may be provided as user input or determined based on location data (e.g., GPS or IPS data) received from the image capture system” [0094]; Fig. 3A and associated text & [0060-61] and Fig. 7 (user input device). This is in combination with Bandyopadhyay's teaching “accompanying drawing…Additionally, user input may occur via any input device associated with screen implementation, or other device.” [0106]. Therefore, it is the combination that teaches the limitation.

The applicant argues that Fleischman does not teach “automatically generating the inspection report, wherein the inspection report contains at least one of: a graphical representation of the scale drawing with position marks of the inspection positions, a graphical representation of the scale drawing with an indication of the camera path or an inspection area, and the inspection data together with a graphical representation of respective inspection positions on the scale drawing.” However, the Examiner respectfully disagrees. The limitations are taught by Fig. 3A-B and associated text & [0060-61]. For the reasons explained above, applicant's arguments are not persuasive.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18, 20-26, and 28-33 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1-18, 20-26, and 28-33 are directed to a system, method, or product, which are among the statutory categories of invention. (Step 1: YES.)
The Examiner has identified independent method Claim 1 as representative of the claimed invention for analysis; it is similar to independent product Claim 29 and system Claim 30. Claim 1 recites the limitations of: providing a camera capturing a sequence of images; moving the camera along a camera path through the environment; receiving the sequence of images from the camera as the camera is moved along the camera path; generating an estimate of the camera path in sensor space based on the sequence of images; for a first image of the sequence of images, taken at a first position on the camera path, obtaining a first position in sensor space and receiving a first user input indicative of a first position of the camera in scale drawing space, the receiving of the first user input comprising displaying a graphical representation of a scale drawing on a screen, and receiving an input event from the user indicative of the first position of the camera on the graphical representation of the scale drawing; for a second image of the sequence of images, taken at a second position on the camera path, obtaining a second position in sensor space and receiving a second user input indicative of a second position of the camera in scale drawing space, the receiving of the second user input comprising displaying a graphical representation of the scale drawing on the screen, and receiving an input event from the user indicative of the second position of the camera on the graphical representation of the scale drawing; calculating a first transformation between sensor space and scale drawing space based on the first position and second position in sensor space and the first position and second position in scale drawing space; at an inspection position on the camera path, receiving inspection data, and storing the inspection data together with data indicative of the inspection position in scale drawing space; and automatically generating the inspection report, wherein the inspection report contains at least one of: a graphical representation of the scale drawing with position marks of the inspection positions, a graphical representation of the scale drawing with an indication of the camera path or an inspection area, and the inspection data together with a graphical representation of respective inspection positions on the scale drawing.

These limitations, under their broadest reasonable interpretation, cover performance of the limitation in the mind. The limitation of inspecting at various locations over time and comparing data, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. Accordingly, the claim recites an abstract idea. The camera and the related capturing of images and moving of the camera in Claim 1 are mere data gathering, which is a form of insignificant extra-solution activity. The recitation of generic computer components in a claim does not necessarily preclude that claim from reciting an abstract idea. Claims 29 and 30 are also abstract for similar reasons. (Step 2A, Prong 1: YES. The claims recite an abstract idea.)

This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of: a camera (Claim 1); a computer, i.e., a non-transitory computer-readable medium (Claim 29); and a camera and processor (Claim 30).
The computer hardware/software is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. The camera and the related capturing of images and moving of the camera in Claim 1 are mere data gathering, which is a form of insignificant extra-solution activity. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea and are recited at a high level of generality. Therefore, claims 1, 29, and 30 are directed to an abstract idea without a practical application. (Step 2A, Prong 2: NO. The additional claimed elements are not integrated into a practical application.)

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an “inventive concept”) to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using computer hardware amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. See Applicant's specification para. [0010-25] about implementation using general purpose or special purpose computing devices and MPEP 2106.05(f), where applying a computer as a tool is not indicative of significantly more, as well as MPEP 2106.05(d), if applicable. In addition, according to the specification, “Conventional inspection includes an inspector walking in or around the building and inspecting it, e.g. by eye and/or by non-destructive testing (NDT) method. Documentation of the inspection is usually done by acquiring inspection data, e.g. by taking photos and/or NDT data, and manually associating them with the inspection position, i.e. the location where the inspection data was acquired. Ideally, the documentation comprises the inspection data associated with their respective inspection positions in a scale drawing,…” page 1. As well as “The camera 2 may be a video camera configured to record a plurality of images, i.e. frames, per second, e.g. 30 or 60 frames/s. The processor 1 and the camera 2 may be integral parts of the same device, e.g. a tablet computer or a smartphone” page 15. Accordingly, these additional elements do not change the outcome of the analysis, when considered separately and as an ordered combination. Thus, claims 1, 29, and 30 are not patent eligible. (Step 2B: NO. The claims do not provide significantly more.)

Dependent claims further define the abstract idea that is present in their respective independent claims 1, 29, and 30 and thus correspond to performance in the mind, and hence are abstract for the reasons presented above. The dependent claims do not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination. Therefore, the dependent claims are directed to an abstract idea. Thus, claims 1-18, 20-26, and 28-33 are not patent-eligible.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-8, 10-18, 20-24, 26, and 28-33 are rejected under 35 U.S.C. 103 as being unpatentable over Fleischman (U.S. Pub. No. US 20200027267 A1) in view of BANDYOPADHYAY (CA 2695841 A1).

Regarding claims 1, 29 and 30: Fleischman teaches: a camera configured to capture a sequence of images (Fig. 1 (112)); a computer; a processor in communication with the camera (Fig. 7 (701 & 701-109)); providing a camera capturing a sequence of images, moving the camera along a camera path through the environment (“The sequence of images is captured by an image capture system as it is moved through the environment along a camera path. For example, the environment may be a floor of a building that is under construction, and the sequence of images is captured as a construction worker walks through the floor with the image capture system” [0020]; [0028]); receiving the sequence of images from the camera, as the camera is moved along the camera path (“The camera path module 132 receives the images and the other data that were collected by the image capture system 110 as the system 110 was moved along the camera path and determines the camera path based on the received images and data….the camera path is defined as a 6D camera pose for each image in the sequence of images.” See at least paragraph 0036); generating an estimate of the camera path in sensor space based on the sequence of images (“the camera path module 132 uses a SLAM (simultaneous localization and mapping) algorithm to simultaneously (1) determine an estimate of the camera path by inferring the location and orientation of the 360-degree camera 112 and.” See at least paragraph 0037; Fig. 2A (224));

for a first image of the sequence of images, taken at a first position on the camera path, obtaining a first position in sensor space and receiving a first user input indicative of a first position of the camera in scale drawing space, the receiving of the first user input comprising displaying a graphical representation of a scale drawing on a screen, and [receiving an input event from the user] indicative of the first position of the camera on the graphical representation of the scale drawing (“the camera path module 132 uses a SLAM (simultaneous localization and mapping) algorithm to simultaneously (1) determine an estimate of the camera path by inferring the location and orientation of the 360-degree camera 112 …(2) model the environment using direct methods or using landmark features (such as oriented FAST and rotated BRIEF (ORB), scale-invariant feature transform (SIFT), speeded up robust features (SURF), etc.) extracted from the sequence of images.” See at least paragraph 0037; “a floorplan is a to-scale, two-dimensional (2D) diagrammatic representation of an environment” [0038]; “the presence of a feature between two nodes can be detected manually (e.g., by user input)” [0090]; “the starting point may be provided as user input or determined based on location data (e.g., GPS or IPS data) received from the image capture system” [0094]; Fig. 3A and associated text & [0060-61] and Fig. 7 (user input device));

for a second image of the sequence of images, taken at a second position on the camera path, obtaining a second position in sensor space and receiving a second [user input] indicative of a second position of the camera in scale drawing space, the receiving of the second [user input] comprising displaying a graphical representation of the scale drawing on the screen, and receiving an [input event from the user] indicative of the second position of the camera on the graphical representation of the scale drawing (“the SLAM algorithm is performed separately on each of the segments to generate a camera path segment for each segment of images. The motion processing module 220 receives the motion data 214 that was collected as the image capture system 110 was moved along the camera path and generates a second estimate 222 of the camera path.” See at least paragraphs 0046-47; “the presence of a feature between two nodes can be detected manually (e.g., by user input)” [0090]; “the starting point may be provided as user input or determined based on location data (e.g., GPS or IPS data) received from the image capture system” [0094]; Fig. 3A and associated text & [0060-61] and Fig. 7 (user input device));

calculating a first transformation between sensor space and scale drawing space based on the first position and second position in sensor space and the first position and second position in scale drawing space (“to calculate a difference between the three-dimensional locations of the two extracted images, as indicated by their respective 6D pose vectors.” See at least paragraph 0052, and “the feature classifier extracts image features (e.g., SIFT, SURF, or ORB features) from an image of the floorplan and uses the image features to classify different features (e.g., walls and doors) that appear at various positions in the floorplan.” See at least paragraph 0090);

at an inspection position on the camera path, receiving inspection data (“the general contractor has used the split-screen view to create a side-by-side view that displays an image from a day after drywall was installed on the right side and an image taken from an earlier date (e.g. the day before drywall was installed) on the left side. By using the visualization interface to “travel back in time” and view the electrical work before it was covered with the drywall, the general contractor can inspect the electrical issues while avoiding the need for costly removal of the drywall. Furthermore, because the spatial indexing system 130 can automatically index the location of every captured image without having a user perform any manual annotation,” See at least paragraph 0064); storing the inspection data together with data indicative of the inspection position in scale drawing space (“inspect…the spatial indexing system 130 can automatically index the location of every captured image” See at least paragraph 0064); automatically generating the inspection report, wherein the inspection report contains at least one of: a graphical representation of the scale drawing with position marks of the inspection positions, a graphical representation of the scale drawing with an indication of the camera path or an inspection area, and the inspection data together with a graphical representation of respective inspection positions on the scale drawing (Fig. 3A-B and associated text & [0060-61]).

Fleischman does not explicitly teach, but BANDYOPADHYAY teaches: receiving a first user input / receiving an input event from the user (“accompanying drawing…Additionally, user input may occur via any input device associated with screen implementation, or other device.” [0106]). It would have been obvious to one of ordinary skill in the art at the effective time of filing to have modified Fleischman to include the teachings of BANDYOPADHYAY because it provides “tracking systems to correct scaling errors.” (BANDYOPADHYAY, see at least paragraph 0106.)

Regarding claim 2: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: wherein the scale drawing is a floor plan (“to-scale, two-dimensional (2D) …floorplans” [0038]); wherein the scale drawing comprises a two-dimensional, to-scale representation of the environment in scale drawing space (“to-scale, two-dimensional (2D) …floorplans” [0038]); in particular wherein the environment is a building (“building or structure”, see at least paragraph 0038).

Regarding claim 3: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: wherein the scale drawing is a map (“aligning a camera path with a floorplan using a grid map of a floorplan”, see at least paragraph 0013 as well as 0019 & 0040); wherein the scale drawing comprises a two-dimensional, to-scale representation of the environment in scale drawing space, in particular wherein the environment is an outdoor environment or a mixed indoor-and-outdoor environment (“an outdoor area” see at least paragraph 0025; “to-scale, two-dimensional (2D) …floorplans” [0038]).

Regarding claim 4: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1.
Fleischman further teaches: wherein the scale drawing specifies positions and dimensions of physical features in the environment in scale drawing space, in particular positions and dimensions of at least one of walls, doors, windows, pillars and stairs, and/or in particular positions and dimensions of at least one of buildings, streets, paths, vegetation (“floorplan is a to-scale, two-dimensional (2D) diagrammatic representation of an environment (e.g., a portion of a building or structure) from a top-down perspective. The floorplan specifies the positions and dimensions of physical features in the environment, such as doors, windows, walls, and stairs.”, see at least paragraph 0038).

Regarding claim 5: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: wherein at least a part of the estimate of the camera path in sensor space is generated without taking into account global navigation satellite system (GNSS) position data (“an indoor positioning system (IPS) that determines the position of the image capture system based on signals received from transmitters placed at known locations in the environment. For example, multiple radio frequency (RF) transmitters that transmit RF fingerprints are placed throughout the environment, and the location sensors 116 also include a receiver that detects RF fingerprints and estimates the location of the video capture system 110 within the environment based on the relative intensities of the RF fingerprints.”, see at least paragraph 0031).

Regarding claim 6: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: wherein the estimate of the camera path in sensor space and the first transformation between sensor space and scale drawing space are calculated in a device moved along the camera path together with the camera (“combined estimate of the camera path is generated 440 by generating one or more additional estimates of the camera path, calculating a confidence score for each 6D pose in each path estimate, and selecting, for each spatial position along the camera path,”, see at least paragraph 0075).

Regarding claim 7: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: wherein the estimate of the camera path, the first transformation and the data indicative of the inspection position in scale drawing space are calculated in real time (“sends the captured data to the spatial indexing system 130 in real-time as the system 110 is being moved along the camera path.”, see at least paragraph 0033).

Regarding claim 8: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: wherein the inspection data comprises an image received from the camera, in particular wherein the inspection data additionally comprises an image from a 360-degree camera (“the general contractor can inspect the electrical issues…each of the images is a 360-degree image.”, see at least paragraphs 0064 & 0067).

Regarding claim 10: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1.
Fleischman further teaches: further comprising: for a third image of the sequence of images, taken at a third position on the camera path, obtaining a third position in sensor space and receiving a third user input indicative of a third position of the camera in scale drawing space, calculating a second transformation between sensor space and scale drawing space based on the second position and third position in sensor space and the second position and third position in scale drawing space, applying the second transformation for calculating data indicative of positions in scale drawing space, which are located on the camera path after the third position. (The Examiner could rely on MPEP 2144.04(VI), Duplication of Parts, and notes that there is some intended use language in the claim; however, the Examiner has provided the mapping as part of compact prosecution. “the combined estimate of the camera path is generated 440 by generating one or more additional estimates of the camera path, calculating a confidence score for each 6D pose in each path estimate, and selecting, for each spatial position along the camera path, the 6D pose with the highest confidence score. For instance, the additional estimates of the camera path may include one or more of: a second estimate using motion data, as described above, a third estimate using data from a GPS receiver, and a fourth estimate using data from an IPS receiver. As described above, each estimate of the camera path is a vector of 6D poses that describe the relative position and orientation for each image in the sequence...”, see at least paragraphs 0075-76.)

Regarding claim 11: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 10. Fleischman further teaches: retrospectively applying the second transformation for calculating data indicative of positions in scale drawing space, which are located on the camera path between the second position and the third position, in particular changing the stored data indicative of the inspection position in scale drawing space for inspection data located on the camera path between the second position and the third position. (The Examiner could rely on MPEP 2144.04(VI), Duplication of Parts, and notes that there is some intended use language in the claim; however, the Examiner has provided the mapping as part of compact prosecution. “The floorplan specifies the positions and dimensions of physical features in the environment, such as doors, windows, walls, and stairs. The different portions of a building or structure may be represented by separate floorplans. For example, in the construction example described above, the spatial indexing system 130 may store separate floorplans for each floor, unit, or substructure.”, see at least paragraphs 0038 & 75-76.)

Regarding claim 12: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: wherein the data indicative of the inspection position in scale drawing space comprise at least one of: the inspection position in scale drawing space, the inspection position in sensor space and the transformation between sensor space and scale drawing space, a timestamp of the inspection data and a timestamped version of the first estimate of the camera path in scale drawing space (“the general contractor is instead able to access the visualization interface and use the 2D overhead map view to identify the location within the building where the problem was discovered. The general contractor can then click on that location to view an image taken at that location. In this example, the image shown in FIG. 3C is taken at the location where the problem was discovered.”, see at least paragraph 0062 & Fig. 3B-C).

Regarding claim 13: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: wherein the first estimate of the camera path in sensor space is generated by visual odometry, in particular feature-based visual odometry, on the sequence of images (“The spatial indexing system 130 receives 410 a sequence of images from an image capture system 110. The images in the sequence are captured as the image capture system 110 is moved through an environment (e.g., a floor of a construction site) along a camera path… The spatial indexing system 130 generates 420 a first estimate of the camera path based on the sequence of images. The first estimate of the camera path can be represented, for example, as a six-dimensional vector that specifies a 6D camera pose for each image in the sequence.” See at least paragraphs 0067-68).

Regarding claim 14: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: wherein in generating the estimate of the camera path in sensor space, a vertical component of the camera path is neglected (“the camera height is assumed to have a constant value during the image capture process.” See at least paragraph 0051).

Regarding claim 15: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: receiving acceleration data captured by an inertial measurement unit and/or orientation data captured by a magnetometer as the inertial measurement unit and/or magnetometer is moved along the camera path together with the camera, additionally using the acceleration data and/or orientation data for calculating the estimate of the camera path in sensor space (“the motion data 214 also includes data from a magnetometer, the magnetometer data may be used in addition to or in place of the gyroscope data to determine changes to the orientation of the image capture system” [0047]; “accelerometer, gyroscope, and/or magnetometer data… in a time interval centered on, preceding, or subsequent to the time of the 6D pose;” See at least paragraph 0076).

Regarding claim 16: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 15. Fleischman further teaches: wherein the estimate of the camera path in sensor space is generated by performing visual inertial odometry on the sequence of images and at least one of the acceleration and orientation data (“the motion data 214 also includes data from a magnetometer, the magnetometer data may be used in addition to or in place of the gyroscope data to determine changes to the orientation of the image capture system” [0047]; “accelerometer, gyroscope, and/or magnetometer data… in a time interval centered on, preceding, or subsequent to the time of the 6D pose;” See at least paragraph 0076).

Regarding claim 17: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: displaying, in real time, on a graphical representation of the scale drawing, the inspection position and a current position of the camera in scale drawing space (Fig. 3A & [0060-61]).

Regarding claim 18: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 17. Fleischman further teaches: displaying, in real time, on the graphical representation of the scale drawing, the estimate of the camera path in scale drawing space (Fig. 3A & 6B-C).

Regarding claim 20: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: generating, in real time, an estimate of a camera viewing direction based on the sequence of images and, if applicable, on the acceleration and/or orientation data, storing the inspection data together with data indicative of the camera viewing direction at the inspection position in scale drawing space (Fig. 3A and associated text & [0060-61]).

Regarding claim 21: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 20. Fleischman further teaches: displaying, in real time, on a graphical representation of the scale drawing, the estimate of the camera viewing direction at a current position in scale drawing space (Fig. 3A-B and associated text & [0060-61]).

Regarding claim 22: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: triggering to automatically acquire inspection data in defined time intervals and/or in defined intervals of space along the camera path (“regular time intervals (e.g., every couple of days) in order to monitor changes within the space over a period of time.” [0006]).

Regarding claim 23: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: automatically triggering acquiring the inspection data upon reaching a predetermined inspection position, in particular upon the distance between a current position of the camera and the predetermined inspection position falling below a defined threshold (“A route vector for an extracted image specifies a spatial distance (i.e., a direction and a magnitude) between the extracted image and one of the other extracted images.” [0024]; “a route vector for an extracted image is a vector representing a spatial distance between the extracted image and one of the other extracted images. For instance, the route vector associated with an extracted image has its tail at that extracted image and its head at the other extracted image, such that adding the route vector to the spatial location of its associated image yields the spatial location of the other extracted image.” [0052]).

Regarding claim 24: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: generating guiding information for guiding the user to a predetermined inspection position, in particular by displaying the predetermined inspection position in scale drawing space on a graphical representation of the scale drawing, and/or in particular by displaying directions to the predetermined inspection position (“When the process is initiated, the first node that is identified is the starting point of the camera path. The starting point may be provided as user input or determined based on location data (e.g., GPS or IPS data) received from the image capture system.” [0094]).

Regarding claim 26: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: storing raw data indicative of the estimate of the camera path in sensor space, in particular three room coordinates, three rotation angles and a confidence measure; storing data indicative of the first, second and any further position in sensor space and of the first, second and any further position in scale drawing space (“calculating a confidence score for each 6D pose in each path estimate, and selecting, for each spatial position along the camera path, the 6D pose with the highest confidence score.” [0075]; “The SLAM algorithm estimates a six-dimensional (6D) camera pose (i.e., a 3D translation and a 3D rotation) for each of the images” [0019]).

Regarding claim 28: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman further teaches: generating and storing a representation of the environment in sensor space based on the sequence of images; upon cold start, receiving a further sequence of images from the camera located at a cold start position, generating an estimate of the cold start position in sensor space based on the further sequence of images and the representation of the environment, determining the cold start position in scale drawing space based on the estimate of the cold start position in sensor space and on the transformation between sensor space and scale drawing space calculated prior to cold start (“When the process is initiated, the first node that is identified is the starting point of the camera path. The starting point may be provided as user input or determined based on location data (e.g., GPS or IPS data) received from the image capture system.” [0094]).

Regarding claim 31: Fleischman, as shown in the rejection above, discloses the limitations of claim 30. Fleischman further teaches: wherein the camera is a 360-degree camera in communication with the processor, configured to acquire inspection data (“the general contractor can inspect the electrical issues…each of the images is a 360-degree image.”, see at least paragraphs 0064 & 0067 & Fig. 7 (701)).

Regarding claim 32: Fleischman, as shown in the rejection above, discloses the limitations of claim 30.
Fleischman further teaches: a display in communication with the processor, in particular wherein the inspection system comprises a tablet computer or a smartphone (“smartphone, tablet.”, see at least paragraph 0041 & Fig. 7 (713)).

Regarding claim 33: Fleischman, as shown in the rejection above, discloses the limitations of claim 30. Fleischman further teaches: an inertial measurement unit, in particular an accelerometer and/or a gyroscope, a magnetometer, a GNSS receiver in communication with the processor (“the motion data 214 also includes data from a magnetometer, the magnetometer data may be used in addition to or in place of the gyroscope data to determine changes to the orientation of the image capture system” [0047]; “accelerometer, gyroscope, and/or magnetometer data… in a time interval centered on, preceding, or subsequent to the time of the 6D pose;” see at least paragraph 0076 & Figs. 1 & 7).

Claims 9 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Fleischman (U.S. Pub. No. US 20200027267 A1) in view of BANDYOPADHYAY (CA 2695841 A1) in further view of Chen (U.S. Pub. No. 20130216089 A1).

Regarding claim 9: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman does teach non-destructive testing by camera, teaching “Traditionally, examining this electrical work would require tearing down the drywall and other completed finishes in order to expose the work, which is a very costly exercise.” [0062]; however, it does not teach the actual testing and methodology. However, Chen does teach: wherein the inspection data comprises non-destructive testing data, in particular at least one of a hardness value, ultrasonic data, ground-penetrating radar (GPR) data, eddy current data (“Ultrasonic measurements and static load tests in bridge evaluation. NDT”, see at least paragraph 0150; “Ground penetrating radar for concrete bridge” [0159-60]; “sensors, such as electromagnetic acoustic transducers, magnetic sensors, laser ultrasonics, infrared or thermal cameras, guided waves, field measurement probes, and strain gauges have been adopted to measure structural information, including static and dynamic displacement, strain and stress, acceleration, surface and interior damage, and corrosion.” [0010]). It would have been obvious to one of ordinary skill in the art at the effective time of filing to have modified the combination of Fleischman and BANDYOPADHYAY to include the testing and methodology as taught by Chen because it provides “accurate assessment of infrastructure condition and, through the promotion of proper maintenance, can reduce the cost of unnecessary structure replacement.” (Chen, see at least paragraph 0010.)

Regarding claim 25: The combination of Fleischman and BANDYOPADHYAY, as shown in the rejection above, discloses the limitations of claim 1. Fleischman does teach “the spatial indexing system 130 scores each node that is separated from the identified node by less than a threshold number of edges (i.e., the spatial indexing system 130 scores the nodes that are close to the identified node). This may be useful, for example, when the grid map includes a large number of nodes and edges and it would be too computationally intensive to score each of the other nodes. The scores are generated 630 based on the transition scores for the edges between the identified node and the other node. The score is further based on the direction of the first estimate of the camera path near the identified node. For instance, if the first estimate of the camera path travels to the left near the identified node, then a higher score is generated for the edge connecting the identified node to the adjacent node on its left, while lower scores are generated for the edges connecting the identified node to the adjacent nodes above, below, and to the right. The score is also based on the distance traveled by the first estimate of the camera path near the identified node. For example, if the next 6D pose vector on the camera path is 4 feet away, and adjacent nodes in the grid map are separated by a distance of 2 feet, then nodes that are separated from the identified node by two edges are assigned a higher score.” [0095-96]; however, it does not teach the user input. However, Chen does teach: generating, in real time, an error measure for the estimate of the camera path, if the error measure exceeds a defined error threshold at a current position outputting a warning or triggering the user to generate a further user input indicative of the current position of the camera in scale drawing space, calculating a further transformation between sensor space and scale drawing space based on the further position in sensor space and the further position in scale drawing space ([0054-57]; the Examiner is using BRI to interpret transformation to be image processing). It would have been obvious to one of ordinary skill in the art at the effective time of filing to have modified the combination of Fleischman and BANDYOPADHYAY to include the user input as taught by Chen because it provides “accurate assessment of infrastructure condition and, through the promotion of proper maintenance, can reduce the cost of unnecessary structure replacement.” (Chen, see at least paragraph 0010.)

CONCLUSION

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Jarecki (US 20230242279 A1) is pertinent because it receives captured data associated with a flight path flown by an unmanned aircraft system to acquire the captured data for the aircraft. The computer system compares the captured data with reference data for the aircraft to form a comparison. The computer system determines whether the captured data is within a set of tolerances for valid captured data using a result of the comparison. Prior to detecting anomalies for the aircraft using the captured data, the computer system determines a set of corrective actions in response to the captured data being outside of the set of tolerances for the valid captured data, in which the set of corrective actions is performed.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL W ANDERSON, whose telephone number is (571) 270-0508. The examiner can normally be reached Monday - Thursday, 9am-4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Debbie Reynolds, can be reached at (571) 272-0734. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Mike Anderson/
Supervisory Patent Examiner, Art Unit 3693
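A note for readers parsing the claim language at issue: the pivotal step is "calculating a first transformation between sensor space and scale drawing space" from two positions known in both spaces. The application's actual algorithm is not reproduced in this office action; the sketch below assumes the minimal construction the claim language suggests, a 2D similarity transform (scale, rotation, translation) fixed by two point correspondences, and is illustrative only.

```python
# Hedged sketch of the kind of computation recited in claim 1: a 2D similarity
# transform (scale, rotation, translation) between sensor space and scale
# drawing space, fixed by two point correspondences. This is the standard
# two-point construction, not the applicant's disclosed implementation.
import math

def similarity_from_two_points(s1, s2, d1, d2):
    """Return (scale, theta, tx, ty) mapping sensor points s1->d1 and s2->d2."""
    vs = (s2[0] - s1[0], s2[1] - s1[1])      # displacement in sensor space
    vd = (d2[0] - d1[0], d2[1] - d1[1])      # displacement in drawing space
    scale = math.hypot(*vd) / math.hypot(*vs)
    theta = math.atan2(vd[1], vd[0]) - math.atan2(vs[1], vs[0])
    c, s = math.cos(theta), math.sin(theta)
    # Choose the translation so that s1 lands exactly on d1.
    tx = d1[0] - scale * (c * s1[0] - s * s1[1])
    ty = d1[1] - scale * (s * s1[0] + c * s1[1])
    return scale, theta, tx, ty

def to_drawing_space(transform, p):
    """Map a sensor-space point p into scale-drawing space."""
    scale, theta, tx, ty = transform
    c, s = math.cos(theta), math.sin(theta)
    return (scale * (c * p[0] - s * p[1]) + tx,
            scale * (s * p[0] + c * p[1]) + ty)

# Two camera positions known in both spaces fix the transform; any later
# inspection position estimated in sensor space can then be placed on the plan.
T = similarity_from_two_points((0, 0), (4, 0), (10, 10), (10, 18))
print(to_drawing_space(T, (2, 1)))   # maps sensor (2, 1) to approximately (8.0, 14.0)
```

Once the transform is fixed, every subsequent position estimated along the camera path can be mapped onto the floor plan, which is what lets the claimed method place inspection data on the scale drawing automatically.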

Prosecution Timeline

Nov 24, 2023
Application Filed
May 12, 2025
Non-Final Rejection — §101, §103
Aug 15, 2025
Response Filed
Aug 26, 2025
Final Rejection — §101, §103
Nov 28, 2025
Request for Continued Examination
Dec 04, 2025
Response after Non-Final Action
Jan 14, 2026
Non-Final Rejection — §101, §103
Mar 25, 2026
Interview Requested
Apr 07, 2026
Applicant Interview (Telephonic)
Apr 07, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12590802
System for Guiding an Operator When Compacting Concrete
2y 5m to grant • Granted Mar 31, 2026
Patent 12570510
CARGO HANDLING MANAGEMENT DEVICE, IN-VEHICLE TERMINAL DEVICE, CONTROL METHOD, AND PROGRAM
2y 5m to grant • Granted Mar 10, 2026
Patent 12566068
DETERMINING VEHICLE ROUTE MAPS AND ROUTES
2y 5m to grant • Granted Mar 03, 2026
Patent 11250432
SYSTEMS AND METHODS FOR REDUCING FRAUD RISK FOR A PRIMARY TRANSACTION ACCOUNT
2y 5m to grant • Granted Feb 15, 2022
Patent 11244387
Approving and Updating Dynamic Mortgage Applications
2y 5m to grant • Granted Feb 08, 2022
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 44%
With Interview: 97% (+53.0%)
Median Time to Grant: 4y 2m
PTA Risk: High
Based on 213 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month