Prosecution Insights
Last updated: April 19, 2026
Application No. 18/068,960

APPARATUSES AND METHODS FOR DETERMINING THE VOLUME OF A STOCKPILE

Final Rejection — §103, §112
Filed: Dec 20, 2022
Examiner: ESQUINO, CALEB LOGAN
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Purdue Research Foundation
OA Round: 2 (Final)
Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Grants 69% — above average
Career Allow Rate: 69% (11 granted / 16 resolved; +6.8% vs TC avg)
Interview Lift: +41.7% — strong (allow rate among resolved cases with an interview vs. without)
Avg Prosecution: 3y 0m (27 currently pending)
Total Applications: 43 across all art units
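The headline figures above are simple ratios over the examiner's resolved cases. As a minimal Python sketch of how an allow rate and interview lift can be computed from per-case records — the records below are hypothetical placeholders, not the dataset behind this page:

```python
# Illustrative only: reconstructs allow rate and interview lift from
# hypothetical per-case records (granted?, had_interview?).
cases = [
    (True, True), (True, False), (True, True), (False, False),
    # ... 16 records in the real data; these four are placeholders
]

def allow_rate(cases):
    return sum(granted for granted, _ in cases) / len(cases)

def interview_lift(cases):
    with_iv = [g for g, iv in cases if iv]
    without_iv = [g for g, iv in cases if not iv]
    # Lift = allow rate among interviewed cases minus allow rate among
    # non-interviewed cases, in percentage points.
    return sum(with_iv) / len(with_iv) - sum(without_iv) / len(without_iv)

print(f"career allow rate: {allow_rate(cases):.1%}")
print(f"interview lift:    {interview_lift(cases):+.1%}")
```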

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 55.8% (+15.8% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§112: 18.6% (-21.4% vs TC avg)
Tech Center average is an estimate • Based on career data from 16 resolved cases
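Each "vs TC avg" figure is just the examiner's per-statute rate minus the Tech Center average. Back-solving the four deltas above gives a 40.0% baseline in every row, which suggests a single pooled TC estimate; a worked sketch under that assumption:

```python
# Each delta is (examiner rate - Tech Center average), in percentage points.
# Back-solving from the dashboard figures gives 40.0 for every statute
# (e.g. 55.8 - 15.8 for section 103), suggesting one pooled TC baseline.
examiner_rate = {"101": 6.1, "103": 55.8, "102": 17.2, "112": 18.6}
tc_average = 40.0  # inferred from the published deltas, not a known figure

for statute, rate in examiner_rate.items():
    delta = rate - tc_average
    print(f"§{statute}: {rate:.1f}%  ({delta:+.1f}% vs TC avg)")
```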

Office Action

§103, §112
DETAILED ACTION

This action is in response to the remarks and amendments filed on September 24th, 2025. Claims 1-20 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 1 now requires that a first and second estimate of the location and orientation of the image sensor in relation to the stockpile be based entirely on the image data from the image sensor. Claim 12 also requires that the second estimate of the locations and rotational orientations of the image sensor in relation to the stockpile be based entirely on the image data. This requires that there be a single image sensor and that no other data (except the image data) be used to assist when creating the estimate or estimates. This language is not consistent with the specification. Figure 6 shows the corresponding first estimate (labelled as “Coarse registration”), but also shows that at least the point cloud and the camera IOP are used to assist in the initial location and rotational estimate of the station. Claims 2-11 and 13-20 are also rejected due to their dependency on a rejected independent claim. To overcome this rejection, applicant could amend the claim language to include the use of the camera IOP and point clouds, or show sufficient evidence in the specification to prove that the coarse registration can be done entirely based on image data.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 10, 12-14, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Obropta (US20180047177) in view of Robert (US20230007962).

In regards to claim 1, Obropta teaches a system for determining the volume of a stockpile, comprising: a sensor package including an image sensor configured to collect image data of a stockpile, and a light detection and ranging sensor connected to the image sensor and configured to collect additional information of the stockpile; and one or more processors (Obropta Paragraph [0034] “In some embodiments, the device may include: (1) an imaging sensor (e.g., a color camera, a monochrome camera, a multi-spectral camera, etc.) configured to capture one or more image(s) of a set of harvested specialty crops; (2) a depth sensor (e.g., an ultrasonic sensor, a LIDAR (Light Detection and Ranging) sensor, another imaging sensor, etc.); (3) processing circuitry configured to generate depth information at least in part by using data obtained by the depth sensor (and, in some embodiments, by also using the image(s) obtained by the imaging sensor);”) configured to: receive the image data (Obropta Paragraph [0152] “As shown in FIG. 8, process 800 includes: (1) obtaining an image of a set harvested specialty crops and associated depth information at act 802”), receive the additional information from the light detection and ranging sensor (Obropta Paragraph [0148] “Next, process 700 proceeds to act 704, where the sensor package obtains depth data associated with the image using a depth sensor. Examples of depth sensors are provided herein. The depth data may include ultrasound data, LIDAR data, and/or imaging data.”), generate an estimate of the stockpile volume based on the second estimate of the location and rotational orientation of the image sensor (Obropta Paragraph [0157] “Next, at act 806, the volume of the set of harvested specialty crops may be estimated using the 3D surface model generated at act 804. In some embodiments, the volume may be determined by computing a volume integral of the area underneath the 3D surface model. In some embodiments, e.g., where the surface model includes a collection of cuboids, the volume may be calculated by calculating the volume of portions of the surface model, e.g., one cuboid or section of the mesh at a time, and sum the volumes of the portions.” Examiner note: Given that the stockpile volume is based on a 3D model which was computed based on an image captured from the imaging sensor, the volume must be at least partially based on where the imaging sensor is located and how it is rotated), and provide the estimate of the stockpile volume to a user interface (Obropta Paragraph [0166] “Next, at step 816, any of the information derived during process 800 from the image of the set of harvested specialty crops and the associated depth information is output. In some embodiments, the information may be stored for subsequent use. In some embodiments, the information may be provided to an operator of harvesting equipment (e.g., a harvester or bin piler) and the operator may alter the operation of the harvesting equipment based on the received information.”).

Obropta does not teach generating a first estimate of a location and rotational orientation of the image sensor in relation to the stockpile based entirely on the image data from the image sensor, and generating a second estimate of the location and rotational orientation of the image sensor in relation to the stockpile based entirely on the image data. However, Robert teaches generating a first estimate of a location and rotational orientation of the image sensor in relation to the stockpile based entirely on the image data from the image sensor (Robert Figure 2 Step 210-240; Paragraph [0023] “At step 210, the photogrammetry application receives a set of images 150 of a scene captured by one or more physical cameras.”; Paragraph [0027] “In a first phase of this step, pairwise camera geometry is estimated. In this phase relative camera pose for each image pair is derived. Then, in a second phase, global camera geometry is estimated. Camera pose is estimated by first estimating rotations for camera(s) (e.g., using global averaging), and then deriving translations for camera(s) using the estimated rotations.”), and generating a second estimate of the location and rotational orientation of the image sensor in relation to the stockpile based entirely on the image data (Robert Figure 2 Step 250; Paragraph [0015] “As used herein, the term “camera”, when used without the modifier “physical”, refers to a mathematical object defined according to a pinhole camera model that describes how an image was captured. There is a one-to-one correspondence between cameras and images captured at different positions and orientations. This is independent of the particular physical camera that may have captured the images.”; Paragraph [0030] “As part of step 250, camera pose and position of keypoints in 3D space may be scaled and georeferenced. Typically, a global transform is generated for all cameras under consideration (i.e. all cameras for direct methods, or a growing subset of cameras for incremental methods) that best maps camera positions in 3D space to actual camera positions.” Examiner note: While paragraph [0030] describes multiple “cameras”, it can be seen from paragraph [0015] that these cameras are not necessarily different imaging sensors, but are instead different locations and orientations of images captured by one or more physical cameras).

Robert is considered to be analogous to the claimed invention because they are in the same field of determining the orientation and location of a plurality of cameras. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Obropta to include the teachings of Robert, to provide the advantage of reducing calculation time and therefore saving processing power (Robert Paragraph [0050] “Integrating sensor-derived camera-pose at an early stage of the process may have a number of advantages, as mentioned above. Calculation efficiency may be improved, thereby reducing processor and memory resource consumption. Using the present techniques, it is possible to compute camera rotation in the 3D space of the scene from sensor-derived camera position using only pairwise relations between the given camera and other cameras that observe a common portion of the scene.”).

In regards to claim 2, Obropta in view of Robert teaches the system of claim 1, wherein the one or more processors are configured to generate a first estimate of the location and rotational orientation of the image sensor utilizing quaternions (Robert Paragraph [0048] “At step 440, the camera rotation calculation subprocess 134 determines a camera rotation in the 3D space of the scene that best maps unit vectors defined based on differences in the optical centers determined from sensor-derived camera positions (from step 420) to the unit vectors along the epipoles (from step 430). As discussed above, provided there are multiple camera pairs available, an approximate solution may be determined using a singular value decomposition and cancelling the inside diagonal matrix or using unit quaternions”).

In regards to claim 3, Obropta in view of Robert teaches the system of claim 2, wherein the one or more processors are configured to generate a first estimate of the location and rotational orientation of the image sensor utilizing image comparison (Robert Figure 2 Step 220-230; Paragraph [0024] “At step 220, a feature detection process 131 of the photogrammetry application identifies common image features in individual images across the set of images, and computes information describing these image features.”; Paragraph [0025] “At step 230, a feature correspondence and filtering process 132 of the photogrammetry application matches image features across image pairs.”).

In regards to claim 4, Obropta in view of Robert teaches the system of claim 1, wherein the one or more processors are configured to generate a first estimate of the location and rotational orientation of the image sensor by comparison of images at different rotational orientations and different locations in relation to the stockpile (Robert Figure 2 Step 220-230; Paragraph [0024] “At step 220, a feature detection process 131 of the photogrammetry application identifies common image features in individual images across the set of images, and computes information describing these image features.”; Paragraph [0025] “At step 230, a feature correspondence and filtering process 132 of the photogrammetry application matches image features across image pairs.”).
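The Robert passages mapped to claims 1-4 describe a two-phase, image-only pose pipeline: pairwise relative poses from matched features, then global averaging of rotations (claim 2's quaternions). Purely as an illustration, and not code from either reference, here is a minimal numpy sketch of unit-quaternion rotation averaging of the kind such a global averaging step could use:

```python
import numpy as np

def average_quaternions(quats):
    """Average unit quaternions (w, x, y, z) via the principal eigenvector
    of the accumulated outer-product matrix (the Markley et al. method).
    Insensitive to the q / -q sign ambiguity of quaternion rotations."""
    M = np.zeros((4, 4))
    for q in quats:
        q = q / np.linalg.norm(q)
        M += np.outer(q, q)          # outer product is identical for q and -q
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -1]            # eigenvector with the largest eigenvalue

# Hypothetical per-image-pair rotation estimates for one camera station,
# all near the identity rotation (1, 0, 0, 0):
estimates = np.array([
    [0.999, 0.010, -0.020, 0.015],
    [0.998, 0.015, -0.018, 0.020],
    [-0.999, -0.012, 0.019, -0.017],  # same rotation, opposite sign
])
print(average_quaternions(estimates))
```

The outer-product form is a common choice for this step precisely because q and -q encode the same rotation, so sign flips across pairwise estimates do not skew the average.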
In regards to claim 10, Obropta in view of Robert teaches the system of claim 1, wherein the one or more processors are configured to perform digital surface model generation for volume estimation (Obropta Paragraph [0155] “Next, at act 804, a 3D surface model of the set of harvested specialty crops is generated using the image and depth information received at act 802.”; Paragraph [0157] “Next, at act 806, the volume of the set of harvested specialty crops may be estimated using the 3D surface model generated at act 804.”).

In regards to claim 12, Obropta teaches a method for determining the volume of a stockpile, comprising: receiving image data related to the stockpile from an image sensor (Obropta Paragraph [0152] “As shown in FIG. 8, process 800 includes: (1) obtaining an image of a set harvested specialty crops and associated depth information at act 802”); receiving range information data from a range sensor to multiple portions of the surface of the stockpile (Obropta Paragraph [0148] “Next, process 700 proceeds to act 704, where the sensor package obtains depth data associated with the image using a depth sensor. Examples of depth sensors are provided herein. The depth data may include ultrasound data, LIDAR data, and/or imaging data.”); generating with a processor a first estimate of the location of the image sensor in relation to the stockpile based entirely on the image data (Obropta Paragraph [0067] “In some embodiments, the sensor package 211a may contain any of the numerous types of sensors described herein including, by way of example and not limitation, one or more imaging sensors (e.g., a color camera, a monochrome camera, or a multi-spectral camera, etc.) and one or more depth sensors (e.g., one or more ultrasound sensors, one or more LIDAR sensors, or one or more additional imaging sensors, etc.). In some embodiments, sensor package 211a may be configured to obtain information used for determining the distance between the harvested specialty crops on the conveyor 209 and the sensor package 211a.”); generating with a processor an estimate of the stockpile volume based on the second estimate of the location and rotational orientation of the image sensor (Obropta Paragraph [0157] “Next, at act 806, the volume of the set of harvested specialty crops may be estimated using the 3D surface model generated at act 804. In some embodiments, the volume may be determined by computing a volume integral of the area underneath the 3D surface model. In some embodiments, e.g., where the surface model includes a collection of cuboids, the volume may be calculated by calculating the volume of portions of the surface model, e.g., one cuboid or section of the mesh at a time, and sum the volumes of the portions.”); and providing via a user interface information concerning the volume of the stockpile (Obropta Paragraph [0166] “Next, at step 816, any of the information derived during process 800 from the image of the set of harvested specialty crops and the associated depth information is output. In some embodiments, the information may be stored for subsequent use. In some embodiments, the information may be provided to an operator of harvesting equipment (e.g., a harvester or bin piler) and the operator may alter the operation of the harvesting equipment based on the received information.”).

Obropta does not teach generating with a processor a first estimate of the location of the image sensor in relation to the stockpile based entirely on the image data. However, Robert teaches generating with a processor a first estimate of the location of the image sensor in relation to the stockpile based entirely on the image data (Robert Figure 2 Step 210-240; Paragraph [0023] “At step 210, the photogrammetry application receives a set of images 150 of a scene captured by one or more physical cameras.”; Paragraph [0027] “In a first phase of this step, pairwise camera geometry is estimated. In this phase relative camera pose for each image pair is derived. Then, in a second phase, global camera geometry is estimated. Camera pose is estimated by first estimating rotations for camera(s) (e.g., using global averaging), and then deriving translations for camera(s) using the estimated rotations.”); and generating with a processor a second estimate of the locations and rotational orientations of the image sensor in relation to the stockpile based entirely on the image data (Robert Figure 2 Step 250; Paragraph [0015] “As used herein, the term “camera”, when used without the modifier “physical”, refers to a mathematical object defined according to a pinhole camera model that describes how an image was captured. There is a one-to-one correspondence between cameras and images captured at different positions and orientations. This is independent of the particular physical camera that may have captured the images.”; Paragraph [0030] “As part of step 250, camera pose and position of keypoints in 3D space may be scaled and georeferenced. Typically, a global transform is generated for all cameras under consideration (i.e. all cameras for direct methods, or a growing subset of cameras for incremental methods) that best maps camera positions in 3D space to actual camera positions.”).

Robert is considered to be analogous to the claimed invention because they are in the same field of determining the orientation and location of a plurality of cameras. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Obropta to include the teachings of Robert, to provide the advantage of reducing calculation time and therefore saving processing power (Robert Paragraph [0050] “Integrating sensor-derived camera-pose at an early stage of the process may have a number of advantages, as mentioned above. Calculation efficiency may be improved, thereby reducing processor and memory resource consumption. Using the present techniques, it is possible to compute camera rotation in the 3D space of the scene from sensor-derived camera position using only pairwise relations between the given camera and other cameras that observe a common portion of the scene.”).

In regards to claim 13, Obropta in view of Robert renders obvious the claim limitations as in the consideration of claims 2, 3, and 12 above. In regards to claim 14, Obropta in view of Robert renders obvious the claim limitations as in the consideration of claims 4 and 12 above. In regards to claim 18, Obropta in view of Robert renders obvious the claim limitations as in the consideration of claims 10 and 12 above.
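Claims 10 and 18 turn on generating a digital surface model (DSM) and estimating volume from it; Obropta's [0157] describes the volume as an integral under the 3D surface model, optionally summed cuboid by cuboid. A self-contained toy sketch of that cuboid-sum idea, with hypothetical grid values rather than the applicant's or Obropta's code:

```python
import numpy as np

# A digital surface model (DSM): heights of the stockpile surface above
# the base plane, sampled on a regular grid. Values are hypothetical.
cell_size = 0.5                      # metres per grid cell edge
heights = np.array([
    [0.0, 0.4, 0.6, 0.3],
    [0.2, 1.1, 1.5, 0.5],
    [0.1, 0.9, 1.2, 0.4],
])                                   # metres above the base plane

# Cuboid-sum approximation of the volume integral: each cell contributes
# (cell area) x (surface height), mirroring the "one cuboid ... at a time"
# description in Obropta [0157].
volume = float(np.sum(heights) * cell_size ** 2)
print(f"estimated stockpile volume: {volume:.2f} m^3")
```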
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Obropta in view of Robert as applied to claims 1-4, 10, 12-14, and 18 above, and further in view of Lukas (US20100284572).

In regards to claim 5, Obropta in view of Robert teaches the system of claim 1, but fails to teach wherein the one or more processors are configured to perform segmentation of planar features from individual scans. However, Lukas teaches wherein the one or more processors are configured to perform segmentation of planar features from individual scans (Lukas Figure 1 Part 102; Paragraph [0041] “Next, processor 104 determines which cells qualify as planar feature candidates (314). A cell qualifies as a planar feature candidate when a number of data points associated with the cell is equal to or above a threshold. Similar to block 310, cells having a number of data points below a certain threshold are disregarded.” Examiner note: Part 102 is the scanner which produces an individual scan; the second excerpt shows segmenting planar features.).

Lukas is considered to be analogous to the claimed invention because they are in the same field of identifying features having a flat surface. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Obropta in view of Robert to include the teachings of Lukas, to provide the advantage of a system that works well with 3D data (such as LIDAR) (Lukas Paragraph [0002] “Point feature based approaches work well for some images. For example, point feature based approaches are a commonly used when dealing with standard intensity images and infrared images. Point feature based approaches, however, do not perform as well with images containing 3D data.”).

In regards to claim 15, Obropta in view of Robert and Lukas renders obvious the claim limitations as in the consideration of claims 5 and 12 above.
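Lukas's cell test in [0041] is a counting rule: bin the scan into grid cells and keep cells whose point count meets a threshold as planar feature candidates. A small numpy sketch of that rule with hypothetical data; a real pipeline would also fit a plane to each candidate cell:

```python
import numpy as np

def planar_candidate_cells(points, cell_size=1.0, min_points=20):
    """Bin 3D points into an XY grid and keep cells whose point count meets
    a threshold, mirroring the cell-count test Lukas describes; cells below
    the threshold are disregarded."""
    cells = {}
    for p in points:
        key = (int(p[0] // cell_size), int(p[1] // cell_size))
        cells.setdefault(key, []).append(p)
    return {k: np.asarray(v) for k, v in cells.items() if len(v) >= min_points}

# Hypothetical scan: a dense flat patch plus scattered noise points.
rng = np.random.default_rng(0)
flat = np.column_stack([rng.uniform(0, 1, 200),
                        rng.uniform(0, 1, 200),
                        np.full(200, 0.05)])
noise = rng.uniform(0, 5, size=(30, 3))
candidates = planar_candidate_cells(np.vstack([flat, noise]))
print(f"{len(candidates)} planar candidate cell(s)")
```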
Claims 6-7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Obropta in view of Robert as applied to claims 1-4, 10, 12-14, and 18 above, and further in view of Tan (US20230115887).

In regards to claim 6, Obropta in view of Robert teaches the system of claim 1, but fails to teach wherein the one or more processors are configured to perform image-based coarse registration of sensor scans at a single data collection location. However, Tan teaches wherein the one or more processors are configured to perform image-based coarse registration of sensor scans at a single data collection location (Tan Figure 5 Step 568; Paragraph [0051] “The method 500 may perform an initial iterative closest point (ICP) process 568 based on the pose adjusted digital twin and the preprocessed union data to perform the coarse alignment between the real world object and the digital twin. FIG. 8A shows the alignment of the object and digital twin after coarse alignment (with the real object shown as an image and the points of the digital twin with coarse alignment shown as squares with each square representing real world scanned 3D points by a scanner (for example, HoloLens) related to the digital twin CAD model. As shown in FIG. 8A, the alignment is still very rough and does not meet the criteria for a good alignment as discussed above. FIG. 8B shows the union of the point clouds for the coarse alignment based on the high confidence points from the pre-trained models shown in FIGS. 7A and 7B.” Examiner note: This excerpt teaches coarse alignment of sensor scans between real world scanned points and a computer model, where the real world scanned points are taken from a single location.).

Tan is considered to be analogous to the claimed invention because they are in the same field of taking real world data and converting it into accurate 3D models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Obropta in view of Robert to include the teachings of Tan, to provide the advantage of a system that can handle precise alignment of real world objects into virtual data (Tan Paragraph [0024] “The system 100 must have one or more computing devices 102 that each may store and execute a client application to communicate over a communications path 104 to (and exchange data with) a backend system 106 that together provide an augmented reality/a mixed reality experience that benefits from the precise/submillimeter alignment of the real-world object and its digital twin.”).

In regards to claim 7, Obropta in view of Robert teaches the system of claim 1, but fails to teach wherein the one or more processors are configured to perform feature matching and fine registration of sensor point clouds from a single data collection location. However, Tan teaches wherein the one or more processors are configured to perform feature matching and fine registration of sensor point clouds from a single data collection location (Tan Figure 5 Step 544; Paragraph [0052] “The method may also use the union of points to refine the submillimeter alignment (544) in which the accumulated high confidence point pairs are obtained from an intersection 800 (also referenced as 700X or 702X in FIGS. 7A and 7B) of all the points from each of the models as shown in FIG. 8B. The intersection is a portion of the point cloud at which each of the models had identified high confidence point pairs. Thus, in the second stage of alignment, the align refinement uses commonly detected regions, the center which has points identified by each of the models that overlap/adjacent to each other as shown in FIG. 8B.” Examiner note: This excerpt teaches that after the initial ICP of the previous step (568), the matching can be further refined and aligned based on matching the detected regions (or features)).

In regards to claim 16, Obropta in view of Robert and Tan renders obvious the claim limitations as in the consideration of claims 6, 7, and 12 above.
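Tan's coarse alignment is an initial iterative closest point (ICP) pass that is later refined. For orientation only, here is a bare-bones point-to-point ICP with an SVD (Kabsch) transform solve; this is a generic textbook sketch on synthetic data, not Tan's implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst
    for paired points (Kabsch / SVD method)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Bare-bones iterative closest point: match each source point to its
    nearest destination point, solve the rigid transform, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Hypothetical toy scans: dst is src rotated 10 degrees about Z and shifted.
rng = np.random.default_rng(1)
src = rng.uniform(0, 1, size=(50, 3))
a = np.radians(10)
Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
dst = src @ Rz.T + np.array([0.3, -0.2, 0.1])
aligned = icp(src, dst)
print("mean residual:", np.linalg.norm(aligned - dst, axis=1).mean())
```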
Claims 8-9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Obropta in view of Robert as applied to claims 1-4, 10, 12-14, and 18 above, and further in view of Su (CN116385507A).

In regards to claim 8, Obropta in view of Robert teaches the system of claim 1, but fails to teach wherein the one or more processors are configured to perform coarse registration of point clouds from different data collection locations. However, Su teaches wherein the one or more processors are configured to perform coarse registration of point clouds from different data collection locations (Su Page 2 Paragraph 4 “As a first aspect of the present invention, providing a multi-source point cloud data registration method”; Su Page 2 Step S3 “step S3: by adding scale factor in the 4 PCS algorithm, selecting the same name line segment with different proportions in the laser point cloud and dense image point cloud the target object as the base, so as to finish coarse registration of the dense image point cloud and the laser point cloud, and obtaining coarse registration parameters of the laser point cloud and the dense image point cloud” Examiner note: The first excerpt shows that this system takes multi-source point cloud data; the second excerpt shows coarse registration.).

Su is considered to be analogous to the claimed invention because they are in the same field of taking real world data and converting it into accurate 3D models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Obropta in view of Robert to include the teachings of Su, to provide the advantage of a system that can precisely convert 3D point cloud data into virtual 3D models (Su Abstract “The invention can be applied to the laser point cloud and image data registration, it solves the problem that the different scale point cloud registration precision is low, at the same time, it improves the registration efficiency.”).

In regards to claim 9, Obropta in view of Robert teaches the system of claim 1, and wherein the one or more processors are configured to perform feature matching (Obropta Paragraph [0084] “For example, in some embodiments, a correspondence between features in pairs of images may be computed to generate a disparity map, which together with information about the relative position of the two cameras to each other may be used to determine depths of one or more points in the image.”), but fails to teach fine registration of sensor point clouds from different data collection locations. However, Su teaches fine registration of sensor point clouds from different data collection locations (Su Page 2 Step S4 “step S4: constructing a fine registration model based on dual quaternion, so as to finish fine registration of dense image point cloud and the laser point cloud and obtaining fine registration parameters of the laser point cloud and the dense image point cloud coarse registration.”).

In regards to claim 17, Obropta in view of Robert and Su renders obvious the claim limitations as in the consideration of claims 8, 9, and 12 above.
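Su's fine-registration model is "based on dual quaternion", a representation that packs rotation and translation into a single eight-number object. A self-contained sketch of the representation itself, not Su's registration algorithm:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def dual_quaternion(q_rot, t):
    """Pack a rotation quaternion and translation vector into a dual
    quaternion (real part, dual part): q_d = 0.5 * (0, t) * q_r."""
    return q_rot, 0.5 * qmul(np.concatenate([[0.0], t]), q_rot)

def transform_point(dq, p):
    """Apply a dual-quaternion rigid motion to a 3D point."""
    q_r, q_d = dq
    rotated = qmul(qmul(q_r, np.concatenate([[0.0], p])), qconj(q_r))[1:]
    translation = 2.0 * qmul(q_d, qconj(q_r))[1:]   # recovers t exactly
    return rotated + translation

# A 90-degree rotation about Z plus a translation, as one dual quaternion:
a = np.pi / 2
q_r = np.array([np.cos(a / 2), 0.0, 0.0, np.sin(a / 2)])
dq = dual_quaternion(q_r, np.array([1.0, 0.0, 0.0]))
print(transform_point(dq, np.array([1.0, 0.0, 0.0])))  # -> approx [1, 1, 0]
```

Dual quaternions are popular for fine registration because a single algebraic object can be interpolated and optimized without separating the rotation and translation parts.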
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Obropta in view of Robert as applied to claims 1-4, 10, 12-14, and 18 above, and further in view of Weng (US20060147188).

In regards to claim 11, Obropta in view of Robert teaches the system of claim 1, but fails to teach an extension pole connected to the sensor package, wherein the extension pole is hand extendable and hand rotatable to raise and rotate the sensor package above the stockpile. However, Weng teaches an extension pole connected to the sensor package, wherein the extension pole is hand extendable and hand rotatable to raise and rotate the sensor package above the stockpile (Weng Paragraph [0037] “An elongating tube 341 is in between the thimble 34 and the hand rod 35 to extend positions of the digital camera C”; Paragraph [0039] “While in operation, the loading turntable 2 and the subject M can be rotated based on demanded angles per time by way of rotating the turn handle 273 by a single hand, and the digital camera C can be rotated around the subject M based on demanded angles per time by way of twisting the hand rod 35”).

Weng is considered to be analogous to the claimed invention because they are in the same field of a camera system that captures all of a desired target. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Obropta in view of Robert to include the teachings of Weng, to provide the advantage of a camera system that has a wide range of view (Weng Paragraph [0039] “therefore the digital camera C is capable of seriously taking surrounding horizontal and vertical pictures.”).

Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Obropta in view of Robert as applied to claims 1-4, 10, 12-14, and 18 above, and further in view of Lukas, Tan, and Su.

In regards to claim 19, Obropta in view of Robert teaches the method of claim 12, and renders obvious the claim limitations “said generating with a processor the first estimate of the location of the image sensor in relation to the stockpile includes utilizing quaternions and image comparison; and said generating with a processor a second estimate of the locations and rotational orientations of the image sensor in relation to the stockpile includes comparison of images at different rotational orientations and different locations in relation to the stockpile” as in the consideration of claims 12, 13, and 14, and teaches feature matching (Obropta Paragraph [0084] “For example, in some embodiments, a correspondence between features in pairs of images may be computed to generate a disparity map, which together with information about the relative position of the two cameras to each other may be used to determine depths of one or more points in the image.”).

Obropta in view of Robert does not teach said generating with a processor an estimate of the stockpile volume includes performing segmentation of planar features from individual scans, performing image-based coarse registration of sensor scans at a data collection location, and performing fine registration of sensor point clouds from a data collection location, and performing coarse registration of point clouds from different data collection locations, and performing feature matching and fine registration of sensor point clouds from different data collection locations. However, Lukas teaches performing segmentation of planar features from individual scans (Lukas Figure 1 Part 102; Paragraph [0041] “Next, processor 104 determines which cells qualify as planar feature candidates (314). A cell qualifies as a planar feature candidate when a number of data points associated with the cell is equal to or above a threshold. Similar to block 310, cells having a number of data points below a certain threshold are disregarded.”).

Lukas is considered to be analogous to the claimed invention because they are in the same field of identifying features having a flat surface. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Obropta in view of Robert to include the teachings of Lukas, to provide the advantage of a system that works well with 3D data (such as LIDAR) (Lukas Paragraph [0002] “Point feature based approaches work well for some images. For example, point feature based approaches are a commonly used when dealing with standard intensity images and infrared images. Point feature based approaches, however, do not perform as well with images containing 3D data.”).

Furthermore, Tan teaches performing image-based coarse registration of sensor scans at a data collection location, and performing feature matching and fine registration of sensor point clouds from a data collection location (Tan Figure 5 Step 568; Paragraph [0051] “The method 500 may perform an initial iterative closest point (ICP) process 568 based on the pose adjusted digital twin and the preprocessed union data to perform the coarse alignment between the real world object and the digital twin. FIG. 8A shows the alignment of the object and digital twin after coarse alignment (with the real object shown as an image and the points of the digital twin with coarse alignment shown as squares with each square representing real world scanned 3D points by a scanner (for example, HoloLens) related to the digital twin CAD model. As shown in FIG. 8A, the alignment is still very rough and does not meet the criteria for a good alignment as discussed above. FIG. 8B shows the union of the point clouds for the coarse alignment based on the high confidence points from the pre-trained models shown in FIGS. 7A and 7B.”; Paragraph [0052] “The method may also use the union of points to refine the submillimeter alignment (544) in which the accumulated high confidence point pairs are obtained from an intersection 800 (also referenced as 700X or 702X in FIGS. 7A and 7B) of all the points from each of the models as shown in FIG. 8B. The intersection is a portion of the point cloud at which each of the models had identified high confidence point pairs. Thus, in the second stage of alignment, the align refinement uses commonly detected regions, the center which has points identified by each of the models that overlap/adjacent to each other as shown in FIG. 8B.”).

Tan is considered to be analogous to the claimed invention because they are in the same field of taking real world data and converting it into accurate 3D models.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Obropta in view of Robert to include the teachings of Tan, to provide the advantage of a system that can handle precise alignment of real world objects into virtual data (Tan Paragraph [0024] “The system 100 must have one or more computing devices 102 that each may store and execute a client application to communicate over a communications path 104 to (and exchange data with) a backend system 106 that together provide an augmented reality/a mixed reality experience that benefits from the precise/submillimeter alignment of the real-world object and its digital twin.”).

Lastly, Su teaches performing coarse registration of point clouds from different data collection locations, and performing fine registration of sensor point clouds from different data collection locations (Su Page 2 Paragraph 4 “As a first aspect of the present invention, providing a multi-source point cloud data registration method”; Su Page 2 Step S3 “step S3: by adding scale factor in the 4 PCS algorithm, selecting the same name line segment with different proportions in the laser point cloud and dense image point cloud the target object as the base, so as to finish coarse registration of the dense image point cloud and the laser point cloud, and obtaining coarse registration parameters of the laser point cloud and the dense image point cloud”; Page 2 Step S4 “step S4: constructing a fine registration model based on dual quaternion, so as to finish fine registration of dense image point cloud and the laser point cloud and obtaining fine registration parameters of the laser point cloud and the dense image point cloud coarse registration.”).

Su is considered to be analogous to the claimed invention because they are in the same field of taking real world data and converting it into accurate 3D models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Obropta in view of Robert to include the teachings of Su, to provide the advantage of a system that can precisely convert 3D point cloud data into virtual 3D models (Su Abstract “The invention can be applied to the laser point cloud and image data registration, it solves the problem that the different scale point cloud registration precision is low, at the same time, it improves the registration efficiency.”).

In regards to claim 20, Obropta in view of Robert teaches the system of claim 1, wherein the one or more processors are configured to: generate a first estimate of the location and rotational orientation of the image sensor utilizing quaternions (Robert Paragraph [0048] “At step 440, the camera rotation calculation subprocess 134 determines a camera rotation in the 3D space of the scene that best maps unit vectors defined based on differences in the optical centers determined from sensor-derived camera positions (from step 420) to the unit vectors along the epipoles (from step 430). As discussed above, provided there are multiple camera pairs available, an approximate solution may be determined using a singular value decomposition and cancelling the inside diagonal matrix or using unit quaternions”); generate a first estimate of the location and rotational orientation of the image sensor utilizing image comparison (Robert Figure 2 Step 220-230; Paragraph [0024] “At step 220, a feature detection process 131 of the photogrammetry application identifies common image features in individual images across the set of images, and computes information describing these image features.”; Paragraph [0025] “At step 230, a feature correspondence and filtering process 132 of the photogrammetry application matches image features across image pairs.”); generate a first estimate of the location and rotational orientation of the image sensor by comparison of images at different rotational orientations and different locations in relation to the stockpile (Robert Figure 2 Step 220-230; Paragraph [0024] “At step 220, a feature detection process 131 of the photogrammetry application identifies common image features in individual images across the set of images, and computes information describing these image features.”; Paragraph [0025] “At step 230, a feature correspondence and filtering process 132 of the photogrammetry application matches image features across image pairs.”); performing feature matching (Obropta Paragraph [0084] “For example, in some embodiments, a correspondence between features in pairs of images may be computed to generate a disparity map, which together with information about the relative position of the two cameras to each other may be used to determine depths of one or more points in the image.”); and perform digital surface model generation for volume estimation (Obropta Paragraph [0155] “Next, at act 804, a 3D surface model of the set of harvested specialty crops is generated using the image and depth information received at act 802.”; Paragraph [0157] “Next, at act 806, the volume of the set of harvested specialty crops may be estimated using the 3D surface model generated at act 804.”).

Obropta in view of Robert does not teach performing segmentation of planar features from individual scans; performing image-based coarse registration of sensor scans at a single data collection location; performing feature matching and fine registration of sensor point clouds from a first data collection location; performing coarse registration of point clouds from different data collection locations; and performing feature matching and fine registration of sensor point clouds from a second data collection location. However, Lukas teaches performing segmentation of planar features from individual scans (Lukas Figure 1 Part 102; Paragraph [0041] “Next, processor 104 determines which cells qualify as planar feature candidates (314). A cell qualifies as a planar feature candidate when a number of data points associated with the cell is equal to or above a threshold. Similar to block 310, cells having a number of data points below a certain threshold are disregarded.”).

Lukas is considered to be analogous to the claimed invention because they are in the same field of identifying features having a flat surface. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Obropta in view of Robert to include the teachings of Lukas, to provide the advantage of a system that works well with 3D data (such as LIDAR) (Lukas Paragraph [0002] “Point feature based approaches work well for some images. For example, point feature based approaches are a commonly used when dealing with standard intensity images and infrared images. Point feature based approaches, however, do not perform as well with images containing 3D data.”).

Furthermore, Tan teaches performing image-based coarse registration of sensor scans at a single data collection location; performing feature matching and fine registration of sensor point clouds from a first data collection location (Tan Figure 5 Step 568; Paragraph [0051] “The method 500 may perform an initial iterative closest point (ICP) process 568 based on the pose adjusted digital twin and the preprocessed union data to perform the coarse alignment between the real world object and the digital twin. FIG. 8A shows the alignment of the object and digital twin after coarse alignment (with the real object shown as an image and the points of the digital twin with coarse alignment shown as squares with each square representing real world scanned 3D points by a scanner (for example, HoloLens) related to the digital twin CAD model. As shown in FIG. 8A, the alignment is still very rough and does not meet the criteria for a good alignment as discussed above. FIG. 8B shows the union of the point clouds for the coarse alignment based on the high confidence points from the pre-trained models shown in FIGS. 7A and 7B.”; Paragraph [0052] “The method may also use the union of points to refine the submillimeter alignment (544) in which the accumulated high confidence point pairs are obtained from an intersection 800 (also referenced as 700X or 702X in FIGS. 7A and 7B) of all the points from each of the models as shown in FIG. 8B. The intersection is a portion of the point cloud at which each of the models had identified high confidence point pairs. Thus, in the second stage of alignment, the align refinement uses commonly detected regions, the center which has points identified by each of the models that overlap/adjacent to each other as shown in FIG. 8B.”).

Tan is considered to be analogous to the claimed invention because they are in the same field of taking real world data and converting it into accurate 3D models.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Obropta in view of Robert to include the teachings of Tan, to provide the advantage of a system that can handle precise alignment of real world objects into virtual data (Tan Paragraph [0024] “The system 100 must have one or more computing devices 102 that each may store and execute a client application to communicate over a communications path 104 to (and exchange data with) a backend system 106 that together provide an augmented reality/a mixed reality experience that benefits from the precise/submillimeter alignment of the real-world object and its digital twin.”).

Lastly, Su teaches performing coarse registration of point clouds from different data collection locations, and performing fine registration of sensor point clouds from different data collection locations (Su Page 2 Paragraph 4 “As a first aspect of the present invention, providing a multi-source point cloud data registration method”; Su Page 2 Step S3 “step S3: by adding scale factor in the 4 PCS algorithm, selecting the same name line segment with different proportions in the laser point cloud and dense image point cloud the target object as the base, so as to finish coarse registration of the dense image point cloud and the laser point cloud, and obtaining coarse registration parameters of the laser point cloud and the dense image point cloud”; Page 2 Step S4 “step S4: constructing a fine registration model based on dual quaternion, so as to finish fine registration of dense image point cloud and the laser point cloud and obtaining fine registration parameters of the laser point cloud and the dense image point cloud coarse registration.”).

Su is considered to be analogous to the claimed invention because they are in the same field of taking real world data and converting it into accurate 3D models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Obropta in view of Robert to include the teachings of Su, to provide the advantage of a system that can precisely convert 3D point cloud data into virtual 3D models (Su Abstract “The invention can be applied to the laser point cloud and image data registration, it solves the problem that the different scale point cloud registration precision is low, at the same time, it improves the registration efficiency.”).

Response to Arguments

Applicant's arguments filed September 24th, 2025 have been fully considered but they are not persuasive. Applicant alleges on page 8 of “Remarks” that “As such, the claimed system generates estimates based entirely on image data from a single image sensor” and on page 9 that “However, paragraph [0027] of Robert further states that the scheme used in Robert utilizes multiple cameras… Nowhere does Robert indicate that a single camera can be used.” Examiner respectfully disagrees.

Paragraph [0023] of Robert states “At step 210, the photogrammetry application receives a set of images 150 of a scene captured by one or more physical cameras” and paragraph [0015] states “As used herein, the term “camera”, when used without the modifier “physical”, refers to a mathematical object defined according to a pinhole camera model that describes how an image was captured. There is a one-to-one correspondence between cameras and images captured at different positions and orientations. This is independent of the particular physical camera that may have captured the images. For example, if several images were produced by the same physical camera at several different positions and orientations, each image is considered to correspond to one camera, such that there are several cameras.” These paragraphs suggest that Robert can use a single (one or more) physical camera (which is analogous to an imaging sensor) to capture multiple images of the scene from multiple positions and orientations. While paragraph [0027] of Robert does state “In incremental methods, a set of estimated camera poses and position of keypoints in 3D space is gradually grown by adding cameras one-by-one, and successively performing estimations. In direct methods, all cameras are added at once, and a single estimation is performed”, Robert teaches that the “cameras” (which correspond to an image) of this paragraph can refer to a single imaging sensor that has been moved to multiple locations and orientations. This paragraph is therefore stating that the images can be used to determine the location and orientation of a single imaging sensor that was placed at each location and orientation. Therefore, the first and second estimate of location and orientation of the image sensor in relation to the stockpile is based entirely on the image data from the sensor. Thus, the rejection is maintained.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-M…

Prosecution Timeline

Dec 20, 2022
Application Filed
Mar 19, 2025
Non-Final Rejection — §103, §112
Sep 24, 2025
Response Filed
Nov 17, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602924
Method for Semantic Localization of an Unmanned Aerial Vehicle
2y 5m to grant • Granted Apr 14, 2026
Patent 12602813
DEEP APERTURE
2y 5m to grant • Granted Apr 14, 2026
Patent 12541857
SYNTHESIZING IMAGES FROM THE PERSPECTIVE OF THE DOMINANT EYE
2y 5m to grant • Granted Feb 03, 2026
Patent 12530787
TECHNIQUES FOR DIGITAL IMAGE REGISTRATION
2y 5m to grant • Granted Jan 20, 2026
Patent 12518425
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER READABLE MEDIUM
2y 5m to grant • Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 99% (+41.7%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
