Prosecution Insights
Last updated: April 19, 2026
Application No. 18/500,768

GENERALIZED THREE DIMENSIONAL MULTI-OBJECT SEARCH

Non-Final OA: §101, §103
Filed: Nov 02, 2023
Examiner: KEUP, AIDAN JAMES
Art Unit: 2666
Tech Center: 2600 (Communications)
Assignee: Brown University
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 3m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 80% (48 granted / 60 resolved), +18.0% vs Tech Center average (above average)
Interview Lift: +12.0% (moderate), comparing resolved cases with vs. without an interview
Avg Prosecution (typical timeline): 3y 3m
Currently Pending: 22
Total Applications (career, across all art units): 82

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 60 resolved cases.
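The headline examiner figures above can be sanity-checked from the raw counts; the Tech Center average is implied by the stated delta rather than given directly (a quick sketch, not part of the analytics source):

```python
# Reproduce the examiner's headline statistics from the raw counts above.
granted, resolved = 48, 60
career_rate = granted / resolved        # career allowance rate
implied_tc_avg = career_rate - 0.18     # stated as "+18.0% vs TC avg"

print(f"career allow rate: {career_rate:.1%}")           # 80.0%
print(f"implied TC 2600 average: {implied_tc_avg:.1%}")  # 62.0%
```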

Office Action

Rejections: §101 and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-16 are pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/20/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

35 U.S.C. 101 requires that a claimed invention must fall within one of the four eligible categories of invention (i.e., process, machine, manufacture, or composition of matter) and must not be directed to subject matter encompassing a judicially recognized exception as interpreted by the courts. MPEP 2106. Three categories of subject matter are judicially recognized exceptions to 35 U.S.C. § 101 (i.e., patent ineligible): (1) laws of nature, (2) physical phenomena, and (3) abstract ideas. MPEP 2106(II). To be patent-eligible, a claim directed to a judicial exception must as a whole be integrated into a practical application or directed to significantly more than the exception itself (MPEP 2106). Hence, the claim must describe a process or product that applies the exception in a meaningful way, such that it is more than a drafting effort designed to monopolize the exception.

Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Each of independent claims 1, 11, and 14 is directed to one of the four statutory categories of eligible subject matter; thus, the claims pass Step 1 of the Subject Matter Eligibility Test (see flowchart in MPEP 2106).

Step 2A, Prong 1 Analysis

Independent claim 1 is directed to: in a robot equipped with one or more camera-based object detectors, receiving an input, the input comprising point cloud observations of a local region, and localization of a robot camera pose; and outputting a viewpoint to move to as a result of sequential online planning. An individual can receive an input comprising observations of a region and determine a viewpoint to move to. The collection of point cloud observations and localization of a robot camera pose are insignificant data acquisition. Accordingly, the analysis under Prong One of Step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart in MPEP 2106).

Independent claim 11 is directed to a method comprising: in an automated machine equipped with one or more camera-based object detectors, receiving human-provided information or information inferred from point cloud observations regarding target locations; maintaining information states regarding the target locations through a probability distribution structured as an octree; initializing the information states based on point cloud observations; updating the information states based on object detection observations or point cloud observations; determining a search region occupancy through constructing an octree-based occupancy grid based on point cloud observations; and using ray-tracing to determine visibility at three dimensional locations within the search region. An individual can receive human-provided information and determine visibility at three dimensional locations within a search region.
Maintaining information states regarding target locations through a probability distribution structured as an octree; initializing the information states based on point cloud observations; updating the information states based on object detection observations or point cloud observations; determining a search region occupancy through constructing an octree-based occupancy grid based on point cloud observations; and using ray-tracing are all mathematical concepts. Accordingly, the analysis under Prong One of Step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart in MPEP 2106).

Independent claim 14 is directed to a system comprising: a robot equipped with one or more camera-based object detectors; and a gRPC framework comprising a gRPC client and a gRPC server, the gRPC client providing an interface between the robot and the gRPC server, the gRPC server maintaining an occupancy octree, a Partially Observable Markov Decision Process (POMDP) agent, and a belief state. Maintaining an occupancy octree, a POMDP agent, and a belief state are all mathematical concepts. Accordingly, the analysis under Prong One of Step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart in MPEP 2106).

Additional Elements

Independent claim 1 claims a robot equipped with one or more camera-based object detectors. Independent claim 11 claims an automated machine equipped with one or more camera-based object detectors. Independent claim 14 claims a robot equipped with one or more camera-based object detectors; and a gRPC framework comprising a gRPC client and a gRPC server.

Step 2A, Prong 2 Analysis

The above-identified elements do not integrate the judicial exception into a practical application, nor do they suggest an improvement.
The additional elements of a robot or automated machine equipped with one or more camera-based object detectors and a gRPC framework comprising a gRPC client and a gRPC server amount to merely using generic computer hardware or components as a tool to perform the claimed mental process. Using a general purpose computer to apply a judicial exception does not qualify as a particular machine and, therefore, does not integrate a judicial exception into a practical application (see MPEP 2106.05(b)). Furthermore, implementing an abstract idea on a computer does not integrate a judicial exception into a practical application (see MPEP 2106.05(f)). Moreover, the additional elements of the claims do not recite an improvement in the functioning of a computer or another technology or technical field, the claimed steps do not effect a transformation, and the claims do not apply the judicial exception in any meaningful way beyond generically linking the use of the judicial exception to a particular technological environment (see MPEP 2106.04(d)). Further, the act of acquiring data is mere data gathering, which amounts to insignificant extra-solution activity (see MPEP 2106.05(g)). Therefore, the analysis under Prong Two of Step 2A of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart in MPEP 2106).

Step 2B

Finally, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Regarding independent claims 1, 11, and 14, as noted above, the additional elements are generic computer features which perform generic computer functions that are well-understood, routine, and conventional, and do not amount to more than implementing the abstract idea with a computerized system. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea).
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation, and mere implementation on a generic computer does not add significantly more to the claims. Accordingly, the analysis under Step 2B of the Subject Matter Eligibility Test does not result in a conclusion of eligibility (see flowchart in MPEP 2106). For all the foregoing reasons, independent claims [] do not recite eligible subject matter under 35 USC 101.

Claim 2 claims wherein the input further comprises three dimensional (3D) bounding boxes with detected object labels. The features of claim 2 are directed to the mental process since they do not preclude the mental analysis as recited in claim 1. Accordingly, claim 2 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 3 claims wherein each of the object labels comprises a label. The features of claim 3 are directed to the mental process since they do not preclude the mental analysis as recited in claim 1. Accordingly, claim 3 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 4 claims wherein the input further comprises segmented point clouds for detected objects with detected object labels. The features of claim 4 are directed to the mental process since they do not preclude the mental analysis as recited in claim 1. Accordingly, claim 4 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.
Claim 5 claims wherein the input further comprises two dimensional (2D) bounding boxes on an image paired with a corresponding depth image with detected object labels. The features of claim 5 are directed to the mental process since they do not preclude the mental analysis as recited in claim 1. Accordingly, claim 5 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 6 claims wherein the input further comprises detected object labels. The features of claim 6 are directed to the mental process since they do not preclude the mental analysis as recited in claim 1. Accordingly, claim 6 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 7 claims maintaining information states regarding target object locations through a probability distribution structured as an octree and updated based on object detection observations or point cloud observations. The features of claim 7 are further mathematical processes. Accordingly, claim 7 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 8 claims dynamically determining search region occupancy through constructing an octree-based occupancy grid based on point cloud observations; and using ray-tracing to determine visibility at three dimensional locations within the local region. The features of claim 8 are further mathematical processes. Accordingly, claim 8 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 9 claims wherein determining viewpoints for the robot to move to and observe at is performed by sequential decision-making based on Partially Observable Markov Decision Process (POMDP) model for three dimensional multi-object search.
The features of claim 9 are further mathematical processes. Accordingly, claim 9 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 10 claims wherein viewpoint candidates are initialized and updated by sampling from the local region based on a current information state and occupancy to form a viewpoint graph. The features of claim 10 are further mathematical processes. Accordingly, claim 10 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 12 claims performing sequential decision-making based on a Partially Observable Markov Decision Process (POMDP) for three dimensional multi-object search to determine various viewpoints for the automated machine to move to and observe at. The features of claim 12 are further mathematical processes. Accordingly, claim 12 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 13 claims signaling when an object is found, wherein a location of the found object is indicated in the information state at the time of the found signal. The features of claim 13 are directed to the mental process since they do not preclude the mental analysis as recited in claim 1. Accordingly, claim 13 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim 15 claims wherein the belief state represents belief over object locations in the structure of the occupancy octree. The features of claim 15 are directed to the mental process since they do not preclude the mental analysis as recited in claim 1. Accordingly, claim 15 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.
Claim 16 claims wherein the occupancy octree represents a search region's occupancy. The features of claim 16 are directed to the mental process since they do not preclude the mental analysis as recited in claim 1. Accordingly, claim 16 does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Robinson et al. (Robinson, B., Langford, D., Jetton, J., Cannan, L., Patterson, K., Diltz, R., & English, W. (2021, April). Real-time object detection and geolocation using 3D calibrated camera/LiDAR pair. In Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2021 (Vol. 11748, pp. 57-77). SPIE; hereinafter “Robinson”) in view of Kim et al. (Kim, P., Chen, J., & Cho, Y. K. (2018). SLAM-driven robotic mapping and registration of 3D point clouds. Automation in Construction, 89, 38-48; hereinafter “Kim”).
Regarding claim 1, Robinson discloses a method comprising: in a robot equipped with one or more camera-based object detectors (Robinson Page 5: “The current system employs a camera, a LiDAR, an inertial measurement unit (IMU), and a global positioning system (GNSS) to provide near real-time detection and geolocation of artifacts on the runway surface”), receiving an input (Robinson Page 5: “The current system employs a camera, a LiDAR, an inertial measurement unit (IMU), and a global positioning system (GNSS) to provide near real-time detection and geolocation of artifacts on the runway surface”; Robinson Page 5: “The camera and lidar are rigidly mounted so that their mutual orientation is fixed, and they are co-calibrated, so that detections acquired with the camera can be located within the local coordinate system of the LiDAR instrument”), the input comprising point cloud observations of a local region (Robinson Page 6: “The intrinsic and extrinsic calibrations matrices are composed to give the projection matrix so that, distortion notwithstanding, points in the LiDAR point cloud can now be associated with pixels on the camera focal plane array via application of the linear transformation”), and localization of a robot camera pose (Robinson Pages 6-7: “Data from an IMU and two GNSS receivers are fused to report an absolute 6-DoF pose (position and attitude) to enable transforming of the local LiDAR coordinates into the global frame of reference”). Robinson does not explicitly disclose the method comprising: outputting a viewpoint to move to as a result of sequential online planning. However, Kim discloses the method comprising: outputting a viewpoint to move to as a result of sequential online planning (Kim Page 41: “Based on this information from the previous section, the mobile robot navigates itself. At first, it finds the farthest point at current orientation in the 2D map currently built from Hector SLAM data. 
Then, the mobile robot determines it as a 2D (x, y) local goal position”). It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to incorporate moving to another viewpoint as taught by Kim because it would improve the method by allowing it to actively monitor or map a location, and because the method of Robinson is ideal for use in mobile robots (Robinson Page 20). This motivation for the combination of Robinson and Kim is supported by KSR exemplary rationale (G), some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention, and exemplary rationale (D), applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. MPEP 2141 (III).

Regarding claim 2, Robinson discloses the method wherein the input further comprises three dimensional (3D) bounding boxes with detected object labels (Robinson Fig. 19: shows the bounding boxes with labels).

Regarding claim 3, Robinson discloses the method wherein each of the object labels comprises a label (Robinson Fig. 19: shows the bounding boxes with labels).

Regarding claim 4, Robinson discloses the method wherein the input further comprises segmented point clouds for detected objects with detected object labels (Robinson Fig. 19: shows bounding boxes with labels; Robinson Page 16: “This point cloud segment can be centroided in order to find the location of the cup within the local coordinate system of the LiDAR, and its size can be determined from the extents of the point cloud segment. This first demonstration shows the usefulness of the method for determining location and dimensional characteristics of objects detected as a segment of covering pixels with an image-based object detector”).
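Kim's local-goal heuristic quoted in the claim 1 mapping (find the farthest currently-mapped point and set it as the next 2D (x, y) goal) reduces to a one-line selection. A minimal sketch, with hypothetical function and data names:

```python
import math

def farthest_point_goal(free_points, robot_xy):
    """Pick the farthest mapped free point from the robot as the next
    2D (x, y) local goal, mirroring the heuristic quoted from Kim above."""
    return max(free_points, key=lambda p: math.dist(p, robot_xy))

free_points = [(1.0, 0.5), (4.0, 4.0), (2.0, -1.0)]   # hypothetical map points
print(farthest_point_goal(free_points, (0.0, 0.0)))   # (4.0, 4.0)
```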
Regarding claim 5, Robinson discloses the method wherein the input further comprises two dimensional (2D) bounding boxes on an image paired with a corresponding depth image with detected object labels (Robinson Fig. 19: shows bounding boxes with object labels).

Regarding claim 6, Robinson discloses the method wherein the input further comprises detected object labels (Robinson Fig. 19: shows bounding boxes with object labels).

Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over the Robinson and Kim combination in view of Vasquez-Gomez et al. (Vasquez-Gomez, J. I., Sucar, L. E., & Murrieta-Cid, R. (2017). View/state planning for three-dimensional object reconstruction under uncertainty. Autonomous Robots, 41(1), 89-109; hereinafter “Vasquez”).

Regarding claim 7, the Robinson and Kim combination does not explicitly disclose the method further comprising maintaining information states regarding target object locations through a probability distribution structured as an octree and updated based on object detection observations or point cloud observations. However, Vasquez discloses the method further comprising maintaining information states regarding target object locations through a probability distribution structured as an octree (Vasquez Page 5: “After each scan, the sensor readings are integrated into an octree that represents the object’s bounding box”) and updated based on object detection observations or point cloud observations (Vasquez Page 5: “However, as the robot discovers the object to be reconstructed, the proposed planner considers the new sensed information to avoid collisions with the object to be reconstructed and to find the next best view/state”).
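For readers unfamiliar with the structure at issue in claim 7, a probability distribution maintained as an occupancy octree can be sketched minimally with OctoMap-style log-odds updates. This is an illustration only; the class and method names are hypothetical and do not reflect the applicant's or any cited reference's actual implementation:

```python
import math

class OccupancyOctree:
    """Sparse octree over a cubic region; each leaf stores log-odds occupancy."""

    def __init__(self, size: float, max_depth: int = 4):
        self.size = size
        self.max_depth = max_depth
        self.log_odds = {}  # (depth, ix, iy, iz) -> log-odds of occupancy

    def _key(self, x, y, z, depth):
        res = self.size / (2 ** depth)          # leaf edge length at this depth
        return (depth, int(x // res), int(y // res), int(z // res))

    def update(self, point, occupied: bool, hit_prob: float = 0.85):
        """Bayesian log-odds update for the finest leaf containing `point`."""
        key = self._key(*point, self.max_depth)
        delta = math.log(hit_prob / (1 - hit_prob))
        self.log_odds[key] = self.log_odds.get(key, 0.0) + (delta if occupied else -delta)

    def probability(self, point) -> float:
        """Occupancy probability of the finest leaf (0.5 if never observed)."""
        lo = self.log_odds.get(self._key(*point, self.max_depth), 0.0)
        return 1.0 / (1.0 + math.exp(-lo))

tree = OccupancyOctree(size=8.0)
for _ in range(3):                              # three "hit" observations of one voxel
    tree.update((1.0, 2.0, 3.0), occupied=True)
print(tree.probability((1.0, 2.0, 3.0)))        # well above 0.5 after repeated hits
print(tree.probability((5.0, 5.0, 5.0)))        # 0.5 (unobserved uniform prior)
```

Each positive observation pushes the leaf's log-odds up; unobserved leaves stay at the uniform prior.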
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the octree structure as taught by Vasquez with the method of Robinson and Kim because it would improve the method by reducing positioning error and collision rate and increasing coverage (Vasquez Page 1). This motivation for the combination of Robinson, Kim, and Vasquez is supported by KSR exemplary rationale (G), some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention, and exemplary rationale (D), applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. MPEP 2141 (III).

Regarding claim 8, the Robinson and Kim combination does not explicitly disclose the method further comprising: dynamically determining search region occupancy through constructing an octree-based occupancy grid based on point cloud observations; and using ray-tracing to determine visibility at three dimensional locations within the local region. However, Vasquez teaches the method further comprising: dynamically determining search region occupancy through constructing an octree-based occupancy grid based on point cloud observations (Vasquez Page 6: “To represent the object bounding box, Wbox, we use a probabilistic occupancy map based on the octomap structure [5], which is an octree with probabilistic occupancy estimation. See Fig. 3. In this representation each voxel has associated a probability of being occupied. We use a probabilistic octree because it is able to deal with noise on the sensor readings. From now on we refer to a probabilistic occupancy map as octree”); and using ray-tracing to determine visibility at three dimensional locations within the local region (Vasquez Page 7: “In [34], we introduce a Hierarchical Ray Tracing (HRT).
It is based on tracing few rays in a rough resolution map; then, only when occupied voxels are touched by a ray, the resolution is increased for observing details (see Fig. 4)”). It would have been obvious to combine Vasquez with the method of Robinson and Kim for the same reasons as used for claim 7 above.

Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over the Robinson, Kim, and Vasquez combination in view of Ahmad et al. (Ahmad, S., Sunberg, Z. N., & Humbert, J. S. (2021). End-to-end probabilistic depth perception and 3D obstacle avoidance using POMDP. Journal of Intelligent & Robotic Systems, 103(2), 33; hereinafter “Ahmad”).

Regarding claim 9, the Robinson, Kim, and Vasquez combination does not explicitly disclose the method wherein determining viewpoints for the robot to move to and observe at is performed by sequential decision-making based on Partially Observable Markov Decision Process (POMDP) model for three dimensional multi-object search. However, Ahmad teaches the method wherein determining viewpoints for the robot to move to and observe at is performed by sequential decision-making based on Partially Observable Markov Decision Process (POMDP) model for three dimensional multi-object search (Ahmad Fig. 1: shows POMDP used for 3D object search). It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to incorporate POMDP as taught by Ahmad with the method of Robinson, Kim, and Vasquez because it would improve the method by allowing for object reconstruction that takes into account the uncertainty of reaching the state and the uncertainty in the observations of the sensors (Vasquez Page 21).
This motivation for the combination of Robinson, Kim, Vasquez, and Ahmad is supported by KSR exemplary rationale (G), some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention, and exemplary rationale (D), applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. MPEP 2141 (III).

Regarding claim 10, the Robinson, Kim, and Vasquez combination does not explicitly disclose the method wherein viewpoint candidates are initialized and updated by sampling from the local region based on a current information state and occupancy to form a viewpoint graph. However, Ahmad teaches the method wherein viewpoint candidates are initialized and updated by sampling from the local region based on a current information state and occupancy to form a viewpoint graph (Ahmad Page 8: “Sequential Importance Resampling (SIR), exploits the two-step process, prediction and update”). It would have been obvious to combine Ahmad with Robinson, Kim, and Vasquez for the same reasons used for claim 9 above.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Robinson in view of Vasquez.
Regarding claim 11, Robinson discloses a method comprising: in an automated machine equipped with one or more camera-based object detectors (Robinson Page 5: “The current system employs a camera, a LiDAR, an inertial measurement unit (IMU), and a global positioning system (GNSS) to provide near real-time detection and geolocation of artifacts on the runway surface”), receiving human-provided information or information inferred from point cloud observations regarding target locations (Robinson Page 5: “The current system employs a camera, a LiDAR, an inertial measurement unit (IMU), and a global positioning system (GNSS) to provide near real-time detection and geolocation of artifacts on the runway surface”; Robinson Page 5: “The camera and lidar are rigidly mounted so that their mutual orientation is fixed, and they are co-calibrated, so that detections acquired with the camera can be located within the local coordinate system of the LiDAR instrument”). Robinson does not explicitly disclose the method comprising: maintaining information states regarding the target locations through a probability distribution structured as an octree; initializing the information states based on point cloud observations; updating the information states based on object detection observations or point cloud observations; determining a search region occupancy through constructing an octree-based occupancy grid based on point cloud observations; and using ray-tracing to determine visibility at three dimensional locations within the search region. 
However, Vasquez teaches the method comprising: maintaining information states regarding the target locations through a probability distribution structured as an octree (Vasquez Page 5: “After each scan, the sensor readings are integrated into an octree that represents the object’s bounding box”); initializing the information states based on point cloud observations (Vasquez Page 5: “After each scan, the sensor readings are integrated into an octree that represents the object’s bounding box”); updating the information states based on object detection observations or point cloud observations (Vasquez Page 5: “However, as the robot discovers the object to be reconstructed, the proposed planner considers the new sensed information to avoid collisions with the object to be reconstructed and to find the next best view/state”); determining a search region occupancy through constructing an octree-based occupancy grid based on point cloud observations (Vasquez Page 6: “To represent the object bounding box, Wbox, we use a probabilistic occupancy map based on the octomap structure [5], which is an octree with probabilistic occupancy estimation. See Fig. 3. In this representation each voxel has associated a probability of being occupied. We use a probabilistic octree because it is able to deal with noise on the sensor readings. From now on we refer to a probabilistic occupancy map as octree”); and using ray-tracing to determine visibility at three dimensional locations within the search region (Vasquez Page 7: “In [34], we introduce a Hierarchical Ray Tracing (HRT). It is based on tracing few rays in a rough resolution map; then, only when occupied voxels are touched by a ray, the resolution is increased for observing details (see Fig. 4)”). 
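The ray-tracing visibility determination mapped from Vasquez above can be sketched as a simple ray march through an occupancy set. Note this is a flat (non-hierarchical) simplification: a hierarchical ray tracer as described in the quote would first trace at coarse resolution and refine only where occupied voxels are hit. All names are hypothetical:

```python
import math

def visible(origin, target, occupied, step=0.25):
    """March a ray from `origin` toward `target`; the target is visible if no
    occupied voxel (integer cell) is crossed first. A deliberately simple
    stand-in for the hierarchical ray tracing described above."""
    d = [t - o for o, t in zip(origin, target)]
    dist = math.sqrt(sum(c * c for c in d))
    n = max(1, int(dist / step))
    target_cell = tuple(int(c) for c in target)
    for i in range(1, n):
        cell = tuple(int(o + dk * i / n) for o, dk in zip(origin, d))
        if cell in occupied and cell != target_cell:
            return False        # ray blocked before reaching the target
    return True

occupied = {(2, 0, 0)}          # one occupied voxel between origin and target
print(visible((0.0, 0.0, 0.0), (5.0, 0.0, 0.0), occupied))  # False (blocked)
print(visible((0.0, 0.0, 0.0), (0.0, 5.0, 0.0), occupied))  # True (clear ray)
```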
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the octree structure as taught by Vasquez with the method of Robinson because it would improve the method by reducing positioning error and collision rate and increasing coverage (Vasquez Page 1). This motivation for the combination of Robinson and Vasquez is supported by KSR exemplary rationale (G), some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention, and exemplary rationale (D), applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. MPEP 2141 (III).

Claims 12-16 are rejected under 35 U.S.C. 103 as being unpatentable over Robinson, Vasquez, and Ahmad.

Regarding claim 12, the Robinson and Vasquez combination does not explicitly disclose the method further comprising performing sequential decision-making based on a Partially Observable Markov Decision Process (POMDP) for three dimensional multi-object search to determine various viewpoints for the automated machine to move to and observe at. However, Ahmad teaches the method further comprising performing sequential decision-making based on a Partially Observable Markov Decision Process (POMDP) for three dimensional multi-object search to determine various viewpoints for the automated machine to move to and observe at (Ahmad Fig. 1: shows POMDP used for 3D object search).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to incorporate POMDP as taught by Ahmad with the method of Robinson and Vasquez because it would improve the method by allowing for object reconstruction that takes into account the uncertainty of reaching the state and the uncertainty in the observations of the sensors (Vasquez Page 21). This motivation for the combination of Robinson, Vasquez, and Ahmad is supported by KSR exemplary rationale (G), some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention, and exemplary rationale (D), applying a known technique to a known device (method, or product) ready for improvement to yield predictable results. MPEP 2141 (III).

Regarding claim 13, the Robinson and Vasquez combination does not explicitly disclose the method further comprising signaling when an object is found, wherein a location of the found object is indicated in the information state at the time of the found signal. However, Ahmad teaches the method further comprising signaling when an object is found, wherein a location of the found object is indicated in the information state at the time of the found signal (Ahmad Page 8: “At each time step, N particles are drawn at random according to the belief distribution bt. Each particle is propagated forward in time by drawing a sample from the importance distribution P(st+1 | st). An image received at the time step t + 1 is discretized and each voxel is assigned the number of image points contained within it. Each propagated particle is then assigned a weight according to the observation likelihood P(ot+1 | st+1) using Eqs. 2 and 5”). It would have been obvious to combine Ahmad with Robinson and Vasquez for the same reasons used for claim 12 above.
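The Sequential Importance Resampling belief update quoted from Ahmad (predict each particle, weight it by the observation likelihood, resample) can be sketched in one dimension. The motion and observation models below are hypothetical placeholders, not Ahmad's:

```python
import math
import random

def sir_step(particles, move, likelihood):
    """One Sequential Importance Resampling step, as described in the Ahmad
    passage above: predict each particle forward, weight it by the observation
    likelihood, then resample in proportion to the weights."""
    predicted = [move(s) for s in particles]          # prediction
    weights = [likelihood(s) for s in predicted]      # observation update
    total = sum(weights) or 1.0                       # guard degenerate case
    weights = [w / total for w in weights]
    return random.choices(predicted, weights=weights, k=len(particles))

random.seed(0)
belief = [random.uniform(0.0, 10.0) for _ in range(500)]   # prior over a 1D location
motion = lambda s: s + random.gauss(0.0, 0.1)              # hypothetical motion model
obs = lambda s: math.exp(-(s - 7.0) ** 2)                  # target observed near x = 7
for _ in range(5):
    belief = sir_step(belief, motion, obs)
print(round(sum(belief) / len(belief), 1))                 # belief mass concentrates near 7
```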
Regarding claim 14, Robinson discloses a system comprising: a robot equipped with one or more camera-based object detectors (Robinson Page 5: "The current system employs a camera, a LiDAR, an inertial measurement unit (IMU), and a global positioning system (GNSS) to provide near real-time detection and geolocation of artifacts on the runway surface"); and a gRPC framework comprising a gRPC client and a gRPC server (Robinson Page 8: "Communication between the POS and the controller software would be performed using gRPC remote procedure calls (gRPC), with protocol buffers as the interface description language"), the gRPC client providing an interface between the robot and the gRPC server (Robinson Page 8, quoted above).

Robinson does not explicitly disclose the system comprising: the gRPC server maintaining an occupancy octree. However, Vasquez teaches the system comprising: the gRPC server maintaining an occupancy octree (Vasquez Page 5: "After each scan, the sensor readings are integrated into an octree that represents the object's bounding box"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the octree structure taught by Vasquez into the system of Robinson because doing so would improve the system by reducing positioning error, reducing the collision rate, and increasing coverage (Vasquez Page 1).
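Vasquez's step of integrating sensor readings into an octree is, in octomap-style maps, usually a per-voxel log-odds update. The sketch below is an illustrative simplification rather than code from any cited reference: a flat dict keyed by voxel index stands in for the hierarchical octree, and the hit/miss constants are arbitrary example values:

```python
import math

class OccupancyOctree:
    """Probabilistic occupancy map sketch: each voxel keeps a log-odds
    occupancy estimate fused from noisy hit/miss sensor readings."""

    LOG_ODDS_HIT = math.log(0.7 / 0.3)   # example increment for an occupied reading
    LOG_ODDS_MISS = math.log(0.4 / 0.6)  # example decrement for a free reading

    def __init__(self, resolution=0.1):
        self.resolution = resolution
        self.log_odds = {}  # voxel index -> accumulated log-odds

    def _voxel(self, point):
        # Discretize a 3D point to its voxel index.
        return tuple(int(c // self.resolution) for c in point)

    def integrate(self, point, occupied):
        # Fuse one sensor reading into the voxel's log-odds estimate.
        delta = self.LOG_ODDS_HIT if occupied else self.LOG_ODDS_MISS
        key = self._voxel(point)
        self.log_odds[key] = self.log_odds.get(key, 0.0) + delta

    def occupancy(self, point):
        # Convert accumulated log-odds back to a probability (unknown -> 0.5).
        log_odds = self.log_odds.get(self._voxel(point), 0.0)
        return 1.0 / (1.0 + math.exp(-log_odds))
```

Log-odds make repeated fusion a simple addition and, as the Vasquez excerpt notes for probabilistic octrees, let the map absorb sensor noise instead of trusting any single reading.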
This motivation for the combination of Robinson and Vasquez is supported by KSR exemplary rationale (G), "some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention," and exemplary rationale (D), "applying a known technique to a known device (method, or product) ready for improvement to yield predictable results." MPEP 2141(III).

The Robinson and Vasquez combination does not explicitly disclose the system comprising: the gRPC server maintaining a Partially Observable Markov Decision Process (POMDP) agent and a belief state. However, Ahmad teaches the system comprising: the gRPC server maintaining a Partially Observable Markov Decision Process (POMDP) agent and a belief state (Ahmad Page 8: "At each time step, N particles are drawn at random according to the belief distribution bt. Each particle is propagated forward in time by drawing a sample from the importance distribution P(st+1 | st). An image received at the time step t + 1 is discretized and each voxel is assigned the number of image points contained within it. Each propagated particle is then assigned a weight according to the observation likelihood P(ot+1 | st+1) using Eqs. 2 and 5"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the POMDP taught by Ahmad into the system of Robinson and Vasquez because doing so would improve the system by allowing for object reconstruction that takes into account the uncertainty of reaching the state and the uncertainty in the observations of the sensors (Vasquez Page 21).
This motivation for the combination of Robinson, Vasquez, and Ahmad is supported by KSR exemplary rationale (G), "some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention," and exemplary rationale (D), "applying a known technique to a known device (method, or product) ready for improvement to yield predictable results." MPEP 2141(III).

Regarding claim 15, Robinson does not explicitly disclose the system wherein the belief state represents belief over object locations in the structure of the occupancy octree. However, Vasquez teaches the system wherein the belief state represents belief over object locations in the structure of the occupancy octree (Vasquez Page 6: "To represent the object bounding box, Wbox, we use a probabilistic occupancy map based on the octomap structure [5], which is an octree with probabilistic occupancy estimation. See Fig. 3. In this representation each voxel has associated a probability of being occupied. We use a probabilistic octree because it is able to deal with noise on the sensor readings. From now on we refer to a probabilistic occupancy map as octree"). It would have been obvious to combine Robinson and Vasquez for the same reasons given for claim 14 above.

Regarding claim 16, Robinson does not explicitly disclose the system wherein the occupancy octree represents a search region's occupancy. However, Vasquez teaches the system wherein the occupancy octree represents a search region's occupancy (Vasquez Page 6: "To represent the object bounding box, Wbox, we use a probabilistic occupancy map based on the octomap structure [5], which is an octree with probabilistic occupancy estimation. See Fig. 3. In this representation each voxel has associated a probability of being occupied. We use a probabilistic octree because it is able to deal with noise on the sensor readings.
From now on we refer to a probabilistic occupancy map as octree"). It would have been obvious to combine Robinson and Vasquez for the same reasons given for claim 14 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AIDAN KEUP, whose telephone number is (703) 756-4578. The examiner can normally be reached Monday-Friday, 8:00-4:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AIDAN KEUP/
Examiner, Art Unit 2666

/Molly Wilburn/
Primary Examiner, Art Unit 2666

Prosecution Timeline

Nov 02, 2023: Application Filed
Sep 21, 2025: Non-Final Rejection under §101 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602774
Regional Pulmonary V/Q via image registration and Multi-Energy CT
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597140
METHOD, SYSTEM AND DEVICE OF IMAGE SEGMENTATION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597168
METHOD FOR CONVERTING NEAR INFRARED IMAGE TO RGB IMAGE AND APPARATUS FOR SAME
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592082
DEVICE AND METHOD FOR PROVIDING INFORMATION FOR VEHICLE USING ROAD SURFACE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586182
Multi-Prong Multitask Convolutional Neural Network for Biomedical Image Inference
Granted Mar 24, 2026 (2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 92% (+12.0%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 60 resolved cases by this examiner. Grant probability derived from career allow rate.
