Prosecution Insights
Last updated: April 19, 2026
Application No. 18/525,880

SENSOR-SUPPORTED OBJECT CHARACTERIZATION AS STATIC OR DYNAMIC

Non-Final OA (§103, §112)
Filed: Dec 01, 2023
Examiner: ALLEN, LUCIUS CAMERON GREEN
Art Unit: 2673
Tech Center: 2600 — Communications
Assignee: Intel Corporation
OA Round: 1 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 71% (27 granted / 38 resolved), +9.1% vs Tech Center average (above average)
Interview Lift: +39.3% higher allowance rate among resolved cases with an interview
Typical Timeline: 3y 0m average prosecution; 20 applications currently pending
Career History: 58 total applications across all art units

Statute-Specific Performance

§101: 12.6% (-27.4% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 23.7% (-16.3% vs TC avg)
Tech Center averages are estimates; figures are based on career data from 38 resolved cases.
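The headline figures above are simple ratios over the examiner's resolved cases. As a rough illustration only, the sketch below shows how a career allowance rate, its delta against a Tech Center average, and an interview lift could be recomputed from per-case outcome records; the record layout, helper names, and toy data are assumptions, and only the 27 granted / 38 resolved count and the Tech Center delta come from this report.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # True if the application issued as a patent
    had_interview: bool  # True if at least one examiner interview was held

def allowance_rate(cases):
    """Share of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allowance rate among interviewed cases minus the rate among the rest."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allowance_rate(with_iv) - allowance_rate(without_iv)

# Toy records shaped only to reproduce 27 granted out of 38 resolved (~71%);
# real per-case data would be needed to reproduce the reported +39.3% lift,
# which would come from interview_lift() applied to that data.
cases = [ResolvedCase(granted=i < 27, had_interview=i % 3 == 0) for i in range(38)]
tc_average = 0.619  # implied by the reported +9.1% delta against a 71% career rate

print(f"career allow rate: {allowance_rate(cases):.1%}")               # ~71.1%
print(f"delta vs TC avg:   {allowance_rate(cases) - tc_average:+.1%}")  # ~+9.2%
```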

Office Action

§103 §112
DETAILED ACTION

Notice of AIA Status
The present application is being examined under the first inventor to file provisions of the AIA.

Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Information Disclosure Statement
The information disclosure statement (IDS) submitted on 12/12/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections
Claims 11-12 are objected to because of the following informalities: In claim 11, line 1, the term "the image generator" should be changed to "the one or more image sensors" to correct a typographical/grammatical issue and avoid a lack of clarity that could lead to a rejection under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph. In claim 12, line 1, the term "further comprising one or more image sensors" should be changed to "further comprising the one or more image sensors" for the same reason.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8-11, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (US 20240192342 A1), hereafter referenced as Park, in view of Ishikawa et al. (US 20220383749 A1), hereafter referenced as Ishikawa.

Regarding claim 1, Park explicitly teaches a device for detecting a dynamic object (Fig. 2, Paragraph [0036]- Park discloses when the sensing device 100 supports an object classification function by using an object classification model, the processor 120 may identify static objects such as the ground or buildings or dynamic objects such as animals, by applying the point cloud of a three-dimensional space to the object classification model or clustering the point cloud of a three-dimensional space.), comprising: a processor (Fig. 2, Paragraph [0030]- Park discloses the sensing device 100 according to an embodiment may include a memory 110, a processor 120, a sensor unit 130, and a communication interface 140.), configured to: determine a first point density of a first volume around a first point in a first image (Fig.
6, Paragraph [0061]- Park discloses the processor 120 may determine the installation abnormality of the sensing device 100, by comparing a ratio of an area of a first static point cloud to a static object area with a ratio of an area of a second static point cloud to a static object area. When the time point at which a static object area is determined is set to be a first time point, as illustrated in FIG. 6, while a first static point cloud at a first time point is regularly detected within a static object area, a second static point cloud at a second time point is detected only in a part of the static object area.), the image comprising image data corresponding to three dimensions (Fig. 2, Paragraph [0024]- Park discloses the sensing device 100 may include a light detection and ranging (LiDAR) sensor as a 3D sensor for sending a three-dimensional space and may obtain volumetric point cloud data.); determine one or more second point densities (Fig. 2, Paragraph [0062]- Park discloses the processor 120 may determine the installation abnormality of the sensing device 100 in a voxel map by comparing a ratio of the number of voxels forming a first static point cloud at a first time point to the number of all voxels forming the determined static object area with a ratio of the number of voxels forming a second static point cloud to the number of all voxels forming the determined static object area at a second time point.), each second point density of the one or more second point densities being a point density of a second volume around a second point (Fig. 2, Paragraph [0062]- Park discloses the processor 120 may determine the installation abnormality of the sensing device 100 in a voxel map by comparing a ratio of the number of voxels forming a first static point cloud at a first time point to the number of all voxels forming the determined static object area with a ratio of the number of voxels forming a second static point cloud to the number of all voxels forming the determined static object area at a second time point.); wherein the one or more second images of the environment are one or more images taken prior to the first image and include image data corresponding to the three dimensions (Fig. 2, Paragraph [0059]- Park discloses the processor 120 may extract a first static point cloud at a first time point and a second static point cloud at a second time point having a certain time difference from first time point, from among the static point clouds over time corresponding to the determined static object area.); and classify the first point as dynamic or static based on a comparison of the first point density and the one or more second point densities (Fig. 2, Paragraph [0046]- Park discloses the processor 120 may determine a static object area based on a period during which the point clouds in unit areas at corresponding positions between frames of a space information map is continuously detected to be the number of points greater than or equal to a minimum detection threshold value.). Park fails to explicitly teach the first image being an image of an environment of a robot. However, Ishikawa explicitly teaches the first image being an image of an environment of a robot (Fig. 1, paragraph [0071]- Ishikawa discloses the recognition unit 73 performs recognition processing of an environment around the vehicle. 
Further in Paragraph [0251]- Ishikawa discloses the present technology can also be applied to a case where an occupancy grid map is created in a mobile device other than a vehicle, for example, a drone, a robot, or the like.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Ishikawa the first image being an image of an environment of a robot. Wherein having Park’s system of abnormality sensing wherein the first image being an image of an environment of a robot. The motivation behind the modification would have been to a better control and safer control system, since both Park and Ishikawa are both systems use point clouds to create a map of static and dynamic objects. Wherein Park’s system provides a way to increase accuracy, while Ishikawa’s system provides a way to improve safety. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Ishikawa et al. (US 20220383749 A1) Paragraph [0239]. Regarding claim 8, Park in view of Ishikawa teaches the device for detecting a dynamic object of claim 1, Park further teaches wherein the first image of the environment and the one or more second images of the environment are configured as point cloud images (Fig. 2, Paragraph [0045]- Park discloses the processor 120 may determine a static object area based on a period during which a point could is continuously detected in a unit area at corresponding positions between frames of the space information map generated from the point clouds over time with respect to a three-dimensional space obtained by the sensor unit 130.). Regarding claim 9, Park in view of Ishikawa teaches the device for detecting a dynamic object of claim 8, Park fails to explicitly teach further comprising one or more image sensors configured to generate a plurality of images or depth images; wherein the processor is further configured to generate the point cloud images by resolving the plurality of images or depth images with a position of the one or more image sensors when the plurality of images or depth images are acquired. However, Ishikawa explicitly teaches further comprising one or more image sensors configured to generate a plurality of images or depth images (Fig. 1, Paragraph [0153]- Ishikawa discloses specifically, the camera 51 captures an image of surroundings of the vehicle 1, and supplies obtained image data to the information processing unit 301.); wherein the processor is further configured to generate the point cloud images by resolving the plurality of images or depth images with a position of the one or more image sensors when the plurality of images or depth images are acquired (Fig. 1, Paragraph [0245]- Ishikawa discloses a point cloud may be created by a radar, a depth camera (for example, a stereo camera or a ToF camera), or the like.). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Ishikawa further comprising one or more image sensors configured to generate a plurality of images or depth images; wherein the processor is further configured to generate the point cloud images by resolving the plurality of images or depth images with a position of the one or more image sensors when the plurality of images or depth images are acquired. Wherein having Park’s system of abnormality sensing wherein further comprising one or more image sensors configured to generate a plurality of images or depth images; wherein the processor is further configured to generate the point cloud images by resolving the plurality of images or depth images with a position of the one or more image sensors when the plurality of images or depth images are acquired. The motivation behind the modification would have been to a better control and safer control system, since both Park and Ishikawa are both systems use point clouds to create a map of static and dynamic objects. Wherein Park’s system provides a way to increase accuracy, while Ishikawa’s system provides a way to improve safety. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Ishikawa et al. (US 20220383749 A1) Paragraph [0239]. Regarding claim 10, Park in view of Ishikawa teaches the device for detecting a dynamic object of claim 9, Park fails to explicitly teach wherein the one or more image sensors comprise a stereo camera or a depth camera. However, Ishikawa explicitly teaches wherein the one or more image sensors comprise a stereo camera or a depth camera (Fig. 1, Paragraph [0245]- Ishikawa discloses a point cloud may be created by a radar, a depth camera (for example, a stereo camera or a ToF camera), or the like.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Ishikawa wherein the one or more image sensors comprise a stereo camera or a depth camera. Wherein having Park’s system of abnormality sensing wherein the one or more image sensors comprise a stereo camera or a depth camera. The motivation behind the modification would have been to a better control and safer control system, since both Park and Ishikawa are both systems use point clouds to create a map of static and dynamic objects. Wherein Park’s system provides a way to increase accuracy, while Ishikawa’s system provides a way to improve safety. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Ishikawa et al. (US 20220383749 A1) Paragraph [0239]. Regarding claim 11, Park in view of Ishikawa teaches the device for detecting a dynamic object of claim 9, Park further teaches wherein the image generator is a LiDAR, Light Detection and Ranging, device (Fig. 
1, Paragraph [0034]- Park discloses the sensor unit 130 may be a LiDAR sensor, and may include at least one three-dimensional LiDAR sensor to obtain data of a space in a certain range). Regarding claim 19, Park in view of Ishikawa teaches the device for detecting a dynamic object of claims 1, Park fails to explicitly teach wherein the dynamic object detection device is configured as an autonomous robot. However, Ishikawa explicitly teaches wherein the dynamic object detection device is configured as an autonomous robot (Fig. 1, Paragraph [0251]- Ishikawa discloses the present technology can also be applied to a case where an occupancy grid map is created in a mobile device other than a vehicle, for example, a drone, a robot, or the like.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Ishikawa wherein the one or more image sensors comprise a stereo camera or a depth camera. Wherein having Park’s system of abnormality sensing wherein the dynamic object detection device is configured as an autonomous robot. The motivation behind the modification would have been to a better control and safer control system, since both Park and Ishikawa are both systems use point clouds to create a map of static and dynamic objects. Wherein Park’s system provides a way to increase accuracy, while Ishikawa’s system provides a way to improve safety. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Ishikawa et al. (US 20220383749 A1) Paragraph [0239]. Regarding claim 20, Park teaches a method for detecting a dynamic object (Fig. 3-5, Paragraph [0039]- Park discloses a method of distinguishing a dynamic point cloud corresponding to a dynamic object from a static point cloud corresponding to a static object in space information map, and a method of determining a static object area in a space information map, are described below with reference to FIGS. 3 to 5.), comprising: determining a first point density of a first volume around a first point in a first image (Fig. 6, Paragraph [0061]- Park discloses the processor 120 may determine the installation abnormality of the sensing device 100, by comparing a ratio of an area of a first static point cloud to a static object area with a ratio of an area of a second static point cloud to a static object area. When the time point at which a static object area is determined is set to be a first time point, as illustrated in FIG. 6, while a first static point cloud at a first time point is regularly detected within a static object area, a second static point cloud at a second time point is detected only in a part of the static object area.), the image including image data corresponding to three dimensions (Fig. 2, Paragraph [0024]- Park discloses the sensing device 100 may include a light detection and ranging (LiDAR) sensor as a 3D sensor for sending a three-dimensional space and may obtain volumetric point cloud data.); determining one or more second point densities (Fig. 
2, Paragraph [0062]- Park discloses the processor 120 may determine the installation abnormality of the sensing device 100 in a voxel map by comparing a ratio of the number of voxels forming a first static point cloud at a first time point to the number of all voxels forming the determined static object area with a ratio of the number of voxels forming a second static point cloud to the number of all voxels forming the determined static object area at a second time point.), each second point density of the one or more second point densities being a point density of a second volume about a second point (Fig. 2, Paragraph [0062]- Park discloses the processor 120 may determine the installation abnormality of the sensing device 100 in a voxel map by comparing a ratio of the number of voxels forming a first static point cloud at a first time point to the number of all voxels forming the determined static object area with a ratio of the number of voxels forming a second static point cloud to the number of all voxels forming the determined static object area at a second time point.); the second point corresponding to the first point in each of one or more second images, the one or more second images of the environment being one or more images taken prior to the first image and including image data corresponding to the three dimensions (Fig. 2, Paragraph [0059]- Park discloses the processor 120 may extract a first static point cloud at a first time point and a second static point cloud at a second time point having a certain time difference from first time point, from among the static point clouds over time corresponding to the determined static object area.); and classifying the first point as dynamic or static based on a comparison of the first point density and the one or more second point densities (Fig. 2, Paragraph [0046]- Park discloses the processor 120 may determine a static object area based on a period during which the point clouds in unit areas at corresponding positions between frames of a space information map is continuously detected to be the number of points greater than or equal to a minimum detection threshold value.). Park fails to explicitly teach the first image being an image of an environment of a robot. However, Ishikawa explicitly teaches the first image being an image of an environment of a robot (Fig. 1, paragraph [0071]- Ishikawa discloses the recognition unit 73 performs recognition processing of an environment around the vehicle. Further in Paragraph [0251]- Ishikawa discloses the present technology can also be applied to a case where an occupancy grid map is created in a mobile device other than a vehicle, for example, a drone, a robot, or the like.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park of a method for detecting a dynamic object, comprising: determining a first point density of a first volume around a first point in a first image, with the teachings of Ishikawa the first image being an image of an environment of a robot. Wherein having Park’s system of abnormality sensing wherein the first image being an image of an environment of a robot. The motivation behind the modification would have been to a better control and safer control system, since both Park and Ishikawa are both systems use point clouds to create a map of static and dynamic objects. 
Wherein Park’s system provides a way to increase accuracy, while Ishikawa’s system provides a way to improve safety. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Ishikawa et al. (US 20220383749 A1) Paragraph [0239]. Claims 3-4 are rejected under 35 U.S.C 103 as being unpatentable over Park et al. (US 20240192342 A1) hereafter referenced as Park in view of Ishikawa et al. (US 20220383749 A1) hereafter referenced as Ishikawa and Li et al. (US 20220366185 A1) hereafter referenced as Li. Regarding claim 3, Park in view of Ishikawa teaches the device for detecting a dynamic object of claim 1, Park in view of Ishikawa fails to explicitly teach wherein the processor is further configured to change the second point density based on a comparison of the depth information of the first point and the depth information of the second point. However, Li explicitly teaches wherein the processor is further configured to change the second point density based on a comparison of the depth information of the first point and the depth information of the second point (Fig. 1a, Paragraph [0018]- Li discloses increasing the search radius increases the number of neighboring points and therefore helps to cluster points in lower density areas. Further in Paragraph [0028]- Li discloses as the distance of the point from the LiDAR sensor (range(p.sub.i)) increases, the search radius ε.sub.1st,i increases. Likewise, as the resolution of the LiDAR sensor (resol) increases, the search radius ε.sub.1st,i increases.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Li wherein the processor is further configured to change the second point density based on a comparison of the depth information of the first point and the depth information of the second point. Wherein having Park’s system of abnormality sensing wherein the processor is further configured to change the second point density based on a comparison of the depth information of the first point and the depth information of the second point. The motivation behind the modification would have been to have a more accurate system, since both Park and Li are both systems use point clouds. Wherein Park’s system provides a way to increase accuracy, while Li’s system provides a way to further increase accuracy. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Li et al. (US 20220366185 A1) Paragraph [0018]. Regarding claim 4, Park in view of Ishikawa and Li teaches the device for detecting a dynamic object of claim 3, Park in view of Ishikawa fails to explicitly teach wherein the processor is further configured to increase the second point density if the depth information corresponds to a greater distance to the first point than to a second point, or to decrease the second volume if the depth information corresponds to a smaller distance to the first point than to a second point. However, Li explicitly teaches wherein the processor is further configured to increase the second point density if the depth information corresponds to a greater distance to the first point than to a second point (Fig. 
1a, Paragraph [0018]- Li discloses increasing the search radius increases the number of neighboring points and therefore helps to cluster points in lower density areas. Further in Paragraph [0028]- Li discloses as the distance of the point from the LiDAR sensor (range(p.sub.i)) increases, the search radius ε.sub.1st,i increases. Likewise, as the resolution of the LiDAR sensor (resol) increases, the search radius ε.sub.1st,i increases.), or to decrease the second volume if the depth information corresponds to a smaller distance to the first point than to a second point (Fig. 1a, Paragraph [0018]- Li discloses the minimum point threshold minPt is increased for points located close to the LiDAR sensor and decreased for points related further away from the LiDAR sensor). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Li wherein the processor is further configured to increase the second point density if the depth information corresponds to a greater distance to the first point than to a second point, or to decrease the second volume if the depth information corresponds to a smaller distance to the first point than to a second point. Wherein having Park’s system of abnormality sensing wherein the processor is further configured to increase the second point density if the depth information corresponds to a greater distance to the first point than to a second point, or to decrease the second volume if the depth information corresponds to a smaller distance to the first point than to a second point. The motivation behind the modification would have been to have a more accurate system, since both Park and Li are both systems use point clouds. Wherein Park’s system provides a way to increase accuracy, while Li’s system provides a way to further increase accuracy. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Li et al. (US 20220366185 A1) Paragraph [0018]. Claim 6 is rejected under 35 U.S.C 103 as being unpatentable over Park et al. (US 20240192342 A1) hereafter referenced as Park in view of Ishikawa et al. (US 20220383749 A1) hereafter referenced as Ishikawa and in view of Doria et al. (US 20200184234 A1) hereafter referenced as Doria. Regarding claim 6, Park in view of Ishikawa teaches the device for detecting a dynamic object of claim 1, Park in view of Ishikawa fails to explicitly teach wherein the first volume and the second volume are spherical. However, Doria explicitly teaches wherein the first volume and the second volume are spherical (Fig. 3, Paragraph [0029]- Doria discloses the neighborhood may be defined by a spatial volume or area. In some examples, the spatial volume is spherical and set by a predetermined radius. Thus, the neighborhood includes the points within the predetermined radius to a starting point.). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Doria wherein the first volume and the second volume are spherical. Wherein having Park’s system of abnormality sensing wherein the first volume and the second volume are spherical. The motivation behind the modification would have been to have a faster and more efficient machine, since both Park and Doria are both systems use point clouds to create to do object detection. Wherein Park’s system provides a way to increase accuracy, while Doria’s system provides a way to increase speed, efficiency, and accuracy. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Doria et al. (US 20200184234 A1) Paragraph [0020]. Claim 7 is rejected under 35 U.S.C 103 as being unpatentable over Park et al. (US 20240192342 A1) hereafter referenced as Park in view of Ishikawa et al. (US 20220383749 A1) hereafter referenced as Ishikawa and in view of Lee et al. (US 20230243970 A1) hereafter referenced as Lee. Regarding claim 7, Park in view of Ishikawa teaches the device for detecting a dynamic object of claim 1, Park further teaches wherein the processor is further configured to classify the first point as dynamic (Fig. 3, Paragraph [0042]- Park discloses the vehicles and pedestrians that move correspond to dynamic objects, and the positions of the dynamic point clouds corresponding to dynamic objects are changed in the space information map so that the dynamic point clouds are not continuously detected in the same area for more than a certain period.), static based on the comparison of the first point density and the one or more second point densities (Fig. 2, Paragraph [0046]- Park discloses the processor 120 may determine a static object area based on a period during which the point clouds in unit areas at corresponding positions between frames of a space information map is continuously detected to be the number of points greater than or equal to a minimum detection threshold value.), Park in view of Ishikawa fails to explicitly teach or unknown. However, Lee explicitly teaches or unknown (Fig. 3, Paragraph [0059]- Lee discloses the type of object determined based on the calculated score may be determined to be any one of a static object, a dynamic object, or an unknown object.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Lee or unknown. Wherein having Park’s system of abnormality sensing wherein or unknown. The motivation behind the modification would have been to have a more accurate and easier to use system, since both Park and Lee are both systems use point clouds to determine static and dynamic objects Wherein Park’s system provides a way to increase accuracy, while Lee’s system provides a way to increase accuracy and make the system simpler. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Lee et al. (US 20230243970 A1) Paragraph [0120-121]. 
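For orientation, the rejections above all map onto the same core technique: a point density is measured in a volume around a point in the current frame, compared against densities around the corresponding point in one or more earlier frames, and the point is labeled static, dynamic, or (per the Lee citation for claim 7) unknown. The following is a minimal illustrative sketch of that comparison, assuming spherical neighborhoods as in the Doria citation for claim 6; the radius, the ratio thresholds, and every name are hypothetical assumptions and do not represent the applicant's or Park's actual implementation.

```python
import numpy as np

def point_density(cloud: np.ndarray, center: np.ndarray, radius: float) -> int:
    """Number of points of an (N, 3) cloud inside a sphere around `center`."""
    return int(np.count_nonzero(np.linalg.norm(cloud - center, axis=1) <= radius))

def classify_point(current: np.ndarray,
                   previous: list,
                   point: np.ndarray,
                   radius: float = 0.5,
                   low: float = 0.5,
                   high: float = 1.5) -> str:
    """Compare the first point density against densities in prior frames."""
    d_now = point_density(current, point, radius)
    d_prev = [point_density(frame, point, radius) for frame in previous]
    mean_prev = float(np.mean(d_prev)) if d_prev else 0.0
    if mean_prev == 0.0 and d_now == 0:
        return "unknown"    # no evidence either way (cf. the claim 7 discussion)
    if mean_prev == 0.0 or not (low <= d_now / mean_prev <= high):
        return "dynamic"    # density changed markedly between frames
    return "static"         # density roughly constant over time

# Toy frames: a persistent cluster of points and a query point inside it.
rng = np.random.default_rng(0)
wall = rng.uniform(-1, 1, size=(500, 3))
print(classify_point(wall, [wall, wall], point=np.array([0.0, 0.0, 0.0])))  # static
```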
Claim 12 is rejected under 35 U.S.C 103 as being unpatentable over Park et al. (US 20240192342 A1) hereafter referenced as Park in view of Ishikawa et al. (US 20220383749 A1) hereafter referenced as Ishikawa and in view of Yu et al. (US 20230360406 A1) hereafter referenced as Yu. Regarding claim 12, Park in view of Ishikawa teaches the device for detecting a dynamic object of claim 9, Park fails to explicitly teach further comprising one or more image sensors configured to generate a plurality of images. However, Ishikawa explicitly teaches further comprising one or more image sensors configured to generate a plurality of images (Fig. 1, Paragraph [0153]- Ishikawa discloses specifically, the camera 51 captures an image of surroundings of the vehicle 1, and supplies obtained image data to the information processing unit 301.); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Ishikawa further comprising one or more image sensors configured to generate a plurality of images. Wherein having Park’s system of abnormality sensing wherein further comprising one or more image sensors configured to generate a plurality of images. The motivation behind the modification would have been to a better control and safer control system, since both Park and Ishikawa are both systems use point clouds to create a map of static and dynamic objects. Wherein Park’s system provides a way to increase accuracy, while Ishikawa’s system provides a way to improve safety. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Ishikawa et al. (US 20220383749 A1) Paragraph [0239]. Park in view of Ishikawa fails to explicitly teach wherein the processor is further configured to generate the point cloud images by resolving the plurality of images using one or more photogrammetry techniques. However, Yu explicitly teaches wherein the processor is further configured to generate the point cloud images by resolving the plurality of images using one or more photogrammetry techniques (Fig. 1, Paragraph [0042]- Yu discloses photogrammetry methods, which are known in the art, are used to produce the point cloud from the captured image data.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image with the teachings of Yu wherein the processor is further configured to generate the point cloud images by resolving the plurality of images using one or more photogrammetry techniques. Wherein having Park’s system of abnormality sensing wherein the processor is further configured to generate the point cloud images by resolving the plurality of images using one or more photogrammetry techniques. The motivation behind the modification would have been to have a more accurate system, since both Park and Yu are both systems use point clouds. Wherein Park’s system provides a way to increase accuracy, while Yu’s system provides a further way to improve accuracy. Please see Park et al. 
(US 20240192342 A1) Paragraph [0027] and Yu et al. (US 20230360406 A1) Paragraph [0049]. Claim 13 is rejected under 35 U.S.C 103 as being unpatentable over Park et al. (US 20240192342 A1) hereafter referenced as Park in view of Ishikawa et al. (US 20220383749 A1) hereafter referenced as Ishikawa and in view of Zhang et al. (US 20150324658 A1) hereafter referenced as Zhang. Regarding claim 13, Park in view of Ishikawa teaches the device for detecting a dynamic object of claim 1, Although Park further teaches wherein classifying the first point as dynamic or static comprises classifying the first point as dynamic or static based on a comparison of the density of the first point and the density of the one or more second points within a data set (Fig. 2, Paragraph [0046]- Park discloses the processor 120 may determine a static object area based on a period during which the point clouds in unit areas at corresponding positions between frames of a space information map is continuously detected to be the number of points greater than or equal to a minimum detection threshold value.). Park in view of Ishikawa fails to explicitly teach wherein the processor is further configured to generate a modified data set by removing from the first image and the one or more second images data corresponding to a surface traversable by the dynamic object detection device; and wherein classifying the first point as dynamic or static comprises classifying the first point as dynamic or static based on a comparison of the density of the first point and the density of the one or more second points within the modified data set. However, Zhang explicitly teaches wherein the processor is further configured to generate a modified data set by removing from the first image and the one or more second images data corresponding to a surface traversable by the dynamic object detection device (Fig. 14, Paragraph [0104]- Zhang discloses at 1410, a ground plane in the 3D point cloud information is determined and removed. Modified 3D information may be generated by removing the ground plane from the acquired or obtained 3D information.); the modified data set (Fig. 13, Paragraph [0096]- Zhang discloses the 3D candidate objects (e.g., blobs) may be identified using the modified 3D information. For example, the 3D module 1314 may cluster (e.g., in an unsupervised manner) proximal points from the modified 3D information into object groups to identify 3D candidate objects.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Zhang wherein the processor is further configured to generate a modified data set by removing from the first image and the one or more second images data corresponding to a surface traversable by the dynamic object detection device and the modified data set. Wherein having Park’s system of abnormality sensing wherein the processor is further configured to generate a modified data set by removing from the first image and the one or more second images data corresponding to a surface traversable by the dynamic object detection device and the modified data set. 
The motivation behind the modification would have been to increase system reliability and reduce false positives, since both Park and Zhang are both systems that use lidar for object detection. Wherein Park’s system provides a way to increase accuracy, while Zhang’s system provides a way to improve reliability. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Zhang et al. (US 20150324658 A1) Paragraph [0108-109]. Claim 14 is rejected under 35 U.S.C 103 as being unpatentable over Park et al. (US 20240192342 A1) hereafter referenced as Park in view of Ishikawa et al. (US 20220383749 A1) hereafter referenced as Ishikawa and in view of Yasuda et al. (US 20240241523 A1) hereafter referenced as Yasuda. Regarding claim 14, Park in view of Ishikawa teaches the device for detecting a dynamic object of claim 1, Although Park explicitly teaches based on comparison of the first point density and the one or more second point densities. Park in view of Ishikawa fails to explicitly teach wherein the processor is further configured to generate a two-dimensional grid of the environment and to label portions of the two-dimensional grid as dynamic or static based on classification of the first point as dynamic or static based on comparison of the first point density and the one or more second point densities. However, Yasuda explicitly teaches wherein the processor is further configured to generate a two-dimensional grid of the environment (Fig, 1, Paragraph [0041]- Yasuda discloses the determination unit 12 generates grid data that is obtained by dividing the map data into a plurality of sections (hereinafter also referred to as a plurality of cells), and determines or specifies whether or not there is an obstacle in each cell of the grid, or whether the presence/absence of an obstacle is unknown.) and to label portions of the two-dimensional grid as dynamic or static based on classification of the first point as dynamic or static (Fig. 5, Paragraph [0132]- Yasuda discloses the label setting means sets a label indicating that there is a stationary obstacle, there is a moving obstacle, or there is no obstacle in each cell of the grid). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Yasuda wherein the processor is further configured to generate a two-dimensional grid of the environment and to label portions of the two-dimensional grid as dynamic or static based on classification of the first point as dynamic or static. Wherein having Park’s system of abnormality sensing wherein the processor is further configured to generate a two-dimensional grid of the environment and to label portions of the two-dimensional grid as dynamic or static based on classification of the first point as dynamic or static. The motivation behind the modification would have been to have a more efficient and accurate system, since both Park and Yasuda are both systems determine static and dynamic objects. Wherein Park’s system provides a way to increase accuracy, while Yasuda’s system provides a further way to improve accuracy and efficiency. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Yasuda et al. (US 20240241523 A1) Paragraph [0092]. 
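The claim 14 discussion above concerns projecting the per-point static/dynamic classification onto a two-dimensional grid of the environment and labeling grid portions accordingly (per the Yasuda citation). A minimal sketch of that projection follows; the cell size, label scheme, and all names are illustrative assumptions rather than Yasuda's or the applicant's implementation.

```python
from collections import defaultdict

def label_grid(points, labels, cell_size=0.5):
    """points: iterable of (x, y, z); labels: matching 'static'/'dynamic' strings."""
    grid = defaultdict(lambda: "free")
    for (x, y, _z), label in zip(points, labels):
        cell = (int(x // cell_size), int(y // cell_size))
        if label == "dynamic":
            grid[cell] = "dynamic"      # any dynamic evidence dominates the cell
        elif grid[cell] == "free":
            grid[cell] = "static"       # first static point claims an empty cell
    return dict(grid)

points = [(0.1, 0.2, 1.0), (0.3, 0.1, 0.9), (2.2, 1.7, 1.1)]
labels = ["static", "static", "dynamic"]
print(label_grid(points, labels))
# {(0, 0): 'static', (4, 3): 'dynamic'}
```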
Claim 15 is rejected under 35 U.S.C 103 as being unpatentable over Park et al. (US 20240192342 A1) hereafter referenced as Park in view of Ishikawa et al. (US 20220383749 A1) hereafter referenced as Ishikawa, in view of Yasuda et al. (US 20240241523 A1) hereafter referenced as Yasuda, in view of Kirstein et al. (US 20210295062 A1) hereafter referenced as Kirstein, in view of Sonoura et al. (US 20080201014 A1) hereafter referenced as Sonoura. Regarding claim 15, Park in view of Ishikawa and Yasuda teaches the device for detecting a dynamic object of claim 14, Park in view of Ishikawa and Yasuda fails to explicitly teach wherein the processor is further configured to associate height information with at least one cell of the grid. However, Kirstein explicitly teaches wherein the processor is further configured to associate height information with at least one cell of the grid (Fig, 1, Paragraph [0029]- Kirstein discloses in a step S4, height information is determined by means of the at least one first 2a and/or second environment detection sensor 2b. In step S5, said determined height information is added to each grid cell.), Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa and Yasuda of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Kirstein wherein the processor is further configured to associate height information with at least one cell of the grid. Wherein having Park’s system of abnormality sensing wherein the processor is further configured to associate height information with at least one cell of the grid. The motivation behind the modification would have been to have an improved environment representation, since both Park and Kirstein are both systems that use lidar to detect in an area. Wherein Park’s system provides a way to increase accuracy, while Kirstein’s system provides a way to improve the representation of the environment. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Kirstein et al. (US 20210295062 A1) Paragraph [0003]. Park in view of Ishikawa, Yasuda, and Kirstein fails to explicitly teach when the height information is within a range, to calculate a first minimum distance to be maintained to an object in the cell, and when the height information is outside the range, to calculate a second minimum distance to be maintained to an object in the cell, wherein the first minimum distance is greater than the second minimum distance. However, Sonoura explicitly teaches when the height information is within a range, to calculate a first minimum distance to be maintained to an object in the cell (Fig. 8, Paragraph [0047]- Sonoura discloses if the height h is greater than the threshold Dh, then the robot 1 proceeds to step S714 shown in FIG. 8 and judges that the extracted person is not a person of caution level. In this case, the robot 1 further proceeds to step S8 shown in FIG. 2, sets the level of the approach permission distance limitation (degree of caution) of the robot 1 to the person to Lv.), and when the height information is outside the range, to calculate a second minimum distance to be maintained to an object in the cell (Fig. 
12, Paragraph [0059]- Sonoura discloses this traveling velocity is based on a traveling restriction law having two-dimensional matrix condition stipulations in which the distance to the obstacle is increased or the maximum traveling velocity is decreased as the caution level becomes higher as shown in FIG. 12.), wherein the first minimum distance is greater than the second minimum distance (Fig. 12, Paragraph [0059]- Sonoura discloses the robot 1 in the present embodiment compares a distance which can be ensured between the robot 1 and the obstacle with preset values (=0, L1, L2, L3 and L4) of the approach permission distance, and determines the traveling velocity of the robot 1 on the basis of a result of the comparison and preset levels of the approach permission distance limitation, i.e., levels of the degree of caution (=no limitations, Lv. 1, Lv. 2, Lv. 3 and Lv. 4). This traveling velocity is based on a traveling restriction law having two-dimensional matrix condition stipulations in which the distance to the obstacle is increased or the maximum traveling velocity is decreased as the caution level becomes higher as shown in FIG. 12.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa, Yasuda, and Kirstein of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Sonoura when the height information is within a range, to calculate a first minimum distance to be maintained to an object in the cell, and when the height information is outside the range, to calculate a second minimum distance to be maintained to an object in the cell, wherein the first minimum distance is greater than the second minimum distance. Wherein having Park’s system of abnormality sensing wherein when the height information is within a range, to calculate a first minimum distance to be maintained to an object in the cell, and when the height information is outside the range, to calculate a second minimum distance to be maintained to an object in the cell, wherein the first minimum distance is greater than the second minimum distance. The motivation behind the modification would have been to have a safer system to use around people, since both Park and Sonoura are both systems that detect static and dynamic objects. Wherein Park’s system provides a way to increase accuracy, while Sonoura’s system provides a way to improve safety of the system. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Sonoura et al. (US 20080201014 A1) Paragraph [0011 and 0059-61]. Claim 16 is rejected under 35 U.S.C 103 as being unpatentable over Park et al. (US 20240192342 A1) hereafter referenced as Park in view of Ishikawa et al. (US 20220383749 A1) hereafter referenced as Ishikawa and in view of Schafer et al. (US 20230222928 A1) hereafter referenced as Schafer. 
Regarding claim 16, Park in view of Ishikawa teaches the device for detecting a dynamic object of claim 1, Park in view of Ishikawa fails to explicitly teach wherein the processor is configured to operate in a first mode of operation when a distance between the robot and an object corresponding to a static point is within a predetermined range, and to operate in a second mode of operation when the distance between the robot and the object corresponding to a dynamic point is within the predetermined range. However, Schafer explicitly teaches wherein the processor is configured to operate in a first mode of operation when a distance between the robot and an object corresponding to a static point is within a predetermined range (Fig. 9, Paragraph [0089]- Schafer discloses the example process of FIG. 9 also handles static and dynamic objects differently in that dynamic objects are handled with higher priority and may be continuously tracked to estimate their motion trajectory… In case the received ID corresponds to a non-recorded DID or SID, it gets recorded and respective proximity checks are performed.), and to operate in a second mode of operation when the distance between the robot and the object corresponding to a dynamic point is within the predetermined range (Fig. 9, Paragraph [0089]- Schafer discloses the example process of FIG. 9 also handles static and dynamic objects differently in that dynamic objects are handled with higher priority and may be continuously tracked to estimate their motion trajectory…In case the received ID corresponds to a non-recorded DID or SID, it gets recorded and respective proximity checks are performed.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Schafer wherein the processor is configured to operate in a first mode of operation when a distance between the robot and an object corresponding to a static point is within a predetermined range, and to operate in a second mode of operation when the distance between the robot and the object corresponding to a dynamic point is within the predetermined range. Wherein having Park’s system of abnormality sensing wherein the processor is configured to operate in a first mode of operation when a distance between the robot and an object corresponding to a static point is within a predetermined range, and to operate in a second mode of operation when the distance between the robot and the object corresponding to a dynamic point is within the predetermined range. The motivation behind the modification would have been to minimize risk of collisions while efficiently using the available space, since both Park and Schafer are both systems that detect static and dynamic objects. Wherein Park’s system provides a way to increase accuracy, while Schafer’s system provides a way to improve collision avoidance. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Schafer et al. (US 20230222928 A1) Paragraph [0002-6]. Claim 17 is rejected under 35 U.S.C 103 as being unpatentable over Park et al. (US 20240192342 A1) hereafter referenced as Park in view of Ishikawa et al. (US 20220383749 A1) hereafter referenced as Ishikawa, in view of Schafer et al. 
(US 20230222928 A1) hereafter referenced as Schafer, and in view of Abramson et al. (US 20200201328 A1) hereafter referenced as Abramson. Regarding claim 17, Park in view of Ishikawa and Schafer teaches the device for detecting a dynamic object of claim 16, Park in view of Ishikawa and Schafer fails to explicitly teach wherein the first mode of operation comprises the processor not sending a command to decelerate or stop or perform an avoidance maneuver of the robot; and wherein the second mode of operation comprises the processor sending a command to decelerate or stop or perform an avoidance maneuver of the robot. However, Abramson explicitly teaches wherein the first mode of operation comprises the processor not sending a command to decelerate or stop or perform an avoidance maneuver of the robot (Fig. 2, Paragraph [0151]- Abramson discloses the machine 20 is configured to recognize the animate beings 110 and 112 as animate beings and to conduct a behavior in response to the recognition. (wherein this shows it only sends the command to decelerate/stop based on animate beings similar to dynamic objects)); and wherein the second mode of operation comprises the processor sending a command to decelerate or stop or perform an avoidance maneuver of the robot (Fig. 2, Paragraph [0153]- Abramson discloses a human detection or animate being detection of more than 50% within the section 264B within the immediate pathway of the machine 20 can trigger the control system 60 to control the machine 20 to stop and wait, stop the working mechanism 42, and alert the user via a wirelessly transmitted message.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to combine the teachings of Park in view of Ishikawa and Schafer of a device for detecting a dynamic object, comprising: a processor, configured to: determine a first point density of a first volume around a first point in a first image, with the teachings of Abramson wherein the first mode of operation comprises the processor not sending a command to decelerate or stop or perform an avoidance maneuver of the robot; and wherein the second mode of operation comprises the processor sending a command to decelerate or stop or perform an avoidance maneuver of the robot. Wherein having Park’s system of abnormality sensing wherein the first mode of operation comprises the processor not sending a command to decelerate or stop or perform an avoidance maneuver of the robot; and wherein the second mode of operation comprises the processor sending a command to decelerate or stop or perform an avoidance maneuver of the robot. The motivation behind the modification would have been to improve navigation obstacle avoidance and efficiency, since both Park and Abramson are both systems that use lidar to detect objects. Wherein Park’s system provides a way to increase accuracy, while Schafer’s system provides improve navigation obstacle avoidance and efficiency. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Abramson et al. (US 20200201328 A1) Paragraph [0002]. Claim 18 is rejected under 35 U.S.C 103 as being unpatentable over Park et al. (US 20240192342 A1) hereafter referenced as Park in view of Ishikawa et al. (US 20220383749 A1) hereafter referenced as Ishikawa and in view of Wang et al. (US 20240095928 A1) hereafter referenced as Wang. 
Regarding claim 18, Park in view of Ishikawa teaches the device for detecting a dynamic object of claim 1. Park in view of Ishikawa fails to explicitly teach wherein the processor is configured to compare the depth information for a point to an estimated depth information for the point and label the point as being occluded if a difference between the depth information and the estimated depth information is outside a range. However, Wang explicitly teaches wherein the processor is configured to compare the depth information for a point to an estimated depth information for the point and label the point as being occluded if a difference between the depth information and the estimated depth information is outside a range (Fig. 2, Paragraph [0062]- Wang discloses that by individually comparing the difference in depth values between each pixel and the multiple pixels directly adjacent to it in its surrounding neighborhood against the predetermined threshold, the occlusion relationship for each pair of adjacent pixels can be determined. If the difference in depth values of any pair of adjacent pixels is greater than the predetermined threshold, an occlusion relationship between that pair of adjacent pixels is determined; otherwise, there is no occlusion relationship.) and to label the point as being visible if a difference between the depth information and the estimated depth information is within a range (Fig. 2, Paragraph [0073]- Wang discloses that for any pixel 100 in the training image, its occlusion relationship with adjacent pixels, such as 101, can fall into three cases: pixel 100 occludes pixel 101 (represented as 1), pixel 100 is occluded by pixel 101 (represented as −1), and there is no occlusion between pixel 100 and pixel 101 (represented as 0).). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Park in view of Ishikawa of a device for detecting a dynamic object, comprising a processor configured to determine a first point density of a first volume around a first point in a first image, with the teachings of Wang, wherein the processor is configured to compare the depth information for a point to an estimated depth information for the point and label the point as being occluded if a difference between the depth information and the estimated depth information is outside a range. The motivation behind the modification would have been to improve the accuracy and reliability of the system, since both Park and Wang are systems that use lidar: Park's system provides a way to increase accuracy, while Wang's system provides enhanced accuracy, reliability, and robustness. Please see Park et al. (US 20240192342 A1) Paragraph [0027] and Wang et al. (US 20240095928 A1) Paragraph [0005].

Allowable Subject Matter
Claims 2 and 5, along with their respective dependent claims, are objected to as being dependent upon a rejected base claim (claim 1), but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Allowable Subject Matter

Claims 2 and 5, along with their respective dependent claims, are objected to as being dependent upon a rejected base claim (claim 1), but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter. Regarding claim 2, the prior art fails to explicitly teach determining a modified image set as an image set having one or more second images in which the first depth information corresponds to a greater depth than a depth of the second depth information of the corresponding second image, as claimed in claim 2. Regarding claim 5, the prior art fails to explicitly teach wherein varying the point density surrounding the second point based on a depth information of the first point compared to a depth information of the second point comprises increasing the second point density when the first depth information corresponds to a depth less than a depth of the second depth information, and decreasing the second point density when the first depth information corresponds to a depth greater than a depth of the second depth information, as claimed in claim 5.
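To make the claim 5 limitation concrete, the following is a minimal sketch of the depth-dependent density adjustment it recites; the scaling factor and the behavior at equal depths are assumptions made for illustration and are not specified in the claim.

def adjust_second_point_density(second_density, first_depth, second_depth, factor=1.5):
    # Increase the second point density when the first point is closer
    # (first depth less than second depth).
    if first_depth < second_depth:
        return second_density * factor
    # Decrease it when the first point is farther (first depth greater).
    if first_depth > second_depth:
        return second_density / factor
    # Equal depths: leave the density unchanged (assumption).
    return second_density

# Example usage:
print(adjust_second_point_density(100.0, first_depth=2.0, second_depth=4.0))  # 150.0
print(adjust_second_point_density(100.0, first_depth=6.0, second_depth=4.0))  # ~66.7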
Conclusion

Listed below is the prior art made of record and not relied upon but considered pertinent to applicant's disclosure.

HANSEN et al. (US 20240399582 A1): A method for the safe operation of a machine, which has a movable machine part comprising a hazardous section, comprises: the movable machine part moving according to a predefined sequence program; and an environment of the hazardous section being monitored, wherein, in the event of an engagement of an object into a defined protective volume, which is dependent on the current position of the hazardous section, within the monitored environment, a safety-related reaction is triggered that comprises the movement of the movable machine part being stopped if the engagement exceeds a defined engagement threshold of the protective volume. For a teaching-in of the protective volume, it is provided: that an initial protective volume is first predefined; that the machine is controlled so that the movable machine part moves according to the predefined sequence program while the environment of the hazardous section is monitored; that, if the movement of the movable machine part is stopped as a result of an object engaging into the initial protective volume, a teach-in mode can be started by means of a first user input, in which teach-in mode the movement is continued and position data of objects in the environment of the hazardous section are acquired in so doing; that the teach-in mode can be terminated by means of a second user input; and that the protective volume is defined based on the acquired position data. Please see Fig. 1 and the Abstract.

MORIOKA et al. (US 20200171662 A1): A monitor system for a robot that includes a base installed on an installation surface, and a movable part supported movably with respect to the base, the monitor system including: a sensor that monitors the presence or absence of an object around the robot; and a monitored region control part that controls a monitored region of the sensor based on a motion command signal for the robot. The sensor has the monitored region on each of both sides across a vertical plane that includes a central axis line of the movable part, and the monitored region control part makes the monitored region at the rear in a moving direction of the movable part smaller than the monitored region at the front in the moving direction of the movable part. Please see Fig. 1 and the Abstract.

MITANI et al. (US 20210120186 A1): An imaging device in which an autofocus function can be performed without using brightness information is provided. In an imaging device according to one aspect, a density of points obtained by plotting two-dimensional point data of a plurality of event data as points on a plane, the event data outputted from an imaging element in a predetermined period in a state in which a focal point of a light receiving lens is adjusted by an adjustment mechanism, is calculated as a point density. When the point density is calculated, a control unit drives and controls the adjustment mechanism based on comparison results between the point density currently calculated and the point density last calculated to thereby adjust the focal point toward the in-focus position. In another aspect, an imaging device having an autofocus function can be provided without using event data. Please see Fig. 1 and the Abstract.

Ponto et al. (US 20200126208 A1): In accordance with some aspects, systems, methods and media for detecting manipulations of point cloud data are provided. In some aspects, a method for presenting information indicative of whether a manipulation of point cloud data has occurred is provided, the method comprising: receiving point cloud data comprising a plurality of points, wherein each point of the plurality of points is associated with a position; determining, for each of the plurality of points, a value indicative of a density of points in a region surrounding the respective point; associating, for each of the plurality of points, the value indicative of density with the respective point; and causing a representation of at least a portion of the point cloud data to be presented based on the location information associated with each point of the plurality of points, and the value indicative of density associated with each of the plurality of points. Please see Fig. 1 and the Abstract.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUCIUS C.G. ALLEN, whose telephone number is (703) 756-5987. The examiner can normally be reached Mon-Fri, 8:00 am-5:00 pm (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LUCIUS CAMERON GREEN ALLEN/
Examiner, Art Unit 2673

/CHINEYERE WILLS-BURNS/
Supervisory Patent Examiner, Art Unit 2673

Prosecution Timeline

Dec 01, 2023
Application Filed
Feb 03, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597105
SEMANTIC-AWARE AUTO WHITE BALANCE
2y 5m to grant · Granted Apr 07, 2026
Patent 12579755
OVERLAYING AUGMENTED REALITY (AR) CONTENT WITHIN AN AR HEADSET COUPLED TO A MAGNIFYING LOUPE
2y 5m to grant · Granted Mar 17, 2026
Patent 12541972
Computing Device and Method for Handling an Object in Recorded Images
2y 5m to grant · Granted Feb 03, 2026
Patent 12536247
Roughness Compensation Method and System, Image Processing Device, and Readable Storage Medium
2y 5m to grant · Granted Jan 27, 2026
Patent 12529684
INSPECTION DEVICE, INSPECTION METHOD, AND INSPECTION PROGRAM
2y 5m to grant · Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
71%
Grant Probability
99%
With Interview (+39.3%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
