Prosecution Insights
Last updated: April 19, 2026
Application No. 18/175,631

RULE-BASED DIGITIZED IMAGE COMPRESSION

Status: Non-Final OA §103
Filed: Feb 28, 2023
Examiner: FUJITA, KATRINA R
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Ford Global Technologies LLC
OA Round: 3 (Non-Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 0m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 70% (472 granted / 674 resolved), above average at +8.0% vs TC avg
Interview Lift: +24.0% (strong; allow rate on resolved cases with an interview vs. without)
Typical Timeline: 3y 0m average prosecution; 25 applications currently pending
Career History: 699 total applications across all art units
A short worked sketch of this arithmetic follows below.
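
As a quick sanity check on the cards above, here is a minimal sketch of the arithmetic, assuming the dashboard simply divides grants by resolved cases and reads the interview lift as the gap in allow rate between cases with and without an interview. The implied without-interview rate and Tech Center average are back-computed from the displayed figures, not taken from raw case data.

# Sketch of the arithmetic behind the Examiner Intelligence cards.
# Counts come from the dashboard; per-group rates are back-computed
# from the displayed figures rather than from raw case data.

granted, resolved = 472, 674
career_allow_rate = granted / resolved                     # ~0.700 -> "70% Career Allow Rate"

tc_avg_estimate = career_allow_rate - 0.080                # implied TC average from "+8.0% vs TC avg"

allow_with_interview = 0.94                                # "94% With Interview"
interview_lift = 0.24                                      # "+24.0% Interview Lift"
allow_without_interview = allow_with_interview - interview_lift   # implied ~70% without an interview

print(f"career allow rate:  {career_allow_rate:.1%}")
print(f"implied TC average: {tc_avg_estimate:.1%}")
print(f"without interview:  {allow_without_interview:.1%}")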

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 55.7% (+15.7% vs TC avg)
§102: 15.3% (-24.7% vs TC avg)
§112: 11.8% (-28.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 674 resolved cases. A structured restatement of this table follows below.
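
The statute table reduces to a small data structure; the sketch below restates it and back-computes the implied Tech Center average for each statute. Treating each percentage as the share of this examiner's rejections that cite the statute is an assumption about the chart's meaning, not something the page states.

# Structured restatement of the statute-specific figures above. Interpreting
# the percentages as the share of rejections citing each statute is an
# assumption; TC averages are back-computed from the displayed deltas.

examiner_rate = {"101": 0.113, "103": 0.557, "102": 0.153, "112": 0.118}
delta_vs_tc   = {"101": -0.287, "103": +0.157, "102": -0.247, "112": -0.282}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]      # e.g. §103: 55.7% - 15.7% = 40.0%
    print(f"§{statute}: examiner {rate:.1%}, TC avg ~{tc_avg:.1%} ({delta_vs_tc[statute]:+.1%})")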

Office Action (§103)

DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Continued Examination Under 37 CFR 1.114 A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 26, 2025 has been entered. Claim Notes The previous interpretation of claim 10 has been withdrawn in light of Applicant’s amendment. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-3, 7, 8, 11-14 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Anumula et al. (US 11,533,484) and Delp et al. (US 2021/0303916). Regarding claim 1, Anumula et al. discloses a system, comprising: a processor coupled to a memory that stores instructions executable by the processor (“In its most basic configuration, the operating environment 1200 typically includes at least one processing unit 1202 and memory 1204. Depending on the exact configuration and type of computing device, memory 1204 (instructions for encoding and/or optimizing as disclosed herein)” at col. 12, line 51) to: obtain a camera image to include a scene (“data from the camera sensor” at col. 3, line 39); identify, from non-camera sensor data, an area of interest in the scene (“The vehicle 110 may further comprise a processor configured to receive data from the multiple sensors 120 and to process the data before encoding the data” at col. 3, line 36; “The set of multiple sensors 120 may include a camera, a LIDAR, a radar, a time-of-flight device and other sensors and devices that may be used for observing the environment of the vehicle 110” at col. 3, line 31); identify, based on the non-camera sensor data, a first portion of the camera image that includes the area of interest and a second portion of the camera image that excludes the area of interest (“Each of the detectors 310, or alternatively a subset of the detectors, may generate an ROI probability map, for example, at pixel level. These probability maps may be converted, by a ROI adaption module 320, into a single ROI-id map at pixel-block level” at col. 
5, line 9; areas not identified as part of an ROI is therefore the second portion); and apply a first compression rule to the first portion of the camera image and a second compression rule to the second portion of the camera image, wherein the first compression rule is less lossy than the second compression rule (“For example, a pixel block within an ROI may be compressed less aggressively than a pixel block that is not within the ROI” at col. 5, line 49; “In step S550, a quantization parameter for the grouped pixels is determined. The quantization parameter for the grouped pixels of the ROI yields less compression than a quantization parameter of a non-ROI” at col. 8, line 19). Anumula et al. does not explicitly disclose that the area of interest is determined by forming a point group based on the non-camera sensor data, the point group including points that are determined by respective range and/or velocity measurements from one or more non-camera sensors, and that correspond to respective pixel coordinates. Delp et al. teaches a system in the same field of endeavor of object detection, comprising: a processor coupled to a memory that stores instructions executable by the processor (“The processing circuitry may be a single processor (or a single processor network) alternatingly executing software code corresponding to the generative and the non-generative image models, or the processing circuitry can be split into multiple hardware portions that have these respective functionalities” at paragraph 0029, last sentence) to: identify, based on the non-camera sensor data, a first portion of the camera image that includes the area of interest, wherein the area of interest is determined by forming a point group based on the non-camera sensor data (“Further explanation of how the clustering system 170 groups points and identifies clusters will be described in relation to examples that are illustrated in FIGS. 4-5. FIG. 4 illustrates an example 400 of how an object 405 (a pickup truck) is perceived by a LiDAR and subsequently clustered” at paragraph 0050, line 1), the point group including points that are determined by respective range and/or velocity measurements from one or more non-camera sensors (“The cell features are illustrated for purposes of discussion and include velocity indicators for each cell. Thus, the clustering system 170 produces the grid as shown at block 420 and processes the grid as shown at block 425. That is, the system 170 applies the clustering model 260 to the grid, which analyzes the cells for similarities. In this illustrated instance, the clustering model 260 connects all of the shown cells due to the common velocity among the cells, even though the cells have an apparent discontinuity from the separation in distance. Accordingly, the clustering system 170 assigns the cells to a single cluster, as shown at block 430” at paragraph 0051), and that correspond to respective pixel coordinates (“This volume of data can be further expanded through fusion with other data sources such as camera images, radar, and so on. As such, the points within the point cloud may include further attributes, such as intensity, reflectivity, RGB values, and so on” at paragraph 0019, line 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the point cell clustering as taught by Delp et al. in defining the ROI of Anumula et al. 
as it “improves the processing of point clouds to cluster points associated with entities” (Delp et al. at paragraph 0005, line 2). Regarding claim 12, Anumula et al. discloses a method, comprising: obtaining a camera image to include a scene (“data from the camera sensor” at col. 3, line 39); identifying, from non-camera sensor data, an area of interest in the scene (“The vehicle 110 may further comprise a processor configured to receive data from the multiple sensors 120 and to process the data before encoding the data” at col. 3, line 36; “The set of multiple sensors 120 may include a camera, a LIDAR, a radar, a time-of-flight device and other sensors and devices that may be used for observing the environment of the vehicle 110” at col. 3, line 31); identifying, based on the non-camera sensor data, a first portion of the camera image that includes the area of interest and a second portion of the camera image that excludes the area of interest (“Each of the detectors 310, or alternatively a subset of the detectors, may generate an ROI probability map, for example, at pixel level. These probability maps may be converted, by a ROI adaption module 320, into a single ROI-id map at pixel-block level” at col. 5, line 9; areas not identified as part of an ROI is therefore the second portion); and applying a first compression rule to the first portion of the camera image and a second compression rule to the second portion of the camera image, wherein the first compression rule is less lossy than the second compression rule (“For example, a pixel block within an ROI may be compressed less aggressively than a pixel block that is not within the ROI” at col. 5, line 49; “In step S550, a quantization parameter for the grouped pixels is determined. The quantization parameter for the grouped pixels of the ROI yields less compression than a quantization parameter of a non-ROI” at col. 8, line 19). Anumula et al. does not explicitly disclose that the area of interest is determined by forming a point group based on the non-camera sensor data, the point group including points that are determined by respective range and/or velocity measurements from one or more non-camera sensors, and that correspond to respective pixel coordinates. Delp et al. teaches a method in the same field of endeavor of object detection, comprising: identifying, based on the non-camera sensor data, a first portion of the camera image that includes the area of interest, wherein the area of interest is determined by forming a point group based on the non-camera sensor data (“Further explanation of how the clustering system 170 groups points and identifies clusters will be described in relation to examples that are illustrated in FIGS. 4-5. FIG. 4 illustrates an example 400 of how an object 405 (a pickup truck) is perceived by a LiDAR and subsequently clustered” at paragraph 0050, line 1), the point group including points that are determined by respective range and/or velocity measurements from one or more non-camera sensors (“The cell features are illustrated for purposes of discussion and include velocity indicators for each cell. Thus, the clustering system 170 produces the grid as shown at block 420 and processes the grid as shown at block 425. That is, the system 170 applies the clustering model 260 to the grid, which analyzes the cells for similarities. 
In this illustrated instance, the clustering model 260 connects all of the shown cells due to the common velocity among the cells, even though the cells have an apparent discontinuity from the separation in distance. Accordingly, the clustering system 170 assigns the cells to a single cluster, as shown at block 430” at paragraph 0051), and that correspond to respective pixel coordinates (“This volume of data can be further expanded through fusion with other data sources such as camera images, radar, and so on. As such, the points within the point cloud may include further attributes, such as intensity, reflectivity, RGB values, and so on” at paragraph 0019, line 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the point cell clustering as taught by Delp et al. in defining the ROI of Anumula et al. as it “improves the processing of point clouds to cluster points associated with entities” (Delp et al. at paragraph 0005, line 2). Regarding claim 17, Anumula et al. discloses an article, comprising: a non-transitory computer-readable media having instructions encoded thereon which, when executed by a processor coupled to at least one memory are operable (“In its most basic configuration, the operating environment 1200 typically includes at least one processing unit 1202 and memory 1204. Depending on the exact configuration and type of computing device, memory 1204 (instructions for encoding and/or optimizing as disclosed herein)” at col. 12, line 51) to: obtain a camera image to include a scene (“data from the camera sensor” at col. 3, line 39); identify, from non-camera sensor data, an area of interest in the scene (“The vehicle 110 may further comprise a processor configured to receive data from the multiple sensors 120 and to process the data before encoding the data” at col. 3, line 36; “The set of multiple sensors 120 may include a camera, a LIDAR, a radar, a time-of-flight device and other sensors and devices that may be used for observing the environment of the vehicle 110” at col. 3, line 31); identify, based on the non-camera sensor data, a first portion of the camera image that includes the area of interest and a second portion of the camera image that excludes the area of interest (“Each of the detectors 310, or alternatively a subset of the detectors, may generate an ROI probability map, for example, at pixel level. These probability maps may be converted, by a ROI adaption module 320, into a single ROI-id map at pixel-block level” at col. 5, line 9; areas not identified as part of an ROI is therefore the second portion); and apply a first compression rule to the first portion of the camera image and a second compression rule to the second portion of the camera image, wherein the first compression rule is less lossy than the second compression rule (“For example, a pixel block within an ROI may be compressed less aggressively than a pixel block that is not within the ROI” at col. 5, line 49; “In step S550, a quantization parameter for the grouped pixels is determined. The quantization parameter for the grouped pixels of the ROI yields less compression than a quantization parameter of a non-ROI” at col. 8, line 19). Anumula et al. 
does not explicitly disclose that the area of interest is determined by forming a point group based on the non-camera sensor data, the point group including points that are determined by respective range and/or velocity measurements from one or more non-camera sensors, and that correspond to respective pixel coordinates. Delp et al. teaches an article in the same field of endeavor of object detection, comprising: a non-transitory computer-readable media having instructions encoded thereon which, when executed by a processor coupled to at least one memory are operable (“The memory 210 is a random-access memory (RAM), read-only memory (ROM), a hard disk drive, a flash memory, or other suitable memory for storing the modules 220 and 230. The modules 220 and 230 are, for example, computer-readable instructions that, when executed by the processor 110, cause the processor 110 to perform the various functions disclosed herein.” at paragraph 0027, second to last sentence) to: identify, based on the non-camera sensor data, a first portion of the camera image that includes the area of interest, wherein the area of interest is determined by forming a point group based on the non-camera sensor data (“Further explanation of how the clustering system 170 groups points and identifies clusters will be described in relation to examples that are illustrated in FIGS. 4-5. FIG. 4 illustrates an example 400 of how an object 405 (a pickup truck) is perceived by a LiDAR and subsequently clustered” at paragraph 0050, line 1), the point group including points that are determined by respective range and/or velocity measurements from one or more non-camera sensors (“The cell features are illustrated for purposes of discussion and include velocity indicators for each cell. Thus, the clustering system 170 produces the grid as shown at block 420 and processes the grid as shown at block 425. That is, the system 170 applies the clustering model 260 to the grid, which analyzes the cells for similarities. In this illustrated instance, the clustering model 260 connects all of the shown cells due to the common velocity among the cells, even though the cells have an apparent discontinuity from the separation in distance. Accordingly, the clustering system 170 assigns the cells to a single cluster, as shown at block 430” at paragraph 0051), and that correspond to respective pixel coordinates (“This volume of data can be further expanded through fusion with other data sources such as camera images, radar, and so on. As such, the points within the point cloud may include further attributes, such as intensity, reflectivity, RGB values, and so on” at paragraph 0019, line 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize the point cell clustering as taught by Delp et al. in defining the ROI of Anumula et al. as it “improves the processing of point clouds to cluster points associated with entities” (Delp et al. at paragraph 0005, line 2). Regarding claim 2, Anumula et al. discloses a system wherein first compression rule operates to: decrease compression of the first portion of the camera image (“For example, a pixel block within an ROI may be compressed less aggressively than a pixel block that is not within the ROI” at col. 5, line 49; “In step S550, a quantization parameter for the grouped pixels is determined. The quantization parameter for the grouped pixels of the ROI yields less compression than a quantization parameter of a non-ROI” at col. 
8, line 19). Regarding claim 3, Anumula et al. discloses a system wherein first compression rule operates to: decrease compression of the first portion of the camera image (“For example, a pixel block within an ROI may be compressed less aggressively than a pixel block that is not within the ROI” at col. 5, line 49; “In step S550, a quantization parameter for the grouped pixels is determined. The quantization parameter for the grouped pixels of the ROI yields less compression than a quantization parameter of a non-ROI” at col. 8, line 19). The Anumula et al. and Delp et al. combination does not explicitly disclose decreasing compression nearby the first portion of the camera image. However, using Figure 2 of Anumula et al. as reference, if there are two ROI objects of importance adjacent to each other, such as two vehicles 204 in adjacent lanes ahead of the vehicle 110 of Anumula et al., both of these ROIs would be subject to lossless compression such as described in conjunction with claim 13 below. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to apply the same compression rule to both ROIs to ensure that both objects of interest are not subject to loss in quality. Regarding claim 7, Anumula et al. discloses a system wherein the first compression rule is to apply zero or negligible compression responsive to determination that the area of interest includes a moving vehicle, a moving pedestrian, a moving bicyclist (“Notably, detection of objects-of-interest, e.g. other vehicles, traffic lights, pedestrians, bicycles” at col. 4, line 45), a moving motorcycle, a moving natural object, or a moving animal. Regarding claim 8, Anumula et al. discloses a system wherein the first compression rule is to apply non-zero or non-negligible compression responsive to a determination that the area of interest includes a stationary vehicle, a stationary pedestrian, a stationary bicyclist, a stationary natural object (“Notably, detection of objects-of-interest, e.g. other vehicles, traffic lights, pedestrians, bicycles, lane/road markers, etc. in a video stream may be more important than objects like sky, vegetation or buildings etc. Therefore, not all of the objects in a video frame are of same importance and different compression/quality factors (depending upon the importance of the objects) may be chosen while compressing the video frame” at col. 4, line 45), or a stationary animal. Regarding claim 11, Anumula et al. discloses a system wherein the second compression rule operates to: apply lossy compression in the second portion of the camera image that includes free space (“Notably, detection of objects-of-interest, e.g. other vehicles, traffic lights, pedestrians, bicycles, lane/road markers, etc. in a video stream may be more important than objects like sky, vegetation or buildings etc. Therefore, not all of the objects in a video frame are of same importance and different compression/quality factors (depending upon the importance of the objects) may be chosen while compressing the video frame” at col. 4, line 45; “In step S550, a quantization parameter for the grouped pixels is determined. The quantization parameter for the grouped pixels of the ROI yields less compression than a quantization parameter of a non-ROI. In other words, a quantization parameter may be computed for each pixel-block to preserve target quality in the ROI regions and aggressive compression elsewhere to meet the target bitrate.” at col. 
8, line 19; non-ROI areas include background areas where there are no objects of interest, such as the sky region at numeral 207 in Figure 2). Regarding claim 13, Anumula et al. discloses a method further comprising: applying the first compression rule to include zero or negligible compression of the first portion of the camera image (“In step S550, a quantization parameter for the grouped pixels is determined. The quantization parameter for the grouped pixels of the ROI yields less compression than a quantization parameter of a non-ROI. In other words, a quantization parameter may be computed for each pixel-block to preserve target quality in the ROI regions and aggressive compression elsewhere to meet the target bitrate. For almost lossless compression, a high value is chosen for the quantization parameter, resulting in a smaller quantization step size” at col. 8, line 19). Regarding claim 14, the Anumula et al. and Delp et al. combination discloses a method as described in claim 12 above. The Anumula et al. and Delp et al. combination does not explicitly disclose applying the first compression rule to bring about zero or negligible compression of an area nearby the first portion of the camera image. However, using Figure 2 of Anumula et al. as reference, if there are two ROI objects of importance adjacent to each other, such as two vehicles 204 in adjacent lanes ahead of the vehicle 110 of Anumula et al., both of these ROIs would be subject to lossless compression such as described in conjunction with claim 13 above. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to apply the same compression rule to both ROIs to ensure that both objects of interest are not subject to loss in quality. Claim(s) 4-6, 15, 16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Anumula et al. and Delp et al. and further in view of Yonezawa (US 11,281,927). Regarding claims 4, 15 and 18, the Anumula et al. and Delp et al. combination discloses the elements of claims 1, 12 and 17 as described above. The Anumula et al. and Delp et al. combination does not explicitly disclose that the encoded instructions are additionally to: determine whether the first portion of the camera image includes a moving object or includes a stationary object. Yonezawa teaches a system, method and article in the same field of endeavor of ROI correlated variable quantization, wherein the encoded instructions are additionally to: determine whether the first portion of the camera image (“In step S530, processing regarding setting of a specific object ROI (addition or deletion of a specific object ROI) is performed” at col. 7, line 39) includes a moving object or includes a stationary object (“In step S540, the second detection unit 214 detects a moving object from the selected frame image. Detection of a moving object from the frame image may be performed for each frame, or may be performed every few frames. A moving object may be detected from a range using the entire selected frame image as the range, or may be detected from a range using the specific object ROI set by the region generation unit 213 as the range” at col. 9, line 51). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a moving object estimation as taught by Yonezawa in the system of the Anumula et al. and Delp et al. 
combination to prioritize particular motion objects as objects of interest (see Yonezawa at col. 11, lines 30-36 that states moving objects for the ROI have a lower QP compression QP). Therefore, non-motion objects can still be objects of interest but designated less important in accordance with the above cited disclosure of Anumula et al. Regarding claim 5, the Anumula et al. and Delp et al. combination discloses a system wherein the first compression rule is to apply a first level of decreased compression responsive to a determination that the first portion of the camera image indicates presence of an object, and wherein the first compression rule is to apply a second level of decreased compression responsive to a determination that the first portion of the camera image indicates presence of another object (“Notably, detection of objects-of-interest, e.g. other vehicles, traffic lights, pedestrians, bicycles, lane/road markers, etc. in a video stream may be more important than objects like sky, vegetation or buildings etc. Therefore, not all of the objects in a video frame are of same importance and different compression/quality factors (depending upon the importance of the objects) may be chosen while compressing the video frame” Anumula et al. at col. 4, line 45). The Anumula et al. and Delp et al. combination does not explicitly disclose determining presence of a moving object and presence of a stationary object to determine compression level. Yonezawa teaches a system in the same field of endeavor of ROI correlated variable quantization, wherein the encoded instructions are additionally to: determine whether the first portion of the camera image (“In step S530, processing regarding setting of a specific object ROI (addition or deletion of a specific object ROI) is performed” at col. 7, line 39) includes a moving object or includes a stationary object (“In step S540, the second detection unit 214 detects a moving object from the selected frame image. Detection of a moving object from the frame image may be performed for each frame, or may be performed every few frames. A moving object may be detected from a range using the entire selected frame image as the range, or may be detected from a range using the specific object ROI set by the region generation unit 213 as the range” at col. 9, line 51). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a moving object estimation as taught by Yonezawa in the system of the Anumula et al. and Delp et al. combination to prioritize particular motion objects as the most important objects of interest (see Yonezawa at col. 11, lines 30-36 that states moving objects for the ROI have a lower QP compression QP). Therefore, non-motion objects can still be objects of interest but designated less important in accordance with the above cited disclosure of Anumula et al. Regarding claims 6 and 19, the Anumula et al. and Delp et al. combination discloses a system and article wherein the encoded instructions are additionally to: apply zero or negligible compression to the first portion of the camera image responsive to a determination that the area of interest indicates presence of an object (“In step S550, a quantization parameter for the grouped pixels is determined. The quantization parameter for the grouped pixels of the ROI yields less compression than a quantization parameter of a non-ROI. 
In other words, a quantization parameter may be computed for each pixel-block to preserve target quality in the ROI regions and aggressive compression elsewhere to meet the target bitrate. For almost lossless compression, a high value is chosen for the quantization parameter, resulting in a smaller quantization step size” Anumula et al. at col. 8, line 19). The Anumula et al. and Delp et al. combination does not explicitly disclose determining presence of a moving object. Yonezawa teaches a system and article in the same field of endeavor of ROI correlated variable quantization, wherein the encoded instructions are additionally to: apply zero or negligible compression to the first portion of the camera image (“In step S560, for the specific object ROI including part or all of the moving object region, the compression encoding unit 215 sets “35” as “the qP value inside the ROI” at col. 11, line 30) responsive to a determination that the area of interest indicates presence of a moving object (“In step S540, the second detection unit 214 detects a moving object from the selected frame image. Detection of a moving object from the frame image may be performed for each frame, or may be performed every few frames. A moving object may be detected from a range using the entire selected frame image as the range, or may be detected from a range using the specific object ROI set by the region generation unit 213 as the range” at col. 9, line 51). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a moving object estimation as taught by Yonezawa in the system of the Anumula et al. and Delp et al. combination to prioritize particular motion objects as the most important objects of interest (see Yonezawa at col. 11, lines 30-36 that states moving objects for the ROI have a lower QP compression QP). Regarding claim 16, the Anumula et al. and Delp et al. combination discloses a method comprising: applying the first compression rule to include zero or negligible compression to the first portion of the camera image based on the first portion of the camera image including an object (“In step S550, a quantization parameter for the grouped pixels is determined. The quantization parameter for the grouped pixels of the ROI yields less compression than a quantization parameter of a non-ROI. In other words, a quantization parameter may be computed for each pixel-block to preserve target quality in the ROI regions and aggressive compression elsewhere to meet the target bitrate. For almost lossless compression, a high value is chosen for the quantization parameter, resulting in a smaller quantization step size” Anumula et al. at col. 8, line 19); and applying non-zero or non-negligible compression based on the first portion of the camera image including another object (“Notably, detection of objects-of-interest, e.g. other vehicles, traffic lights, pedestrians, bicycles, lane/road markers, etc. in a video stream may be more important than objects like sky, vegetation or buildings etc. Therefore, not all of the objects in a video frame are of same importance and different compression/quality factors (depending upon the importance of the objects) may be chosen while compressing the video frame” Anumula et al. at col. 4, line 45). The Anumula et al. and Delp et al. combination does not explicitly disclose determining presence of a moving object and presence of a stationary object to determine compression level. 
Yonezawa teaches a method in the same field of endeavor of ROI correlated variable quantization, wherein the encoded instructions are additionally to: determine whether the first portion of the camera image (“In step S530, processing regarding setting of a specific object ROI (addition or deletion of a specific object ROI) is performed” at col. 7, line 39) includes a moving object or includes a stationary object (“In step S540, the second detection unit 214 detects a moving object from the selected frame image. Detection of a moving object from the frame image may be performed for each frame, or may be performed every few frames. A moving object may be detected from a range using the entire selected frame image as the range, or may be detected from a range using the specific object ROI set by the region generation unit 213 as the range” at col. 9, line 51). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a moving object estimation as taught by Yonezawa in the system of the Anumula et al. and Delp et al. combination to prioritize particular motion objects as the most important objects of interest (see Yonezawa at col. 11, lines 30-36 that states moving objects for the ROI have a lower QP compression QP). Therefore, non-motion objects can still be objects of interest but designated less important in accordance with the above cited disclosure of Anumula et al. Regarding claim 20, the Anumula et al. and Delp et al. combination discloses an article wherein the encoded instructions are additionally operable to: apply non-zero or non-negligible compression to the first portion of the camera image responsive to determining that the first portion of the camera image includes an object (“Notably, detection of objects-of-interest, e.g. other vehicles, traffic lights, pedestrians, bicycles, lane/road markers, etc. in a video stream may be more important than objects like sky, vegetation or buildings etc. Therefore, not all of the objects in a video frame are of same importance and different compression/quality factors (depending upon the importance of the objects) may be chosen while compressing the video frame” Anumula et al. at col. 4, line 45; certain objects are therefore compressed at a relatively higher rate to important objects of interest). The Anumula et al. and Delp et al. combination does not explicitly disclose determining presence of a stationary object to determine compression level. Yonezawa teaches a method in the same field of endeavor of ROI correlated variable quantization, wherein the encoded instructions are additionally to: determine whether the first portion of the camera image (“In step S530, processing regarding setting of a specific object ROI (addition or deletion of a specific object ROI) is performed” at col. 7, line 39) includes a moving object or includes a stationary object (“In step S540, the second detection unit 214 detects a moving object from the selected frame image. Detection of a moving object from the frame image may be performed for each frame, or may be performed every few frames. A moving object may be detected from a range using the entire selected frame image as the range, or may be detected from a range using the specific object ROI set by the region generation unit 213 as the range” at col. 9, line 51). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a moving object estimation as taught by Yonezawa in the system of the Anumula et al. and Delp et al. combination to prioritize particular motion objects as the most important objects of interest (see Yonezawa at col. 11, lines 30-36 that states moving objects for the ROI have a lower QP compression QP). Therefore, non-motion objects can still be objects of interest but designated less important in accordance with the above cited disclosure of Anumula et al. Claim(s) 9 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Anumula et al. and Delp et al. and further in view of Chondro et al. (US 2020/0111225). The Anumula et al. and Delp et al. combination disclose a system wherein a camera to generate the camera image and sensors to generate the non-camera sensor data are mounted on a common platform (“FIG. 1 illustrates a system 100 including a vehicle 110, a set of multiple sensors 120 of the vehicle 110, another vehicle 130, and a cloud environment 140. The set of multiple sensors 120 may include a camera, a LIDAR, a radar, a time-of-flight device and other sensors and devices that may be used for observing the environment of the vehicle 110” Anumula et al. at col. 3, line 29). The Anumula et al. and Delp et al. combination does not explicitly disclose that the camera and sensors having at least partially overlapping fields-of-view. Chondro et al. teaches a system in the same field of endeavor of exterior vehicle monitoring, wherein a camera to generate the camera image and sensors to generate the non-camera sensor data are mounted on a common platform having at least partially overlapping fields-of-view (“With reference to FIG. 5 and FIG. 6A˜6F, the framework to be described would include a depth estimation apparatus that utilizes multiple types of sensing devices (e.g. a RGB camera and a LiDAR transducer illustrated in FIG. 6A) to perform depth estimations by using multiple algorithms for each type of sensing devices over the overlapping FOVs (as illustrated in FIG. 6B), wherein the FOV distance of the RGB camera is 100 meters and the FOV degree of the LiDAR sensor is 360 degrees. The multiple types of sensing devices may include a first type of sensor (e.g. RGB camera array 501 illustrated in FIG. 5) and a second type of sensor (e.g. LiDAR transducer array 502 illustrated in FIG. 5)” at paragraph 0041, line 8). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a configuration as taught by Condro et al. for the sensors of the Anumula et al. and Delp et al. combination to ensure that the ROI data from each sensor system is properly correlated for object detection. Claim(s) 10 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Anumula et al. and Delp et al. and further in view of Gayaka et al. (US 2022/0300001). The Anumula et al. and Delp et al. combination discloses a system wherein the non-camera sensor data comprises data from at least one of a LIDAR sensor and a radar sensor (“FIG. 1 illustrates a system 100 including a vehicle 110, a set of multiple sensors 120 of the vehicle 110, another vehicle 130, and a cloud environment 140. The set of multiple sensors 120 may include a camera, a LIDAR, a radar, a time-of-flight device and other sensors and devices that may be used for observing the environment of the vehicle 110” Anumula et al. 
at col. 3, line 29). The Anumula et al. and Delp et al. combination does not explicitly disclose a infrared sensor and an ultrasonic sensor. Gayaka et al. teaches a system in the same field of endeavor of obstacle detection, wherein the non-camera sensor data comprises data from a LIDAR sensor, a radar sensor, an infrared sensor, and an ultrasonic sensor (“Depth sensors such as ultrasonic sensors, optical sensors such as a TOF depth camera, LIDAR, radar, and so forth may provide depth data that is indicative of the presence or absence of objects in the physical space 102 within the FOV 110 of the depth sensor. For example, a sensor 134 such as a TOF depth camera may emit a pulse of infrared light and use a detected return time for reflected light to determine a distance between the sensor and the object that reflected the light” at paragraph 0067, line 1). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to utilize a configuration as taught by Gayaka et al. for the sensors of the Anumula et al. and Delp et al. combination to provide various depth information at particular resolutions for detection of the object data. Response to Arguments Summary of Remarks (@response page labeled 8): “Anumula discloses at most that ‘[e]ach of the detectors 310, or alternatively a subset of the detectors, may generate an ROI probability map, for example, at [the] pixel level.’ Col.5:9-11. Anumula’s ‘detectors,’ in the only described embodiment, are ‘trained machine learning (ML) based detectors, to detect one or more pre-defined objects-of-interest on video frames that are fed into the system.’ Col. 5:5-8. Thus, Anumula’s detectors generating an ROI probability map do so based on analyzing video camera data, and not with non-camera sensor data, and not by ‘including points that are determined by respective range and/or velocity measurements from one or more non-camera sensors.’” Examiner’s Response: This argument is moot in view of the newly cited Delp et al. reference. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATRINA R FUJITA whose telephone number is (571)270-1574. The examiner can normally be reached Monday - Friday 9:30-5:30 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached at 5712723638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/KATRINA R FUJITA/ Primary Examiner, Art Unit 2672
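
For readers outside the art, the claim 1 rejection above turns on a pipeline that groups non-camera sensor points by range and/or velocity, maps the group to pixel coordinates, and compresses the resulting region of interest (ROI) less aggressively than the rest of the image. The following is a minimal, hypothetical sketch of that kind of pipeline, not the claimed invention or the cited references' implementations; the identifiers, thresholds, and quality values are illustrative assumptions.

# Illustrative sketch (not the claimed or cited implementation) of the technique
# at issue in claim 1: form a point group from non-camera range/velocity
# measurements, map it to pixel coordinates, and compress the resulting ROI
# less aggressively than the rest of the image.

from dataclasses import dataclass

@dataclass
class SensorPoint:
    px: int         # pixel column the point projects to (camera fusion assumed)
    py: int         # pixel row
    range_m: float  # range measurement from a LiDAR/radar sensor
    vel_mps: float  # radial velocity measurement

def group_points(points, vel_tol=0.5, range_tol=2.0):
    """Greedy clustering: points with similar range and velocity join one group."""
    groups = []
    for p in points:
        for g in groups:
            ref = g[0]
            if abs(p.vel_mps - ref.vel_mps) <= vel_tol and abs(p.range_m - ref.range_m) <= range_tol:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

def roi_block_map(groups, img_w, img_h, block=16):
    """Mark every pixel block that contains at least one grouped point as ROI."""
    cols, rows = img_w // block, img_h // block
    roi = [[False] * cols for _ in range(rows)]
    for g in groups:
        for p in g:
            roi[min(p.py // block, rows - 1)][min(p.px // block, cols - 1)] = True
    return roi

def block_quality(is_roi, moving, q_roi_moving=100, q_roi=90, q_background=40):
    """First rule (less lossy) inside the ROI, second rule (more lossy) elsewhere;
    near-lossless quality when the grouped points indicate a moving object."""
    if is_roi:
        return q_roi_moving if moving else q_roi
    return q_background

# Tiny usage example with made-up measurements:
pts = [SensorPoint(120, 80, 22.1, 8.3), SensorPoint(124, 82, 22.4, 8.1),
       SensorPoint(400, 60, 55.0, 0.0)]
groups = group_points(pts)
roi = roi_block_map(groups, img_w=640, img_h=480)
q = block_quality(roi[5][7], moving=True)   # block containing the first point group
print(len(groups), q)

The moving versus stationary distinction in block_quality loosely mirrors the role the rejection assigns to Yonezawa's moving-object detection: a lower-compression setting for ROIs that contain a moving object.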

Prosecution Timeline

Feb 28, 2023: Application Filed
May 12, 2025: Non-Final Rejection — §103
Jul 15, 2025: Examiner Interview Summary
Jul 15, 2025: Applicant Interview (Telephonic)
Jul 28, 2025: Response Filed
Sep 29, 2025: Final Rejection — §103
Nov 26, 2025: Response after Non-Final Action
Dec 15, 2025: Request for Continued Examination
Jan 13, 2026: Response after Non-Final Action
Feb 06, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12597250: DETECTION OF PLANT DETRIMENTS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12582476: SYSTEMS FOR PLANNING AND PERFORMING BIOPSY PROCEDURES AND ASSOCIATED METHODS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585698: MULTIMEDIA FOCALIZATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586190: SYSTEM AND METHOD OF CLASSIFICATION OF BIOLOGICAL PARTICLES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12566341: PREDICTING SIZING AND/OR FITTING OF HEAD MOUNTED WEARABLE DEVICE (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
Grant Probability With Interview: 94% (+24.0%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 674 resolved cases by this examiner. Grant probability is derived from the career allow rate; a short sketch of that combination follows below.
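
A minimal sketch of how the projection appears to be assembled from the career figures shown earlier; capping the combined probability at 100% is an assumption about the tool's behavior.

# The projection combines the career allow rate with the interview lift.
# Capping the combined probability at 100% is an assumption.

career_allow_rate = 472 / 674                      # ~0.70 -> "70% Grant Probability"
interview_lift = 0.24                              # "+24.0%"
with_interview = min(career_allow_rate + interview_lift, 1.0)   # ~0.94 -> "94% With Interview"

print(f"base: {career_allow_rate:.0%}  with interview: {with_interview:.0%}")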
