Prosecution Insights
Last updated: April 19, 2026
Application No. 18/547,028

METHOD AND DEVICE FOR THE DETECTION AND DETERMINATION OF THE HEIGHT OF OBJECTS

Status: Non-Final OA (§103)
Filed: Aug 18, 2023
Examiner: CATTUNGAL, ROWINA J
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Continental Autonomous Mobility Germany GmbH
OA Round: 3 (Non-Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 75% (393 granted / 521 resolved) • above average, +17.4% vs TC avg
Interview Lift: +13.0% (moderate) • allow-rate difference between resolved cases with and without an interview
Avg Prosecution: 2y 6m typical timeline • 33 applications currently pending
Total Applications: 554 across all art units (career history)
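The headline figures above are simple arithmetic over the examiner's case history. A minimal sketch of how they fall out of the raw counts reported on this page (function names are ours, purely illustrative, not any product API):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: grants as a share of resolved cases."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point difference in allow rate with vs. without an interview."""
    return rate_with - rate_without

rate = allow_rate(393, 521)                    # 0.7543... -> shown as 75%
print(f"Career allow rate: {rate:.1%}")        # Career allow rate: 75.4%

# The page reports a +13.0 pp lift, so the with-interview figure is
# roughly baseline + lift: 75% + 13% = 88%, matching "With Interview: 88%".
print(f"With interview: {rate + 0.13:.0%}")            # With interview: 88%
print(f"Lift: {interview_lift(0.88, 0.75):+.1%}")      # Lift: +13.0%
```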

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)
Tech Center average is an estimate • Based on career data from 521 resolved cases
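Each delta above is just the examiner's rate minus the Tech Center average estimate; solving each pair back implies the same TC average estimate, 40.0%, for all four statutes. A quick check of that arithmetic:

```python
# Verify the "vs TC avg" deltas shown above: delta = examiner rate - TC average.
rates  = {"§101": 5.1, "§103": 54.5, "§102": 13.9, "§112": 10.2}
deltas = {"§101": -34.9, "§103": +14.5, "§102": -26.1, "§112": -29.8}
for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]        # 40.0 in every case
    print(f"{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}%")
```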

Office Action — §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to the RCE filed 12/29/2025, in which claims 1-12 are pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/29/2025 has been entered.

Response to Arguments

Applicant's arguments filed 12/29/2025 have been fully considered and are moot in view of new grounds of rejection relying on the teachings of Levandowski et al. (US 2020/0183395 A1) (IDS provided 12/05/2025).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 5-6, 9, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshida et al. (US 2016/0275359 A1) (US corresponding to WO 2015/098222 A1, IDS filed 08/15/2024) in view of Zhang et al. (US 2018/0101739 A1) and Levandowski et al. (US 2020/0183395 A1) (IDS provided 12/05/2025).

Regarding claim 1, Yoshida discloses a method for detection and determination of a height of objects by means of an environment detection system (Para[0124]-[0130] & Fig. 6 teach calculating a height of the specific photographic object), comprising a first environment detection sensor (Para[0048] teaches the information processing apparatus 100 acquires a photographic image from a camera 200 mounted on the vehicle) and a second environment detection sensor of a vehicle (Para[0048] & Fig. 1 teach distance information is obtained from the sensor 300), the method comprising:
- capturing an image using the camera (Para[0056] teaches the photographic image acquisition unit 101 acquires the photographic image photographed by the camera 200);
- capturing an environment representation using the second environment detection sensor (Para[0058] teaches a distance information acquisition unit 102 acquires the distance information indicating the distance to the object acquired by the sensor 300);
- carrying out, by a computing unit having a first input connected to an output of the first environment sensor and a second input connected to an output of the second environment sensor, object detection in the image (Para[0091] & Fig. 1 teach the image extraction unit 104 acquires the photographic image 400 and the distance information 500 associated by the matching point detection unit 103 and extracts the image of the photographic object (the front vehicle) of the creation target for the wire frame (S603));
- carrying out, by the computing unit, object detection in the environment representation of the second environment detection sensor (Para[0133] & Fig. 13 teach the distance calculation processing execution unit 105 acquires from the image extraction unit 104 the image of the target photographic object (the front vehicle) extracted by the image extraction unit 104 and operates the depth map processing exclusively on the image of the target photographic object (the front vehicle));
- measuring, by the computing unit, a distance from a detected object in the environment representation of the second environment detection sensor (Para[0100] & Fig. 10 teach using the distance information from the sensor 300 to predict the recognition range 803 suitable for the image of the front vehicle in the photographic image 400, scanning the predicted recognition range 803 on the photographic image 400, and extracting the image of the front vehicle);
- carrying out, by the computing unit, height determination of the detected object (Para[0124] teaches that after extracting the image of the target photographic object (S603), the image extraction unit 104 calculates the width and a height of the target photographic object (the front vehicle) (S604); Para[0125]-[0128] & Figs. 10-11 teach the image extraction unit 104 calculates only the height of the target photographic object; Para[0130] teaches S6033 in Fig. 11, calculating height).

Yoshida does not explicitly disclose wherein at least one of the environment detection sensors is a mono camera, the method comprising: capturing a mono image using the mono camera; object detection in the mono image; confirming, by the computing unit, a detected object by comparing the mono image and the second environment representation to determine whether a position of the detected object in the mono image corresponds to a position of the detected object in the second environment representation; and performing an automated driving operation for the vehicle based on the height of the detected object.

However, Zhang discloses wherein at least one of the environment detection sensors is a mono camera, the method comprising: capturing a mono image using the mono camera (Para[0040] & Fig. 8 teach the image component 802 is configured to obtain and/or store images from a camera of a vehicle; for example, the images may include video images captured by a monocular camera of a vehicle); object detection in the mono image (Para[0049] teaches the object detection component 812 is configured to detect objects within images obtained or stored by the image component 802; for example, the object detection component 812 may process each image to detect objects such as vehicles, pedestrians, animals, cyclists, road debris, road signs, barriers, or the like); and performing an automated driving operation for the vehicle based on the height of the detected object (Fig. 1 teaches automated driving/assistance system 102; Para[0038] teaches that based on the object distance and the object height, a control system of a vehicle, such as the automated driving/assistance system 102 of Fig. 1, may make driving and/or collision avoidance decisions).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Yoshida's method of extracting the image of a particular object, performing distance calculation selectively on only that image, and calculating an accurate height with Zhang's method of identifying image features in a frame, in order to provide a system that accurately navigates roads in a variety of driving environments, provides safety features, can eliminate user involvement entirely, and helps the driver park the vehicle without striking an obstacle.

Yoshida in view of Zhang does not explicitly disclose confirming, by the computing unit, a detected object by comparing the mono image and the second environment representation to determine whether a position of the detected object in the mono image corresponds to a position of the detected object in the second environment representation.

However, Levandowski discloses confirming, by the computing unit, a detected object by comparing the mono image and the second environment representation (Para[0073]-[0074] teach that RADAR sensors may include one or more radio components configured to detect objects/targets in an environment of the vehicle 104 and, in some embodiments, may determine a distance, position, and/or movement vector (e.g., angle, speed, etc.) associated with a target over time; the ultrasonic sensors may include one or more components configured to detect objects/targets in an environment of the vehicle 104 and may likewise determine a distance, position, and/or movement vector (e.g., angle, speed, etc.) associated with a target over time) to determine whether a position of the detected object in the mono image corresponds to a position of the detected object in the second environment representation (Para[0113] teaches the second node 304B may correspond to an object identifier or object detector configured to identify/detect and then output information associated with one or more objects (such as a location of the object and object information (size, distance, category, type, etc.)) from the image provided by the first node 304A, and each identified object may be associated with an object ID, where the object ID may be made accessible to one or more nodes; Para[0116] teaches the second node 304B may correspond to an object tracker configured to identify and then output one or more objects (such as a location of the object and object information (size, distance, category, type, etc.)) from the image provided by the first node 304A; Para[0131]-[0132] & Fig. 22A teach that upon detecting the object, the object detection node can automatically draw a box over the object(s) in the image, for example boxes 2206, 2210, 2214, 2218, and 2222 around objects 2204-2220; the image 2200 with only the boxes showing the location of the objects may be as shown in Fig. 22B; the operation shown in Fig. 22B may actually occur before identifying the objects in each box 2206-2222, which may be as shown in Fig. 22A; the ML model may then identify the object in each of the boxes, including identifying object 2216 in box 2218 as a second vehicle; and the object detection node can determine a center 2236 of the box, and thus the center of the object 2216 in the image 2200).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Yoshida in view of Zhang's method of identifying image features in a frame, performing distance calculation selectively on only the image of the particular extracted object, and calculating an accurate height with Levandowski's method of receiving image data from a sensor (208) of a vehicle and identifying an object based on the image data, in order to provide a system with features for autonomous and semi-autonomous control of one or more vehicles.

Regarding claim 2, Yoshida in view of Zhang and Levandowski discloses the method according to claim 1. Yoshida discloses wherein the height of the detected object is determined based on a height of the object in pixels in the image (Para[0117] teaches the image extraction unit 104 also calculates a height per pixel by the same ratio calculation; Para[0127], [0130] & Fig. 11 teach the image extraction unit 104 counts the number of pixels in the height direction in the extracted image of the front vehicle, and the height of the front vehicle is calculated by multiplying the counted number of pixels by the height per pixel), the measured distance of the object (Para[0131] teaches the distance calculation processing execution unit 105 calculates the distance to the closest point within the target photographic object (the front vehicle) (S605)), as well as a known angular resolution of the environment detection sensors (Para[0052] teaches the LIDAR measures, for example, as exemplified in Fig. 3, a distance to an object surrounding the vehicle by scanning a laser horizontally with about 0.4 degrees angular resolution over a wide range of 240 degrees). Yoshida does not explicitly disclose the height of the object in pixels in the mono image. However, Zhang discloses the height of the object in pixels in the mono image (Para[0041] teaches the feature component 804 is configured to detect image features within the images; the image features may include pixels located at high-contrast boundaries, locations with high-frequency content, or the like; Para[0050] teaches the object detection component 812 also determines a location for the object, such as a two-dimensional location within an image frame or an indication of which pixels correspond to the object; Para[0051] teaches determining a height of one or more objects based on corresponding feature points). Motivation to combine as indicated in claim 1.
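For orientation, the height determination mapped above for claim 2 combines three inputs: the object's height in pixels, the measured distance, and the sensors' known angular resolution. A minimal sketch of the small-angle geometry behind that combination (our own illustration of the relationship; the function name and example numbers are hypothetical, not taken from Yoshida or the application):

```python
import math

def object_height_m(pixel_height: int,
                    distance_m: float,
                    deg_per_pixel: float) -> float:
    """Metric height subtended by `pixel_height` pixels at `distance_m`.

    Each pixel subtends `deg_per_pixel` degrees vertically, so the object
    spans an angle of pixel_height * deg_per_pixel, and its height follows
    from h = d * tan(angle).
    """
    angle_rad = math.radians(pixel_height * deg_per_pixel)
    return distance_m * math.tan(angle_rad)

# Example: a vehicle spanning 60 pixels at 30 m, with 0.05 deg/pixel
# vertical resolution, comes out at roughly 1.6 m tall.
print(f"{object_height_m(60, 30.0, 0.05):.2f} m")
```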
Regarding claim 5, Yoshida in view of Zhang and Levandowski discloses the method according to claim 1. Yoshida further discloses wherein the object detection for one of the environment detection sensors specifies a region of interest for the object detection for the other environment detection sensor (Para[0100] & Figs. 10-11 teach using the distance information from the sensor 300 to predict the recognition range 803 suitable for the image of the front vehicle in the photographic image 400, scanning the predicted recognition range 803 on the photographic image 400, and extracting the image of the front vehicle).

Regarding claim 6, Yoshida discloses an environment detection system for a vehicle, comprising a first environment detection sensor (Para[0048] teaches the information processing apparatus 100 acquires a photographic image from a camera 200 mounted on the vehicle) and a second environment detection sensor (Para[0048] & Fig. 1 teach distance information is obtained from the sensor 300) having a determined angular resolution (Para[0052] teaches the LIDAR measures, for example, as exemplified in Fig. 3, a distance to an object surrounding the vehicle by scanning a laser horizontally with about 0.4 degrees angular resolution over a wide range of 240 degrees), and a computing unit (Fig. 1), wherein at least the first environment detection sensor is configured as a camera, wherein an image is captured using the camera (Para[0056] teaches the photographic image acquisition unit 101 acquires the photographic image photographed by the camera 200) and a further environment representation is captured using the second environment detection sensor (Para[0058] teaches a distance information acquisition unit 102 acquires the distance information indicating the distance to the object acquired by the sensor 300), wherein the computing unit includes a first input connected to an output of the first environment sensor and a second input connected to an output of the second environment sensor, wherein the computing unit is configured to detect an object in the image as well as in the environment representation of the second environment detection sensor (Para[0091] & Fig. 1 teach the image extraction unit 104 acquires the photographic image 400 and the distance information 500 associated by the matching point detection unit 103 and extracts the image of the photographic object (the front vehicle) of the creation target for the wire frame (S603)), and wherein the computing unit is further configured to carry out distance determination (Para[0100] & Fig. 10 teach a second method that uses the distance information from the sensor 300, predicts the recognition range 803 suitable for the image of the front vehicle in the photographic image 400, scans the predicted recognition range 803 on the photographic image 400, and extracts the image of the front vehicle) as well as height determination of the object (Para[0124] teaches that after extracting the image of the target photographic object (S603), the image extraction unit 104 calculates the width and a height of the target photographic object (the front vehicle) (S604); Para[0125]-[0128] & Figs. 10-11 teach the image extraction unit 104 calculates only the height of the target photographic object; Para[0130] teaches S6033 in Fig. 11, calculating height).

Yoshida does not explicitly disclose wherein a mono image is captured using the camera; wherein the computing unit is configured to detect an object in the mono image; wherein the computing unit is further configured to, after the detection of an object in the mono image as well as in the environment representation, confirm a detected object by comparing the mono image and the second environment representation to determine whether a position of the detected object in the mono image corresponds to a position of the detected object in the environment representation of the second environment detection sensor; as well as height determination of the object for use in performing an automated driving operation of the vehicle.

However, Zhang discloses wherein a mono image is captured using the camera (Para[0040] & Fig. 8 teach the image component 802 is configured to obtain and/or store images from a camera of a vehicle; for example, the images may include video images captured by a monocular camera of a vehicle); wherein the computing unit is configured to detect an object in the mono image (Para[0049] teaches the object detection component 812 is configured to detect objects within images obtained or stored by the image component 802; for example, the object detection component 812 may process each image to detect objects such as vehicles, pedestrians, animals, cyclists, road debris, road signs, barriers, or the like); as well as height determination of the object for use in performing an automated driving operation of the vehicle (Fig. 1 teaches automated driving/assistance system 102; Para[0038] teaches that based on the object distance and the object height, a control system of a vehicle, such as the automated driving/assistance system 102 of Fig. 1, may make driving and/or collision avoidance decisions).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Yoshida's method of extracting the image of a particular object, performing distance calculation selectively on only that image, and calculating an accurate height with Zhang's method of identifying image features in a frame, in order to provide a system that accurately navigates roads in a variety of driving environments, provides safety features, can eliminate user involvement entirely, and helps the driver park the vehicle without striking an obstacle.

Yoshida in view of Zhang does not explicitly disclose wherein the computing unit is further configured to, after the detection of an object in the mono image as well as in the environment representation, confirm a detected object by comparing the mono image and the second environment representation to determine whether a position of the detected object in the mono image corresponds to a position of the detected object in the environment representation of the second environment detection sensor.

However, Levandowski discloses wherein the computing unit is further configured to, after the detection of an object in the mono image as well as in the environment representation (Para[0073]-[0074] teach that RADAR sensors may include one or more radio components configured to detect objects/targets in an environment of the vehicle 104 and, in some embodiments, may determine a distance, position, and/or movement vector (e.g., angle, speed, etc.) associated with a target over time; the ultrasonic sensors may include one or more components configured to detect objects/targets in an environment of the vehicle 104 and may likewise determine a distance, position, and/or movement vector (e.g., angle, speed, etc.) associated with a target over time), confirm a detected object by comparing the mono image and the second environment representation to determine whether a position of the detected object in the mono image corresponds to a position of the detected object in the environment representation of the second environment detection sensor (Para[0113] teaches the second node 304B may correspond to an object identifier or object detector configured to identify/detect and then output information associated with one or more objects (such as a location of the object and object information (size, distance, category, type, etc.)) from the image provided by the first node 304A, and each identified object may be associated with an object ID, where the object ID may be made accessible to one or more nodes; Para[0116] teaches the second node 304B may correspond to an object tracker configured to identify and then output one or more objects (such as a location of the object and object information (size, distance, category, type, etc.)) from the image provided by the first node 304A; Para[0131]-[0132] & Fig. 22A teach that upon detecting the object, the object detection node can automatically draw a box over the object(s) in the image, for example boxes 2206, 2210, 2214, 2218, and 2222 around objects 2204-2220; the image 2200 with only the boxes showing the location of the objects may be as shown in Fig. 22B; the operation shown in Fig. 22B may actually occur before identifying the objects in each box 2206-2222, which may be as shown in Fig. 22A; the ML model may then identify the object in each of the boxes, including identifying object 2216 in box 2218 as a second vehicle; and the object detection node can determine a center 2236 of the box, and thus the center of the object 2216 in the image 2200).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Yoshida in view of Zhang's method of identifying image features in a frame, performing distance calculation selectively on only the image of the particular extracted object, and calculating an accurate height with Levandowski's method of receiving image data from a sensor (208) of a vehicle and identifying an object based on the image data, in order to provide a system with features for autonomous and semi-autonomous control of one or more vehicles.
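The confirmation step the rejection maps to Levandowski amounts to a position-correspondence check between the camera detection and the second sensor's detection. A minimal sketch of one common way to express such a check, assuming the second sensor's detection has already been projected into the image frame (the box format, IoU threshold, and function names are our assumptions, not details from any cited reference):

```python
# (x_min, y_min, x_max, y_max) in shared image coordinates
Box = tuple[float, float, float, float]

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def confirm(camera_box: Box, projected_sensor_box: Box,
            threshold: float = 0.5) -> bool:
    """Confirm a detection when the camera box and the second sensor's
    detection (projected into the same image frame) overlap sufficiently."""
    return iou(camera_box, projected_sensor_box) >= threshold

print(confirm((100, 50, 180, 120), (105, 55, 185, 125)))  # True (IoU ~0.77)
```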
Regarding claim 9, Yoshida in view of Zhang and Levandowski discloses the environment detection system according to claim 6. Yoshida further discloses wherein the height of the detected object is determined based on a height of the object in pixels in the image (Para[0124] teaches that after extracting the image of the target photographic object (S603), the image extraction unit 104 calculates the width and a height of the target photographic object (the front vehicle) (S604)), the determined distance of the object (Para[0131] teaches the distance calculation processing execution unit 105 calculates the distance to the closest point within the target photographic object (the front vehicle) (S605)), as well as a known angular resolution of the environment detection sensors (Para[0052] teaches the LIDAR measures, for example, as exemplified in Fig. 3, a distance to an object surrounding the vehicle by scanning a laser horizontally with about 0.4 degrees angular resolution over a wide range of 240 degrees). Yoshida does not explicitly disclose the height of the object in pixels in the mono image. However, Zhang discloses the height of the object in pixels in the mono image (Para[0041] teaches the feature component 804 is configured to detect image features within the images; the image features may include pixels located at high-contrast boundaries, locations with high-frequency content, or the like; Para[0050] teaches the object detection component 812 also determines a location for the object, such as a two-dimensional location within an image frame or an indication of which pixels correspond to the object; Para[0051] teaches determining a height of one or more objects based on corresponding feature points). Motivation to combine as indicated in claim 6.

Regarding claim 12, Yoshida in view of Zhang and Levandowski discloses the environment detection system according to claim 6. Yoshida further discloses wherein the object detection for one of the environment detection sensors specifies a region of interest for the object detection for the other environment detection sensor (Para[0100] & Figs. 10-11 teach using the distance information from the sensor 300 to predict the recognition range 803 suitable for the image of the front vehicle in the photographic image 400, scanning the predicted recognition range 803 on the photographic image 400, and extracting the image of the front vehicle).

8. Claims 3 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshida et al. (US 2016/0275359 A1) (corresponding to WO 2015/098222 A1, IDS filed 08/15/2024) in view of Zhang et al. (US 2018/0101739 A1) and Levandowski et al. (US 2020/0183395 A1) (IDS provided 12/05/2025), in further view of Chen et al. (CN 112912895 B) (machine translation attached).

Regarding claim 3, Yoshida in view of Zhang and Levandowski discloses the method according to claim 1. Zhang discloses a mono image (Para[0040] & Fig. 8 teach the image component 802 is configured to obtain and/or store images from a camera of a vehicle; for example, the images may include video images captured by a monocular camera of a vehicle). Yoshida in view of Zhang and Levandowski does not explicitly disclose wherein the object detection is carried out in the image by a semantic segmentation based on a trained convolutional neural network which is part of the computing unit.

However, Chen discloses wherein the object detection is carried out in the image by a semantic segmentation based on a trained convolutional neural network which is part of the computing unit (Figure 6(b) & Para[0077]-[0080] show a schematic diagram of the effect of each pixel block after semantic segmentation of the IPM; semantic segmentation is a typical computer vision problem that involves taking raw data (such as flat images) as input and transforming it into masks with highlighted regions of interest, i.e., dividing each pixel of the image into corresponding categories; exemplarily, the neural network adopted for semantic segmentation uses an encoder-decoder structure). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Yoshida in view of Zhang and Levandowski's method of identifying image features in a frame, performing distance calculation selectively on only the image of the particular extracted object, and calculating an accurate height with the semantic segmentation method of Chen, in order to provide a system that determines the distance between the vehicle and the object corresponding to each pixel block.

Regarding claim 10, Yoshida in view of Zhang and Levandowski discloses the environment detection system according to claim 6. Zhang discloses a mono image (Para[0040] & Fig. 8 teach the image component 802 is configured to obtain and/or store images from a camera of a vehicle; for example, the images may include video images captured by a monocular camera of a vehicle). Yoshida in view of Zhang and Levandowski does not explicitly disclose wherein the object detection is carried out in the image by a semantic segmentation based on a trained convolutional neural network that is part of the computing unit. However, Chen discloses wherein the object detection is carried out in the image by a semantic segmentation based on a trained convolutional neural network that is part of the computing unit (Figure 6(b) & Para[0077]-[0080], as cited for claim 3). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Yoshida in view of Zhang and Levandowski's method with the semantic segmentation method of Chen, for the same reasons given for claim 3.
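Chen is cited for carrying out object detection by semantic segmentation with a trained encoder-decoder convolutional network. A toy sketch of that architecture shape, assigning each pixel a class (layer sizes and the class count are arbitrary illustrative choices, not taken from Chen or from the application):

```python
import torch
from torch import nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder for per-pixel classification."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(            # downsample, grow channels
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # upsample back to input size
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, num_classes, 2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))     # per-pixel class logits

net = TinySegNet()
image = torch.randn(1, 3, 64, 96)                # one RGB frame
mask = net(image).argmax(dim=1)                  # one class label per pixel
print(mask.shape)                                # torch.Size([1, 64, 96])
```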
9. Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshida et al. (US 2016/0275359 A1) (corresponding to WO 2015/098222 A1, IDS filed 08/15/2024) in view of Zhang et al. (US 2018/0101739 A1) and Levandowski et al. (US 2020/0183395 A1) (IDS provided 12/05/2025), in further view of Fujimoto et al. (JP 2010-271143) (machine translation attached).

Regarding claim 7, Yoshida in view of Zhang and Levandowski discloses the environment detection system according to claim 6. Yoshida further discloses that the second environment detection sensor is a stereo camera, a radar sensor, or a lidar sensor (Para[0051] teaches the sensor 300 is, for example, a LIDAR (Light Detection And Ranging)). Yoshida in view of Zhang and Levandowski does not explicitly disclose wherein the first environment detection sensor is a telephoto camera. However, Fujimoto discloses wherein the first environment detection sensor is a telephoto camera (Para[0016] teaches that usually a stereo camera for short-distance shooting is provided with a wide-angle lens, and a stereo camera for long-distance shooting is provided with a relatively telephoto lens). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Yoshida in view of Zhang and Levandowski's method of identifying image features in a frame, performing distance calculation selectively on only the image of the particular extracted object, and calculating an accurate height with Fujimoto's distance measuring apparatus using a stereo camera, in order to improve the distance measurement accuracy for long-distance obstacles by triangulation using a stereo camera.

Regarding claim 8, Fujimoto further discloses the environment detection system according to claim 7, wherein if the second environment detection sensor is configured as a stereo camera (Para[0016] teaches a stereo camera system), the telephoto camera is a part of the stereo camera (Para[0016] teaches that usually a stereo camera for short-distance shooting is provided with a wide-angle lens, and a stereo camera for long-distance shooting is provided with a relatively telephoto lens). Motivation to combine as indicated in claim 7.

Conclusion

10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROWINA J CATTUNGAL, whose telephone number is (571) 270-5922. The examiner can normally be reached Monday-Thursday, 7:30am-6pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Pendleton, can be reached at (571) 272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROWINA J CATTUNGAL/
Primary Examiner, Art Unit 2425

Prosecution Timeline

Aug 18, 2023
Application Filed
Aug 18, 2023
Response after Non-Final Action
Apr 30, 2025
Non-Final Rejection — §103
Jul 25, 2025
Response Filed
Sep 05, 2025
Final Rejection — §103
Oct 27, 2025
Interview Requested
Nov 12, 2025
Examiner Interview Summary
Nov 12, 2025
Examiner Interview (Telephonic)
Dec 08, 2025
Response after Non-Final Action
Dec 29, 2025
Request for Continued Examination
Jan 14, 2026
Response after Non-Final Action
Jan 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604092
AUTOMATED DEVICE FOR DRILL CUTTINGS IMAGE ACQUISITION
2y 5m to grant • Granted Apr 14, 2026
Patent 12604076
ENDOSCOPE SYSTEM, CONTROL METHOD, AND PROGRAM
2y 5m to grant • Granted Apr 14, 2026
Patent 12604036
METHOD AND APPARATUS OF ENCODING/DECODING IMAGE DATA BASED ON TREE STRUCTURE-BASED BLOCK DIVISION
2y 5m to grant • Granted Apr 14, 2026
Patent 12604037
IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS
2y 5m to grant • Granted Apr 14, 2026
Patent 12604038
IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS
2y 5m to grant • Granted Apr 14, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 88% (+13.0%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 521 resolved cases by this examiner. Grant probability derived from career allow rate.
