Prosecution Insights
Last updated: April 19, 2026
Application No. 18/405,841

REVERSE DISPARITY ERROR CORRECTION

Status: Non-Final Office Action under §103
Filed: Jan 05, 2024
Examiner: JAMES, DOMINIQUE NICOLE
Art Unit: 2666
Tech Center: 2600 (Communications)
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (16 granted / 21 resolved), above average (+14.2% vs Tech Center avg)
Interview Lift: +38.5% for resolved cases with an interview
Typical Timeline: 3y 4m average prosecution; 27 applications currently pending
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 19.5% (-20.5% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 14.6% (-25.4% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

TC avg = Tech Center average estimate. Based on career data from 21 resolved cases.

Office Action

§103
DETAILED ACTION

This action is in response to the application filed on January 05, 2024. Claims 1-20 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-4, 16, and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tovchigrechko et al., US 11288543, in view of Xiong et al., US 20240257475.

Regarding claim 1, Tovchigrechko teaches an apparatus for processing one or more images, comprising (see Tovchigrechko, Col 5, Lines 62-66, “The camera of the depth refining system 100 may be configured to capture one or more images 110”): one or more memories configured to store the one or more images; and one or more processors coupled to the one or more memories and configured to (see Tovchigrechko, Col 19, Lines 37-47, “In particular embodiments, processor 1002 includes hardware for executing instructions, such as those making up a computer program. 
As an example and not by way of limitation, to execute instructions, processor 1002 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 1004, or storage 1006; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 1004, or storage 1006.”): obtain first disparity information associated with a current image of the one or more images (see Tovchigrechko, Fig. 3, and Col 9, Lines 64-67, “The depth estimation unit of the training module 300 receives the captured frame N 310 from the camera and estimates depth measurements 312 of the frame N 310,” depth measurements of frame N 310 is considered to be first disparity information and frame N 310 is considered to be a current image); warp the current image based on the first disparity information to obtain an estimated previous image (see Tovchigrechko, Fig. 3, Col, Lines, “The reprojection unit reprojects 334 the frame N 310 based on the refined depth measurements 320 for the frame N 310 and the pose shift 332 between the frame N−1 350 and the frame N 310, and generates an estimated frame N−1 330,” reprojection unit reprojects 334 is considered to be warp the current image based on first disparity information; estimated frame N-1 330 is considered to be an estimated previous image); determine a confidence map (see Tovchigrechko, Col 14, Lines 31-36, “In particular embodiments, the confidence scores may be associated with each pixel to generate a corresponding per-pixel confidence map for the depth measurements”) and apply the confidence map to the first disparity information to generate updated first disparity information (see Tovchigrechko, Col 6, Lines 57-64, “one or more of the images 110 from which the depth measurements 124 are computed (e.g., the left image of the stereo image pair used for computing depth 124), and the associated confidence scores 126 (e.g., which may be represented by a corresponding confidence map),” 
and Col 7, Lines 3-11, “For example, the machine-learning model 130 may refine depth measurements 124 which their confidence scores 126 are lower than a specific threshold. By doing so, the machine-learning model 130 can avoid generating a large number of artifacts, and therefore, a smaller convolutional neural network can be applied to the machine-learning model 130 which enables real-time updates.”). Tovchigrechko does not expressly teach determine a confidence map associated with a confidence of the first disparity information based on a difference associated with the estimated previous image. However, Xiong, in a similar invention in the same field of endeavor, teaches determine a confidence map associated with a confidence of the first disparity information based on a difference associated with the estimated previous image (see Xiong, Paragraph [0080], “To check for spatial consistency, a right image frame feature estimation operation 410 is used to generate predicted features for the right image frame 304 based on the disparity map and the actual features generated for the left image frame 302. A feature comparison operation 412 compares the actual features for the right image frame 304 (as generated by the feature detection and extraction operation 406) with the predicted features for the right image frame 304 (as generated by the feature estimation operation 410) in order to identify the consistencies of the actual and predicted features for the right image frame 304. For instance, the feature comparison operation 412 may identify differences between the actual and predicted features for the right image frame 304. 
The identified differences are used by a confidence map generation operation 414 to create a confidence map 416,” a confidence map 416 is created based on differences between the actual and predicted features for the right image frame 304, which is considered to be associated with a confidence of the first disparity information based on a difference associated with the estimated previous image; the disparity map and actual features generated for the left image frame 302 are considered to be first disparity information). The combination of Tovchigrechko and Xiong are analogous art because they are both in the same field of endeavor of estimating depth in images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to check for spatial consistency by comparing actual features with predicted features, where the identified differences are used by a confidence map generation to create a confidence map, as taught in the method of Xiong in the system of Tovchigrechko to generate a refined depth map (see Xiong Abstract).

Regarding claim 2, Tovchigrechko in view of Xiong further teaches the apparatus of claim 1, wherein the first disparity information comprises at least one of a first optical flow information estimating a first movement of a first feature to a first destination location in the current image or depth information representing a depth of the first feature (see Tovchigrechko, Col 6, Lines 23-28, “The depth refining system 100 detects features of object depicted in the images 110 and estimate their depths 124 using stereo-depth estimation techniques (whether passive or active, such as with the assistance of structured light patterns)”). The rationale of claim 1 has been applied herein. 
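The operations mapped for claim 1 (warp the current frame by its disparity to estimate the previous frame, score the estimate against the actual previous frame, and mask out low-confidence disparity) can be sketched as follows. This is a minimal NumPy illustration for 1-D horizontal disparity on grayscale frames, not the implementation of any cited reference; the function names, the exponential confidence model, and the 0.5 threshold are all illustrative assumptions.

```python
import numpy as np

def estimate_previous_image(current, disparity):
    """Warp the current image along per-pixel horizontal disparity to
    synthesize an estimate of the previous image. Simple nearest-neighbor
    gather; out-of-bounds sample locations are clamped to the image edge."""
    h, w = current.shape
    cols = np.clip(np.arange(w)[None, :] + np.round(disparity).astype(int), 0, w - 1)
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    return current[rows, cols]

def confidence_map(previous, estimated_previous, scale=0.1):
    """Per-pixel confidence in [0, 1] that decays with the photometric
    difference between the actual and estimated previous image."""
    diff = np.abs(previous.astype(float) - estimated_previous.astype(float))
    return np.exp(-scale * diff)

def apply_confidence(disparity, confidence, threshold=0.5):
    """Invalidate (zero out) disparity values whose confidence falls below
    the threshold, yielding updated first disparity information."""
    updated = disparity.astype(float).copy()
    updated[confidence < threshold] = 0.0
    return updated
```

For instance, a pixel whose warped value disagrees badly with the actual previous frame receives a near-zero confidence, and its disparity is dropped from the updated map.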
Regarding claim 3, Tovchigrechko in view of Xiong further teaches the apparatus of claim 1, wherein the one or more processors are configured to determine the difference between a previous image and the estimated previous image (see Tovchigrechko, Col 10, Line 65 - Col 11, Line 4, the loss function unit compares the estimated frame N−1 330 to the captured frame N−1 350, and calculates a first loss 352 between the estimated frame N−1 330 and the captured frame N−1 350 based on a loss function). The rationale of claim 1 has been applied herein.

Regarding claim 4, Tovchigrechko in view of Xiong further teaches the apparatus of claim 1, wherein the confidence map comprises a first region corresponding to a first feature that is valid in the first disparity information (see Tovchigrechko, Col 6, Lines 31-38, “a corner of the table may be one of the detected features which have a higher confidence score, e.g., a reliable feature for measuring depth … a corresponding confidence map of the object might not be for every pixel. For example, a detected feature may be a region consisting of a number of pixels”). The rationale of claim 1 has been applied herein.

Regarding claim 16, Tovchigrechko in view of Xiong further teaches the apparatus of claim 1, further comprising one or more cameras configured to capture the one or more images (see Tovchigrechko, Col 6, Lines 18-23, “The depth refining system 100 receives one or more images 110 captured by one or more cameras and processes the one or more images 110 to generate depth measurements 124 of objects captured in the images 110”). The rationale of claim 1 has been applied herein.

As per claim 19, claim 19 claims a method of processing one or more images by an image capturing device comprising the same limitations as claim 1. Therefore, the rejection and rationale are analogous to that made in claim 1. 
Tovchigrechko further teaches, in Col 6, Lines 18-23, “The depth refining system 100 receives one or more images 110 captured by one or more cameras and processes the one or more images 110 to generate depth measurements 124 of objects captured in the images 110.”

As per claim 20, claim 20 claims the same limitations as claim 2 and is dependent on a similarly rejected independent claim. Therefore, the rejection and rationale are analogous to that made in claim 2.

Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tovchigrechko et al., US 11288543, in view of Xiong et al., US 20240257475, in view of Zatzarinni et al., US 20200327686.

Regarding claim 5, Tovchigrechko in view of Xiong does not expressly teach the apparatus of claim 1, wherein the confidence map comprises a first region corresponding to a first feature that is a false positive in the first disparity information. However, Zatzarinni, in a similar invention in the same field of endeavor, teaches wherein the confidence map comprises a first region corresponding to a first feature that is a false positive in the first disparity information (see Zatzarinni, Paragraph [0017], “at least some portion of the depth information contained in the depth map that is incorrect (e.g., reflects an inaccurate indication of the depth or distance) may be associated with a high confidence level in the confidence map (e.g., a false positive).”). The combination of Tovchigrechko, Xiong, and Zatzarinni are analogous art because they are all in the same field of endeavor of estimating depth in images. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, for a portion of the depth information contained in the depth map that is incorrect to be associated with a high confidence level in the confidence map, creating a false positive, as taught in the system of Zatzarinni, in the system of Tovchigrechko in view of Xiong, to determine an enhanced confidence map for the depth map (see Zatzarinni Abstract).

Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tovchigrechko et al., US 11288543, in view of Xiong et al., US 20240257475, in view of Zatzarinni et al., US 20200327686, in view of Pohl et al., US 20190051007.

Regarding claim 6, Tovchigrechko in view of Xiong in view of Zatzarinni does not expressly teach the apparatus of claim 5, wherein the one or more processors are configured to: remove the first disparity information to generate the updated first disparity information. However, Pohl, in a similar invention in the same field of endeavor, teaches wherein the one or more processors are configured to: remove the first disparity information to generate the updated first disparity information (see Pohl, Paragraph [0035], “depth map modifier 130 receives a depth map 300 (e.g., a first depth map) from the depth sensor 128 and generates a new or updated depth map 302 (e.g., a second depth map) having fewer pixels than the depth map 300 … the original depth map 300 may be modified by removing certain pixels and/or altering the pixel information (e.g., a distance value associated with a pixel)”). The combination of Tovchigrechko, Xiong, Zatzarinni, and Pohl are analogous art because they are all in the same field of endeavor of analyzing depth maps. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to generate a new or updated depth map by removing certain pixels as taught in the method of Pohl in the system of Tovchigrechko in view of Xiong in view of Zatzarinni to reduce depth map size in collision avoidance systems (see Pohl Paragraph [0001]).

Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tovchigrechko et al., US 11288543, in view of Xiong et al., US 20240257475, in view of Hsu et al., US 20230177938.

Regarding claim 7, Tovchigrechko in view of Xiong does not expressly teach the apparatus of claim 1, wherein the confidence map is determined based on a first threshold at a first time, and wherein the confidence map is determined based on a second threshold at a second time after the first time. However, Hsu, in a similar invention in the same field of endeavor, teaches wherein the confidence map is determined based on a first threshold at a first time, and wherein the confidence map is determined based on a second threshold at a second time after the first time (see Hsu, Paragraph [0028], “analyze a plurality of current images from T1 to T10 to generate a plurality of smoke confidence maps, and carry out pixel confidence value determinations on the plurality of smoke confidence maps to respectively calculate smoke scores Yn1 to Yn10 for the timings T1 to T10. 
For example, at the timing T1, Yn1=0, so a smoke alarm is not given, and if the processor 110 determines that the number of smoke pixels each having a confidence value greater than the pixel confidence threshold in the smoke confidence map is increased from the timing T1 to the timing T2, that is, the number of smoke pixels in the smoke confidence map for an image corresponding to the timing T2 (the current image) is more than that for an image corresponding to the timing T1 (the previous image), the processor 110 increases the smoke score Yn1 to the smoke score Yn2, but at the moment, Yn2 does not reach a smoke score threshold Ys, so the smoke alarm is not given”). The combination of Tovchigrechko, Xiong, and Hsu are analogous art because they are all in the same field of endeavor of image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to generate a plurality of confidence maps based on pixel confidence thresholds as taught in the system of Hsu in the system of Tovchigrechko in view of Xiong to determine whether a smoke event occurs in the current image (see Hsu Paragraph [0005]).

Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tovchigrechko et al., US 11288543, in view of Xiong et al., US 20240257475, in view of Hsu et al., US 20230177938, in view of Lee et al., US 20210241022.

Regarding claim 8, Tovchigrechko in view of Xiong in view of Hsu does not expressly teach the apparatus of claim 7, wherein the second threshold comprises a higher confidence than the first threshold. However, Lee, in a similar invention in the same field of endeavor, teaches wherein the second threshold comprises a higher confidence than the first threshold (see Lee, Paragraph [0068], “The second threshold can correspond to a higher confidence value compared with the first threshold used in the first thresholding for generating the keypoints in the detector heatmap 331”). 
The combination of Tovchigrechko, Xiong, Hsu, and Lee are analogous art because they are all in the same field of endeavor of image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, for the second threshold to correspond to a higher confidence value compared with the first threshold as taught in the method of Lee in the system of Tovchigrechko in view of Xiong in view of Hsu for generating the keypoints in the detector heatmap 331 (see Lee Paragraph [0068]).

Claim(s) 9-10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tovchigrechko et al., US 11288543, in view of Xiong et al., US 20240257475, in view of Lin et al., US 20220101539.

Regarding claim 9, Tovchigrechko in view of Xiong does not expressly teach the apparatus of claim 1, wherein the one or more processors are configured to: determine a sparsity of a region associated with a first feature in the current image or a previous image; and determine, based on the sparsity, a threshold corresponding to a confidence of the first feature in the first disparity information. However, Lin, in a similar invention in the same field of endeavor, teaches wherein the one or more processors are configured to: determine a sparsity of a region associated with a first feature in the current image or a previous image (see Lin, Paragraph [0070], “By determining a subset of pixels corresponding to key features within a frame, the disclosed optical flow estimation techniques and systems can optimize sparse optical flow estimation based on the spatial characteristics of the frame”); and determine, based on the sparsity, a threshold corresponding to a confidence of the first feature in the first disparity information (see Lin, Paragraph [0070], “Frames with importance values that meet or exceed (are greater than) a threshold importance value can be designated as key frames for which optical flow estimation is to be performed”). 
The combination of Tovchigrechko, Xiong, and Lin are analogous art because they are all in the same field of endeavor of determining disparity between images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to optimize sparse optical flow estimation, and for frames with importance values that meet a threshold to be designated as key frames for which optical flow estimation is performed, as taught in the system of Lin in the system of Tovchigrechko in view of Xiong to generate optical flow maps with reduced latency and/or fewer computing resources (see Lin Paragraph [0035]).

Regarding claim 10, Tovchigrechko in view of Xiong further teaches the apparatus of claim 1, and determine, based on the first threshold, whether a region in the confidence map associated with the first feature corresponds to an authentic disparity information (see Xiong, Paragraph [0082], “FIG. 5 illustrates an example mechanism 500 for verifying spatial consistency between a stereo image pair of image frames in accordance with this disclosure. This mechanism 500 represents one example way in which the confidence scores of the confidence map 416 of FIG. 4 may be generated,” and Paragraph [0084], “By comparing the predicted values of x.sub.r to the actual values of x.sub.r as generated by the feature detection and extraction operation 406, it is possible for the feature comparison operation 412 to determine whether the depths as contained in the reconstructed depth map 402 for those features are consistent between the two image frames 302, 304. This allows the confidence map generation operation 414 to generate confidence scores for those features based on the determination. For example, if the difference between the actual and predicted values of x.sub.r for a feature is less than a threshold, a highest confidence score may be assigned to that feature. 
If the difference between the actual and predicted values of x.sub.r for a feature is greater than the threshold, a lowest confidence score may be assigned to that feature”). Tovchigrechko in view of Xiong does not expressly teach wherein the one or more processors are configured to: determine a first movement magnitude associated with a first feature in the current image; determine, based on the first movement magnitude, a first threshold corresponding to a confidence of the first feature within the first disparity information. However, Lin, in a similar invention in the same field of endeavor, teaches wherein the one or more processors are configured to: determine a first movement magnitude associated with a first feature in the current image (see Lin, Paragraph [0048], “an optical flow vector can indicate a direction and magnitude of the movement of the pixel”); determine, based on the first movement magnitude, a first threshold corresponding to a confidence of the first feature within the first disparity information (see Lin, Paragraph [0060], “the spatial characteristics can include a spatial confidence associated with the significance and/or relevance of the pixel to overall optical flow estimation. For example, a pixel with a high spatial confidence may be highly significant and/or relevant (e.g., a high amount of movement) to optical flow estimation”). The combination of Tovchigrechko, Xiong, and Lin are analogous art because they are all in the same field of endeavor of determining disparity between images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to include a spatial confidence associated with the significance to optical flow estimation as taught in the system of Lin in the system of Tovchigrechko in view of Xiong to generate optical flow maps with reduced latency and/or fewer computing resources (see Lin Paragraph [0035]). 
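Claims 7-10 turn on confidence thresholds that vary: over time (a second, stricter threshold at a later time) and with motion magnitude (fast-moving features held to a higher bar). The two ideas can be sketched together in NumPy; this is an illustrative assumption-laden sketch, not the method of any cited reference, and the two-stage schedule, linear motion penalty, and all constants are hypothetical.

```python
import numpy as np

def time_varying_threshold(t, t_switch=10, early=0.4, late=0.7):
    """Confidence threshold in effect at time t: a first (looser) threshold
    early on, and a second, higher-confidence threshold after t_switch."""
    return early if t < t_switch else late

def motion_adaptive_threshold(flow, base=0.5, gain=0.05, cap=0.9):
    """Per-pixel threshold raised linearly with optical-flow magnitude,
    assuming large motion makes disparity estimates less reliable."""
    mag = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)  # flow shaped (H, W, 2)
    return np.minimum(base + gain * mag, cap)

def authentic_mask(confidence, flow, t):
    """A confidence-map region counts as authentic disparity information
    only if it clears both the time- and motion-dependent thresholds."""
    per_pixel = np.maximum(motion_adaptive_threshold(flow), time_varying_threshold(t))
    return confidence >= per_pixel
```

With this sketch, a pixel that passes the early threshold can fail once the stricter late threshold takes effect, mirroring the first-time/second-time distinction of claims 7-8.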
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tovchigrechko et al., US 11288543, in view of Xiong et al., US 20240257475, in view of Zweig et al., US 11531863.

Regarding claim 11, Tovchigrechko in view of Xiong does not expressly teach the apparatus of claim 1, wherein the one or more processors are configured to: determine an attention associated with a first feature, wherein the attention corresponds to an importance of the first feature in association with at least one other feature in the first disparity information; and determine a first threshold corresponding to an authentication of the first disparity information of the first feature based on the attention. However, Zweig, in a similar invention in the same field of endeavor, teaches wherein the one or more processors are configured to: determine an attention associated with a first feature, wherein the attention corresponds to an importance of the first feature in association with at least one other feature in the first disparity information (see Zweig, Col 12, Lines 50-57, “The device can localize (e.g., temporally localize) different portions of the data set to identify key features, important features or other forms of noteworthy characteristic of the content within the respective portion (e.g., time period or window) of the data set”); and determine a first threshold corresponding to an authentication of the first disparity information of the first feature based on the attention (see Zweig, Col 12, Line 67 - Col 13, Line 8, “The attention score can be used to determine a localization (e.g., a characteristic localized to a portion or time period) of the data set. For example, portions of the data set having high attention scores or attention scores over a threshold value can indicate presence of a key feature, important feature or a noteworthy characteristic of content within the respective portion of the data set.”). 
The combination of Tovchigrechko, Xiong, and Zweig are analogous art because they are all in the same field of endeavor of image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to localize different portions of the data set to identify a key feature, important feature, or noteworthy characteristic, and to use an attention score threshold to indicate presence of a key feature, important feature, or noteworthy characteristic, as taught in the system of Zweig in the system of Tovchigrechko in view of Xiong to localize one or more portions of the data based on the attention scores (see Zweig Abstract).

Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tovchigrechko et al., US 11288543, in view of Xiong et al., US 20240257475, in view of Zweig et al., US 11531863, in view of Wang et al., TW I650711.

Regarding claim 12, Tovchigrechko in view of Xiong in view of Zweig does not expressly teach the apparatus of claim 11, wherein the attention comprises information identifying the importance of the first feature within a previous image and the current image as compared to other features within the previous image and the current image. However, Wang, in a similar invention in the same field of endeavor, teaches wherein the attention comprises information identifying the importance of the first feature within a previous image and the current image as compared to other features within the previous image and the current image (see Wang, Paragraph [0027], “In step S402, the processing unit 110 receives the image from the video capture unit 130 and obtains an optical flow map based on the current frame and the previous frame at each time point in the video. In step S403, at least one optical flow attention with motion characteristics is obtained based on the optical flow graph. 
In step S404, the processing unit 110 obtains multiple feature blocks corresponding to different important features in each frame through a convolutional neural network, and assigns greater weight to regions with important features to distinguish them from other regions and to serve as visual attention corresponding to the current frame.”). The combination of Tovchigrechko, Xiong, Zweig, and Wang are analogous art because they are all in the same field of endeavor of image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to obtain multiple feature blocks corresponding to different important features and assign greater weight to regions with important features to distinguish them from other regions, as taught in the method of Wang in the system of Tovchigrechko in view of Xiong in view of Zweig to serve as visual attention corresponding to the current frame (see Wang Paragraph [0027]).

Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tovchigrechko et al., US 11288543, in view of Xiong et al., US 20240257475, in view of Leu et al., US 20220385817.

Regarding claim 13, Tovchigrechko in view of Xiong does not expressly teach the apparatus of claim 1, wherein the one or more processors are configured to: obtain a second disparity information associated with a previous image, the second disparity information estimating a second movement of a first feature within the current image or the previous image. However, Leu, in a similar invention in the same field of endeavor, teaches wherein the one or more processors are configured to: obtain a second disparity information associated with a previous image, the second disparity information estimating a second movement of a first feature within the current image or the previous image (see Leu, Paragraph [0053], “estimate a second global motion model based on the second global motion vector of the BG region. 
In this case, the optical flow information may include a global motion vector field, a brightness difference, and edge strength information between the previous frame and the current frame”). The combination of Tovchigrechko, Xiong, and Leu are analogous art because they are all in the same field of endeavor of determining disparity between images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to estimate a second global motion based on the second global motion vector, in which the optical flow information includes edge strength information between the previous frame and the current frame, as taught in the image processing device of Leu in the system of Tovchigrechko in view of Xiong to blend the motion-compensated previous frame with the current frame (see Leu Abstract).

Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tovchigrechko et al., US 11288543, in view of Xiong et al., US 20240257475, in view of Leu et al., US 20220385817, in view of Doshi et al., US 20170345129.

Regarding claim 14, Tovchigrechko in view of Xiong in view of Leu does not expressly teach the apparatus of claim 13, wherein the one or more processors are configured to: determine that the first feature is occluded in the current image or the previous image based on the first disparity information and the second disparity information. 
However, Doshi, in a similar invention in the same field of endeavor, teaches wherein the one or more processors are configured to: determine that the first feature is occluded in the current image or the previous image based on the first disparity information and the second disparity information (see Doshi, Paragraph [0047], “if a first image feature at a first depth suddenly becomes visible in an image within a sequence of images (for instance as a result of an occluding second image feature at a second, closer depth moving to a non-occluding position), the first depth may be identified by analyzing subsequent frames within the sequence of images”). The combination of Tovchigrechko, Xiong, Leu, and Doshi are analogous art because they are all in the same field of endeavor of determining disparity between images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to determine that if a first image feature at a first depth suddenly becomes visible as a result of an occluding second image feature at a second, closer depth, the first depth may be identified by analyzing subsequent frames, as taught in the method of Doshi in the system of Tovchigrechko in view of Xiong in view of Leu to produce a stitched image (see Doshi Abstract).

Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tovchigrechko et al., US 11288543, in view of Xiong et al., US 20240257475, in view of Leu et al., US 20220385817, in view of Doshi et al., US 20170345129, in view of Saphier et al., US 20200404243. 
Regarding claim 15, Tovchigrechko in view of Xiong in view of Leu further teaches the apparatus of claim 13, apply the second confidence map to the second disparity information to generate updated second disparity information (see Tovchigrechko, Col 6, Lines 57-64, “one or more of the images 110 from which the depth measurements 124 are computed (e.g., the left image of the stereo image pair used for computing depth 124), and the associated confidence scores 126 (e.g., which may be represented by a corresponding confidence map),” and Col 7, Lines 3-11, “For example, the machine-learning model 130 may refine depth measurements 124 which their confidence scores 126 are lower than a specific threshold. By doing so, the machine-learning model 130 can avoid generating a large number of artifacts, and therefore, a smaller convolutional neural network can be applied to the machine-learning model 130 which enables real-time updates,” the process of applying a confidence map to disparity information to generate updated disparity information does not change). Tovchigrechko in view of Xiong in view of Leu does not expressly teach wherein the one or more processors are configured to: warp the previous image based on the second disparity information to obtain an estimated current image. However, Doshi, in a similar invention in the same field of endeavor, teaches wherein the one or more processors are configured to: warp the previous image based on the second disparity information to obtain an estimated current image (see Doshi, Paragraph [0047], “a warp operation applied to the first object but based on the second depth of the second object,” the second depth is considered to be second disparity information). The combination of Tovchigrechko, Xiong, Leu, and Doshi are analogous art because they are all in the same field of endeavor of determining disparity between images. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for a warp operation to be applied, as taught in the method of Doshi, in the system of Tovchigrechko in view of Xiong in view of Leu, to produce a stitched image (see Doshi Abstract).

Tovchigrechko in view of Xiong in view of Leu in view of Doshi does not expressly teach generate a second confidence map associated with the second disparity information based on a difference associated with the estimated current image.

However, Saphier, in a similar invention in the same field of endeavor, teaches generate a second confidence map associated with the second disparity information based on a difference associated with the estimated current image (see Saphier, Paragraph [0177], “(iii) computing a difference between each estimated depth map and a corresponding respective true depth map to obtain a respective target confidence map corresponding to each estimated depth map as determined by the first neural network module, (iv) inputting to the second neural network module the plurality of confidence-training-stage two-dimensional images, (v) estimating, by the second neural network module, a respective estimated confidence map indicating a confidence level per region of each respective estimated depth map, and (vi) comparing each estimated confidence map to the corresponding target confidence map, and based on the comparison, optimizing the second neural network module to better estimate a subsequent estimated confidence map”).

Tovchigrechko, Xiong, Leu, Doshi, and Saphier are analogous art because they are all in the same field of endeavor of determining disparity between images.
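The claim 15 combination pairs two operations: warping the previous image by the disparity to estimate the current image, and deriving a confidence map from the difference between that estimate and the actual image. A minimal sketch of that pairing, assuming grayscale images, purely horizontal disparity, and an exponential error-to-confidence mapping (all hypothetical choices, not taken from any cited reference):

```python
import numpy as np

def warp_and_confidence(prev_img, curr_img, disp, sigma=10.0):
    """Warp `prev_img` horizontally by `disp` to estimate the current
    image, then score each pixel by how well the estimate matches.

    Returns (estimated_current, confidence_map), where confidence is
    exp(-|difference| / sigma) in (0, 1]: a small photometric error
    yields confidence near 1, a large error confidence near 0.
    """
    h, w = prev_img.shape
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    # Source column each current-frame pixel pulls from, clipped to
    # stay inside the previous image.
    src = np.clip(cols - np.round(disp).astype(int), 0, w - 1)
    estimated = prev_img[rows, src]
    diff = np.abs(curr_img.astype(float) - estimated.astype(float))
    confidence = np.exp(-diff / sigma)
    return estimated, confidence
```

With a perfect disparity map the warped estimate reproduces the current image and the confidence map saturates at 1; occluded or mis-estimated regions show a large residual and receive low confidence.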
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to compute a difference between each estimated depth map and a corresponding true depth map to obtain a target confidence map, as taught in the method of Saphier, in the system of Tovchigrechko in view of Xiong in view of Leu in view of Doshi, to better estimate a subsequent estimated confidence map (see Saphier Paragraph [0177]).

Claim(s) 17-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tovchigrechko et al., US 11288543, in view of Xiong et al., US 20240257475, in view of Duggal et al., US 20200302627.

Regarding claim 17, Tovchigrechko in view of Xiong does not expressly teach the apparatus of claim 1, wherein, to obtain the first disparity information associated with the current image, the one or more processors are configured to: generate, using one or more machine learning systems, features representing the current image; and generate, based on the features representing the current image, the first disparity information.

However, Duggal, in a similar invention in the same field of endeavor, teaches wherein, to obtain the first disparity information associated with the current image, the one or more processors are configured to: generate, using one or more machine learning systems, features representing the current image (see Duggal, Paragraph [0106], “The one or more machine-learned models can be configured and/or trained to receive input including a current disparity estimation y.sub.cost and left image (e.g., the left perspective image of the pair of stereo images) convolutional features from the one or more machine-learned models used in the feature extraction operations 202”); and generate, based on the features representing the current image, the first disparity information (see Duggal, Paragraph [0106], “The low-level feature information can be used as guidance to reduce noise and improve the quality of the final disparity map”).
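The claim 17 limitation, extract per-pixel features, then derive disparity from them, is conventionally realized as winner-takes-all matching over a cost volume. The sketch below illustrates that general technique with precomputed feature maps standing in for the CNN features Duggal describes; the function, the sum-of-squared-differences cost, and the argmin selection are illustrative assumptions, not the method of any cited reference:

```python
import numpy as np

def disparity_from_features(feat_left, feat_right, max_disp):
    """Pick, per pixel, the horizontal shift whose right-image feature
    best matches the left-image feature (winner-takes-all over a cost
    volume). `feat_*` are H x W x C feature maps; in the learned
    setting these would come from a CNN, here they are plain inputs.
    """
    h, w, _ = feat_left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # Compare left pixel (r, c) with right pixel (r, c - d);
        # columns c < d have no valid match and keep infinite cost.
        diff = feat_left[:, d:, :] - feat_right[:, : w - d, :]
        cost[d, :, d:] = np.sum(diff * diff, axis=2)
    return np.argmin(cost, axis=0)
```

A learned model replaces both the hand-built features and the argmin with differentiable counterparts, but the structure, features in, disparity map out, matches the claimed two-step generation.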
Tovchigrechko, Xiong, and Duggal are analogous art because they are all in the same field of endeavor of determining disparity between images. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use one or more machine-learned models for feature extraction, where the feature information can be used to improve the quality of the final disparity map and the one or more machine learning models can be deep neural networks or convolutional neural networks, as taught in the method of Duggal, in the system of Tovchigrechko in view of Xiong, to further improve the accuracy of the disparity map that is generated (see Duggal Paragraph [0106]).

Regarding claim 18, Tovchigrechko in view of Xiong in view of Duggal further teaches the apparatus of claim 17, wherein the one or more machine learning systems comprise at least one of a deep neural network (DNN) or a convolutional neural network (CNN) (see Duggal, Paragraph [0161], “As examples, the one or more machine-learned models 1110 can be or can otherwise include various machine-learned models such as, for example, neural networks (e.g., deep neural networks), support vector machines, decision trees, ensemble models, k-nearest neighbors models, Bayesian networks, encoder-decoder models, or other types of models including linear models and/or non-linear models. Example neural networks include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), convolutional neural networks, or other forms of neural networks.”). The rationale of claim 17 has been applied herein.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOMINIQUE JAMES, whose telephone number is (703)756-1655. The examiner can normally be reached 9:00 am - 6:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571)270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DOMINIQUE JAMES/
Examiner, Art Unit 2666

/MING Y HON/
Primary Examiner, Art Unit 2666

Prosecution Timeline

Jan 05, 2024
Application Filed
Mar 17, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591976
CELL SEGMENTATION IMAGE PROCESSING METHODS
2y 5m to grant · Granted Mar 31, 2026
Patent 12567138
REGISTRATION METROLOGY TOOL USING DARKFIELD AND PHASE CONTRAST IMAGING
2y 5m to grant · Granted Mar 03, 2026
Patent 12548159
SCENE PERCEPTION SYSTEMS AND METHODS
2y 5m to grant · Granted Feb 10, 2026
Patent 12462681
Detection of Malfunctions of the Switching State Detection of Light Signal Systems
2y 5m to grant · Granted Nov 04, 2025
Patent 12462346
MACHINE LEARNING BASED NOISE REDUCTION CIRCUIT
2y 5m to grant · Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+38.5%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 21 resolved cases by this examiner. Grant probability derived from career allow rate.
