Prosecution Insights
Last updated: April 19, 2026
Application No. 18/309,234

STEREO MATCHING METHOD AND IMAGE PROCESSING DEVICE PERFORMING SAME

Status: Non-Final OA (§103)
Filed: Apr 28, 2023
Examiner: HOANG, HAN DINH
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 74% (Favorable)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (above average; 120 granted / 162 resolved; +12.1% vs TC avg)
Interview Lift: +19.3% (allowance rate with vs. without interview, across resolved cases with interview)
Typical Timeline: 3y 2m avg prosecution; 25 currently pending
Career History: 187 total applications across all art units

Statute-Specific Performance

§101:  6.9%  (-33.1% vs TC avg)
§103: 65.7%  (+25.7% vs TC avg)
§102: 15.5%  (-24.5% vs TC avg)
§112:  7.1%  (-32.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 162 resolved cases.
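The headline figures above are simple ratios of the examiner's career counts. A quick sketch, using only the numbers shown on this page, reproduces the derived values (variable names are illustrative, not from the source):

```python
# Career counts shown above: 120 granted out of 162 resolved cases.
granted, resolved = 120, 162
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.0f}%")

# The page lists the examiner's §103 share and its delta vs the Tech
# Center average, so the implied TC average is share minus delta.
sec103_share, sec103_delta = 65.7, 25.7
tc_avg_103 = sec103_share - sec103_delta
print(f"Implied TC average for §103: {tc_avg_103:.1f}%")
```

This is only a consistency check on the dashboard's own arithmetic; the underlying case-level data is not reproduced here.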

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114 and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/13/2026 has been entered.

Response to Arguments

Applicant's amendment filed 01/13/2026 has been entered and made of record. Claims 1-2, 4-5, 10-13 and 15-17 are amended. No new claims were added and no claims were cancelled. Claims 1-17 are pending. Applicant's remarks in view of the newly presented amendments have been considered but are not found to be persuasive, for at least the following reasons:

Applicant argues on pages 10-11 of the remarks that the previously cited prior art does not disclose "perform stereo matching between a first image captured by the first camera and a second image captured by the second camera, wherein the stereo matching is performed using a first feature point extracted from the first image and a second feature point searched within a search range on the second image, the search range being determined based on the gaze coordinate information."
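For orientation, the limitation at issue describes a concrete matching procedure: a feature point from the first image is matched only within a search range on the second image derived from gaze coordinates. A minimal sketch of that flow (hypothetical function and parameter names; not taken from the application or any cited reference):

```python
import numpy as np

def match_feature(img1, img2, feat, gaze2, half_win=8, patch=3):
    """Find the point in img2 matching feature point `feat` of img1.

    feat:  (row, col) of a feature point extracted from the first image.
    gaze2: (row, col) gaze coordinate on the second image; the search
           range is restricted to +/- half_win columns around it.
    Returns the (row, col) in img2 with the lowest sum of absolute
    differences (SAD) over a (2*patch+1) x (2*patch+1) patch.
    """
    r, c = feat
    ref = img1[r - patch:r + patch + 1, c - patch:c + patch + 1].astype(float)
    best, best_col = np.inf, None
    # Candidate pixels lie on the same row, within the gaze-determined range.
    for cc in range(gaze2[1] - half_win, gaze2[1] + half_win + 1):
        cand = img2[r - patch:r + patch + 1, cc - patch:cc + patch + 1].astype(float)
        if cand.shape != ref.shape:
            continue  # candidate patch falls outside the image
        sad = np.abs(ref - cand).sum()
        if sad < best:
            best, best_col = sad, cc
    return (r, best_col)
```

The column offset between `feat` and the returned match is the disparity; restricting the loop to the gaze-determined window is what distinguishes the claimed search from an unconstrained scan of the whole epipolar line.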
However, Zhao teaches performing stereo matching between a first image captured by the first camera and a second image captured by the second camera, a search range on the second image, and the search range being determined based on the gaze coordinate information: ¶[0062] discloses performing stereo matching between the left stereo image and the right stereo image; ¶[0063] discloses performing coarse-to-fine matching to determine a confidence score for the matching between images; and ¶[0064] discloses that the matching is done by limiting a search window based on the gaze coordinates of the right image when matching the points of the images.

The Examiner agrees that the cited prior art does not appear to teach the newly amended limitation of "wherein the stereo matching is performed using a first feature point extracted from the first image and a second feature point searched within a search range on the second image." However, after further search and consideration, the newly discovered art of Lee et al. (US PG-Pub 20190156502 A1) discloses this limitation, as evident in ¶[0071]: "The processor compares a first image patch 311 including a reference pixel of the first image 310 to a search range 312 of the second image 302 to determine a second image patch 313 including a target pixel, and estimate an initial disparity corresponding to a difference in position between the first image patch 311 and the second image patch 313." As disclosed in ¶¶[0070]-[0071], stereo matching is performed by using a first feature point extracted from a first image and comparing that patch to a search range in the second image. Please see the updated claim rejection under 35 U.S.C. § 103 below.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 8-10 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US Patent 9454226 B2) in view of Zhao et al. (US PG-Pub 20130107207 A1), further in view of Lee et al. (US PG-Pub 20190156502 A1).

Regarding Claim 1, Kim teaches an image processing device (Col 3, Lines 25-29: "Referring to FIG. 1, an apparatus 100 for tracking a gaze according to an exemplary embodiment of the present invention includes a camera 110, a feature point extractor 120, a calculator 130, and a controller 140.") comprising:

a camera system including a first camera and a second camera, the camera system configured to obtain a stereo image (Col 3, Lines 36-39: "The camera 110 may be implemented as a stereo camera or a multi camera. The camera 110 may simultaneously capture at least two face images photographed at different positions." In this section of the prior art, a stereo camera is used, which has two cameras, left and right, to capture a stereo image of the scene);

an eye-tracking sensor configured to obtain gaze information of a user (Col 4, Lines 29-31 discloses an apparatus used to track the gaze of the user using images acquired from the stereo camera);

memory storing one or more instructions and a processor communicatively coupled to the camera system, the eye-tracking sensor and the memory (Col 5, Lines 29-32: "Software codes are stored in a memory unit and may be driven by a processor. The memory unit is disposed inside or outside the processor and may transmit and receive data to and from the processor"),

wherein the one or more instructions, when executed by the at least one processor individually or collectively, cause the image processing device to:

extract feature points from the stereo image and generate gaze coordinate information in which gaze coordinates corresponding to the gaze information of the user are accumulated on the stereo image (Col 4, Lines 48-57: "The calculator 130 of the apparatus 100 for tracking a gaze matches the glasses feature points extracted from each of the at least two face images which are simultaneously captured and then calculates the three-dimensional coordinates (positions) of the glasses feature points (S104). The calculator 130 uses the three-dimensional coordinates of the glasses feature points to calculate the plane equation (S105). That is, the calculator 130 calculates the three-dimensional coordinates of the glasses feature points to calculate the glasses plane." As disclosed in this section of the prior art, feature points are extracted from the stereo image and gaze coordinates are determined for the user).

Kim does not explicitly teach performing stereo matching between a first image captured by the first camera and a second image captured by the second camera for the stereo image based on the feature points, a search range on the second image, and the search range being determined based on the gaze coordinate information.

Zhao teaches performing stereo matching between a first image captured by the first camera and a second image captured by the second camera, a search range on the second image, and the search range being determined based on the gaze coordinate information (¶[0062]: "In block 810, the known spatial relationship between the right eye gaze point and the features in the right stereo image 608 (that were determined in block 808) is used to estimate the location of the left eye gaze point by interpolation of the matched features in the left stereo image 607 (that were determined in block 809)"; ¶[0063]: "In block 811, matching integration is performed to determine the estimate of the left eye gaze point on the left 2-D display screen 602 (which is displaying at the time the left stereo image 607) by using one of the region offset determined from the coarse-to-fine region matching 830, the feature offset determined from the feature matching 840, interpolation of previously determined (earlier in time) left eye gaze point estimation results, and the global offset determined in the coarse-to-fine global offset 820, according to availability and confidence scores."; ¶[0064]: "In any or all of global matching 820, region matching 830, feature matching 840, and matching integration 811, constraint checks may be applied to determine the validity of results and/or used to simplify a matching process by limiting the search window based upon the constraint and/or by searching for an additional value when the matched point comprises a location outside the constraint". ¶[0062] discloses performing stereo matching between the left stereo image and the right stereo image; ¶[0063] discloses performing coarse-to-fine matching to determine a confidence score for the matching between images; and ¶[0064] discloses that the matching is done by limiting a search window based on the gaze coordinates of the right image when matching the points of the images.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim with Zhao in order to perform stereo matching between the left and right images by restricting a search window for feature points. One skilled in the art would have been motivated to modify Kim in this manner in order to improve the reliability of the tracked gaze points and/or the image matching (Zhao, Abstract).

However, Kim and Zhao do not explicitly teach wherein the stereo matching is performed using a first feature point extracted from the first image and a second feature point searched within a search range on the second image.

Lee teaches wherein the stereo matching is performed using a first feature point extracted from the first image and a second feature point searched within a search range on the second image (¶[0071]: "The processor compares a first image patch 311 including a reference pixel of the first image 310 to a search range 312 of the second image 302 to determine a second image patch 313 including a target pixel, and estimate an initial disparity corresponding to a difference in position between the first image patch 311 and the second image patch 313". As disclosed in ¶¶[0070]-[0071], stereo matching is performed by using a first feature point extracted from a first image and comparing that patch to a search range in the second image. ¶[0073] further discloses how the extracted points are searched within a range of the second image: "The processor compares a reference image patch including the reference pixel with each of candidate image patches respectively corresponding to candidate pixels included in the search range 312. For example, a candidate pixel is a pixel at the same height as the reference pixel in the search range 312 of the second image 302. Although some of the pixels at the same height as the reference pixel are determined as candidate pixels in the second image 302 as shown in FIG. 3,")

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim and Zhao with Lee in order to perform the stereo matching by using extracted points from the first image in a search range of a second image. One skilled in the art would have been motivated to modify Kim and Zhao in this manner in order to estimate a disparity between the images when performing stereo matching.
(Lee, ¶[0002])

Regarding Claim 8, the combination of Kim, Zhao and Lee teaches the image processing device of claim 1, and Kim further teaches wherein the instructions, when executed by the at least one processor individually or collectively, further cause the image processing device to: obtain a coordinate pair of gaze coordinates from the stereo image, based on the gaze information obtained by using the eye-tracking sensor, and accumulate, in the memory, three-dimensional (3D) gaze coordinates obtained from the coordinate pair (Col 2, Lines 4-7: "The calculating of the gaze vector may include: calculating three-dimensional coordinates of each of the glasses feature points by matching the glasses feature points extracted from each of the at least two face images". In this section of the prior art, 3D coordinates are calculated from a stereo image and stored in memory); and generate the gaze coordinate information by re-projecting the accumulated 3D gaze coordinates onto the stereo image (Col 4, Lines 9-22: "After the calculator 130 calculates the plane equation, a normal vector n: (a, b, c) of the glasses plane becomes a face direction vector. As illustrated in FIG. 2, the calculator 130 calculates normal vectors {right arrow over (h.sub.L)} and {right arrow over (h.sub.R)} vertical to the left and right glasses lenses (glasses plane) and then corrects a midpoint between the two vectors as a gaze direction vector. Unlike the related art of calculating the face direction by detecting face feature points of eye, nose, mouth, and the like, the exemplary embodiment of the present disclosure uses the plane of the glasses itself to calculate the face direction in the case of the glasses wearer, and therefore may correct the gaze direction vector when the gaze tracking using the face feature points fails". As disclosed in this section of the prior art, the gaze coordinates are identified from the stereo image and the points are projected onto the user as shown in Figure 2.)

Regarding Claim 9, the combination of Kim, Zhao and Lee teaches the image processing device of claim 1, and Kim further teaches wherein the instructions, when executed by the at least one processor individually or collectively, further cause the image processing device to perform a process of generating the gaze coordinate information in parallel with a process of extracting the feature points (Col 2, Lines 1-11: "In the extracting of the glasses feature points, the glasses may be detected from each of the face images and then the glasses feature points may be extracted by an edge detection. The calculating of the gaze vector may include: calculating three-dimensional coordinates of each of the glasses feature points by matching the glasses feature points extracted from each of the at least two face images, respectively; calculating an equation of a glasses plane based on the three-dimensional coordinates of the glasses feature points; and estimating the gaze vector by calculating a normal vector of the glasses plane." As disclosed in this section of the prior art, the extraction and the generation of the gaze coordinates are performed simultaneously.)

Regarding Claim 10, Kim teaches a stereo matching method performed by an image processing device, the stereo matching method comprising: obtaining, by the image processing device, a stereo image by using a camera system including a first camera and a second camera (Col 3, Lines 36-39, as quoted in the rejection of Claim 1 above: a stereo camera with left and right cameras captures a stereo image of the scene); extracting, by the image processing device, feature points from the stereo image; and generating, by the image processing device, gaze coordinate information in which gaze coordinates corresponding to gaze information of a user are accumulated on the stereo image (Col 4, Lines 48-57, as quoted in the rejection of Claim 1 above: feature points are extracted from the stereo image and gaze coordinates are determined for the user).

Kim does not explicitly teach performing, by the image processing device, stereo matching between a first image captured by the first camera and a second image captured by the second camera for the stereo image based on the feature points, a search range on the second image, and the gaze coordinate information.

Zhao teaches performing, by the image processing device, stereo matching between a first image captured by the first camera and a second image captured by the second camera for the stereo image based on the feature points, a search range on the second image, and the gaze coordinate information (Zhao ¶¶[0062]-[0064], as quoted in the rejection of Claim 1 above: ¶[0062] discloses performing stereo matching between the left and right stereo images; ¶[0063] discloses performing coarse-to-fine matching to determine a confidence score for the matching between images; and ¶[0064] discloses that the matching is done by limiting a search window when matching the points of the images). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim with Zhao in order to perform stereo matching between the left and right images by restricting a search window for feature points. One skilled in the art would have been motivated to modify Kim in this manner in order to improve the reliability of the tracked gaze points and/or the image matching (Zhao, Abstract).

However, Kim and Zhao do not explicitly teach wherein the stereo matching is performed using a first feature point extracted from the first image and a second feature point searched within a search range on the second image. Lee teaches this limitation (Lee ¶¶[0070]-[0071] and ¶[0073], as quoted in the rejection of Claim 1 above: stereo matching is performed by using a first feature point extracted from a first image and comparing that patch to a search range in the second image). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim and Zhao with Lee in order to perform the stereo matching by using extracted points from the first image in a search range of a second image. One skilled in the art would have been motivated to modify Kim and Zhao in this manner in order to estimate a disparity between the images when performing stereo matching (Lee, ¶[0002]).

Regarding Claim 14, the combination of Kim, Zhao and Lee teaches the stereo matching method of claim 10, further comprising: obtaining the gaze information by using an eye-tracking sensor; obtaining a coordinate pair of gaze coordinates from the stereo image, based on the gaze information; and accumulating three-dimensional (3D) gaze coordinates obtained from the coordinate pair (Col 2, Lines 4-7, as quoted in the rejection of Claim 8 above: 3D coordinates are calculated from a stereo image and stored in memory); wherein the generating of the gaze coordinate information comprises generating the gaze coordinate information by re-projecting the accumulated 3D gaze coordinates onto the stereo image (Col 4, Lines 9-22, as quoted in the rejection of Claim 8 above: the gaze coordinates are identified from the stereo image and the points are projected onto the user as shown in Figure 2).

Regarding Claim 15, Kim teaches a non-transitory computer-readable recording medium storing instructions that, when executed by at least one processor of an image processing device individually or collectively, cause the image processing device to perform operations, the operations comprising (Col 5, Lines 29-32: "Software codes are stored in a memory unit and may be driven by a processor. The memory unit is disposed inside or outside the processor and may transmit and receive data to and from the processor"): obtaining, by the image processing device, a stereo image by using a camera system including a first camera and a second camera (Col 3, Lines 36-39, as quoted in the rejection of Claim 1 above); extracting, by the image processing device, feature points from the stereo image, and generating, by the image processing device, gaze coordinate information in which gaze coordinates corresponding to gaze information of a user are accumulated on the stereo image (Col 4, Lines 48-57, as quoted in the rejection of Claim 1 above).

Kim does not explicitly teach performing, by the image processing device, stereo matching between a first image captured by the first camera and a second image captured by the second camera for the stereo image based on the feature points, a search range on the second image, and the gaze coordinate information. Zhao teaches this limitation (Zhao ¶¶[0062]-[0064], as quoted in the rejection of Claim 1 above). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim with Zhao in order to perform stereo matching between the left and right images by restricting a search window for feature points.
One skilled in the art would have been motivated to modify Kim in this manner in order to improve the reliability of the tracked gaze points and/or the image matching. (Zhao, Abstract) However, Kim and Zhao do not explicitly teach wherein the stereo matching is performed using a first feature point extracted from the first image and a second feature point searched within a search range on the second image. Lee teaches wherein the stereo matching is performed using a first feature point extracted from the first image and a second feature point searched within a search range on the second image. ([0071] “The processor compares a first image patch 311 including a reference pixel of the first image 310 to a search range 312 of the second image 302 to determine a second image patch 313 including a target pixel, and estimate an initial disparity corresponding to a difference in position between the first image patch 311 and the second image patch 313”, as disclosed in ¶¶[0070]-[0071], stereo matching is performed by using the first feature point extracted in a first image and comparing that patch to a search range in the second image. ¶[0073] further discloses in the cited section provided that how the extracted points are searched in a range of the second image. “[0073] The processor compares a reference image patch including the reference pixel with each of candidate image patches respectively corresponding to candidate pixels included in the search range 312. For example, a candidate pixel is a pixel at the same height as the reference pixel in the search range 312 of the second image 302. Although some of the pixels at the same height as the reference pixel are determined as candidate pixels in the second image 302 as shown in FIG. 
3,”) It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim and Zhao with Lee in order to perform the stereo matching by using extracted points from the first image in a search range of a second image. One skilled in the art would have been motivated to modify Kim and Zhao in this manner in order to estimating a disparity between the images when stereo matching. (Lee, ¶0002]) Claims 2-3, 5, 7, 11, 13 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kim US Patent(US 9454226 B2) in view of Zhao et al. US PG-Pub(US 20130107207 A1) in view of Lee et al . US PG-Pub(US 20190156502 A1) in view of Park et al. US PG-Pub(US 20180211400 A1). Regarding Claim 2, while the combination of Kim, Zhao and Lee teach the image processing device of claim 1, they do not explicitly teach wherein the instructions, when executed by the at least one processor is individually or collectively, further cause the image processing device to: perform the stereo matching by restricting the search range of a second image to be a certain range from a second gaze coordinate corresponding to a first gaze coordinate near the first feature point of a first image. Park teaches wherein the instructions, when executed by the at least one processor is individually or collectively, further cause the image processing device to: perform the stereo matching by restricting the search range of a second image to be a certain range from a second gaze coordinate corresponding to a first gaze coordinate near a first feature point of a first image. (¶[0064] “When SP5 present at the boundary is given as a reference point, the disparity of the pixel P.sub.i may be found in a search range determined based on a triangle including reference points SP1, SP2, and SP5 as vertices. 
The search range of the disparity of the pixel P.sub.i may be set based on a value of the disparity interpolation between reference points SP1 and SP5. Because reference point SP5 is present in the object, for example, a bowling ball, in which the pixel P.sub.i is present, valid information on the disparity of the pixel Pi may be provided. Thus, an accurate calculation result may be obtained.[0065] In addition, even when the triangle including the pixel P.sub.i does not perfectly fit into the boundary of the object, SP5 may correct and reduce an error caused by using reference point SP3 that is present in a different object.”, as disclosed in ¶[0064]-¶[0065], the prior art sets a search range in the images to reduce errors caused by misidentifying an object.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim, Zhao and Lee with Park in order to set a search range when stereo matching between images. One skilled in the art would have been motivated to modify Kim, Zhao and Lee in this manner in order to enhance an accuracy of stereo matching by varying a method of determining (estimating) a disparity and a range for searching for a disparity of a related pixel based on a class of each reference point. (Park, ¶[0055])

Regarding Claim 3, the combination of Kim, Zhao, Lee and Park teach the image processing device of claim 2, where Kim further teaches wherein the first gaze coordinate is a gaze coordinate closest to a coordinate of the first feature point, from among the gaze coordinates forming the gaze coordinate information. (Col 3, Lines 52-57, The calculator 130 matches the respective glasses feature points extracted from at least two face images simultaneously captured by the camera 110 to calculate three-dimensional (3D) coordinates (positions) of the respective glasses feature points.
The calculator 130 calculates depth information based on camera calibration and matching of stereo images; as disclosed in this section, the prior art determines which points are closest to each other when matching between the stereo images.)

Regarding Claim 5, while the combination of Kim, Zhao and Lee teach the image processing device of claim 1, they do not explicitly teach wherein the instructions, when executed by the at least one processor individually or collectively, further cause the image processing device to: identify a first gaze coordinate near the first feature point of a first image; determine a search range of a second image based on a result of the identification; and obtain a second feature point of the second image corresponding to the first feature point within the search range.

Park teaches wherein the instructions, when executed by the at least one processor individually or collectively, further cause the image processing device to: identify a first gaze coordinate near the first feature point of a first image; determine a search range of a second image ([0056] “The stereo matching apparatus 110 estimates a search range based on a reference value that varies depending on pixel groups. For example, a search range for performing stereo matching on a pixel present within the triangle 30 is calculated based on disparities of three reference pixels that constitute the triangle 30. A search range for performing stereo matching on a pixel present within the triangle 20 may be calculated based on disparities of three reference pixels that constitute the triangle 20”, ¶[0056] discloses determining a search range in the image.) based on a result of the identification; and obtain a second feature point of the second image corresponding to the first feature point within the search range. ([0057] “Here, disparities of reference pixels present on a line segment 40 may correspond to an object, instead of corresponding to a background.
In this case, a search range estimated for performing stereo matching on the pixel present within the triangle 30 may not include an actual disparity of the corresponding pixel present within the triangle 20. The stereo matching apparatus 110 may adjust the disparities of the reference pixels present on the line segment 40 such that the search range for performing stereo matching on the pixel present within the triangle 20 includes the actual disparity of the corresponding pixel.”, as disclosed in ¶[0056]-¶[0057], the prior art determines a search range based on the pixel points in the stereo image, and matching is performed based on pixel points in the two images.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim, Zhao and Lee with Park in order to set a search range when performing stereo matching between points in the images. One skilled in the art would have been motivated to modify Kim, Zhao and Lee in this manner in order to enhance an accuracy of stereo matching by varying a method of determining (estimating) a disparity and a range for searching for a disparity of a related pixel based on a class of each reference point. (Park, ¶[0055])

Regarding Claim 7, the combination of Kim, Zhao, Lee and Park teach the image processing device of claim 5, where Park further teaches wherein the instructions, when executed by the at least one processor individually or collectively, further cause the image processing device to: obtain as the second feature point, from among feature points within the search range, a feature point having highest similarity to feature information of the first feature point. (¶[0046], “The stereo matching apparatus 110 may use the sampled reference pixels to generate a polygonal mesh. Under a smoothness constraint that groups similar depth values in one object, pixels having similar depths may be included in a predetermined polygon.
Thus, a depth variation may be estimated based on the polygonal mesh, and the stereo matching apparatus 110 may effectively perform stereo matching on the pixels using the polygonal mesh.”, as disclosed in ¶[0046], the prior art uses similar depth values to determine the search range when stereo matching.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim, Zhao and Lee with Park in order to determine a feature point with the highest similarity compared to the first point. One skilled in the art would have been motivated to modify Kim, Zhao and Lee in this manner in order to enhance an accuracy of stereo matching by varying a method of determining (estimating) a disparity and a range for searching for a disparity of a related pixel based on a class of each reference point. (Park, ¶[0055])

Regarding Claim 11, while the combination of Kim, Zhao and Lee teach the stereo matching method of claim 10, they do not explicitly teach wherein the performing of the stereo matching comprises: performing the stereo matching by restricting the search range of a second image to a certain range from a second gaze coordinate corresponding to a first gaze coordinate near the first feature point of a first image.

Park teaches wherein the performing of the stereo matching comprises: performing the stereo matching by restricting the search range of a second image to a certain range from a second gaze coordinate corresponding to a first gaze coordinate near the first feature point of a first image. (¶[0064] “When SP5 present at the boundary is given as a reference point, the disparity of the pixel P.sub.i may be found in a search range determined based on a triangle including reference points SP1, SP2, and SP5 as vertices. The search range of the disparity of the pixel P.sub.i may be set based on a value of the disparity interpolation between reference points SP1 and SP5.
Because reference point SP5 is present in the object, for example, a bowling ball, in which the pixel P.sub.i is present, valid information on the disparity of the pixel Pi may be provided. Thus, an accurate calculation result may be obtained.[0065] In addition, even when the triangle including the pixel P.sub.i does not perfectly fit into the boundary of the object, SP5 may correct and reduce an error caused by using reference point SP3 that is present in a different object.”, as disclosed in ¶[0064]-¶[0065], the prior art sets a search range in the images to reduce errors caused by misidentifying an object.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim, Zhao and Lee with Park in order to set a search range when stereo matching between images. One skilled in the art would have been motivated to modify Kim, Zhao and Lee in this manner in order to enhance an accuracy of stereo matching by varying a method of determining (estimating) a disparity and a range for searching for a disparity of a related pixel based on a class of each reference point. (Park, ¶[0055])

Regarding Claim 13, while the combination of Kim, Zhao and Lee teach the stereo matching method of claim 10, they do not explicitly teach wherein the performing of the stereo matching comprises: identifying a first gaze coordinate near a first feature point of a first image; determining a search range of a second image, based on a result of the identification; and obtaining the second feature point of the second image from the search range, the second feature point corresponding to the first feature point.
Park teaches wherein the performing of the stereo matching comprises: identifying a first gaze coordinate near a first feature point of a first image; determining a search range of a second image ([0056] “The stereo matching apparatus 110 estimates a search range based on a reference value that varies depending on pixel groups. For example, a search range for performing stereo matching on a pixel present within the triangle 30 is calculated based on disparities of three reference pixels that constitute the triangle 30. A search range for performing stereo matching on a pixel present within the triangle 20 may be calculated based on disparities of three reference pixels that constitute the triangle 20”, ¶[0056] discloses determining a search range in the image.), based on a result of the identification; and obtaining the second feature point of the second image from the search range, the second feature point corresponding to the first feature point. ([0057] “Here, disparities of reference pixels present on a line segment 40 may correspond to an object, instead of corresponding to a background. In this case, a search range estimated for performing stereo matching on the pixel present within the triangle 30 may not include an actual disparity of the corresponding pixel present within the triangle 20. The stereo matching apparatus 110 may adjust the disparities of the reference pixels present on the line segment 40 such that the search range for performing stereo matching on the pixel present within the triangle 20 includes the actual disparity of the corresponding pixel.”, as disclosed in ¶[0056]-¶[0057], the prior art determines a search range based on the pixel points in the stereo image, and matching is performed based on pixel points in the two images.)
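As an illustration only (not part of the prosecution record), the technique the quoted Lee and Park passages describe — deriving a disparity search range from nearby reference disparities, then comparing the reference patch against candidate patches at the same height within that range — could be sketched as follows. All names, window sizes, and margins are hypothetical editor choices.

```python
import numpy as np

def estimate_search_range(reference_disparities, margin=2):
    # Derive a disparity search range from nearby reference disparities,
    # in the spirit of Park's per-region range estimation (sketch only).
    lo = max(0, int(min(reference_disparities)) - margin)
    hi = int(max(reference_disparities)) + margin
    return lo, hi

def match_in_range(left, right, y, x, lo, hi, half=2):
    # Compare the reference patch around (y, x) in the left image with
    # candidate patches at the same height in the right image, as in the
    # quoted Lee passage, returning the disparity with the lowest
    # sum-of-absolute-differences cost.
    ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
    best_d, best_cost = lo, np.inf
    for d in range(lo, hi + 1):
        xr = x - d  # candidate pixel on the same row, shifted by disparity d
        if xr - half < 0:
            break
        cand = right[y - half:y + half + 1, xr - half:xr + half + 1].astype(np.int32)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic rectified pair: the right image is the left shifted by 3 pixels.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(20, 40), dtype=np.uint8)
true_d = 3
right = np.zeros_like(left)
right[:, : 40 - true_d] = left[:, true_d:]

lo, hi = estimate_search_range([2, 3, 5])  # (0, 7)
print(match_in_range(left, right, y=10, x=20, lo=lo, hi=hi))  # prints 3
```

Restricting the loop to `range(lo, hi + 1)` is what makes the search range do real work: the matcher never considers disparities far from the reference estimates, which is the efficiency/accuracy point both references argue.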
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim, Zhao and Lee with Park in order to set a search range when performing stereo matching between points in the images. One skilled in the art would have been motivated to modify Kim, Zhao and Lee in this manner in order to enhance an accuracy of stereo matching by varying a method of determining (estimating) a disparity and a range for searching for a disparity of a related pixel based on a class of each reference point. (Park, ¶[0055])

Regarding Claim 16, while the combination of Kim, Zhao and Lee teach the non-transitory computer-readable storage medium of claim 15, they do not explicitly teach the operations further comprising: performing the stereo matching by restricting the search range of the second image to a certain range from a second gaze coordinate corresponding to a first gaze coordinate near the first feature point of the first image.

Park teaches the operations further comprising: performing the stereo matching by restricting the search range of the second image to a certain range from a second gaze coordinate corresponding to a first gaze coordinate near the first feature point of the first image. (¶[0064] “When SP5 present at the boundary is given as a reference point, the disparity of the pixel P.sub.i may be found in a search range determined based on a triangle including reference points SP1, SP2, and SP5 as vertices. The search range of the disparity of the pixel P.sub.i may be set based on a value of the disparity interpolation between reference points SP1 and SP5. Because reference point SP5 is present in the object, for example, a bowling ball, in which the pixel P.sub.i is present, valid information on the disparity of the pixel Pi may be provided.
Thus, an accurate calculation result may be obtained.[0065] In addition, even when the triangle including the pixel P.sub.i does not perfectly fit into the boundary of the object, SP5 may correct and reduce an error caused by using reference point SP3 that is present in a different object.”, as disclosed in ¶[0064]-¶[0065], the prior art sets a search range in the images to reduce errors caused by misidentifying an object.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim, Zhao and Lee with Park in order to set a search range when stereo matching between images. One skilled in the art would have been motivated to modify Kim, Zhao and Lee in this manner in order to enhance an accuracy of stereo matching by varying a method of determining (estimating) a disparity and a range for searching for a disparity of a related pixel based on a class of each reference point. (Park, ¶[0055])

Claims 4, 12 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kim, US Patent (US 9454226 B2), in view of Zhao et al., US PG-Pub (US 20130107207 A1), in view of Lee et al., US PG-Pub (US 20190156502 A1), in view of Lee et al., US Patent (US 9674503 B2).

Regarding Claim 4, while the combination of Kim, Zhao and Lee teach the image processing device of claim 1, they do not explicitly teach wherein the instructions, when executed by the at least one processor individually or collectively, further cause the image processing device to: obtain the second feature point of a second image corresponding to a first feature point, based on a restricted range on an epipolar line of the second image corresponding to a coordinate of the first feature point of a first image, and on a certain range from a second gaze coordinate of the second image corresponding to a first gaze coordinate near the first feature point.
Lee 2 teaches wherein the instructions, when executed by the at least one processor individually or collectively, further cause the image processing device to: obtain the second feature point of a second image corresponding to a first feature point, based on a restricted range on an epipolar line of the second image corresponding to a coordinate of the first feature point of a first image, and on a certain range from a second gaze coordinate of the second image corresponding to a first gaze coordinate near the first feature point. (Col 4, Lines 6-21, “The stereo matching is to re-construct a 3D space from a 2D left image and a 2D right image, where corresponding points are acquired from the two 2D images to estimate 3D information using a mutual geometric relationship.(18) In order to estimate 3D information using the mutual geometric relationship by finding corresponding points from two 2D images, it is important to find a point corresponding to a point of an image at one side (reference image) from the stereo image, from an image at the other side (corresponding image), where the point is located on an epipolar line of a corresponding image relative to the point on the reference image, and a stereo matching can be performed by inspecting only two horizontally spread single scan lines, if calibration to the epipolar line is performed”, as disclosed in this section of the prior art, the stereo matching is only performed at points located on the epipolar line in the stereo image.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim, Zhao and Lee with Lee 2 in order to perform stereo matching on the epipolar line of the image.
One skilled in the art would have been motivated to modify Kim, Zhao and Lee in this manner in order to improve distance accuracy by removing streak noise in a case where pixels of left and right images are compared by each line in a stereo matching system. (Lee 2, Col 1, Lines 49-52)

Regarding Claim 12, while the combination of Kim, Zhao and Lee teach the stereo matching method of claim 10, they do not explicitly teach wherein the performing of the stereo matching comprises: obtaining a second feature point of the second image corresponding to the first feature point, based on a restricted range on an epipolar line of the second image corresponding to a coordinate of the first feature point of the first image, and on a certain range from a second gaze coordinate of the second image corresponding to a first gaze coordinate near the first feature point.

Lee 2 teaches wherein the performing of the stereo matching comprises: obtaining a second feature point of the second image corresponding to the first feature point, based on a restricted range on an epipolar line of the second image corresponding to a coordinate of the first feature point of the first image, and on a certain range from a second gaze coordinate of the second image corresponding to a first gaze coordinate near the first feature point.
(Col 4, Lines 6-21, “The stereo matching is to re-construct a 3D space from a 2D left image and a 2D right image, where corresponding points are acquired from the two 2D images to estimate 3D information using a mutual geometric relationship.(18) In order to estimate 3D information using the mutual geometric relationship by finding corresponding points from two 2D images, it is important to find a point corresponding to a point of an image at one side (reference image) from the stereo image, from an image at the other side (corresponding image), where the point is located on an epipolar line of a corresponding image relative to the point on the reference image, and a stereo matching can be performed by inspecting only two horizontally spread single scan lines, if calibration to the epipolar line is performed”, as disclosed in this section of the prior art, the stereo matching is only performed at points located on the epipolar line in the stereo image.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim, Zhao and Lee with Lee 2 in order to perform stereo matching on the epipolar line of the image. One skilled in the art would have been motivated to modify Kim, Zhao and Lee in this manner in order to improve distance accuracy by removing streak noise in a case where pixels of left and right images are compared by each line in a stereo matching system.
(Lee 2, Col 1, Lines 49-52)

Regarding Claim 17, while the combination of Kim, Zhao and Lee teach the non-transitory computer-readable storage medium of claim 15, they do not explicitly teach the operations further comprising: obtaining the second feature point of the second image corresponding to a first feature point, based on a restricted range on an epipolar line of the second image corresponding to a coordinate of the first feature point of the first image, and on a certain range from a second gaze coordinate of the second image corresponding to a first gaze coordinate near the first feature point.

Lee 2 teaches the operations further comprising: obtaining the second feature point of the second image corresponding to a first feature point, based on a restricted range on an epipolar line of the second image corresponding to a coordinate of the first feature point of the first image, and on a certain range from a second gaze coordinate of the second image corresponding to a first gaze coordinate near the first feature point.
(Col 4, Lines 6-21, “The stereo matching is to re-construct a 3D space from a 2D left image and a 2D right image, where corresponding points are acquired from the two 2D images to estimate 3D information using a mutual geometric relationship.(18) In order to estimate 3D information using the mutual geometric relationship by finding corresponding points from two 2D images, it is important to find a point corresponding to a point of an image at one side (reference image) from the stereo image, from an image at the other side (corresponding image), where the point is located on an epipolar line of a corresponding image relative to the point on the reference image, and a stereo matching can be performed by inspecting only two horizontally spread single scan lines, if calibration to the epipolar line is performed”, as disclosed in this section of the prior art, the stereo matching is only performed at points located on the epipolar line in the stereo image.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim, Zhao and Lee with Lee 2 in order to perform stereo matching on the epipolar line of the image. One skilled in the art would have been motivated to modify Kim, Zhao and Lee in this manner in order to improve distance accuracy by removing streak noise in a case where pixels of left and right images are compared by each line in a stereo matching system. (Lee 2, Col 1, Lines 49-52)

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Kim, US Patent (US 9454226 B2), in view of Zhao et al., US PG-Pub (US 20130107207 A1), in view of Lee et al., US PG-Pub (US 20190156502 A1), in view of Park et al., US PG-Pub (US 20180211400 A1), in further view of Lu et al., US PG-Pub (US 20210374994 A1).
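As an illustration only (not from the record), the epipolar restriction the Lee 2 passage describes — after rectification, the corresponding point lies on the scan line at the same height — combined with the claimed gaze-based window could be sketched as follows; the function name and the `radius` parameter are hypothetical.

```python
def epipolar_candidates(y_ref, gaze_x, image_width, radius=8):
    # After rectification the epipolar line for a reference pixel is the
    # row at the same height in the second image, so candidates are
    # restricted to that single row; the row is further clipped to a
    # window around a gaze-derived x coordinate (sketch only).
    lo = max(0, gaze_x - radius)
    hi = min(image_width - 1, gaze_x + radius)
    return [(y_ref, x) for x in range(lo, hi + 1)]

print(len(epipolar_candidates(y_ref=12, gaze_x=100, image_width=640)))  # prints 17
```

The two restrictions compose: the epipolar constraint reduces a 2D search to one row, and the gaze window reduces that row to a short segment.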
Regarding Claim 6, while the combination of Kim, Zhao, Lee and Park teach the image processing device of claim 5, they do not explicitly teach wherein the instructions, when executed by the at least one processor individually or collectively, further cause the image processing device to: when there is the first gaze coordinate, determine the search range to be within a certain range from a second gaze coordinate in the second image corresponding to the first gaze coordinate; and when there is no first gaze coordinate, determine the search range to be a predefined range.

Lu teaches wherein the instructions, when executed by the at least one processor individually or collectively, further cause the image processing device to: when there is the first gaze coordinate, determine the search range to be within a certain range from a second gaze coordinate in the second image corresponding to the first gaze coordinate; and when there is no first gaze coordinate, determine the search range to be a predefined range. ([0144] “where X, Y, Z are the coordinate on the coordinate axis of the three-dimensional rectangular coordinate system, X.sub.gaze, Y.sub.gaze, Z.sub.gaze are the direction of the eye gaze in the world coordinate system, X.sub.face Y.sub.face Z.sub.face are the three-dimensional rectangular coordinate of the face, and t is the parameter.[0145] In a possible implementation, filter conditions can be set to set the eye gaze area. For example, a point in the space is taken and a circle is made with the distance from the point to the line of sight as the radius, the area within the circle can be the eye gaze area.[0146] In a possible implementation, the range of the eye gaze area can be set according to the visual angle of the eye.
For example, the horizontal field of view of the eye can be 180 degrees, and the eye gaze area can be determined according to the horizontal field of view of the eye.”, ¶[0144]-¶[0146] disclose setting a range of the eye gaze when a gaze coordinate cannot be determined.)

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Kim, Zhao, Lee and Park with Lu in order to set the search range when no coordinate is found. One skilled in the art would have been motivated to modify Kim, Zhao, Lee and Park in this manner in order to improve the accuracy of tracking the eye gaze in a 3D model. (Lu, Abstract)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAN D HOANG whose telephone number is (571)272-4344. The examiner can normally be reached Monday-Friday 8-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN M VILLECCO, can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAN HOANG/
Examiner, Art Unit 2661
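As an editor's illustration only (not part of the record), the claim 6 behavior the action characterizes — a narrow search window around the gaze-derived coordinate when a gaze coordinate exists, otherwise a predefined default range — could be sketched as follows; the range values and `radius` parameter are hypothetical.

```python
DEFAULT_RANGE = (0, 64)  # predefined disparity search range (hypothetical)

def search_range(gaze_x, image_width, radius=8):
    # When a gaze coordinate is available, restrict the search to a window
    # around the corresponding coordinate in the second image; otherwise
    # fall back to the predefined range (sketch of the claim 6 behavior
    # as described in the action).
    if gaze_x is None:
        return DEFAULT_RANGE
    return max(0, gaze_x - radius), min(image_width - 1, gaze_x + radius)

print(search_range(None, 640))  # prints (0, 64)
print(search_range(50, 640))    # prints (42, 58)
```

The clamping against `0` and `image_width - 1` keeps the window valid near image borders, which matters for feature points close to the edge of the second image.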

Prosecution Timeline

Apr 28, 2023
Application Filed
Jun 10, 2025
Non-Final Rejection — §103
Jul 09, 2025
Interview Requested
Aug 05, 2025
Examiner Interview Summary
Aug 05, 2025
Applicant Interview (Telephonic)
Aug 29, 2025
Response Filed
Nov 25, 2025
Final Rejection — §103
Jan 13, 2026
Request for Continued Examination
Jan 27, 2026
Response after Non-Final Action
Feb 06, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602835
POINT CLOUD DATA TRANSMISSION DEVICE, POINT CLOUD DATA TRANSMISSION METHOD, POINT CLOUD DATA RECEPTION DEVICE, AND POINT CLOUD DATA RECEPTION METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12602778
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
2y 5m to grant Granted Apr 14, 2026
Patent 12602918
LEARNING DATA GENERATING APPARATUS, LEARNING DATA GENERATING METHOD, AND NON-TRANSITORY RECORDING MEDIUM HAVING LEARNING DATA GENERATING PROGRAM RECORDED THEREON
2y 5m to grant Granted Apr 14, 2026
Patent 12592070
IMAGE PROCESSING APPARATUS
2y 5m to grant Granted Mar 31, 2026
Patent 12586364
SINGLE IMAGE CONCEPT ENCODER FOR PERSONALIZATION USING A PRETRAINED DIFFUSION MODEL
2y 5m to grant Granted Mar 24, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
74%
Grant Probability
93%
With Interview (+19.3%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 162 resolved cases by this examiner. Grant probability derived from career allow rate.
