Prosecution Insights
Last updated: April 19, 2026
Application No. 18/136,228

Video See-Through Augmented Reality

Final Rejection §103
Filed: Apr 18, 2023
Examiner: HE, WEIMING
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 4 (Final)
Grant Probability: 46% (Moderate)
OA Rounds: 5-6
To Grant: 3y 4m
With Interview: 60%

Examiner Intelligence

Career Allow Rate: 46%, i.e., grants 46% of resolved cases (190 granted / 410 resolved; -15.7% vs TC avg)
Interview Lift: +13.8% for resolved cases with interview (moderate, roughly +14% lift)
Typical Timeline: 3y 4m average prosecution; 40 applications currently pending
Career History: 450 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 15.0% (-25.0% vs TC avg)
Tech Center averages shown for comparison are estimates • Based on career data from 410 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 3/9/26 is being considered by the examiner.

Response to Amendment

The amendment filed on 2/24/26 has been entered and made of record. Claims 1, 9-11 and 13 are amended. Claims 2 and 14-15 are cancelled. Claims 1, 3-13 and 16-24 are pending.

Response to Arguments

Applicant’s arguments with respect to the rejections of independent claims 1, 9, 11 and 13 have been fully considered but they are moot because the arguments do not apply to the references being used in the current rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 9, 11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Rowell et al. (US 2019/0158813 A1) in view of MCCOMBE et al. (US 2022/0222842 A1) and Yoon et al. (US 11,233,986 B1), further in view of HARALD (WO2023049944A2).

As to Claim 1, Rowell teaches A method comprising: accessing an image captured by a see-through camera of a pair of stereoscopic see-through cameras of a video see-through augmented-reality (AR) system comprising (1) the pair of stereoscopic see-through cameras (2) a display and (3) a pair of display lenses for viewing the display (Rowell discloses “Scenes captured by 3D cameras can be used to produce virtual reality (VR) content… The visual experience is displayed on a computer screen or with a virtual reality headset (also referred to as head mounted display or HMD).” in [0003]. It is well-known that HMD can be a see-through AR system.);

generating, based on the modification to the image, a real-image transformation map for the camera that captured the accessed image, wherein the transformation map identifies frame-independent transformations to apply for rendering a real scene based on one or more subsequent images captured by that camera (Rowell discloses “One example rectification technique aligns a left and right stereo image pair in three dimensions (e.g., rotationally, vertically, and horizontally) using a set for rectification matrices produced by the image rectification system 123. A set of projection matrices is then used to generate two stereo views perceptible as a 3D image or video frame when viewed on a display 160” in [0043]; “In some embodiments, the stereo camera device can use an auto re-calibration process to generate calibration metadata out of the box using pre-determined baseline values for camera intrinsic calibration parameters and captured images or video frames… auto re-calibration processes can establish new and/or optimize baseline 3D calibration parameters in real time… Additionally, the auto re-calibration processes optimize stereoscopic calibration parameters for actual conditions encountered by users rather than generic factory conditions used in traditional manual calibration methods.” in [0136]; “Other non-limiting example auto recalibration processes determine re-calibration data by comparing portions of objects captured in stereoscopic images and video frames” in [0145]; “The auto re-calibration subsystem 1702 executes one or more auto re-calibration processes described above to transform image data included in captured stereo image frames to re-calibration data” in [0146]);

and storing, in a memory associated with the video see-through AR system, a copy of the transformation map for that system (Rowell discloses “In some embodiments, the set of stereoscopic calibration metadata 320 includes a rotation matrix 322 and a translation matrix 324. The rotation matrix 322 describes a rotational correction to align an image captured by one camera module to another image captured by another camera module so that the image planes of the left and right channels are on the same plane. The translation matrix 324 describes a translation operation that ensures the image frames from the left and right channels are vertically aligned.” in [0067]; “The calibration file(s) 903 are stored in memory and read by the data preprocessor 908 as part of one or more routines for determining real time calibration metadata” in [0109]; see also [0130].)

Rowell is silent on virtual camera. The combination of MCCOMBE further teaches the following limitations: determining a modification to the image comprising: an undistortion of the image based on a model of that see-through camera; a rectification for the image based on a model of the pair of stereoscopic see-through cameras; and a transformation of the image from a perspective of the see-through camera to a perspective of a virtual camera located at a viewing position relative to a display lens, from the pair of display lenses, corresponding to that see-through camera (Rowell discloses “To produce a 3D effect, images and video frames captured by calibrated camera modules 111-115 must be oriented and aligned using a rectification process… One example rectification technique aligns a left and right stereo image pair in three dimensions (e.g., rotationally, vertically, and horizontally) using a set for rectification matrices produced by the image rectification system 123” in [0043]; camera model in [0029, 0238]. MCCOMBE further discloses “This aspect comprises: receiving, as an input, a sample request into a set of respective camera images, ones of the sample requests comprising a UV coordinate into a respective camera image in Rectified, Undistorted (RUD) space and a weighting value; and generating color values defining a color to be displayed on a display device at a respective projected point of the display of the display device, as part of a reconstructed synthetic image drawn on the display of the display device” in [0113]; “The images sourced from cameras 1403 and 1404 are rectified to the epipolar plane” in [0264]; “If the camera images have been pre-transformed into RUD space (Rectified, Undistorted) beforehand, then the texture sample can be performed directly. If instead, the camera images are kept in their native URD space (Unrectified, Distorted), then the given incoming UV coordinate must have the affine rectification transform applied, followed by application of the polynomial lens distortion function, both of which are supplied by the calibration data of the camera system used to the capture the images” in [0421]; “1. Transformation on the camera epipolar plane… b. This can also be used to produce a reconstructed image with the same field of view as the actual/physical cameras, but shifted elsewhere on the epipolar plane. c. This transformation is relatively simple: the projected coordinate is simply the input vertex coordinate in RUD (Rectified, Undistorted) space… 2. Portal projection transformation… and reconstruct an image as it would appear on the display portal from the perspective of a virtual camera…” in [0376-0382]; see also virtual camera in [0009].)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Rowell with the teaching of MCCOMBE so as to generate a synthetic reconstructed image of a real object from the perspective of a pair of virtual cameras during a device calibration process.

However, the combination of Rowell and MCCOMBE doesn’t explicitly teach a pre-distortion of the image based on a model of the respective display lens corresponding to that see-through camera; and corrects for defects particular to that see-through camera and respective display lens corresponding to that see-through camera. Yoon further discloses “Distortions caused by, e.g., optical elements (e.g., lenses) of a HMD can deform images presented by the HMD and can impair user experience” in C1L19-21; “Using the images captured by the camera assembly, the controller measures distortion of one or more lenses in the HMD under test… In some embodiments, the measured distortion may be used to pre-distort images presented by the HMD under test to offset certain types of distortion introduced by optical elements of the HMD under test” in C2L1-11; “Using the MTF chart, the distortion measurement engine 367 can measure distortion in the lenses of the HMD under test 310. In some embodiments, the distortion measurement engine 367 takes remedial actions based on measured distortion. For example, the distortion measurement engine 367 pre-distorts the image of the display to account for some or all of the measured distortion” in C8L23-30; see also Fig. 4 of Yoon [figure omitted].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Rowell and MCCOMBE with the teaching of Yoon so as to characterize the distortion of one or more lenses in the HMD and use the measured distortion data to pre-distort images presented by the HMD to offset certain types of lens distortion.

In response to the limitations “generating, based on the model of the respective display lens, a frame-independent virtual-content transformation map only for virtual content generated by the video see-through AR system; and storing a copy of the virtual-content transformation map for that system”, HARALD discloses “Therefore, in this design variant, it is proposed to perform a calibration between the camera and an output device (e.g., a monitor). B. using data glasses)… On the one hand, he can change his own position and direction of view; on the other hand, he can manipulate or transform the first video recording captured by the camera by rotating, enlarging/reducing and/or distorting it. If the job seeker achieves the desired match, he confirms it. If a transformation of the first video recording is necessary to achieve the match, then this transformation will subsequently also be applied to the display of the virtual object. The virtual object is then perceived by the service seeker at the designated position” in [0196], see also [0197, 0553, 0555]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Rowell, MCCOMBE and Yoon with the teaching of HARALD so as to perform a calibration between a camera and a headset by transforming the virtual object so that it is displayed overlaid on the real object.

Claim 9 recites similar limitations as claim 1 but in a computer readable storage media form. Therefore, the same rationale used for claim 1 is applied. Claim 11 recites similar limitations as claim 1 but in a system form. Therefore, the same rationale used for claim 1 is applied. Claim 13 is rejected based upon similar rationale as Claim 1. Further Rowell discloses rendering stereo images on a display using the updated pixel position(s) and/or calibration parameter(s) in Fig. 18.

Claims 1, 3, 5-13, 16 and 18-24 are rejected under 35 U.S.C. 103 as being unpatentable over Rowell et al. (US 2019/0158813 A1) in view of MCCOMBE et al. (US 2022/0222842 A1) and Yoon et al. (US 11,233,986 B1), further in view of Azimi et al. (US 2021/0142508 A1).

As to Claim 1, Rowell teaches A method comprising: accessing an image captured by a see-through camera of a pair of stereoscopic see-through cameras of a video see-through augmented-reality (AR) system comprising (1) the pair of stereoscopic see-through cameras (2) a display and (3) a pair of display lenses for viewing the display (Rowell discloses “Scenes captured by 3D cameras can be used to produce virtual reality (VR) content… The visual experience is displayed on a computer screen or with a virtual reality headset (also referred to as head mounted display or HMD).” in [0003]. It is well-known that HMD can be a see-through AR system.);

generating, based on the modification to the image, a real-image transformation map for the camera that captured the accessed image, wherein the transformation map identifies frame-independent transformations to apply for rendering a real scene based on one or more subsequent images captured by that camera (Rowell discloses “One example rectification technique aligns a left and right stereo image pair in three dimensions (e.g., rotationally, vertically, and horizontally) using a set for rectification matrices produced by the image rectification system 123. A set of projection matrices is then used to generate two stereo views perceptible as a 3D image or video frame when viewed on a display 160” in [0043]; “In some embodiments, the stereo camera device can use an auto re-calibration process to generate calibration metadata out of the box using pre-determined baseline values for camera intrinsic calibration parameters and captured images or video frames… auto re-calibration processes can establish new and/or optimize baseline 3D calibration parameters in real time… Additionally, the auto re-calibration processes optimize stereoscopic calibration parameters for actual conditions encountered by users rather than generic factory conditions used in traditional manual calibration methods.” in [0136]; “Other non-limiting example auto recalibration processes determine re-calibration data by comparing portions of objects captured in stereoscopic images and video frames” in [0145]; “The auto re-calibration subsystem 1702 executes one or more auto re-calibration processes described above to transform image data included in captured stereo image frames to re-calibration data” in [0146]);

and storing, in a memory associated with the video see-through AR system, a copy of the transformation map for that system (Rowell discloses “In some embodiments, the set of stereoscopic calibration metadata 320 includes a rotation matrix 322 and a translation matrix 324. The rotation matrix 322 describes a rotational correction to align an image captured by one camera module to another image captured by another camera module so that the image planes of the left and right channels are on the same plane. The translation matrix 324 describes a translation operation that ensures the image frames from the left and right channels are vertically aligned.” in [0067]; “The calibration file(s) 903 are stored in memory and read by the data preprocessor 908 as part of one or more routines for determining real time calibration metadata” in [0109]; see also [0130].)

Rowell is silent on virtual camera.
The combination of MCCOMBE further teaches the following limitations: determining a modification to the image comprising: an undistortion of the image based on a model of that see-through camera; a rectification for the image based on a model of the pair of stereoscopic see-through cameras; and a transformation of the image from a perspective of the see-through camera to a perspective of a virtual camera located at a viewing position relative to a display lens, from the pair of display lenses, corresponding to that see-through camera (Rowell discloses “To produce a 3D effect, images and video frames captured by calibrated camera modules 111-115 must be oriented and aligned using a rectification process… One example rectification technique aligns a left and right stereo image pair in three dimensions (e.g., rotationally, vertically, and horizontally) using a set for rectification matrices produced by the image rectification system 123” in [0043]; camera model in [0029, 0238]. MCCOMBE further discloses “This aspect comprises: receiving, as an input, a sample request into a set of respective camera images, ones of the sample requests comprising a UV coordinate into a respective camera image in Rectified, Undistorted (RUD) space and a weighting value; and generating color values defining a color to be displayed on a display device at a respective projected point of the display of the display device, as part of a reconstructed synthetic image drawn on the display of the display device” in [0113]; “The images sourced from cameras 1403 and 1404 are rectified to the epipolar plane” in [0264]; “If the camera images have been pre-transformed into RUD space (Rectified, Undistorted) beforehand, then the texture sample can be performed directly. If instead, the camera images are kept in their native URD space (Unrectified, Distorted), then the given incoming UV coordinate must have the affine rectification transform applied, followed by application of the polynomial lens distortion function, both of which are supplied by the calibration data of the camera system used to the capture the images” in [0421]; “1. Transformation on the camera epipolar plane… b. This can also be used to produce a reconstructed image with the same field of view as the actual/physical cameras, but shifted elsewhere on the epipolar plane. c. This transformation is relatively simple: the projected coordinate is simply the input vertex coordinate in RUD (Rectified, Undistorted) space… 2. Portal projection transformation… and reconstruct an image as it would appear on the display portal from the perspective of a virtual camera…” in [0376-0382]; see also virtual camera in [0009].)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Rowell with the teaching of MCCOMBE so as to generate a synthetic reconstructed image of a real object from the perspective of a pair of virtual cameras during a device calibration process.

However, the combination of Rowell and MCCOMBE doesn’t explicitly teach a pre-distortion of the image based on a model of the respective display lens corresponding to that see-through camera; and corrects for defects particular to that see-through camera and respective display lens corresponding to that see-through camera. Yoon further discloses “Distortions caused by, e.g., optical elements (e.g., lenses) of a HMD can deform images presented by the HMD and can impair user experience” in C1L19-21; “Using the images captured by the camera assembly, the controller measures distortion of one or more lenses in the HMD under test… In some embodiments, the measured distortion may be used to pre-distort images presented by the HMD under test to offset certain types of distortion introduced by optical elements of the HMD under test” in C2L1-11; “Using the MTF chart, the distortion measurement engine 367 can measure distortion in the lenses of the HMD under test 310. In some embodiments, the distortion measurement engine 367 takes remedial actions based on measured distortion. For example, the distortion measurement engine 367 pre-distorts the image of the display to account for some or all of the measured distortion” in C8L23-30; see also Fig. 4 of Yoon [figure omitted].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Rowell and MCCOMBE with the teaching of Yoon so as to characterize the distortion of one or more lenses in the HMD and use the measured distortion data to pre-distort images presented by the HMD to offset certain types of lens distortion.

In response to the limitations “generating, based on the model of the respective display lens, a frame-independent virtual-content transformation map only for virtual content generated by the video see-through AR system; and storing a copy of the virtual-content transformation map for that system”, Azimi discloses “Accordingly, in some implementations, a calibration procedure may be used to compute a transformation function that enables the OST-HMD to represent virtual objects in the same coordinate system as real-world objects (e.g., by aligning the display coordinate space in which the OSTHMD renders the virtual objects with the real-world coordinate space tracked by the positional tracking device). For example, given a real-world cube and a virtual cube to be overlaid on the real-world cube, the transformation function may be used to move, warp, and/or otherwise adjust a rendering of the virtual cube such that the virtual cube and the real-world cube are aligned” in [0018]; “More particularly, in implementation(s) 200, a calibration platform may perform a calibration procedure to solve a transformation function T(•) that provides a mapping between three-dimensional points in a real-world coordinate system tracked by a positional tracking device and corresponding points in a three-dimensional virtual scene visualized by an HMD (e.g., an OST-HMD) worn by a user.” in [0027]; “Accordingly, the computed transformation, T, may effectively adjust the default internal projection operators used in the OST-HMD to correct misalignments in visualizing virtual objects with respect to a real scene. In other words, the computed elements in the effective projection operators may adjust an original or default calibration associated with the OST-HMD (e.g., with respect to aspect ratio, focal length, extrinsic transformation, and/or the like)” in [0047]; see also [0065, 0073, 0090, 0100].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Rowell, MCCOMBE and Yoon with the teaching of Azimi so as to perform a calibration between a camera and a headset by transforming the virtual object so that it is displayed overlaid on the real object.

As to Claim 3, Rowell in view of MCCOMBE, Yoon and Azimi teaches The method of Claim 1, wherein the rectification for the image is further based on one or more of: a model of the pair of stereoscopic cameras of the video see-through AR system; or a model of the virtual camera located at an eye position for viewing content on the video see-through AR system (Rowell discloses “Rectification matrices derived from the intrinsic and extrinsic parameters are used to rectify the right and left images” in [0004]; “The stereoscopic calibration metadata 320 includes a mapping of coordinates between the right and left image channels. From this set of coordinate points, projection matrices, rectification matrices, and a distortion relationship between one lens relative to another lens can be determined. The distortion relationship is used to correct lens distortion and the projection and rectification matrices are used to rectify the images” in [0065]; see also [0085, 0095, 0220]. MCCOMBE, [0009, 0163].)

As to Claim 5, Rowell in view of MCCOMBE, Yoon and Azimi teaches The method of Claim 1, wherein the transformation of the image from a perspective of the see-through camera to a perspective of the virtual camera located at the viewing position relative to the display lens, from the pair of display lenses, corresponding to that see-through camera comprises a transformation of a coordinate system of the image from a viewpoint of the camera to a viewpoint of a virtual camera located at an eye position for viewing content on the video see-through AR system based on: (1) a model of the virtual camera that is based on one or more parameters of the virtual camera and (2) a model of a rotation and a translation of the virtual camera relative to the camera of the pair of stereoscopic cameras (Rowell discloses rotation matrix and translation matrix in [0067, 0084]. MCCOMBE also discloses “and generating, based on the calibration data, image data representative of the camera images and at least one native disparity map, a synthetic, reconstructed image of the scene from the perspective of a virtual camera having a selected virtual camera position, the selected virtual camera position being unconstrained by the physical position of any of the cameras having an actual view of the scene” in [0009]. Yoon also discloses “And different positions (e.g., orientations) of the characterization camera could correspond to different gaze angles of a human eye. Additionally, in some embodiments where there are two characterization cameras to mimic the left and right eyes of a user, the two characterization cameras are able to translate relative to each other to, e.g., measure effects of inter-pupillary distance (IPD) on the device under test… In some embodiments, a characterization camera may translate away from or closer to the device under test” in C5L1-13.)

As to Claim 6, Rowell in view of MCCOMBE, Yoon and Azimi teaches The method of Claim 1, wherein determining a modification to the image further comprises determining, using a model of a display lens of the video see-through AR system, a correction to a distortion of the image caused by that lens (Rowell discloses “From this set of coordinate points, projection matrices, rectification matrices, and a distortion relationship between one lens relative to another lens can be determined. The distortion relationship is used to correct lens distortion and the projection and rectification matrices are used to rectify the images” in [0065].) Claim 7 is rejected based upon similar rationale as Claims 3 & 5-6.

As to Claim 8, Rowell in view of MCCOMBE, Yoon and Azimi teaches The method of Claim 1, wherein determining the modification to the image further comprises: determining a mesh representing a plurality of coordinates of the image; and determining, based on each of one or more models of one or more components of the video see-through AR system, a modification to the mesh (Rowell discloses “In one example interpolation method for determining calibration parameters for camera systems having two or more camera settings, a quadratic or triangular mesh grid containing values for calibration parameters mapped to calibration points associated with two or more camera settings is assembled from reading calibration file(s). The mesh grid may comprise a multi-dimensional space with one camera setting along each axis or dimension. The position of the real time camera setting values within the mesh grid is then located along with the three or four calibration points having the most proximate location within the mesh grid space (i.e. the most similar camera settings)” in [0092]; “The real time camera settings are used to locate the area of the mesh grid containing the real time camera position and the most proximate calibration points” in [0101]; “Interpolation processes leveraging a mesh grid having calibration points and a real time camera position arranged by their camera setting values may use the data preprocessor 908 to construct the mesh grid space” in [0111], see also [0130].)

Claim 9 recites similar limitations as claim 1 but in a computer readable storage media form. Therefore, the same rationale used for claim 1 is applied. Claim 10 is rejected based upon similar rationale as Claim 7. Claim 11 recites similar limitations as claim 1 but in a system form. Therefore, the same rationale used for claim 1 is applied. Claim 12 is rejected based upon similar rationale as Claim 7. Claim 13 is rejected based upon similar rationale as Claim 1. Further Rowell discloses rendering stereo images on a display using the updated pixel position(s) and/or calibration parameter(s) in Fig. 18. Claim 16 is rejected based upon similar rationale as Claim 3. Claim 18 is rejected based upon similar rationale as Claim 5. Claim 19 is rejected based upon similar rationale as Claim 6. Claim 20 is rejected based upon similar rationale as Claim 7. Claim 21 is rejected based upon similar rationale as Claim 8. Claim 22 is rejected based upon similar rationale as Claim 5. Claim 23 is rejected based upon similar rationale as Claim 6. Claim 24 is rejected based upon similar rationale as Claim 8.

Claims 4 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Rowell in view of MCCOMBE, Yoon and Azimi, further in view of Tankovich et al. (US 2019/0287259 A1).
As to Claim 4, Rowell in view of MCCOMBE, Yoon and Azimi teaches The method of Claim 3, wherein the model of the pair of stereoscopic cameras of the video see-through AR system is used for (1) transforming stereo image pair captured by the stereo camera pair onto a plane and (2) making corresponding epipolar lines in the stereo image pair collinear; and the model of virtual camera maps image coordinates onto an image plane shared by the stereo camera pair (Rowell discloses “Using re-calibration data 1708 describing calibration parameters for one or more camera modules, the rendering system 141 corrects calibration errors by projecting right and left stereo image frames on a rectified image frame having image planes of the left and right stereo image frames on a common image plane oriented in an alignment that satisfies an epipolar geometry” in [0146]; “To satisfy an epipolar geometry, the right and left stereo image frames may be aligned in vertical, horizontal, rotational, and/or scalar directions. In most examples, the rotation matrix is responsible for mapping the image planes of the left and right frames to the common rectification image plane. One or more projection matrices are used to ensure that the left and right images are aligned and satisfy an epipolar geometry” in [0153]. Rowell is silent on epipolar lines in the stereo image pair collinear. Tankovich further discloses “Given the pair of stereo images 110, 114, rectification determines a transformation of each image plane such that pairs of conjugate epipolar lines become collinear and parallel to one of the image axes. Accordingly, for each pixel p=(x, y) in one image (e.g., the left image 110) has a correspondence match to its corresponding pixel p' in the other image of its stereo pair (e.g., right image 114) which lies on the same y-axis but at a different x-axis coordinate” in [0020].)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Rowell, MCCOMBE, Yoon and Azimi with the teaching of Tankovich so that each pixel in one image has a correspondence match to its corresponding pixel in the other image of its stereo pair (Tankovich, [0020]). Claim 17 is rejected based upon similar rationale as Claim 4.

Conclusion

THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEIMING HE whose telephone number is (571)270-1221. The examiner can normally be reached Monday-Friday, 8:30am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached on 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Weiming He/
Primary Examiner, Art Unit 2611
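The rejection turns on a generic camera-calibration pipeline: undistort each see-through camera image using a per-camera model, rectify the stereo pair using a model of the pair, and reuse the resulting per-pixel mapping for every subsequent frame. A minimal sketch of that generic pipeline in Python with OpenCV follows, offered only to make the vocabulary concrete. All intrinsics, distortion coefficients, and the baseline below are placeholder assumptions; the sketch is neither the applicant's claimed method nor any cited reference's actual implementation. The further warp to a virtual camera at the eye position and the display-lens pre-distortion would be additional stages folded into the same stored map (see the second sketch below).

```python
# Illustrative sketch only: a generic undistort + stereo-rectify remap built once
# from calibration data and re-applied to subsequent frames. Parameter values are
# placeholders, not taken from the application or the cited references.
import cv2
import numpy as np

# Assumed pinhole intrinsics and distortion coefficients for a 1280x1024 sensor.
img_size = (1280, 1024)                      # (width, height)
K_left = np.array([[800.0, 0.0, 640.0],
                   [0.0, 800.0, 512.0],
                   [0.0, 0.0, 1.0]])
K_right = K_left.copy()
dist_left = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])    # k1, k2, p1, p2, k3
dist_right = np.array([-0.24, 0.07, 0.0, 0.0, 0.0])

# Assumed extrinsics between the two see-through cameras (65 mm baseline).
R = np.eye(3)
T = np.array([[-0.065], [0.0], [0.0]])

# Rectification: maps both image planes onto a common plane so that
# corresponding epipolar lines become horizontal and share the same row.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
    K_left, dist_left, K_right, dist_right, img_size, R, T)

# Frame-independent lookup maps (one pair per camera), computed once and stored.
map_lx, map_ly = cv2.initUndistortRectifyMap(
    K_left, dist_left, R1, P1, img_size, cv2.CV_32FC1)
map_rx, map_ry = cv2.initUndistortRectifyMap(
    K_right, dist_right, R2, P2, img_size, cv2.CV_32FC1)
np.savez("seethrough_remap.npz", lx=map_lx, ly=map_ly, rx=map_rx, ry=map_ry)

# Per-frame application: every subsequent captured frame is warped through the
# same stored maps, so the model evaluation is not repeated per frame.
def rectify_pair(frame_left, frame_right):
    left = cv2.remap(frame_left, map_lx, map_ly, cv2.INTER_LINEAR)
    right = cv2.remap(frame_right, map_rx, map_ry, cv2.INTER_LINEAR)
    return left, right
```

Precomputing the maps once and replaying them with cv2.remap on each captured frame is what makes the stored transformation frame-independent in the sense the claim language suggests.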
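The display-lens pre-distortion discussed with respect to Yoon, and the mesh of image coordinates recited in claim 8, can likewise be pictured as a dense coordinate grid composed with the rectification map so that a single per-pixel lookup is stored and applied at render time. Again a hedged sketch only: the radial polynomial and its coefficients are assumptions chosen for illustration, and the file name seethrough_remap.npz simply carries over from the previous sketch.

```python
# Illustrative sketch only: representing a display-lens pre-distortion as a
# coordinate mesh and composing it with an existing rectification map so that a
# single stored per-pixel map is applied at render time. The radial model and
# its coefficients are assumptions for illustration, not measured values.
import cv2
import numpy as np

def predistortion_mesh(width, height, k1=-0.18, k2=0.03):
    """Build a dense (x, y) sampling mesh that offsets an assumed radial
    magnification of the display lens by sampling the source image at
    radially scaled positions."""
    xs, ys = np.meshgrid(np.arange(width, dtype=np.float32),
                         np.arange(height, dtype=np.float32))
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    nx, ny = (xs - cx) / cx, (ys - cy) / cy           # normalized coordinates
    r2 = nx * nx + ny * ny
    scale = 1.0 + k1 * r2 + k2 * r2 * r2              # assumed radial polynomial
    return ((cx + nx * scale * cx).astype(np.float32),
            (cy + ny * scale * cy).astype(np.float32))

def compose_maps(outer_x, outer_y, inner_x, inner_y):
    """Chain two per-pixel maps: for each output pixel, look up where the outer
    map samples, then where the inner map samples from the raw frame."""
    comp_x = cv2.remap(inner_x, outer_x, outer_y, cv2.INTER_LINEAR)
    comp_y = cv2.remap(inner_y, outer_x, outer_y, cv2.INTER_LINEAR)
    return comp_x, comp_y

# Example: fold the lens pre-distortion into the stored rectification map for the
# left camera, so subsequent frames need only one cv2.remap call per eye.
maps = np.load("seethrough_remap.npz")                # from the previous sketch
pd_x, pd_y = predistortion_mesh(1280, 1024)
full_x, full_y = compose_maps(pd_x, pd_y, maps["lx"], maps["ly"])
# display_ready = cv2.remap(raw_left_frame, full_x, full_y, cv2.INTER_LINEAR)
```

Composing the warps offline keeps the per-frame cost to one remap call per eye, which is the practical appeal of a stored, frame-independent transformation map.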

Prosecution Timeline

Apr 18, 2023
Application Filed
Mar 04, 2025
Non-Final Rejection — §103
May 02, 2025
Interview Requested
May 15, 2025
Examiner Interview Summary
May 15, 2025
Applicant Interview (Telephonic)
Jun 09, 2025
Response Filed
Jun 15, 2025
Final Rejection — §103
Jul 16, 2025
Interview Requested
Aug 11, 2025
Applicant Interview (Telephonic)
Aug 11, 2025
Examiner Interview Summary
Sep 17, 2025
Request for Continued Examination
Sep 18, 2025
Response after Non-Final Action
Nov 19, 2025
Non-Final Rejection — §103
Nov 19, 2025
Examiner Interview (Telephonic)
Dec 16, 2025
Interview Requested
Jan 07, 2026
Examiner Interview Summary
Jan 07, 2026
Applicant Interview (Telephonic)
Feb 24, 2026
Response Filed
Mar 17, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567135
MULTIMEDIA PLAYBACK MONITORING SYSTEM AND METHOD, AND ELECTRONIC APPARATUS
2y 5m to grant • Granted Mar 03, 2026
Patent 12561876
System and method for an audio-visual avatar creation
2y 5m to grant • Granted Feb 24, 2026
Patent 12514672
System, Method And Software Program For Aiding In Positioning Of Objects In A Surgical Environment
2y 5m to grant • Granted Jan 06, 2026
Patent 12494003
AUTOMATIC LAYER FLATTENING WITH REAL-TIME VISUAL DEPICTION
2y 5m to grant • Granted Dec 09, 2025
Patent 12468949
SYSTEMS AND METHODS FOR FEW-SHOT TRANSFER LEARNING
2y 5m to grant • Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 46%
With Interview: 60% (+13.8%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 410 resolved cases by this examiner. Grant probability derived from career allow rate.
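
As a quick sanity check, the headline figures appear to follow directly from the career counts shown above (treating the interview lift as additive percentage points, which is an assumption about how the dashboard computes it):

190 granted / 410 resolved ≈ 46.3%, displayed as 46%
46.3% + 13.8 points ≈ 60.1%, displayed as 60% with interview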
