Prosecution Insights
Last updated: April 19, 2026
Application No. 17/991,576

DETERMINING POINT SPREAD FUNCTION FROM CONSECUTIVE IMAGES

Non-Final OA §103
Filed
Nov 21, 2022
Examiner
GEBRESLASSIE, WINTA
Art Unit
2677
Tech Center
2600 — Communications
Assignee
Varjo Technologies OY
OA Round
2 (Non-Final)
76%
Grant Probability
Favorable
2-3
OA Rounds
2y 5m
To Grant
99%
With Interview

Examiner Intelligence

Grants 76% — above average
76%
Career Allow Rate
101 granted / 133 resolved
+13.9% vs TC avg
Strong +25% interview lift
+24.7%
Interview Lift
resolved cases with interview
Typical timeline
2y 5m
Avg Prosecution
53 currently pending
Career history
186
Total Applications
across all art units

Statute-Specific Performance

§101
3.3%
-36.7% vs TC avg
§103
66.4%
+26.4% vs TC avg
§102
16.8%
-23.2% vs TC avg
§112
5.0%
-35.0% vs TC avg
Percentages compared against Tech Center average estimates • Based on career data from 133 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claims 1-17 are still pending for consideration.

Response to Arguments

Applicant’s arguments, see the “Remarks” filed on Nov 07, 2025, have been considered. Applicant on page 1 asserts “Applicant respectfully submits that the combination of Mizukura and Mar fails to disclose or suggest; determining a point spread function for the at least one camera, based on a correlation between pixels of at least the part of the first image and respective pixels of the corresponding part of the second image, and a first focussing distance range covered by a depth of field of the at least one camera around the first focussing distance”.

Response: Upon further review, the 103 rejection was not sufficiently supported with respect to this limitation. In particular, secondary reference Mar does not sufficiently teach determining a point spread function for the at least one camera, based on a correlation between pixels of at least the part of the first image and respective pixels of the corresponding part of the second image, and a first focussing distance range covered by a depth of field of the at least one camera around the first focussing distance. While Mar discusses blur characterization and estimation of relative point spread functions between image patches, it does not teach or suggest using a camera-determined PSF as an image-reconstruction kernel to correct a separate image for extended depth of field. The cited disclosures treat PSF estimation as an analytical or measurement tool for depth or blur characterization, rather than as an operational component applied to reconstruct or correct an image. Moreover, the references acknowledge limitations and inaccuracies associated with PSF assumptions (e.g., Gaussian PSFs, windowing artifacts), further underscoring that they do not contemplate PSF-based image correction. The reference does not show how a PSF determined from the disclosed techniques would be applied to a third image, nor how extended depth-of-field reconstruction would be performed using such a PSF. Accordingly, applicant’s argument is persuasive.

Applicant’s arguments have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Ogura (US 20150281554 A1).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-3, 5-6, 8-11, 13-14, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Mizukura et al. (US 20190158803 A1) in view of Ogura (US 20150281554 A1).

Regarding claim 1, Mizukura et al. teaches a computer-implemented method comprising: obtaining at least one sequence of images of a real-world environment captured consecutively using at least one camera (see para [0025]; “FIG. 11 is a schematic illustration showing an example of preparing two sheets of images having been photographed by shifting in-focus positions by photographing by shifting time around each frame so as to change an in-focus position” Note: a camera at shifted focus distances captures multiple stereo images, so the “real-world environment” language is met), wherein an optical focus of the at least one camera is switched between different focussing distances whilst capturing consecutive images of said sequence for a given pair of a first image and a second image that are captured consecutively in said sequence by adjusting the optical focus of the at least one camera at a first focussing distance and a second focussing distance, respectively (see para [0028]; “performing image processing to make a state such that a difference between left and right images becomes as small as possible except a difference in blur”, see also [0074]; “a right image A and a left image B that are different in focusing position, in Step S10 of FIG. 2. As shown in FIG. 3, in the right image A, a photographic subject a at the center is in-focus, and a photographic subject b at the periphery (background) is blurred. On the other hand, in the right image B, a photographic subject a at the center is blurred, and a photographic subject b at the periphery is in-focus”, and para [0103]; “in the vicinity of a central photographic subject, the image information of the right image A is almost used, and with regard to the background, the image information of the left image B is used” Note: distinguishes sharp and blurred regions across paired images), and for a third image of the real-world environment captured by adjusting the optical focus of the at least one camera at a third focussing distance (see para [0091]; “as shown in FIG. 11, in the case of photographing by changing in-focus positions A and B while shifting time for each frame, it becomes possible to prepare two sheets of images having been photographed by shifting the in-focus position” Note: capturing multiple images while switching focus positions necessarily includes images beyond the first two; any subsequent image in the sequence constitutes the claimed third image), applying an extended depth-of-field correction to at least one segment of the third image that is out of focus by using the point spread function determined for the at least one camera (see para [0085]; “The depth-of-field extension correction processing section 220 performs the EDoF processing on the basis of distance information and PSF information. At this time, the parallax information and the distance information may be calculated on the camera head 100 side, or may be calculated on the depth-of-field-extension-correction-processing-section 220 side. In the case where the information in FIG. 6 and FIG. 7 has been acquired, the amount of blur corresponding to a depth position can be known. Accordingly, by performing the reverse PSF filtering processing (deconvolution) to a blurred portion in a captured image, it is possible to obtain an EDoF image in which the blur has been removed”, see also para [0010]; “creates a depth-of-field extended image by synthesizing the image for a left eye and the image for a right eye in each of which the depth-of-field has been extended”).

However, Mizukura et al. does not teach assuming that at least a part of the first image is in focus, whilst a corresponding part of the second image is out of focus; and determining a point spread function for the at least one camera, based on a correlation between pixels of at least the part of the first image and respective pixels of the corresponding part of the second image and a first focussing distance range covered by a depth of field of the at least one camera around the first focussing distance.

In the same field of endeavor, Ogura teaches assuming that at least a part of the first image is in focus, whilst a corresponding part of the second image is out of focus (see para [0032]; “DFD processor 161 is disposed in image processor 160, and performs the DFD calculation to produce a depth map. To be more specific, DFD processor 161 uses two images: observed image PA and reference image PB having different defocusing amounts” Note: this inherently requires one image region to be more in focus than the other), and determining a point spread function for the at least one camera, based on a correlation between pixels of at least the part of the first image and respective pixels of the corresponding part of the second image (see para [0016]; “A number of methods for measuring an object distance, a distance from an image-capturing apparatus to an object includes a depth from Defocus (DFD) method that utilizes correlation values of defocusing amounts generated in image captured with a camera. In general, a defocusing amount is uniquely determined for each image-capturing apparatus in response to a relation between a focal position and the object distance. In the DFD method utilizing the above characteristics, two images having different defocus amounts are produced, and the object distance is measured based on a point-spread function (PSF) and a difference in the defocusing amounts”, see also para [0036]; “DFD processor 161 produces plural observed pixels CA by convolutions of plural PSFs with observed pixels SA”, and para [0038]; “DFD processor 161 then compares observed pixels CA1 to CA16 with reference pixel SB, and selects observed pixel CAn that has the smallest difference from reference pixel SB among observed pixels CA1 to CA16 …DFD processor 161 determines the object distance corresponding to the point spread function for convolution”), and a first focussing distance range covered by a depth of field of the at least one camera around the first focussing distance (see para [0032]; “To be more specific, DFD processor 161 uses two images: observed image PA and reference image PB having different defocusing amounts produced intentionally by changing focal positions. DFD processor 161 produces the depth map based on observed image PA, reference image PB, and point spread functions (PSFs). The depth map indicates object distances at respective ones of pixels of observed image PA (reference image PB)”).
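For orientation only, and not part of the record: the claim 1 mechanism that the rejection maps to Ogura (selecting a PSF by correlating an in-focus patch with its out-of-focus counterpart) and to Mizukura (applying that PSF via "reverse PSF filtering processing (deconvolution)") can be sketched roughly as below. The function names, the Gaussian-PSF parameterization, the candidate sigma grid, and the Wiener regularization constant are illustrative assumptions, not anything taught by the cited references; a pillbox PSF or a per-depth PSF lookup could be substituted without changing the overall flow.

# Hedged sketch: (1) pick the defocus PSF that best maps a sharp patch onto the
# corresponding defocused patch, (2) use it as a deconvolution kernel on a
# blurred segment of another image. Illustrative only; all names are assumed.
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_psf_sigma(sharp_patch, blurred_patch, candidate_sigmas):
    """Return the Gaussian-PSF sigma whose blur of the sharp patch best
    matches the blurred patch (smallest sum of squared differences)."""
    errors = []
    for sigma in candidate_sigmas:
        simulated = gaussian_filter(sharp_patch, sigma=sigma)
        errors.append(np.sum((simulated - blurred_patch) ** 2))
    return candidate_sigmas[int(np.argmin(errors))]

def gaussian_psf(shape, sigma):
    """Build a normalized 2-D Gaussian PSF the same size as the image."""
    h, w = shape
    y, x = np.mgrid[:h, :w]
    cy, cx = h // 2, w // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(blurred_segment, sigma, k=1e-2):
    """Frequency-domain Wiener deconvolution of a blurred segment using the
    estimated Gaussian PSF; k regularizes noise amplification."""
    psf = gaussian_psf(blurred_segment.shape, sigma)
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred_segment)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# Example with synthetic data standing in for the first/second/third images.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = gaussian_filter(sharp, sigma=2.0)
sigma_hat = estimate_psf_sigma(sharp, blurred, np.linspace(0.5, 4.0, 15))
third_segment = gaussian_filter(rng.random((64, 64)), sigma=sigma_hat)
corrected = wiener_deconvolve(third_segment, sigma_hat)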
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the general method of observation of Mizukura et al., which switches between a stereoscopic vision image and a depth-of-field extended image correspondingly to a situation, in view of the image-capturing apparatus of Ogura that performs a convenient focusing operation, in order to improve blur modeling accuracy (see para [0016]).

Regarding claim 2, the rejection of claim 1 is incorporated herein. Ogura in the combination further teaches wherein the step of assuming comprises assuming that an entirety of the first image is in focus, whilst an entirety of the second image is out of focus, and wherein the point spread function is determined based on a correlation between pixels of the first image and respective pixels of the second image, and the first focussing distance range (see para [0016]; “two images having different defocus amounts are produced, and the object distance is measured based on a point-spread function (PSF) and a difference in the defocusing amounts. The image-capturing apparatus in accordance with this embodiment measures the object distance by utilizing the DFD calculation to perform an auto-focus control”, see also para [0032]; “To be more specific, DFD processor 161 uses two images: observed image PA and reference image PB having different defocusing amounts produced intentionally by changing focal positions. DFD processor 161 produces the depth map based on observed image PA, reference image PB, and point spread functions (PSFs). The depth map indicates object distances at respective ones of pixels of observed image PA (reference image PB)”).

Regarding claim 3, the rejection of claim 1 is incorporated herein. Mizukura et al. in the combination further teach obtaining a plurality of depth maps captured corresponding to the images in said sequence (see para [0082]; “In this method, used are spatial distance information (depth map: depth map) as shown in FIG. 6 and the change characteristic, depending on a distance, of a PSF (Point Spread Function) showing how a lens blurs as shown in FIG. 7”, see also para [0084]; “Moreover, FIG. 7 shows a situation that blur occurs correspondingly to a depth relative to an in-focus position, and as a distance from the in-focus position increases, blur becomes large. As shown in FIG. 7, the amount of blur according to a depth can be approximated with a blur function by Pill Box Function and a blur function by a two-dimensional Gauss function”). Ogura in the combination further teaches identifying at least one image segment of the first image and a corresponding image segment of the second image in which the at least one image segment of the first image is in focus whilst the corresponding image segment of the second image is out of focus, wherein the step of determining the point spread function is performed, based on a correlation between pixels of the at least one image segment of the first image and respective pixels of the corresponding image segment of the second image, and respective optical depths in at least one segment of a first depth map corresponding to the at least one image segment of the first image, wherein said part of the first image comprises the at least one image segment of the first image (see para [0036]; “DFD processor 161 produces plural observed pixels CA by convolutions of plural PSFs with observed pixels SA. DFD processor 161 compares plural observed pixels CA with reference pixels SB located at the same coordinates as pixels CA on the image”, see also para [0032]; “observed image PA and reference image PB having different defocusing amounts produced intentionally by changing focal positions. DFD processor 161 produces the depth map based on observed image PA, reference image PB, and point spread functions (PSFs). The depth map indicates object distances at respective ones of pixels of observed image PA (reference image PB)” Note: the reference determines depth-map and corresponding PSF values at localized pixel regions across the image plane, which necessarily constitutes identifying image segments where the focus condition differs between images).
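As a rough aid for reading the claim 3 mapping, and not taken from the record: Mizukura's quoted FIG. 7 discussion describes blur that grows with distance from the in-focus depth and is approximated by a pillbox or a two-dimensional Gaussian function. A minimal sketch of such a depth-dependent kernel is below; the thin-lens circle-of-confusion formula and every parameter name are standard textbook assumptions of ours, not values from the reference.

# Hedged sketch: depth-dependent blur kernel (pillbox or Gaussian), with the
# blur radius growing with distance from the focused depth. Illustrative only.
import numpy as np

def coc_radius_px(depth_m, focus_m, focal_len_m, aperture_m, px_per_m_on_sensor):
    """Approximate defocus blur radius (pixels) for an object at depth_m when
    the lens is focused at focus_m (thin-lens circle of confusion)."""
    coc_m = aperture_m * (abs(depth_m - focus_m) / depth_m) * (focal_len_m / (focus_m - focal_len_m))
    return 0.5 * coc_m * px_per_m_on_sensor

def pillbox_kernel(radius_px):
    """Uniform disk ('Pill Box') kernel of the given radius."""
    r = max(int(np.ceil(radius_px)), 1)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = ((x ** 2 + y ** 2) <= radius_px ** 2).astype(float)
    return k / k.sum()

def gaussian_kernel(radius_px):
    """2-D Gaussian kernel with sigma tied to the blur radius."""
    sigma = max(radius_px / 2.0, 0.3)
    r = int(np.ceil(3 * sigma))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return k / k.sum()

# Example: object 0.5 m behind a 2.0 m focus distance, 25 mm lens, 10 mm
# aperture, roughly 5 micrometre pixels (200,000 px per metre on the sensor).
r = coc_radius_px(depth_m=2.5, focus_m=2.0, focal_len_m=0.025,
                  aperture_m=0.010, px_per_m_on_sensor=200_000)
kernel = gaussian_kernel(r)   # or pillbox_kernel(r)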
Regarding claim 5, the rejection of claim 1 is incorporated herein. Mizukura et al. in the combination further teach wherein the at least one sequence of images comprises two sequences of images, one of the two sequences comprising left images for a left eye of a user, another of the two sequences comprising right images for a right eye of the user (see Fig. 2, para [0006]; “a photographing situation acquiring section that acquires information with regard to a photographing situation of an image for a left eye or an image for a right eye; and a determining section that determines correspondingly to the information with regard to the photographing situation whether to perform stereoscopic vision image processing for the image for a left eye or the image for a right eye”), the at least one camera comprising a left camera and a right camera (see para [0061]; “it is possible to acquire a distance to an object from an image by utilizing parallax between right and left cameras”), and wherein the step of applying the extended depth-of-field correction comprises applying the extended depth-of-field correction to the left images and the right images in an alternating manner (see para [0080]; “In the depth-of-field extension correction processing section 220, the EDoF processing can be performed independently for each of the right image A and the left image B. In this case, the EDoF is performed with only one sheet of an image for one eye of either left or right. In the EDoF processing using one sheet of an image for one eye, there are a plurality of variations shown in the below”).

Regarding claim 6, the rejection of claim 1 is incorporated herein. Mizukura et al. in the combination further teach wherein the optical focus of the at least one camera is switched between N different focussing distances whilst capturing consecutive images of said sequence, the N different focussing distances comprising fixed focussing distances (see para [0025]; “FIG. 11 is a schematic illustration showing an example of preparing two sheets of images having been photographed by shifting in-focus positions by photographing by shifting time around each frame so as to change an in-focus position”, see also para [0047]; “The stereoscopic vision/depth-of-field extension switching determining section 210 includes a photographing situation acquiring section 212 that acquires information with regard to photographing situations, such as an optical zoom value, an electronic zoom value, operation information by a user, parallax information, and distance information, and a determining section 214 that determines correspondingly to information with regard to a photographing situation”).

Regarding claim 8, the rejection of claim 1 is incorporated herein. Mizukura et al. in the combination further teach wherein the third image is any one of: a previous image in said sequence, a subsequent image in said sequence, the second image (see para [0080]; “In the depth-of-field extension correction processing section 220, the EDoF processing can be performed independently for each of the right image A and the left image B. In this case, the EDoF is performed with only one sheet of an image for one eye of either left or right. In the EDoF processing using one sheet of an image for one eye, there are a plurality of variations shown in the below. In this case, since the EDoF is completed by only one eye, the preparation of two sheets of images different in in-focus position becomes a dummy. Accordingly, another one sheet of an image may be made a copy of an image having been subjected to the EDoF”).

Regarding claim 9, the scope of claim 9 is fully encompassed by the scope of claim 1; accordingly, the rejection analysis of claim 1 is equally applicable here (see also para [0047] of Mizukura et al.; “The image processing device 200 includes a stereoscopic vision/depth-of-field extension switching determining section 210, a depth-of-field extension correction processing section 220”).

Regarding claim 10, the rejection of claim 9 is incorporated herein. Ogura in the combination further teaches wherein the at least one server is configured to assume that an entirety of the first image is in focus, whilst an entirety of the second image is out of focus, and wherein the point spread function is determined based on a correlation between pixels of the first image and respective pixels of the second image, and the first focussing distance range (see para [0016]; “two images having different defocus amounts are produced, and the object distance is measured based on a point-spread function (PSF) and a difference in the defocusing amounts. The image-capturing apparatus in accordance with this embodiment measures the object distance by utilizing the DFD calculation to perform an auto-focus control”, see also para [0032]; “To be more specific, DFD processor 161 uses two images: observed image PA and reference image PB having different defocusing amounts produced intentionally by changing focal positions. DFD processor 161 produces the depth map based on observed image PA, reference image PB, and point spread functions (PSFs). The depth map indicates object distances at respective ones of pixels of observed image PA (reference image PB)”).

Regarding claim 11, the rejection of claim 9 is incorporated herein. Mizukura et al. in the combination further teach wherein the at least one server is configured to obtain a plurality of depth maps captured corresponding to the images in said sequence (see para [0082]; “In this method, used are spatial distance information (depth map: depth map) as shown in FIG. 6 and the change characteristic, depending on a distance, of a PSF (Point Spread Function) showing how a lens blurs as shown in FIG. 7”, see also para [0084]; “Moreover, FIG. 7 shows a situation that blur occurs correspondingly to a depth relative to an in-focus position, and as a distance from the in-focus position increases, blur becomes large. As shown in FIG. 7, the amount of blur according to a depth can be approximated with a blur function by Pill Box Function and a blur function by a two-dimensional Gauss function”). Ogura in the combination further teaches identifying at least one image segment of the first image and a corresponding image segment of the second image in which the at least one image segment of the first image is in focus whilst the corresponding image segment of the second image is out of focus, wherein the step of determining the point spread function is performed, based on a correlation between pixels of the at least one image segment of the first image and respective pixels of the corresponding image segment of the second image, and respective optical depths in at least one segment of a first depth map corresponding to the at least one image segment of the first image, wherein said part of the first image comprises the at least one image segment of the first image (see para [0036]; “DFD processor 161 produces plural observed pixels CA by convolutions of plural PSFs with observed pixels SA. DFD processor 161 compares plural observed pixels CA with reference pixels SB located at the same coordinates as pixels CA on the image”, see also para [0032]; “observed image PA and reference image PB having different defocusing amounts produced intentionally by changing focal positions. DFD processor 161 produces the depth map based on observed image PA, reference image PB, and point spread functions (PSFs). The depth map indicates object distances at respective ones of pixels of observed image PA (reference image PB)” Note: the reference determines depth-map and corresponding PSF values at localized pixel regions across the image plane, which necessarily constitutes identifying image segments where the focus condition differs between images).

Regarding claim 13, the rejection of claim 9 is incorporated herein. Mizukura et al. in the combination further teach wherein the at least one sequence of images comprises two sequences of images, one of the two sequences comprising left images for a left eye of a user, another of the two sequences comprising right images for a right eye of the user (see Fig. 2, para [0006]; “a photographing situation acquiring section that acquires information with regard to a photographing situation of an image for a left eye or an image for a right eye; and a determining section that determines correspondingly to the information with regard to the photographing situation whether to perform stereoscopic vision image processing for the image for a left eye or the image for a right eye”), the at least one camera comprising a left camera and a right camera (see para [0061]; “it is possible to acquire a distance to an object from an image by utilizing parallax between right and left cameras”), and wherein the step of applying the extended depth-of-field correction comprises applying the extended depth-of-field correction to the left images and the right images in an alternating manner (see para [0080]; “In the depth-of-field extension correction processing section 220, the EDoF processing can be performed independently for each of the right image A and the left image B. In this case, the EDoF is performed with only one sheet of an image for one eye of either left or right. In the EDoF processing using one sheet of an image for one eye, there are a plurality of variations shown in the below”).

Regarding claim 14, the rejection of claim 9 is incorporated herein. Mizukura et al. in the combination further teach wherein the optical focus of the at least one camera is switched between N different focussing distances whilst capturing consecutive images of said sequence, the N different focussing distances comprising fixed focussing distances (see para [0025]; “FIG. 11 is a schematic illustration showing an example of preparing two sheets of images having been photographed by shifting in-focus positions by photographing by shifting time around each frame so as to change an in-focus position”, see also para [0047]; “The stereoscopic vision/depth-of-field extension switching determining section 210 includes a photographing situation acquiring section 212 that acquires information with regard to photographing situations, such as an optical zoom value, an electronic zoom value, operation information by a user, parallax information, and distance information, and a determining section 214 that determines correspondingly to information with regard to a photographing situation”).

Regarding claim 16, the rejection of claim 9 is incorporated herein. Mizukura et al. in the combination further teach wherein the third image is any one of: a previous image in said sequence, a subsequent image in said sequence, the second image (see para [0080]; “In the depth-of-field extension correction processing section 220, the EDoF processing can be performed independently for each of the right image A and the left image B. In this case, the EDoF is performed with only one sheet of an image for one eye of either left or right. In the EDoF processing using one sheet of an image for one eye, there are a plurality of variations shown in the below. In this case, since the EDoF is completed by only one eye, the preparation of two sheets of images different in in-focus position becomes a dummy. Accordingly, another one sheet of an image may be made a copy of an image having been subjected to the EDoF”).

Regarding claim 17, the rejection of claim 1 is incorporated herein. Mizukura et al. in the combination further teach a computer program product comprising a non-transitory machine-readable data storage medium having stored thereon program instructions that, when executed by a processor, cause the processor to execute steps of a computer-implemented method (see Fig. 1, para [0023]-[0029]; “Image processor 160 processes the image data produced by CMOS image sensor 140 to produce image data to be displayed on monitor display 220 and to produce image data to be stored in memory card 200 …Internal memory 240.. stores a control program that controls entire digital video camera 100. Internal memory 240 also stores point spread functions (PSFs)… An instruction entering through touch panel 220B as a touch action is supplied to controller 180 to be processed”).

Claims 4, 7, 12, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Mizukura et al. in view of Ogura as applied in claims 1 and 9 above, and further in view of Ackerman et al. (US 20150003819 A1).
Regarding claim 4, the rejection of claim 1 is incorporated herein. Mizukura et al. in the combination further teach applying the extended depth-of-field correction to the at least one image segment of the third image that is out of focus, only when the at least one image segment of the third image overlaps with the gaze region (see para [0082]; “used are spatial distance information (depth map: depth map) as shown in FIG. 6 and the change characteristic, depending on a distance, of a PSF (Point Spread Function) showing how a lens blurs as shown in FIG. 7”, see also para [0084]; “As shown in FIG. 7, the amount of blur according to a depth can be approximated with a blur function by Pill Box Function and a blur function by a two-dimensional Gauss function” Note: discloses EDoF correction using PSF and depth information). However, the combination of Mizukura et al. and Ogura does not teach further comprising obtaining information indicative of a gaze direction of a user, determining a gaze region in the third image, based on the gaze direction of the user. In the same field of endeavor, Ackerman et al. teach further comprising obtaining information indicative of a gaze direction of a user, determining a gaze region in the third image, based on the gaze direction of the user (see para [0005]; “An eye gaze of a user is tracking using an eye tracking system. A vector that corresponds to a direction in which an eye of a user is gazing at a point in time is determined based on the eye tracking. The direction is in a field of view of a camera. A distance is determined based on the vector and a location of a lens of the camera. The lens is automatically focused based on the distance”), (see para [0021]; “FIG. 6 is a flowchart of one embodiment of a process of focusing a camera based on a depth map of locations gazed at by a user” Note: obtaining gaze direction and applying correction (focus adjustment) specifically to the region that overlaps the gaze). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the general method of observation of Mizukura et al., which switches between a stereoscopic vision image and a depth-of-field extended image correspondingly to a situation, in view of the image-capturing apparatus of Ogura that performs a convenient focusing operation and the technology of Ackerman et al. that automatically focuses a camera based on eye tracking, so that selective correction in the gaze region improves efficiency and user experience (see para [0005]).

Regarding claim 7, the rejection of claim 1 is incorporated herein. Ackerman et al. in the combination further teach wherein the optical focus of the at least one camera is switched between N different focussing distances whilst capturing consecutive images of said sequence, wherein the N different focussing distances correspond to optical depths at which N users are gazing (see para [0007]; “An eye gaze of a user is tracking using an eye tracking system. A vector that corresponds to a direction in which an eye of a user is gazing at a point in time is determined based on the eye tracking. The direction is in a field of view of a camera. A distance is determined based on the vector and a location of a lens of the camera. The lens is automatically focused based on the distance” Note: teaches dynamically adjusting the optical focus based on gaze position, and by extension, sequentially switching focus between depths when multiple gaze positions are detected). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the general method of observation of Mizukura et al., which switches between a stereoscopic vision image and a depth-of-field extended image correspondingly to a situation, in view of the image-capturing apparatus of Ogura that performs a convenient focusing operation and the technology of Ackerman et al. that automatically focuses a camera based on eye tracking, in order to ensure the captured images correspond to the actual gaze depths of one or more users (see para [0005]).

Regarding claim 12, the rejection of claim 9 is incorporated herein. Mizukura et al. in the combination further teach applying the extended depth-of-field correction to the at least one image segment of the third image that is out of focus, only when the at least one image segment of the third image overlaps with the gaze region (see para [0047]; “Moreover, the depth-of-field extension correction processing section 220 includes a captured image acquiring section 22 that acquires an image for a left eye and an image for a right eye, and a depth-of-field extension processing section 224 that extends a depth-of-field for each of an image for a left eye and an image for a right eye and creates a depth-of-field extended image by synthesizing an image for a left eye and an image for a right eye in each of which a depth-of-field has been extended”). Ackerman et al. in the combination further teach comprising obtaining information indicative of a gaze direction of a user, determining a gaze region in the third image, based on the gaze direction of the user (see para [0005]; “An eye gaze of a user is tracking using an eye tracking system. A vector that corresponds to a direction in which an eye of a user is gazing at a point in time is determined based on the eye tracking. The direction is in a field of view of a camera. A distance is determined based on the vector and a location of a lens of the camera. The lens is automatically focused based on the distance”), (see para [0021]; “FIG. 6 is a flowchart of one embodiment of a process of focusing a camera based on a depth map of locations gazed at by a user” Note: obtaining gaze direction and applying correction (focus adjustment) specifically to the region that overlaps the gaze). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the general method of observation of Mizukura et al., which switches between a stereoscopic vision image and a depth-of-field extended image correspondingly to a situation, in view of the image-capturing apparatus of Ogura that performs a convenient focusing operation and the technology of Ackerman et al. that automatically focuses a camera based on eye tracking, because selective correction where the user is looking reduces computational load and improves perceived image quality (see para [0005]).
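Not part of the record: the claim 4 / claim 12 idea discussed above (determining a gaze region from the user's gaze direction and applying the EDoF correction only to out-of-focus segments that overlap it) can be pictured with the minimal sketch below. The projection of the gaze direction to a pixel, the circular region radius, and the segment representation are illustrative assumptions; the edof_correct callable could be, for example, the Wiener-deconvolution sketch shown earlier.

# Hedged sketch: gate the EDoF correction on overlap with a gaze region.
import numpy as np

def gaze_region_mask(shape, gaze_px, radius_px):
    """Boolean mask of pixels within radius_px of the gazed-at pixel."""
    h, w = shape
    y, x = np.mgrid[:h, :w]
    return (y - gaze_px[0]) ** 2 + (x - gaze_px[1]) ** 2 <= radius_px ** 2

def correct_if_gazed(image, out_of_focus_segments, gaze_px, radius_px, edof_correct):
    """Apply edof_correct(image) only inside segments that overlap the gaze
    region; leave all other segments untouched."""
    corrected = image.copy()
    gaze_mask = gaze_region_mask(image.shape, gaze_px, radius_px)
    for seg_mask in out_of_focus_segments:      # each segment: boolean mask
        if np.any(seg_mask & gaze_mask):        # overlap test
            corrected[seg_mask] = edof_correct(image)[seg_mask]
    return corrected

# Example: one out-of-focus segment in the upper-left quadrant, gazed at.
img = np.zeros((100, 100))
seg = np.zeros_like(img, dtype=bool)
seg[:50, :50] = True
out = correct_if_gazed(img, [seg], gaze_px=(25, 25), radius_px=10,
                       edof_correct=lambda im: im + 1.0)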
Regarding claim 15, the rejection of claim 9 is incorporated herein. Ackerman et al. in the combination further teach wherein the optical focus of the at least one camera is switched between N different focussing distances whilst capturing consecutive images of said sequence, wherein the N different focussing distances correspond to optical depths at which N users are gazing (see para [0007]; “An eye gaze of a user is tracking using an eye tracking system. A vector that corresponds to a direction in which an eye of a user is gazing at a point in time is determined based on the eye tracking. The direction is in a field of view of a camera. A distance is determined based on the vector and a location of a lens of the camera. The lens is automatically focused based on the distance” Note: teaches dynamically adjusting the optical focus based on gaze position, and by extension, sequentially switching focus between depths when multiple gaze positions are detected). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the general method of observation of Mizukura et al., which switches between a stereoscopic vision image and a depth-of-field extended image correspondingly to a situation, in view of the image-capturing apparatus of Ogura that performs a convenient focusing operation and the technology of Ackerman et al. that automatically focuses a camera based on eye tracking, in order to ensure the captured images correspond to the actual gaze depths of one or more users (see para [0005]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINTA GEBRESLASSIE whose telephone number is (571) 272-3475. The examiner can normally be reached Monday-Friday, 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at 571-270-5180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WINTA GEBRESLASSIE/
Examiner, Art Unit 2677

/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677

Prosecution Timeline

Nov 21, 2022
Application Filed
Aug 24, 2025
Non-Final Rejection — §103
Nov 07, 2025
Response Filed
Feb 13, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579683
IMAGE VIEW ADJUSTMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12573238
BIOMETRIC FACIAL RECOGNITION AND LIVENESS DETECTOR USING AI COMPUTER VISION
2y 5m to grant Granted Mar 10, 2026
Patent 12530768
SYSTEMS AND METHODS FOR IMAGE STORAGE
2y 5m to grant Granted Jan 20, 2026
Patent 12524932
MACHINE LEARNING IMAGE RECONSTRUCTION
2y 5m to grant Granted Jan 13, 2026
Patent 12511861
DETECTION OF ANNOTATED REGIONS OF INTEREST IN IMAGES
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

2-3
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+24.7%)
2y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 133 resolved cases by this examiner. Grant probability derived from career allow rate.
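
For reference, the headline grant-probability figure appears to follow directly from the examiner's career numbers shown above. A minimal check, assuming (as the caption states) that the probability is simply the career allow rate:

# Quick check of the dashboard arithmetic (assumption: grant probability
# equals the examiner's career allow rate).
granted, resolved = 101, 133
allow_rate = granted / resolved      # about 0.759
print(f"{allow_rate:.1%}")           # 75.9%, displayed as 76%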
