Prosecution Insights
Last updated: April 19, 2026
Application No. 18/242,339

Temporally Stable Perspective Correction

Final Rejection — §103, §112
Filed
Sep 05, 2023
Examiner
ZALALEE, SULTANA MARCIA
Art Unit
2614
Tech Center
2600 — Communications
Assignee
Apple Inc.
OA Round
2 (Final)
71%
Grant Probability
Favorable
3-4
OA Rounds
2y 7m
To Grant
86%
With Interview

Examiner Intelligence

Grants 71% — above average
71%
Career Allow Rate
346 granted / 488 resolved
+8.9% vs TC avg
Strong +15% interview lift
Without
With
+15.1%
Interview Lift
resolved cases with interview
Typical timeline
2y 7m
Avg Prosecution
30 currently pending
Career history
518
Total Applications
across all art units
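The headline numbers in this panel can be cross-checked from the raw counts. A minimal sketch, assuming (as the projections footnote states) that grant probability is simply the career allow rate and that the interview figure adds the observed lift in percentage points:

```python
granted, resolved = 346, 488          # career totals shown above
interview_lift = 15.1                 # percentage points, per the chart

allow_rate = 100 * granted / resolved
print(f"{allow_rate:.1f}%")                   # 70.9% -> displayed as 71%
print(f"{allow_rate + interview_lift:.0f}%")  # 86% with interview
```

The 71% and 86% figures shown elsewhere on the page are consistent with this arithmetic.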

Statute-Specific Performance

§101
7.8%
-32.2% vs TC avg
§103
56.3%
+16.3% vs TC avg
§102
11.4%
-28.6% vs TC avg
§112
13.8%
-26.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 488 resolved cases
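The "vs TC avg" deltas let one back out the Tech Center baseline (the black line in the chart): subtracting each delta from the examiner's rate recovers the estimate. A quick sketch of that arithmetic, with the figures read off the panel above:

```python
# (examiner rate %, delta vs TC avg %) for each statute, from the chart
stats = {"101": (7.8, -32.2), "103": (56.3, 16.3),
         "102": (11.4, -28.6), "112": (13.8, -26.2)}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # baseline implied by the stated delta
    print(f"§{statute}: TC average ≈ {tc_avg:.1f}%")
```

Notably, all four deltas imply the same ~40.0% baseline, suggesting the black line reflects a single Tech-Center-wide estimate rather than per-statute averages.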

Office Action

§103 §112
DETAILED ACTION

Response to Arguments

Applicant's arguments filed 10/23/2025 regarding the 35 USC 102/103 rejections with respect to the amended limitations of claims 1-20 have been considered but are moot in view of the new ground(s) of rejection necessitated by the amendment.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The term “closer” in the amended limitation “transforming, using the one or more processors, the image of the physical environment based on the depth map and a difference between a first perspective of the image sensor and a second perspective closer to an eye of a user in one or more directions than the first perspective; and displaying, on the display, the transformed image” is a relative term which renders the claim indefinite. The term “closer” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The claims also do not describe any eye tracking or any determination of the position or perspective of the user's eye.
Therefore the limitation “a second perspective closer to an eye of a user” is ambiguous. For examination purposes, the perspectives from the user's left and right eyes are considered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Chernov et al. (US 20170046868 A1), in view of Bescos et al. (Bescos, Berta, et al. "DynaSLAM: Tracking, mapping, and inpainting in dynamic scenes." IEEE Robotics and Automation Letters 3.4 (2018): 4076-4083), and further in view of Zhu et al. (US 20190101758 A1).
RE claim 1, Chernov teaches: A method comprising: at a device including an image sensor, a display, one or more processors, and non-transitory memory (abstract, Fig 1A, 18, [0159]): capturing, using the image sensor, an image of a physical environment (Fig 1, [0005], [0047]); obtaining a depth map including a plurality of depths respectively associated with a plurality of pixels of the image of the physical environment, wherein the depth map includes a particular depth corresponding to a distance between the image sensor and a static object in the physical environment (Fig 2B, [0005]); transforming, using the one or more processors, the image of the physical environment based on the depth map; and displaying, on the display, the transformed image (Fig 2, [0048]-[0049]).

Chernov is silent RE: wherein the depth map includes, for a particular pixel at a particular pixel location representing a dynamic object in the physical environment, the particular depth corresponding to a distance between the image sensor and a static background/occluded object in the physical environment behind the dynamic object. However, Bescos teaches, in Figs 1-5, abstract, page 2 col 2, and page 5 col 1, reconstructing occluded objects behind dynamic objects in a dynamic scene. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Chernov a system and method wherein the depth map includes, for a particular pixel at a particular pixel location representing a dynamic object in the physical environment, the particular depth corresponding to a distance between the image sensor and a static object in the physical environment behind the dynamic object, as suggested by Bescos, to reconstruct occluded objects behind dynamic objects in a dynamic scene, thereby increasing system effectiveness and user experience.
Chernov as modified by Bescos is silent RE transforming the image of the physical environment based on a difference between a first perspective of the image sensor and a second perspective closer to an eye of a user in one or more directions than the first perspective. However, Zhu teaches, in Fig 13, abstract, [0102], [0108], [0117], claim 16, etc., correcting the image for the difference between the camera and user perspectives. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Chernov as modified by Bescos a system and method transforming the image based on that difference in perspective, as suggested by Zhu, to correct the image for the difference between the camera and user perspectives, thereby increasing system effectiveness and user experience.

RE claim 2, Chernov as modified by Bescos and Zhu teaches wherein obtaining the depth map includes determining the particular depth via interpolation using depths of locations surrounding the particular pixel location (Bescos Figs 1-5, abstract, page 5 col 1; Chernov [0113]).

RE claim 3, Chernov as modified by Bescos and Zhu teaches wherein obtaining the depth map includes determining the particular depth at a time the dynamic object was not represented at the particular pixel location (Bescos Figs 1-5, abstract, page 5 col 1).

RE claim 13, Chernov teaches wherein transforming the image of the physical environment based on the depth map further includes smoothing the depth map ([0116]).

RE claim 14, Chernov teaches wherein transforming the image of the physical environment based on the depth map further includes clamping the depth map ([0119]).

Claims 4-10 are rejected under 35 U.S.C. 103 as being unpatentable over Chernov as modified by Bescos and Zhu, and further in view of Harviainen et al. (US 20210084278 A1).
RE claim 4, Chernov as modified by Bescos and Zhu teaches wherein obtaining the depth map includes determining the particular depth based on excluding the dynamic object (Bescos Figs 1-5, abstract, page 5 col 1). Chernov as modified by Bescos and Zhu is silent RE determining the particular depth based on a three-dimensional model of the physical environment. However, Harviainen teaches this in [0104], [0116]. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Chernov as modified by Bescos and Zhu a system and method based on a three-dimensional model of the physical environment, as suggested by Harviainen, to effectively determine the depth value of the occluded object, thereby increasing system effectiveness and user experience.

RE claim 5, Chernov as modified by Bescos, Zhu and Harviainen teaches further comprising generating the three-dimensional model (Chernov [0058]).

RE claim 6, Chernov as modified by Bescos, Zhu and Harviainen teaches wherein generating the three-dimensional model includes adding one or more points to the three-dimensional model at one or more locations in a three-dimensional coordinate system of the physical environment corresponding to one or more static objects in the physical environment (Bescos Figs 1-5, abstract, page 3 col 2, page 5 col 1).

RE claim 7, Chernov as modified by Bescos, Zhu and Harviainen teaches wherein adding the one or more points to the three-dimensional model includes determining that the one or more points correspond to the one or more static objects in the physical environment (Bescos Figs 1-5, abstract, page 3 col 2, page 5 col 1).

RE claim 8, Chernov as modified by Bescos, Zhu and Harviainen teaches wherein determining that a particular one of the one or more points corresponds to the one or more static objects in the physical environment includes performing semantic segmentation on one or more images (Bescos Figs 1-5, abstract, page 3 cols 1-2, page 5 col 1).
RE claim 9, Chernov as modified by Bescos, Zhu and Harviainen teaches wherein determining that a particular one of the one or more points corresponds to the one or more static objects in the physical environment includes detecting the particular point at a same location at least a threshold number of times over a time period (Bescos Figs 1-5, abstract, page 3 cols 1-2, page 5 col 1).

RE claim 10, Chernov as modified by Bescos, Zhu and Harviainen teaches wherein determining that a particular one of the one or more points corresponds to the one or more static objects in the physical environment includes determining that surrounding points correspond to the one or more static objects in the physical environment (Bescos Figs 1-5, abstract, page 3 cols 1-2, page 5 col 1).

Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Chernov as modified by Bescos, Zhu and Harviainen, and further in view of Ren et al. (US 20230319218 A1).

RE claim 11, Chernov as modified by Bescos, Zhu and Harviainen is silent RE wherein determining the particular depth based on a three-dimensional model includes rasterizing the three-dimensional model. However, Ren teaches this in [0396]. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Chernov as modified by Bescos, Zhu and Harviainen a system and method wherein determining the particular depth based on a three-dimensional model includes rasterizing the three-dimensional model, as suggested by Ren, to render the 3D model, thereby increasing system effectiveness and user experience.

RE claim 12, Chernov as modified by Bescos, Zhu and Harviainen is silent RE wherein determining the particular depth based on the three-dimensional model includes ray tracing based on the three-dimensional model. However, Ren teaches this in [0396].
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Chernov as modified by Bescos, Zhu and Harviainen a system and method wherein determining the particular depth based on the three-dimensional model includes ray tracing based on the three-dimensional model, as suggested by Ren, to render the 3D model, thereby increasing system effectiveness and user experience.

Claims 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Harviainen et al. (US 20210084278 A1), and further in view of Zhu et al.

RE claim 15, Harviainen teaches: A device comprising: an image sensor; a display; a non-transitory memory; and one or more processors (Fig 13B, abstract) to: capture, using the image sensor, an image of a physical environment; obtain a three-dimensional model of the physical environment ([0099]-[0100]); obtain, based on the three-dimensional model, a depth map including a plurality of depths respectively associated with a plurality of pixels of the image of the physical environment (Fig 11, [0104], [0116]); transform, using the one or more processors, the image of the physical environment based on the depth map; and display, on the display, the transformed image (abstract, [0097]-[0098], [0102], [0107], [0169]-[0170], etc.).

Harviainen is silent RE transforming the image of the physical environment based on a difference between a first perspective of the image sensor and a second perspective closer to an eye of a user in one or more directions than the first perspective. However, Zhu teaches, in Fig 13, abstract, [0102], [0108], [0117], claim 16, etc., correcting the image for the difference between the camera and user perspectives.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Harviainen a system and method transforming the image based on that difference in perspective, as suggested by Zhu, to correct the image for the difference between the camera and user perspectives, thereby increasing system effectiveness and user experience.

RE claim 17, Harviainen teaches wherein the three-dimensional model is based on objects in the physical environment determined to be static (Fig 11, [0104], [0116]).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Harviainen as modified by Zhu, and further in view of Chernov et al.

RE claim 16, Harviainen as modified by Zhu is silent RE wherein the three-dimensional model includes a three-dimensional mesh. However, Chernov teaches this in abstract, [0048]. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Harviainen as modified by Zhu a system and method wherein the three-dimensional model includes a three-dimensional mesh, as suggested by Chernov, to generate the 3D model geometry as a mesh, thereby increasing system effectiveness and user experience.

Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chernov as modified by Bescos and Zhu, and further in view of Ren. Claims 18-19 recite limitations similar in scope to limitations of claim 1 and are therefore rejected under the same rationale. In addition, Chernov teaches a non-transitory computer-readable memory having instructions encoded thereon ([0012]), wherein the depth map excludes depths based on distances between the image sensor and dynamic objects (Bescos abstract). Chernov as modified by Bescos and Zhu is silent RE a temporally stable depth map. However, Ren teaches this in [0069], [0156].
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Chernov as modified by Bescos and Zhu a temporally stable depth map, as suggested by Ren, to effectively generate the 3D model with temporal stability, thereby increasing system effectiveness and user experience.

RE claim 20, Chernov as modified by Bescos, Zhu and Ren teaches wherein obtaining the temporally stable depth map is based on a three-dimensional model of the physical environment (Ren [0112]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SULTANA MARCIA ZALALEE, whose telephone number is (571) 270-1411. The examiner can normally be reached Monday-Friday, 8:00 am-4:30 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Sultana M Zalalee/
Primary Examiner, Art Unit 2614
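For context on the disputed limitation, the claimed operation — warping an image captured from the camera's perspective toward a perspective nearer the user's eye, using a per-pixel depth map — can be sketched roughly as follows. This is a minimal pinhole-camera illustration; all names, the intrinsics setup, and the forward-splat approach are assumptions for illustration, not a disclosure from the claims or the cited art.

```python
import numpy as np

def reproject(image, depth, K, t):
    """Warp `image` from the camera perspective to a viewpoint
    translated by `t` (e.g., toward the user's eye), using depth.

    image: (H, W, 3); depth: (H, W) in metres; K: 3x3 intrinsics;
    t: (3,) camera-to-eye translation. Names are illustrative.
    """
    H, W = depth.shape
    K_inv = np.linalg.inv(K)
    # Pixel grid in homogeneous coordinates, shape 3 x N.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project each pixel to a 3-D point using its depth.
    pts = (K_inv @ pix) * depth.reshape(1, -1)
    # Shift into the target (eye) frame, then project back to pixels.
    pts_eye = pts - t.reshape(3, 1)
    proj = K @ pts_eye
    u2 = (proj[0] / proj[2]).reshape(H, W)
    v2 = (proj[1] / proj[2]).reshape(H, W)
    # Forward-splat by nearest-pixel rounding; a real system would
    # warp backwards and fill disocclusions (hence the role the art
    # assigns to depths of static objects behind dynamic ones).
    out = np.zeros_like(image)
    ui = np.clip(np.round(u2).astype(int), 0, W - 1)
    vi = np.clip(np.round(v2).astype(int), 0, H - 1)
    out[vi, ui] = image
    return out
```

With `t = 0` the warp is the identity; a nonzero `t` shifts nearby pixels more than distant ones, which is why per-pixel depth (and its temporal stability) matters to the claims.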

Prosecution Timeline

Sep 05, 2023
Application Filed
Jul 22, 2025
Non-Final Rejection — §103, §112
Oct 22, 2025
Examiner Interview Summary
Oct 22, 2025
Applicant Interview (Telephonic)
Oct 23, 2025
Response Filed
Feb 13, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this examiner for similar technology

Patent 12602876
ANNOTATION TOOLS FOR RECONSTRUCTING THREE-DIMENSIONAL ROOF GEOMETRY
2y 5m to grant Granted Apr 14, 2026
Patent 12592035
Fused Bounding Volume Hierarchy for Multiple Levels of Detail
2y 5m to grant Granted Mar 31, 2026
Patent 12586146
PROGRESSIVE MATERIAL CACHING
2y 5m to grant Granted Mar 24, 2026
Patent 12573150
POLYGON CORRECTION METHOD AND APPARATUS, POLYGON GENERATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 10, 2026
Patent 12561908
TOPOLOGICALLY CONSISTENT MULTI-VIEW FACE INFERENCE USING VOLUMETRIC SAMPLING
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
71%
Grant Probability
86%
With Interview (+15.1%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 488 resolved cases by this examiner. Grant probability derived from career allow rate.
