Prosecution Insights
Last updated: April 19, 2026
Application No. 18/694,355

Warping a Frame based on Pose and Warping Data

Final Rejection — §102, §103
Filed
Mar 21, 2024
Examiner
CASCHERA, ANTONIO A
Art Unit
2612
Tech Center
2600 — Communications
Assignee
Apple Inc.
OA Round
2 (Final)
87%
Grant Probability
Favorable
3-4
OA Rounds
2y 7m
To Grant
95%
With Interview

Examiner Intelligence

Grants 87% — above average
87%
Career Allow Rate
889 granted / 1019 resolved
+25.2% vs TC avg
Moderate +8% lift
+7.9%
Interview Lift
resolved cases with interview
Typical timeline
2y 7m
Avg Prosecution
21 currently pending
Career history
1040
Total Applications
across all art units
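The headline figures above can be reproduced from the raw counts, assuming (as the projections footnote later suggests) that the interview lift is an additive percentage-point adjustment to the career allow rate — an assumption about how this dashboard derives its numbers, not a stated formula:

```python
granted, resolved = 889, 1019            # career counts shown above
allow_rate = granted / resolved * 100    # career allow rate, in percent

interview_lift = 7.9                     # percentage points, per the dashboard
with_interview = allow_rate + interview_lift

print(round(allow_rate))      # -> 87, matching "87% Career Allow Rate"
print(round(with_interview))  # -> 95, matching "95% With Interview"
```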

Statute-Specific Performance

§101
18.4%
-21.6% vs TC avg
§103
34.2%
-5.8% vs TC avg
§102
17.8%
-22.2% vs TC avg
§112
21.2%
-18.8% vs TC avg
Tech Center average estimate shown for comparison • Based on career data from 1019 resolved cases
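Each "vs TC avg" delta above is simply the statute-specific rate minus the Tech Center baseline. Working backward from any row, all four deltas imply the same baseline of 40.0% — that figure is inferred here, not stated by the dashboard:

```python
# Statute-specific rates and their displayed deltas (from the table above)
rates  = {"§101": 18.4, "§103": 34.2, "§102": 17.8, "§112": 21.2}
deltas = {"§101": -21.6, "§103": -5.8, "§102": -22.2, "§112": -18.8}

tc_avg = 40.0  # inferred baseline: rate - delta comes out to 40.0 for every statute
for statute, rate in rates.items():
    assert round(rate - tc_avg, 1) == deltas[statute], statute
```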

Office Action

§102 §103
DETAILED ACTION

Preliminary Remarks

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

This application is a 371 of PCT/US2022/042897 filed 09/08/2022, which claims the benefit of 63/247,938 filed 09/24/2021.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3, 5, 8-12, 14-18 and 21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Melkote Krishnaprasad et al. (U.S. Publication 20190333263) (referred to herein as Melkote).

In reference to claim 1, Melkote discloses a method (see paragraphs 12 and 29 and Figure 1, wherein Melkote discloses systems and methods for split rendering that permit time and space warping to correct for movement of head position and scene motion from their state in the last fully rendered frame) comprising: at a device including an environmental sensor, a display, a non-transitory memory and one or more processors coupled with the environmental sensor, the display and the non-transitory memory (see paragraphs 41, 45-54, 65, 99-101 and Figures 2-3, wherein Melkote discloses the system comprising a wearable HMD device with one or more displays, a head tracker in the form of a sensor, and a render device comprising multiple "processors" and software applications executing on the processors. Further, Melkote also discloses that the render device and HMD may be implemented in an all-in-one device, and that the invention may be implemented via code stored on a computer-readable medium.): generating, at a first time, intermediate warping data for a warping operation to be performed on an application frame, wherein the intermediate warping data is generated while the application frame is being generated (see paragraphs 65-67, 86-87, #303, 307, 309 of Figure 3, #403, 407, 413 of Figure 4 and Figure 5, wherein Melkote discloses the process for warping a rendered frame involving identifying a region of interest via received metadata while rendering an eye-buffer/z-buffer. Note, the Examiner interprets such ROI data as functionally equivalent to the claims' "intermediate warping data for a warping operation." Melkote then discloses modifying one or more pixel values of a rendered frame using the pose information and determined region of interest to generate a warped rendered frame of data.); obtaining, at a second time that occurs after the first time, via the environmental sensor, environmental data that indicates a pose of the device within a physical environment of the device (see paragraphs 41, 86-87 and Figure 1 & #510 of Figure 5, wherein Melkote discloses, after identifying the ROI (e.g. at a "second time that occurs after the first time"), determining display pose data from a head tracker via a sensor which generates the head pose/position of a user's head in the user's environment.); generating, after the application frame has been generated, a warped application frame by warping the application frame in accordance with the pose of the device and the intermediate warping data (see paragraph 87, #311 of Figure 3, #421 of Figure 4 and #512 of Figure 5, wherein Melkote discloses modifying one or more pixel values of a rendered frame using the pose information to generate a warped rendered frame of data. It is clear from at least Figures 3 and 4 that warping a frame of data requires the rendered frame to be generated beforehand, since warping the frame relies upon at least the region of interest as disclosed above.); and displaying the warped application frame on the display (see paragraph 87 and Figure 2, #514 of Figure 5, wherein Melkote discloses outputting the warped rendered frame data to be displayed on one or more displays of the HMD.). (See Response to Arguments below.)

In reference to claims 3 and 21, Melkote discloses all of the claim limitations as applied to claims 1 and 18, respectively. Melkote discloses the region of interest identification to further entail depth approximations of the 3D space, of which an optimal approximation is computed using a mean of the pixel depths of the scene (see at least paragraph 68), which the Examiner interprets as inherently comprising a "lower threshold resolution" than the actual pixel scene depth data.

In reference to claim 5, Melkote discloses all of the claim limitations as applied to claim 1 above.
Melkote discloses the region of interest identification to further entail depth approximations of the 3D space, of which an optimal approximation is computed using a mean of the pixel depths of the scene (see at least paragraph 68), which the Examiner interprets as inherently comprising a "lower threshold precision" than the actual pixel scene depth data. Further, since the warped rendered frame data generated by Melkote explicitly involves one or more pixel values of an eye buffer based on a head pose (e.g. the user's field of view), the Examiner interprets the warped rendered frame as functionally equivalent to performing a "POV adjustment."

In reference to claims 8 and 9, Melkote discloses all of the claim limitations as applied to claim 1 above. Melkote discloses, after identifying the ROI (e.g. at a "second time that occurs after the first time"), determining display pose data from a head tracker via a sensor which generates the head pose/position of a user's head orientation in the user's environment (see paragraphs 41, 67, 86-87 and Figure 1 & #510 of Figure 5). Melkote explicitly discloses the pose information to comprise orientation and position (see paragraph 41). Note, as determined by the language in claim 1, the Examiner interprets the head tracker/sensor of Melkote as functionally equivalent to a "device based on environmental data."

In reference to claim 10, Melkote discloses all of the claim limitations as applied to claim 1 above. Since Melkote explicitly discloses the invention utilizing depth information reacting to a 6DOF change in pose of the user (see at least paragraph 42), the Examiner interprets the device to at least inherently comprise a "depth sensor" or the like.

In reference to claims 11 and 16, Melkote discloses all of the claim limitations as applied to claim 1 above. Melkote further explicitly discloses the sensors could be of sensor or camera type (see at least paragraph 60).
In reference to claim 12, Melkote discloses all of the claim limitations as applied to claim 1 above. Melkote discloses the region of interest identification to further entail depth approximations of the 3D space, of which an optimal approximation is computed using a mean of the pixel depths of the scene (see at least paragraph 68).

In reference to claim 14, Melkote discloses all of the claim limitations as applied to claim 1 above. Melkote discloses that the region of interest is identified after the rendered eye buffer frame data is received and before the pose data is determined (see #506, 508 and 510 of Figure 5).

In reference to claim 15, Melkote discloses all of the claim limitations as applied to claim 1 above. Melkote discloses outputting the warped rendered frame for display after identifying the region of interest and determining the pose data, and therefore at least inherently at a "third time" (see #508, 510 vs. #514 of Figure 5), wherein the time in between can be considered "processing" time for performing the warping of the frame.

In reference to claim 17, claim 17 is similar in scope to claim 1 and is therefore rejected under like rationale. In addition to the rationale applied in the rejection of claim 1 above, claim 17 further recites, "A device comprising…" Melkote discloses the system comprising a wearable HMD device comprising one or more displays and a render device comprised of multiple "processors" and software applications executing on the processors, while further explicitly disclosing that the HMD and render device could be implemented in an all-in-one device (see paragraphs 45-54, 65, 99-101 and Figures 2-3).

In reference to claim 18, claim 18 is similar in scope to claim 1 and is therefore rejected under like rationale.
In addition to the rationale applied in the rejection of claim 1 above, claim 18 further recites, "A non-transitory memory storing one or more programs, which, when executed by one or more processors of a device including a display and an environmental sensor…" Melkote discloses the system comprising a wearable HMD device comprising one or more displays and a render device comprised of multiple "processors" and software applications executing on the processors, while further explicitly disclosing that the HMD and render device could be implemented in an all-in-one device (see paragraphs 45-54, 65 and Figures 2-3). Melkote discloses the invention implemented via code stored on a computer-readable medium (see paragraphs 99-101).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Melkote Krishnaprasad et al. (U.S. Publication 20190333263) (referred to herein as Melkote).

In reference to claim 4, Melkote discloses all of the claim limitations as applied to claim 1 above. Melkote discloses the region of interest identification to further entail depth approximations of the 3D space, of which an optimal approximation is computed using a mean of the pixel depths of the scene (see at least paragraph 68), which the Examiner interprets as inherently comprising a "lower threshold resolution" than the actual pixel scene depth data. Melkote does not, however, perform occlusion operations based on the depth approximations in order to occlude physical objects in the 3D space. It is well known in the art of augmented or mixed reality applications to implement occlusion processing using depth information of a 3D scene. Performing occlusion-type processing in this context allows for a sense of realism, in combination with the warping of rendering data, to create a believable augmented/mixed reality experience (Official Notice). It would have been obvious to one of ordinary skill in the art for Melkote, who teaches processing 3D space using pixel depths of the space, to use occlusion-type processing, because it is well known in the art of augmented/mixed reality applications to perform occlusion culling to create a greater sense of realism in the experience by ensuring objects behind vs. in front of other objects are correctly displayed/not displayed.

Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Melkote Krishnaprasad et al. (U.S. Publication 20190333263) (referred to herein as Melkote) and Balachandreswaran et al. (WO 2017/117675).

In reference to claims 6 and 13, Melkote discloses all of the claim limitations as applied to claim 1 above. Melkote does not explicitly disclose warping the frame data involving modifying color data via tone mapping processing based upon an average color value, or basing the environment upon color data. Balachandreswaran et al. discloses a head mounted device for displaying augmented reality and virtual reality (see paragraph 1). Balachandreswaran et al. discloses the invention comprising a camera system and a method for pose tracking and mapping in a real environment (see paragraph 6). Balachandreswaran et al. discloses rendering virtual elements based thereupon, whereby a processor can approximate virtual elements using real environment lighting conditions and color values (RGB) captured in the real environment (see paragraphs 103-104). Balachandreswaran et al. explicitly discloses calculating a mean of RGB, HSB, HSL or HSV color and intensity values for a frameset and utilizing such values in a surface shader to modify the surface appearance of virtual objects (see paragraph 104), which the Examiner interprets as functionally equivalent to Applicant's "tone mapping" element. It would have been obvious to one of ordinary skill in the art at the time of filing of the invention to implement the virtual object color and intensity computations of Balachandreswaran et al. with the frame warping processing techniques of Melkote in order to create greater immersion in the virtual scene, mimicking real-life elements by applying real-life color/intensity to such objects (see paragraph 103 of Balachandreswaran et al.).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Melkote Krishnaprasad et al. (U.S. Publication 20190333263) (referred to herein as Melkote) and Kornmann (U.S. Patent 8,767,011).

In reference to claim 7, Melkote discloses all of the claim limitations as applied to claim 1 above. Melkote does not explicitly disclose the warping of frame data involving computing a quadtree of the physical environment. Kornmann discloses rendering a three-dimensional environment by culling node representations (see column 1, lines 25-32).
Kornmann discloses representing a 3D environment using nodes in a quadtree data structure and determining whether a node in the quadtree is visible from a virtual camera perspective (see column 3, lines 47-54). It would have been obvious to one of ordinary skill in the art at the time of filing of the invention to implement the node quadtree virtual environment processing techniques of Kornmann with the frame warping processing techniques of Melkote in order to create more efficient processing of virtual perspective environment/object data by determining which objects are within a drawable payload and rendering those objects (see column 2, lines 25-38 of Kornmann).

Allowable Subject Matter

Claims 22 and 23 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The prior art does not teach the intricacies of the limitations found in independent claims 1 and 17, from which claims 22 and 23 depend, respectively, including the newly added requirement of a pose-independent portion of the warping process that does not rely on the pose of the device.

Response to Arguments

The cancellation of claims 2 and 20 and the addition of claims 22-23 are noted. Applicant's arguments, see page 9 of Applicant's Remarks, filed 01/23/26, with respect to the 35 USC 112 rejection of claim 9 have been fully considered and are persuasive. The 35 USC 112 rejection of the claims has been withdrawn since the amendments remedy the previous issues. Applicant's arguments filed 01/23/26 have been fully considered but they are not persuasive.
In reference to claims 1, 3-18 and 21, Applicant argues the prior art rejection of the claims based upon the Melkote reference, stating that the cited prior art fails to teach the newly amended limitations of "generating, at a first time, intermediate warping data for a warping operation to be performed on an application frame, wherein the intermediate warping data is generated while the application frame is being generated," and "generating, after the application frame has been generated, a warped application frame…" found in the independent claims (see pages 9-12 of Applicant's Remarks). Applicant, in particular, argues the Examiner's interpretation of the cited prior art Melkote, contending that the region-of-interest (ROI) and associated metadata is not the data actually used to warp a frame; instead, a single depth metadata value together with display pose data is used, both of which are generated only after a rendered frame and not during frame generation (see pages 10-11 of Applicant's Remarks). Applicant goes on to further cite Melkote as suggesting that the ROI is NOT actually used to warp the frame but instead is "…used only as an input to compute the single depth metadata," and that it is this "metadata," not the region-of-interest data, that is used to perform the warping operation (see page 11, 2nd paragraph of Applicant's Remarks).

In response, the Examiner is not persuaded. Although Melkote may directly generate the warp of the rendered frame using the single depth data, Melkote explicitly discloses the single depth data being computed from the region-of-interest (ROI), as further pointed out in Applicant's Remarks. The claims simply recite that the intermediate warping data is generated "while the application frame is being generated," which Melkote clearly teaches when he discloses that the region-of-interest is determined (#303 of Figure 3) while the rendering of the eye buffer is performed (#307) (the two boxes occurring in parallel in Figure 3).
Since the single depth data is computed from the region-of-interest, which is generated while rendering the eye buffer, and the single depth data is used to generate the warping, the region-of-interest is therefore at least indirectly used to warp the rendered data. In other words, if no region-of-interest were determined, there would not be any single depth computation and therefore warping would not be performed. For at least these reasons, the Examiner deems the interpretation and application of Melkote to the present claim language to be just.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Antonio Caschera, whose telephone number is (571) 272-7781. The examiner can normally be reached Monday-Friday between 6:30 AM and 2:30 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Said Broome, can be reached at (571) 272-2931.

Any response to this action should be mailed to: Mail Stop ____________ Commissioner for Patents, P.O. Box 1450, Alexandria, VA 22313-1450, or faxed to: 571-273-8300 (Central Fax). See the listing of "Mail Stops" at http://www.uspto.gov/patents/mail.jsp and include the appropriate designation in the address above. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the Technology Center 2600 Customer Service Office, whose telephone number is (571) 272-2600.

/Antonio A Caschera/
Primary Examiner, Art Unit 2612
3/31/26
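The pipeline at the center of the dispute — intermediate warping data produced while the application frame is still rendering (claim 1's "first time"), the device pose sampled afterward ("second time"), and the completed frame then warped using both — can be sketched as an ordering of events. The sketch below is purely illustrative; every function name and payload is hypothetical and does not come from either party's actual disclosure:

```python
import time
from concurrent.futures import ThreadPoolExecutor

events = []  # completion order of each stage (CPython list.append is thread-safe)

def render_application_frame():
    """Stand-in for the comparatively slow application render pass."""
    time.sleep(0.05)
    events.append("frame_rendered")
    return "application_frame"

def generate_intermediate_warping_data():
    """'First time': produced while the frame is still rendering (cf. ROI
    determination at #303 running in parallel with eye-buffer rendering at
    #307 in Melkote's Figure 3, per the examiner's mapping)."""
    events.append("intermediate_warping_data")
    return {"mean_scene_depth": 1.0}  # hypothetical payload

def sample_device_pose():
    """'Second time': environmental-sensor read after the warping data exists."""
    events.append("pose_sampled")
    return {"yaw": 0.0, "pitch": 0.0, "roll": 0.0}  # hypothetical pose

def warp(frame, pose, warp_data):
    """Warp the *completed* frame using both the pose and the intermediate data."""
    events.append("frame_warped")
    return (frame, pose, warp_data)

with ThreadPoolExecutor(max_workers=1) as pool:
    frame_future = pool.submit(render_application_frame)  # frame generation starts
    warp_data = generate_intermediate_warping_data()      # first time (in parallel)
    pose = sample_device_pose()                           # second time
    frame = frame_future.result()                         # frame generation finishes
warped_frame = warp(frame, pose, warp_data)               # after frame generated

# The claimed ordering holds: warping data before the frame, warp after it.
assert events.index("intermediate_warping_data") < events.index("frame_rendered")
assert events.index("frame_rendered") < events.index("frame_warped")
```

Note the disputed point maps directly onto this structure: Applicant argues the data actually consumed by the warp is derived only after rendering, while the Examiner counts the ROI as "intermediate" because it is produced concurrently with rendering and feeds the warp indirectly.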

Prosecution Timeline

Mar 21, 2024
Application Filed
Oct 21, 2025
Non-Final Rejection — §102, §103
Dec 16, 2025
Examiner Interview Summary
Dec 16, 2025
Applicant Interview (Telephonic)
Jan 23, 2026
Response Filed
Mar 31, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602858
Rendering Method and Apparatus, and Device
2y 5m to grant Granted Apr 14, 2026
Patent 12602849
IMAGE GENERATION USING ONE-DIMENSIONAL INPUTS
2y 5m to grant Granted Apr 14, 2026
Patent 12586157
Methods and Systems for Modifying Hair Characteristics in a Digital Image
2y 5m to grant Granted Mar 24, 2026
Patent 12573328
Display device and display calibration method
2y 5m to grant Granted Mar 10, 2026
Patent 12562141
DISPLAY DEVICE, DISPLAY SYSTEM, AND DISPLAY DRIVING METHOD
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
87%
Grant Probability
95%
With Interview (+7.9%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 1019 resolved cases by this examiner. Grant probability derived from career allow rate.
