Prosecution Insights
Last updated: April 19, 2026
Application No. 18/430,370

MULTI-SAMPLING POSES DURING REPROJECTION

Status: Non-Final OA (§103)
Filed: Feb 01, 2024
Examiner: GUILLERMETY, JUAN M
Art Unit: 2682
Tech Center: 2600 (Communications)
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 72% (above average; 430 granted of 597 resolved; +10.0% vs TC avg)
Interview Lift: +10.8% among resolved cases with interview (moderate)
Avg Prosecution: 2y 5m (typical timeline)
Currently Pending: 27
Total Applications: 624 (career history, across all art units)

Statute-Specific Performance

§101: 6.4% (-33.6% vs TC avg)
§103: 60.4% (+20.4% vs TC avg)
§102: 21.9% (-18.1% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
TC averages are estimates. Based on career data from 597 resolved cases.

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1 - 22 are pending in this application.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 02/01/2024 and 05/12/2025 were filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the information disclosure statements are being considered by the examiner. Applicants have not provided an explanation of the relevance of the cited documents discussed below.

Ozguner et al. (U.S. PreGrant Publication No. 2019/0051229 A1) provides techniques for post-rendering image transformation, including outputting an image frame comprising a plurality of first pixels by sequentially generating and outputting multiple color component fields (including a first color component field and a second color component field) by applying one or more two-dimensional (2D) image transformations to at least one portion of the plurality of source pixels via first, second, and third image transformation pipelines, to generate transformed pixel color data for the first and second color component fields.

Anderson et al. (U.S. PreGrant Publication No. 2022/0165014 A1) teaches an apparatus and method for efficient image reprojection in a virtual reality system. For example, one embodiment of an apparatus comprises: a sensor interface to collect motion data from one or more sensors during a virtual reality session; graphics circuitry to execute graphics program code to render an image frame during the virtual reality session; a processor to generate motion transform data using the motion data, the motion transform data specifying how the image frame is to be adjusted prior to display; a reprojection engine to perform an in-line reprojection of the frame using the motion transform data to generate a reprojected image frame; and display circuitry to display the reprojected frame.

Zobel et al. (U.S. PreGrant Publication No. 2023/0216999 A1) teaches an imaging system that receives depth data (corresponding to an environment) from a depth sensor and first image data (a depiction of the environment) from an image sensor. The imaging system generates, based on the depth data, first motion vectors corresponding to a change in perspective of the depiction of the environment in the first image data. Using grid inversion based on the first motion vectors, it generates second motion vectors that indicate the respective distances moved by respective pixels of the depiction for the change in perspective, then generates second image data by modifying the first image data according to the second motion vectors. The second image data includes a second depiction of the environment from a different perspective than the first image data. Some image reprojection applications (e.g., frame interpolation) can be performed without the depth data.

Xiong et al. (U.S. PreGrant Publication No. 2023/0245396 A1) provides a method that includes receiving depth data of a real-world scene from a depth sensor, receiving image data of the scene from an image sensor, receiving movement data of the depth and image sensors from an IMU, and determining an initial 6DOF pose of an apparatus based on the depth data, image data, and/or movement data. The method also includes passing the 6DOF pose to a back end to obtain an optimized pose and generating, based on the optimized pose, image data, and depth data, a three-dimensional reconstruction of the scene. The reconstruction includes a dense depth map, a dense surface mesh, and/or one or more semantically segmented objects. The method further includes passing the reconstruction to a front end and rendering, at the front end, an XR frame. The XR frame includes a three-dimensional XR object projected on one or more surfaces of the scene.

Lawson et al. (U.S. PreGrant Publication No. 2017/0018121 A1, corresponding to WO 2018064287 A1) teaches a virtual reality display system that generates display images in two phases: the first phase renders images based on a predicted pose at the time the display will be updated; the second phase re-predicts the pose using recent sensor data and corrects the images based on changes since the initial prediction. The second phase may be delayed so that it occurs just in time for a display update cycle, to ensure that the sensor data is as accurate as possible for the revised pose prediction. Pose prediction may extrapolate sensor data by integrating differential equations of motion. It may incorporate biomechanical models of the user, which may be learned by prompting the user to perform specific movements. Pose prediction may take into account a user's tendency to look towards regions of interest. Multiple parallel pose predictions may be made to reflect uncertainty in the user's movement.

Specification

The title of the invention is not descriptive.
A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 - 7, 10 - 17 and 20 - 22 are rejected under 35 U.S.C. 103 as being unpatentable over Frommhold et al. (U.S. PreGrant Publication No. 2023/0377241 A1, cited in an IDS filed on 05/12/2025, hereinafter 'Frommhold') in view of Melkote Krishnaprasad et al. (U.S. PreGrant Publication No. 2023/0039100 A1, also cited in the IDS filed on 05/12/2025, hereinafter 'Melkote') and further in view of Weiss et al. (U.S. PreGrant Publication No. 2025/0005965 A1, hereinafter 'Weiss').

With respect to claim 1, Frommhold teaches a method for reprojecting images (e.g., a method for reprojecting, ¶0005, ¶0030), comprising: receiving a frame rendered based on a first head pose of a user of a display device (e.g., receiving a rendered image based on an initial pose of a user of a Head Mounted Display (HMD), ¶0018, ¶0044); determining reprojection information for the frame based on a second head pose of the user of the display device, wherein the second head pose is obtained after the first head pose is obtained (e.g., the method further comprises receiving a pose of the HMD which was used to render the received rendered frame, and using the received pose of the HMD and a current predicted pose of the HMD as input to an early stage reprojection; in this way the early stage reprojection is able to achieve high quality performance, ¶0030); updating the reprojection information based on a third head pose of the user of the display device to generate updated reprojection information (e.g., the reprojection process is updated from multiple successive poses (Pi, Pe1, Pe2, Pe3, Pen), ¶0044, ¶0047, Fig. 3A); applying the updated reprojection information to the frame to generate a reprojected frame (e.g., a late stage reprojection is carried out (applied) at a specific time per field, and the late stage reprojection uses a predicted pose of the HMD for each of the fields, the predicted pose computed by the HMD, ¶0032, ¶0051 - ¶0053, ¶0059); and outputting the reprojected frame for display (e.g., displaying the reprojected frame, ¶0026, ¶0047, ¶0055 & ¶0080); but fails to teach (a) that a portion is within the reprojection information, and (b) that said third head pose is obtained after said second head pose is obtained.
However, with respect to difference (a) above, the mentioned claim limitation is well known in the art, as evidenced by Melkote. In particular, Melkote, in the same field of endeavor of reprojecting images, teaches a portion within the reprojection information (e.g., determining an object, as a portion, of reprojection information and reprojecting said object to be further displayed, ¶0008, ¶0043, ¶0048, & ¶0108, Fig. 11). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Frommhold as taught by Melkote, since Melkote suggested in ¶0008 and ¶0043 that such modification would reduce registration/modeling errors for all objects in order to simplify registration with respect to all objects.

Frommhold, modified by Melkote, fails to teach difference (b). However, also in the same field of endeavor of receiving frames based on poses, updating reprojection information, and display devices, Weiss teaches obtaining a third head pose after a second head pose (e.g., a third pose is acquired after acquiring a second pose, ¶0013, ¶0151, Fig. 7). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to make this modification, since it would enable a more computationally efficient determination of a sequence of poses corresponding to the movement of a person, saving processing time and resources by eliminating the need for correction of such inaccuracies.
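The claim 1 flow at issue (render at a first pose, compute reprojection information at a second pose, update it with a third pose, apply, output) can be sketched as follows. This is an illustrative sketch only: the yaw-only pose model and every function name here are assumptions for exposition, not taken from the application or the cited references.

```python
import numpy as np

def yaw_rotation(angle_rad):
    """3x3 rotation for a head pose modeled, for illustration only, as yaw."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s, c, 0.0],
                     [0.0, 0.0, 1.0]])

def reprojection_info(render_pose, sample_pose):
    """Reprojection matrix mapping the view rendered at render_pose to sample_pose."""
    return yaw_rotation(sample_pose) @ yaw_rotation(render_pose).T

def update_reprojection_info(info, old_pose, new_pose):
    """Update previously computed reprojection info with a newer pose sample."""
    delta = yaw_rotation(new_pose) @ yaw_rotation(old_pose).T
    return delta @ info

# Pose 1: frame rendered; pose 2: initial reprojection info; pose 3: late update.
pose1, pose2, pose3 = 0.00, 0.02, 0.03
info = reprojection_info(pose1, pose2)
info = update_reprojection_info(info, pose2, pose3)
# Composing the update yields the direct pose1 -> pose3 reprojection.
assert np.allclose(info, reprojection_info(pose1, pose3))
```

The delta-rotation update here also mirrors the claim 4 language (a rotation matrix from the second and third head poses applied to a pre-calculated reprojection matrix).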
With respect to claim 2, Frommhold in view of Melkote and further in view of Weiss teaches the method of claim 1, wherein Melkote teaches that the portion of the reprojection information includes at least one of a reprojection matrix, an output pixel bounding box, or a raster correction matrix (e.g., at least a reprojection matrix or bounding box is involved in the reprojection of each pose, ¶0071 - ¶0073, ¶0077, Fig. 9).

With respect to claim 3, Frommhold in view of Melkote and further in view of Weiss teaches the method of claim 2, wherein Melkote teaches that the portion of the reprojection information includes the reprojection matrix, and wherein determining a portion of the reprojection information comprises determining a pre-calculated reprojection matrix based on the second head pose (e.g., wherein the object of the reprojection uses the reprojection matrix, ¶0071 - ¶0073; and wherein determining another object of the reprojection includes using the projection matrix again based on subsequent poses, ¶0043 - ¶0046, Fig. 9).

With respect to claim 4, Frommhold in view of Melkote and further in view of Weiss teaches the method of claim 3, wherein updating the portion of the reprojection information comprises: determining a rotation matrix based on the second head pose and the third head pose; and applying the rotation matrix to the pre-calculated reprojection matrix (e.g., performing several transformations (e.g., rotations and/or translations using homographic transformation), which are commonly performed using matrices, and arranging the different transformations between the early and late reprojection stages according to computational costs, ¶0045 - ¶0047 and/or ¶0053).
With respect to claim 5, Frommhold in view of Melkote and further in view of Weiss teaches the method of claim 2, wherein Melkote teaches that the portion of the reprojection information includes the output pixel bounding box, wherein determining a portion of the reprojection information comprises estimating an output bounding box for an object in the frame, and wherein applying the updated reprojection information comprises rotating the output bounding box based on a difference between the second head pose and the third head pose (e.g., using bounding boxes to selectively apply reprojection, allowing the visual presentation of different objects in the displayed frame to be appropriately corrected, ¶0008, ¶0073 - ¶0074, ¶0078).

With respect to claim 6, Frommhold in view of Melkote and further in view of Weiss teaches the method of claim 5, wherein Melkote teaches that the output bounding box is estimated based on corners of the object (e.g., as a design choice, this corresponds to the vertices of the objects defining the bounding box, ¶0075 - ¶0078).

With respect to claim 7, Frommhold in view of Melkote and further in view of Weiss teaches the method of claim 2, wherein the portion of the reprojection information includes the raster correction matrix, wherein determining a portion of the reprojection information comprises: determining a raster correction matrix for the frame based on the second head pose; and determining an updated raster correction matrix for the frame based on the third head pose; and wherein applying the updated reprojection information comprises applying the updated raster correction matrix to the frame as a part of generating the reprojected frame (as an approach, the reprojection keeps applying a sequence of operations, from the initial pose to a late pose, until an updated pose prediction is completed, ¶0017, ¶0025, ¶0044, ¶0049, ¶0055, Fig. 3A).
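The bounding-box handling described for claim 5 above (rotating an object's estimated output bounding box by the difference between the second and third head poses) might look like this minimal sketch; the 2D yaw-only treatment, the corner coordinates, and the pose values are hypothetical, chosen only to illustrate the operation.

```python
import numpy as np

def rotate_bbox(corners, delta_angle_rad):
    """Rotate 2D bounding-box corners about their centroid by a pose delta."""
    c, s = np.cos(delta_angle_rad), np.sin(delta_angle_rad)
    rot = np.array([[c, -s], [s, c]])
    center = corners.mean(axis=0)
    return (corners - center) @ rot.T + center

# Axis-aligned output box estimated from an object's corners (cf. claim 6).
obj_corners = np.array([[1.0, 1.0], [3.0, 1.0], [3.0, 2.0], [1.0, 2.0]])
pose2, pose3 = 0.10, 0.15  # hypothetical yaw samples, in radians
rotated = rotate_bbox(obj_corners, pose3 - pose2)
# Rotating about the centroid preserves the box centroid.
assert np.allclose(rotated.mean(axis=0), obj_corners.mean(axis=0))
```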
With respect to claim 10, Frommhold in view of Melkote and further in view of Weiss teaches the method of claim 1, wherein the portion of the reprojection information is heuristically updated ("heuristically updated" refers to a process of modifying, refining, or updating information, models, or algorithms based on experience, trial and error, or rules of thumb rather than strict, comprehensive, or mathematically exact calculations; the reprojection of Frommhold is constantly updated based on calculation and/or error, ¶0017 - ¶0018 and/or ¶0048).

With respect to claims 11 - 17, these are apparatus claims corresponding to method claims 1 - 7, respectively. Therefore, they are rejected for the same reasons as method claims 1 - 7, respectively.

With respect to claim 20, Frommhold in view of Melkote and further in view of Weiss teaches the apparatus of claim 11, wherein the apparatus comprises the display device (e.g., a head mounted display (HMD), abstract, ¶0026).

With respect to claim 21, this is an apparatus claim corresponding to method claim 10. Therefore, it is rejected for the same reasons as method claim 10.

With respect to claim 22, Frommhold notes that the invention may be realized through the execution by a CPU (e.g., a processor, ¶0038/¶0060 with ¶0106) of instruction codes (e.g., software, ¶0106) stored in a non-transitory computer readable storage medium (embodied on a computer readable medium, ¶0106). The further limitations are met by the teachings previously discussed with respect to claim 1.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Frommhold, Melkote and Weiss in view of Zhou (U.S. PreGrant Publication No. 2006/0077211 A1, hereinafter 'Zhou').
With respect to claim 8, Frommhold, combined with Melkote and Weiss, teaches the method of claim 7 and all the limitations of claim 8 except for a Taylor series approximation of a sine of a rotation and a cosine of the rotation. However, in the same field of endeavor of rotation transformation and display processing, Zhou teaches a matrix determined based on a Taylor series approximation of a sine of a rotation value and a cosine of the rotation (e.g., wherein a rotation mechanism approximates the values of the sine and cosine functions of a matrix transformation using Taylor series, ¶0026 - ¶0027, ¶0030 - ¶0031 & ¶0040). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Frommhold in view of Melkote and further in view of Weiss as taught by Zhou, since Zhou suggested within ¶0026 - ¶0031 that determining a matrix based on a Taylor series approximation simplifies complex dynamic systems by approximating non-polynomial functions with polynomials, enabling faster and/or more accurate computation.

With respect to claim 18, this is an apparatus claim corresponding to method claim 8. Therefore, it is rejected for the same reasons as method claim 8.

Allowable Subject Matter

Claims 9 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
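The technique attributed to Zhou in the claim 8 discussion above (building a rotation matrix from truncated Taylor series approximations of sine and cosine) can be sketched as follows; the term count and tolerance are illustrative choices, not values from Zhou.

```python
import math

def sin_taylor(x, terms=5):
    """Truncated Taylor series for sin(x) about 0: sum of (-1)^k x^(2k+1)/(2k+1)!."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def cos_taylor(x, terms=5):
    """Truncated Taylor series for cos(x) about 0: sum of (-1)^k x^(2k)/(2k)!."""
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

def rotation_matrix_taylor(angle, terms=5):
    """2x2 rotation matrix built from polynomial sine/cosine approximations."""
    c, s = cos_taylor(angle, terms), sin_taylor(angle, terms)
    return [[c, -s], [s, c]]

# For small head-pose deltas the truncation error is negligible.
m = rotation_matrix_taylor(0.05)
assert abs(m[0][0] - math.cos(0.05)) < 1e-12
assert abs(m[1][0] - math.sin(0.05)) < 1e-12
```

Replacing transcendental calls with low-degree polynomials is the efficiency rationale the rejection cites: for small angles, a handful of multiply-adds reproduces sine and cosine to well below display precision.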
With respect to claim 9, none of the cited references teaches the method of claim 1, further comprising: determining that a fourth head pose of the user of the display device differs from the second head pose by more than a threshold amount, wherein the fourth head pose is obtained after the second head pose is obtained; entering a higher power state to determine the reprojection information; and generating the reprojected frame based on the reprojection information.

With respect to claim 19, none of the cited references teaches the apparatus of claim 11, wherein the at least one processor is further configured to: determine that a fourth head pose of the user of the display device differs from the second head pose by more than a threshold amount, wherein the fourth head pose is obtained after the second head pose is obtained; enter a higher power state to determine the reprojection information; and generate the reprojected frame based on the reprojection information.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Selan (U.S. PG Publication No. 2020/0225473). This reference may be another closest prior art; it teaches a head-mounted display (HMD) with a rolling illumination display panel that can dynamically target a render time for a given frame based on eye tracking.
Using this approach, re-projection adjustments are minimized at the location of the display(s) where the user is looking, which mitigates unwanted, re-projection-based visual artifacts in that "region of interest." For example, logic of the HMD may predict a location on the display panel where a user will be looking during an illumination time period for a given frame, determine a time, within that illumination time period, at which an individual subset of the pixels that corresponds to the predicted location will be illuminated, predict a pose that the HMD will be in at the determined time, and send pose data indicative of this predicted pose to an application for rendering the frame.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUAN M GUILLERMETY, whose telephone number is (571) 270-3481. The examiner can normally be reached 9:00 AM - 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benny Q TIEU, can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JUAN M GUILLERMETY/
Primary Examiner, Art Unit 2682

Prosecution Timeline

Feb 01, 2024
Application Filed
Jan 22, 2026
Non-Final Rejection — §103
Apr 03, 2026
Interview Requested
Apr 13, 2026
Applicant Interview (Telephonic)
Apr 15, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603964: INFORMATION PROCESSING APPARATUS, NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM, AND INFORMATION PROCESSING METHOD FOR MANAGING MAINTENANCE OPERATION PERMISSION
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602756: Method and system for analytical X-ray calibration, reconstruction and indexing using simulation
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12602838: System, Device, and Method for Improved Image Encoding that Non-Iteratively Targets and Achieves a Visual-Quality Threshold and a Compression Efficiency Threshold
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12588971: METHOD FOR ANALYZING A DENTAL SITUATION OF A PATIENT
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12591646: MULTIMODAL BIOMETRIC FUSION BASED AUTHENTICATION
Granted Mar 31, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 83% (+10.8%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 597 resolved cases by this examiner. Grant probability derived from career allow rate.
