Prosecution Insights
Last updated: April 19, 2026
Application No. 18/657,491

LOW LATENCY FRAME DELIVERY

Status: Non-Final OA (§103)

Filed: May 07, 2024
Examiner: PATEL, SHIVANG I
Art Unit: 2615
Tech Center: 2600 (Communications)
Assignee: Qualcomm Incorporated
OA Round: 3 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (309 granted / 415 resolved; +12.5% vs TC avg, above average)
Interview Lift: +18.5% among resolved cases with an interview (strong)
Typical Timeline: 2y 4m average prosecution; 22 applications currently pending
Career History: 437 total applications, across all art units
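As a quick check on the figures above, here is a minimal Python sketch of the arithmetic. The counts are the ones displayed on this page; the implied Tech Center average is back-computed from the displayed delta, which is an assumption about how the page defines it. The interview-lift subset counts are not published here, so the +18.5% figure is taken as displayed rather than recomputed.

    # Career allow rate and implied TC average, using only the
    # counts and deltas displayed in the Examiner Intelligence panel.
    granted, resolved = 309, 415
    allow_rate = granted / resolved           # 0.7446 -> displayed as 74%
    tc_delta = 0.125                          # displayed "+12.5% vs TC avg"
    implied_tc_avg = allow_rate - tc_delta    # assumed back-computation
    print(f"career allow rate: {allow_rate:.1%}")       # 74.5%
    print(f"implied TC average: {implied_tc_avg:.1%}")  # 62.0%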

Statute-Specific Performance

§101: 10.3% (-29.7% vs TC avg)
§103: 57.8% (+17.8% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 415 resolved cases.
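The same back-computation applies per statute. A short sketch, with the values copied from the chart above and the Tech Center average recovered as rate minus delta, shows that every displayed delta is consistent with a single baseline of about 40%:

    # Per-statute figures from the chart above; the implied TC-average
    # estimate is recovered as examiner_rate - displayed_delta.
    statute_stats = {
        "§101": (0.103, -0.297),
        "§103": (0.578, +0.178),
        "§102": (0.167, -0.233),
        "§112": (0.135, -0.265),
    }
    for statute, (rate, delta) in statute_stats.items():
        implied_tc_avg = rate - delta         # 0.40 for every statute
        print(f"{statute}: examiner {rate:.1%} vs implied TC avg {implied_tc_avg:.1%}")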

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see pages 8-11, filed 12/23/2025, with respect to the 35 USC §103 rejection of independent claims 1 and 24 have been fully considered but were not persuasive. Applicant has amended the claims and argues that the previously cited references do not disclose the amended claim language. Applicant argues that Zhang discloses sensors that may include one or more scene cameras 220 (e.g., RGB (visible light) video cameras) that capture high-quality video of the user's environment that may be used to provide the user 290 with a virtual view of their real environment. Applicant points to Petrangeli as disclosing that augmented reality system 102 can identify and/or predict a field of view for a client device, and can determine fields of view for timestamps between a current time and a prediction horizon. Applicant argues that the combination of Zhang and Petrangeli fails to describe or make obvious an apparatus comprising at least one processor configured to, at least, "receive, from an image frame buffer, a first portion of first image data in response to a determination that the first portion has been stored in the image buffer, wherein the first portion of the first image data stored in the image frame buffer includes at least a predetermined amount of the first image data for display" and, "based on the first portion of first image data including at least the predetermined amount of the first image data stored in the image frame buffer, process the first portion of the first image data as a second portion of the first image data is stored in the image buffer". Applicant argues that Zhang is not related to the amended claim language.

In response, the examiner notes that the argued image frame buffer, as known by one of ordinary skill in the art, is a memory for storing data. Applicant's specification also supports this definition in paragraph [0178]: an apparatus comprising a memory and one or more processors coupled to the memory, the one or more processors configured to store, in an image frame buffer. Based on this definition, the examiner points to paragraph [0032] of Zhang, which discloses that the base station includes hardware, including graphics processing units (GPUs) and memory, configured to generate and render frames that include virtual content based at least in part on the sensor information received from the device. The examiner also points to paragraph [0044] of Zhang, which discloses that slice-based rendering reduces latency and also reduces the amount of memory needed for buffering, which reduces the memory footprint on the chip(s) or processor(s) as well as power requirements. Paragraph [0141] of Zhang further discloses writing the current frame to a frame buffer. Based on these paragraphs, Zhang teaches the argued image frame buffer.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-30 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 20200058152 A1) in view of Petrangeli et al. (US 20210304706 A1).

Regarding claim 1, Zhang discloses an apparatus for image processing ([0032] methods and apparatus for providing mixed reality views to users through wireless connections), the apparatus comprising: at least one memory ([0071] memory 330 may include any type of memory, such as dynamic random access memory (DRAM)); at least one processor coupled to the at least one memory ([0069] uniprocessor system including one processor, or a multiprocessor system), the at least one processor configured to: receive, from an image frame buffer, a first portion of first image data ([0063] content of the images in a region around the location at which the user's eyes are currently looking may be rendered with more detail) in response to a determination that the first portion has been stored in the image frame buffer ([0057] sensors may include one or more scene cameras 220 (e.g., RGB (visible light) video cameras) that capture high-quality video of the user's environment that may be used to provide the user; [0141] write the compressed current frame to a previous frame buffer and pass the compressed current frame to the current frame decoder to decompress and process the current frame), wherein the first portion of the first image data stored in the image frame buffer includes at least a predetermined amount of the first image data for display ([0081] the controller of the base station may render frame portions (a frame portion may include an entire frame or a slice of a frame)); based on the first portion of first image data including at least the predetermined amount of the first image data stored in the image frame buffer ([0082] rather than rendering entire frames in the base station and transmitting the rendered frames to the device, the base station may render parts of frames (referred to as slices) and transmit the rendered slices to the device as they are read), output the processed first portion for display ([0057] rendered frames may then be compressed and transmitted to the device via the wireless connection for display to the user); and an image processor configured to: receive second image data, the second image data being different from the first image data ([0063] content of images in regions at which the user is not looking may be compressed more than content of the region around the point at which the user is currently looking); and process the second image data ([0040] the peripheral region may be pre-filtered to reduce information based on knowledge of the human vision system).

Petrangeli discloses processing the first portion of the first image data as a second portion of the first image data is stored in the image frame buffer ([0072] augmented reality system can identify relative positions of objects within a first field of view at a first time and relative positions of augmented reality objects within a second field of view at a second time).

Zhang and Petrangeli are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the mixed reality views of Zhang to include processing the first portion of the first image data as a second portion of the first image data is stored in the image frame buffer, as described by Petrangeli. The motivation for doing so would have been for low-latency adaptive streaming of augmented reality scenes (Zhang, [0004]). Therefore, it would have been obvious to combine Zhang and Petrangeli to obtain the invention as specified in claim 1.

Regarding claim 2, Zhang discloses wherein the first image data is a first image frame and wherein the second image data is a second image frame ([0055] base station 260 may render frames (each frame including a left and right image) that include virtual content based at least in part on the various inputs obtained from the sensors via the wireless connection).

Regarding claim 3, Zhang discloses wherein the at least one processor is configured to: determine the first portion of the first image data has been stored in the image frame buffer based on metadata in a region of the image frame buffer ([0093] foveated region may be estimated from the determined gaze direction and known parameters (e.g., eye parameters and distance from the eye to the display)).

Regarding claim 4, Zhang discloses wherein, to determine the first portion of the first image data has been stored in the image frame buffer based on the metadata in the region of the image frame buffer, the at least one processor is configured to: determine the first portion of the first image data has been stored in the image frame buffer based on detecting a lack of metadata in the region of the image frame buffer ([0063] content of images in regions at which the user is not looking).

Regarding claim 5, Zhang discloses wherein, to determine the first portion of the first image data has been stored in the image frame buffer based on the metadata in the region of the image frame buffer, the at least one processor is configured to: determine the region stores additional data that is different than the metadata ([0040] peripheral region may be pre-filtered to reduce information based on knowledge of the human vision system, for example by filtering high frequency information and/or increasing color compression).

Regarding claim 6, Zhang discloses wherein the at least one processor is configured to: determine that the first portion of the first image data includes at least a predetermined amount of the first image data based on determining the region stores the additional data that is different than the metadata ([0040] peripheral region may be pre-filtered to reduce information based on knowledge of the human vision system, for example by filtering high frequency information and/or increasing color compression); and output the first portion of the first image data corresponding to the predetermined amount of the first image data ([0040] pre-filtering of the peripheral region may result in improved compression of the frame; alternatively, a higher compression ratio may be used in the peripheral region).
Regarding claim 7, Zhang discloses wherein the additional data is the first portion of the first image data ([0040] peripheral region).

Regarding claim 8, Zhang discloses wherein the metadata includes a pattern of colors ([0040] peripheral region may be pre-filtered to reduce information based on knowledge of the human vision system, for example by filtering high frequency information and/or increasing color compression).

Regarding claim 9, Zhang discloses wherein the metadata includes a frame identifier associated with the first image data ([0082] latency and memory impact as each frame needs to be completed, stored, and then transmitted to the next stage of the mixed reality system).

Regarding claim 10, Zhang discloses wherein the image processor includes a first portion and a second portion, and wherein, to process the second image data, the image processor is configured to: process the second image data using the first portion of the image processor to generate processed second image data ([0093] the resolution of the rendered frame outside of the foveated region (referred to as the peripheral region) may be reduced, for example by applying a filter (e.g., a band pass filter) to the peripheral region); and process the processed second image data using the second portion of the image processor to generate further processed second image data ([0094] since the user does not resolve the peripheral region as well as the foveated region, it may be possible to update the peripheral region less frequently than the foveal region without the user noticing much difference).

Regarding claim 11, Zhang discloses wherein the image processor is configured to: process the first portion of the first image data using the first portion of the image processor ([0091] a region of the frame that corresponds to the fovea (referred to as the foveated region 702) may be estimated from the determined gaze direction); and output the processed first portion of the first image data to the second portion of the image processor for processing ([0091] the human eye 792 can perceive higher resolution at the fovea 794 than in the peripheral region).

Regarding claim 12, Zhang discloses wherein the image processor is configured to: output the further processed second image data for display ([0093] foveated region may be estimated from the determined gaze direction and known parameters (e.g., eye parameters and distance from the eye to the display)).

Regarding claim 13, Zhang discloses wherein the at least one processor is configured to: receive, from the image frame buffer, the second portion of the first image data after processing at least part of the first portion of the first image data ([0098] to monitor the output frame rate, an output buffer amount for a frame being rendered may be monitored to insure that the rendering application is going to complete the frame in time for the frame to be transmitted to the device and displayed by the device in sufficient time); process the second portion of the first image data ([0040] peripheral region may be pre-filtered to reduce information based on knowledge of the human vision system, for example by filtering high frequency information and/or increasing color compression); and output the processed second portion for display ([0093] foveated region may be estimated from the determined gaze direction and known parameters (e.g., eye parameters and distance from the eye to the display)).
Regarding claim 14, Zhang discloses wherein the at least one processor is configured to: cause the first image data to be displayed based on outputting the processed second portion for display ([0093] foveated region may be estimated from the determined gaze direction and known parameters (e.g., eye parameters and distance from the eye to the display)).

Regarding claim 15, Zhang discloses wherein the at least one processor is configured to: cause the first portion of the first image data to be displayed prior to display of the second portion of the first image data ([0091] the peripheral region 704 outside the foveated region 702 of the frames may be transmitted over the wireless connection at a lower frame rate than the foveated region).

Regarding claim 16, Zhang discloses further comprising: a display configured to display the first image data ([0047] base station 160 may render frames for display by the device).

Regarding claim 17, Zhang discloses wherein: to process the first portion of the first image data, the at least one processor is configured to composite virtual content with the first portion of the first image data ([0047] base station 160 may render frames for display by the device 100 that include virtual content 110 based at least in part on the various information obtained from the sensors).

Regarding claim 18, Zhang discloses further comprising an image sensor configured to obtain the first image data, wherein the at least one processor is configured to: render the virtual content based on a pose of the image sensor ([0060] more than one world mapping sensor may be used, and world mapping sensor(s) may be positioned at other locations).

Regarding claim 19, Zhang discloses wherein, to output the processed first portion for display, the at least one processor is configured to output the processed first portion to a display buffer ([0070] a GPU may be configured to render objects to be displayed into a frame buffer).

Regarding claim 20, Zhang discloses wherein, to process the first portion of the first image data, the at least one processor is configured to modify at least some of the first portion of the first image data using at least one of a distortion, a distortion compensation, or a warping ([0038] in the warp space rendering method, instead of the rendering engine of the base station performing a rectilinear projection when rendering a frame, which tends to oversample the edges of the image especially in wide FOV frames, a transform is applied that transforms the frame into a warp space).

Regarding claim 21, Zhang discloses further comprising: an image sensor configured to obtain the first image data ([0057] one or more scene cameras 220 (e.g., RGB (visible light) video cameras) that capture high-quality video of the user's environment).

Regarding claim 22, Zhang discloses further comprising: the image frame buffer ([0070] a GPU may be configured to render objects to be displayed into a frame buffer).
Regarding claim 23, Zhang discloses wherein the apparatus is a head-mounted display ([0046] device 100 may, for example, be a head-mounted device (HMD) such as a headset, helmet, goggles, or glasses that may be worn by a user).

Regarding claim 24, Zhang discloses a method of image processing ([0032] methods and apparatus for providing mixed reality views to users through wireless connections), the method comprising: receiving, by at least one processor from an image frame buffer, a first portion of first image data ([0063] content of the images in a region around the location at which the user's eyes are currently looking may be rendered with more detail) in response to a determination that the first portion has been stored in the image frame buffer ([0057] sensors may include one or more scene cameras 220 (e.g., RGB (visible light) video cameras) that capture high-quality video of the user's environment that may be used to provide the user; [0141] write the compressed current frame to a previous frame buffer and pass the compressed current frame to the current frame decoder to decompress and process the current frame), wherein the first portion of the first image data stored in the image frame buffer includes at least a predetermined amount of the first image data for display ([0081] the controller of the base station may render frame portions (a frame portion may include an entire frame or a slice of a frame)); based on the first portion of first image data including at least the predetermined amount of the first image data stored in the image frame buffer ([0082] rather than rendering entire frames in the base station and transmitting the rendered frames to the device, the base station may render parts of frames (referred to as slices) and transmit the rendered slices to the device as they are read), outputting the processed first portion for display ([0057] rendered frames may then be compressed and transmitted to the device via the wireless connection for display to the user); receiving, by an image processor, second image data, the second image data being different from the first image data ([0063] content of images in regions at which the user is not looking may be compressed more than content of the region around the point at which the user is currently looking); and processing, by the image processor, the second image data ([0040] the peripheral region may be pre-filtered to reduce information based on knowledge of the human vision system).

Petrangeli discloses processing, by the at least one processor, the first portion of the first image data as a second portion of the first image data is stored in the image frame buffer ([0072] augmented reality system can identify relative positions of objects within a first field of view at a first time and relative positions of augmented reality objects within a second field of view at a second time).

Zhang and Petrangeli are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the mixed reality views of Zhang to include processing, by the at least one processor, the first portion of the first image data as a second portion of the first image data is stored in the image frame buffer, as described by Petrangeli. The motivation for doing so would have been for low-latency adaptive streaming of augmented reality scenes (Zhang, [0004]).
Therefore, it would have been obvious to combine Zhang and Petrangeli to obtain the invention as specified in claim 24.

Regarding claim 25, Zhang discloses wherein the first image data is a first image frame and wherein the second image data is a second image frame ([0055] base station 260 may render frames (each frame including a left and right image) that include virtual content based at least in part on the various inputs obtained from the sensors via the wireless connection).

Regarding claim 26, Zhang discloses determining the first portion of the first image data has been stored in the image frame buffer based on metadata in a region of the image frame buffer ([0093] foveated region may be estimated from the determined gaze direction and known parameters (e.g., eye parameters and distance from the eye to the display)).

Regarding claim 27, Zhang discloses wherein the image processor includes a first portion and a second portion, and wherein processing the second image data by the image processor comprises: processing the second image data using the first portion of the image processor to generate processed second image data ([0093] the resolution of the rendered frame outside of the foveated region (referred to as the peripheral region) may be reduced, for example by applying a filter (e.g., a band pass filter) to the peripheral region); and processing the processed second image data using the second portion of the image processor to generate further processed second image data ([0094] since the user does not resolve the peripheral region as well as the foveated region, it may be possible to update the peripheral region less frequently than the foveal region without the user noticing much difference).

Regarding claim 28, Zhang discloses processing the first portion of the first image data using the first portion of the image processor ([0091] a region of the frame that corresponds to the fovea (referred to as the foveated region 702) may be estimated from the determined gaze direction); and outputting the processed first portion of the first image data to the second portion of the image processor for processing ([0091] the human eye 792 can perceive higher resolution at the fovea 794 than in the peripheral region).

Regarding claim 29, Zhang discloses outputting the further processed second image data for display ([0093] foveated region may be estimated from the determined gaze direction and known parameters (e.g., eye parameters and distance from the eye to the display)).

Regarding claim 30, Zhang discloses receiving, by the at least one processor from the image frame buffer, the second portion of the first image data after processing at least part of the first portion of the first image data ([0098] to monitor the output frame rate, an output buffer amount for a frame being rendered may be monitored to insure that the rendering application is going to complete the frame in time for the frame to be transmitted to the device and displayed by the device in sufficient time); processing, by the at least one processor, the second portion of the first image data ([0040] peripheral region may be pre-filtered to reduce information based on knowledge of the human vision system, for example by filtering high frequency information and/or increasing color compression); and outputting the processed second portion for display ([0093] foveated region may be estimated from the determined gaze direction and known parameters (e.g., eye parameters and distance from the eye to the display)).
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVANG I PATEL, whose telephone number is (571) 272-8964. The examiner can normally be reached M-F, 9am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHIVANG I PATEL/
Primary Examiner, Art Unit 2615
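For readers following the disputed limitation, the claim describes a pipelined buffer read: processing of a first portion of a frame begins once a predetermined amount has been stored, while the second portion is still being written. The sketch below is an editorial illustration of that idea in Python, not the applicant's or Zhang's implementation; the frame size, threshold, and data units are all invented for the example.

    # Illustrative producer/consumer pipeline: the consumer starts
    # processing the first portion of a frame as soon as THRESHOLD
    # units have been stored, while the producer is still writing
    # the rest of the frame into the buffer.
    import threading

    FRAME_SIZE = 8      # total units of image data in one frame (invented)
    THRESHOLD = 4       # predetermined amount that triggers processing

    frame_buffer = []   # stands in for the image frame buffer
    cond = threading.Condition()

    def store_frame():
        """Producer: stores the frame one unit at a time."""
        for unit in range(FRAME_SIZE):
            with cond:
                frame_buffer.append(f"unit-{unit}")
                cond.notify()

    def process_frame():
        """Consumer: processes the first portion while the second
        portion is still being stored, then finishes the frame."""
        with cond:
            cond.wait_for(lambda: len(frame_buffer) >= THRESHOLD)
            first_portion = list(frame_buffer[:THRESHOLD])
        print("processing first portion:", first_portion)
        with cond:
            cond.wait_for(lambda: len(frame_buffer) == FRAME_SIZE)
            second_portion = list(frame_buffer[THRESHOLD:])
        print("processing second portion:", second_portion)

    producer = threading.Thread(target=store_frame)
    consumer = threading.Thread(target=process_frame)
    consumer.start()
    producer.start()
    producer.join()
    consumer.join()

In this toy pipeline the consumer begins work roughly half a frame before the producer finishes, which mirrors the latency argument the Office Action attributes to Zhang's slice-based rendering ([0044]).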

Prosecution Timeline

May 07, 2024: Application Filed
Jun 06, 2025: Non-Final Rejection (§103)
Sep 04, 2025: Response Filed
Oct 28, 2025: Final Rejection (§103)
Dec 23, 2025: Response after Non-Final Action
Jan 15, 2026: Request for Continued Examination
Jan 26, 2026: Response after Non-Final Action
Mar 04, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602847: SYSTEMS AND METHODS FOR LAYERED IMAGE GENERATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12599838: APPARATUS AND METHODS FOR RECORDING AND REPORTING ABUSIVE ONLINE INTERACTIONS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592004: IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591947: DISTORTION-BASED IMAGE RENDERING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12584296: WORK MACHINE DISPLAY CONTROL SYSTEM, WORK MACHINE DISPLAY SYSTEM, WORK MACHINE, WORK MACHINE DISPLAY CONTROL METHOD, AND WORK MACHINE DISPLAY CONTROL PROGRAM (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 93% (+18.5%)
Median Time to Grant: 2y 4m
PTA Risk: High

Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
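These projections follow from the Examiner Intelligence figures. A short sanity check, assuming the interview lift is simply additive (which the page implies but does not state), reproduces the displayed numbers:

    # Projection sanity check: grant probability = career allow rate,
    # and the interview lift is treated as additive (an assumption;
    # the page's exact model is not disclosed).
    base = 309 / 415                      # 0.7446 -> displayed as 74%
    lift = 0.185                          # displayed "+18.5%" interview lift
    with_interview = min(base + lift, 1.0)
    print(f"base {base:.0%}, with interview {with_interview:.0%}")  # 74%, 93%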
