Prosecution Insights
Last updated: April 19, 2026
Application No. 18/435,133

GRAPHICS RENDERING APPARATUS AND METHOD

Status: Final Rejection — §103
Filed: Feb 07, 2024
Examiner: WU, SING-WAI
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Sony Interactive Entertainment Inc.
OA Round: 2 (Final)
Grant Probability: 8% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
Grant Probability With Interview: 18%

Examiner Intelligence

Career Allow Rate: 8% (15 granted / 189 resolved; -54.1% vs Tech Center average)
Interview Lift: +10.6% (moderate lift; measured over resolved cases with an interview)
Typical Timeline: 3y 0m average prosecution; 12 applications currently pending
Career History: 201 total applications across all art units
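The headline figures above compose from two inputs in this block. A minimal sketch of the arithmetic, assuming the dashboard derives the interview-adjusted probability by adding the observed lift to the base allow rate (that composition is an assumption, not a documented formula):

```python
# Back-of-envelope reproduction of the headline figures above.
# Inputs come from the stats in this block; how they are combined
# is an assumption about this dashboard, not a documented API.

granted = 15            # from "15 granted / 189 resolved"
resolved = 189
interview_lift = 0.106  # from "+10.6% Interview Lift"

allow_rate = granted / resolved                # 0.0794 -> shown as "8%"
with_interview = allow_rate + interview_lift   # 0.1854 -> shown as "18%"

print(f"Career allow rate:          {allow_rate:.1%}")      # 7.9%
print(f"Grant prob. with interview: {with_interview:.1%}")  # 18.5%
```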

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 189 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Status of Claims

Claims 12 and 14-32 are currently pending in this application. Claims 1-11 and 13 have been cancelled. Claims 14-32 have been added.

Response to Amendments

The applicant amended independent claim 12 and added independent claims 24 and 32 to recite features similar to "determining, based on gaze tracking data, a region of a display screen gazed upon by a user; selecting, from among a plurality of different computational processes, one or more particular computational processes to perform on one or more different regions of the display screen based on the determined region of the display screen gazed upon by the user; and executing the one or more particular computational processes on the one or more different regions of the display screen." The applicant cancelled claims 1-11, and the interpretation of claim 1 as invoking 35 U.S.C. 112(f) has been withdrawn.

Response to Arguments

Applicant's arguments filed on November 25, 2025 have been fully considered, but they are directed toward the newly amended claims and are believed to be answered by, and therefore not persuasive in view of, the new ground(s) of rejection presented below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 12, 14, 16, 19, 24-25, 27, 30 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (US 2021/0035264) in view of Selker et al. (US 2018/0299953; IDS).

Regarding claim 12, Li teaches a method (e.g., The present invention provides a method and device for adjusting resolution of a Head-Mounted Display (HMD) apparatus. Li: Abstract L.1-3) comprising:

determining, based on gaze tracking data, a region of a display screen gazed upon by a user (e.g., determining, according to a gaze direction of user eyes and/or working condition of user eyes, and in combination with the saliency information, the importance level of each display content in the multimedia information. Li: [0032]; when displaying in a single-eye visual model, displaying, according to the adjusted resolution corresponding to each display content in the multimedia information, the multimedia information both in a left-eye display region and a right-eye display region of an HMD apparatus. Li: [0052]; when displaying in a stereo visual model, displaying, according to the adjusted resolution corresponding to each display content in the multimedia information, the first type of multimedia information in a left-eye display region and the second type of multimedia information in a right-eye display region, respectively. Li: [0056]. It is obvious that multimedia information with different resolutions is displayed in different display regions. See 12_1 below);

selecting, from among a plurality of different computational processes, one or more particular computational processes to perform on one or more different regions of the display screen based on the determined region of the display screen gazed upon by the user (e.g., As a preferred scheme, as shown in FIG. 5, when two neighbor display contents after adjusting resolution have different resolution, a resolution transition region, defined as a first transition region, is provided between the two neighbor display contents. The first transition region comprises a plurality of resolution values arranged according to a preset trend, for example, displaying a picture with resolution of 500 ppi (ppi is an abbreviation of pixels per inch, which is also called pixel density and represents pixel amount per inch, so the higher the value of ppi is, the higher the resolution a display screen can use). In a neighbor display region of this display region, a picture is displayed with resolution of 300 ppi; in order to enable a video picture to be more smooth and not appear abruptly layered, there is a resolution transition region, with a certain width, between the above two neighbor regions; in this transition region, the resolution is transitioned, with 50 ppi as a resolution difference, from 500 ppi to 300 ppi, that is, a region of 450 ppi is directly adjacent to the region of 500 ppi, then a region of 400 ppi is adjacent to the region of 450 ppi, and finally a region of 350 ppi is adjacent to the region of 400 ppi, thus the resolution is transitioned to the region of 300 ppi. Other values such as 20 ppi or 100 ppi can also be selected as a preset trend, and adjustment need not be performed according to a uniform resolution difference; for example, during the above transition from 500 ppi to 300 ppi, resolution differences of 50 ppi, 20 ppi or 30 ppi can be used in combination. The specific resolution transition display effect can be seen from a picture example in FIG. 6. Li: [0137] and Figs. 5 and 6 [reproduced as images in the original action]); and

executing the one or more particular computational processes on the one or more different regions of the display screen (e.g., Li: [0052] and [0056], quoted above).

While Li does not explicitly teach it, Selker teaches (12_1) determining based on gaze tracking data (e.g., Accordingly, in an example system that is configured to apply image processing to the output of an optical sensor, such as an eye tracking camera, the system may be configured to perform the image processing in response to a determination that the output of an EOG sensor has changed by a sufficiently large amount (e.g., at stage 318B). In some examples, a system including an eye tracking camera can be configured to capture an image, and perform image processing on that image (e.g., to determine eye gaze) in response to a determination that the output of an EOG sensor has changed by a sufficiently large amount. Selker: [0041] L.9-19. The images from the eye tracking camera are therefore taken as the data used to determine the eye gaze).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Selker into the teaching of Li so that the gaze is determined from the images captured by the eye tracking camera.

Regarding claim 14, the combined teaching of Li and Selker teaches the method of claim 12, wherein executing the one or more particular computational processes on the one or more different regions of the display screen comprises generating adjustments to a resolution to the one or more different regions of the display screen (e.g., when displaying in a mixed visual model, displaying, according to the adjusted resolution corresponding to each display content in the multimedia information, the first type of multimedia information in a part region of a left-eye display region of an HMD apparatus and the second type of multimedia information in the residual region of the left-eye display region of the HMD apparatus, and simultaneously displaying the second type of multimedia information in a right-eye display region of the HMD apparatus. Li: [0057]).
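Li's transition region is easiest to see as a stepped pixel-density ramp between two foveation zones. The sketch below reproduces the 500-to-300 ppi example from the [0137] passage above in Python, assuming uniform steps (the paragraph also permits mixed step sizes such as 50/20/30 ppi); the function name and signature are illustrative, not from the reference.

```python
def transition_ramp(start_ppi: int, end_ppi: int, step: int = 50) -> list[int]:
    """Intermediate pixel-density bands for the transition region between
    two neighboring display regions, per the example in Li [0137].
    Illustrative helper; the name does not appear in the reference."""
    sign = -1 if start_ppi > end_ppi else 1
    # Bands strictly between the two regions' resolutions, stepped evenly.
    return list(range(start_ppi + sign * step, end_ppi, sign * step))

# Li's example: a 500 ppi region transitions to a 300 ppi region
# through adjacent bands of 450, 400 and 350 ppi.
print(transition_ramp(500, 300))  # [450, 400, 350]
```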
Regarding claim 16, the combined teaching of Li and Selker teaches the method of claim 12, wherein executing the one or more particular computational processes on the one or more different regions of the display screen comprises at least one of (i) adding a mesh of at least a part of a virtual element to image content on the one or more different regions of the display screen, (ii) removing a mesh of at least a part of a virtual element from the image content on the one or more different regions of the display screen, (iii) modifying at least one of a mesh or texture associated with at least a part of the virtual element within the image content, or (iv) modifying a location of a virtual element within the image content (e.g., In examples in which the display presents a 3D environment, such as in a virtual reality or augmented reality system, a virtual object may appear at the precise location in the 3D environment at which the user is currently looking, enhancing a user's sense of immersion or control. Selker: [0046] L.14-19; In examples in which the display presents a 3D environment, such as in a virtual reality or augmented reality system, it may be desirable for virtual objects to inconspicuously enter or exit the environment, or to change a state of a virtual object (such as the resolution of an asset used to render the object) without the user noticing. Selker: [0047] L.7-13).

Regarding claim 19, the combined teaching of Li and Selker teaches the method of claim 12, wherein the one or more particular computational processes comprises at least one of generating a draw call (e.g., The graphics module 428 can include various known software components for rendering, animating and displaying graphical objects on one or more display surfaces. Selker: [0072] L.1-4; a VR rendering engine reads these information and distributes computation resources during the process of rendering; for example, in a video model, such as on each geometry primitive (for example, 3D point or triangular patch) for storage, the semantic saliency information of the geometry primitive is added. Li: [0145] L.16-21. The rendering implicitly generates a sequence of draw calls), loading at least one of a mesh or texture, performing a garbage collection process, or performing a frame rate synchronization.

Regarding claims 24-25, 27 and 30, the claims are system claims corresponding to method claims 12, 14, 16 and 19, respectively. They are similar in scope and are rejected under a similar rationale. Li teaches that "The present invention relates to the technical field of three-dimensional display, and in particular to a method for adjusting resolution of a Head-Mounted Display (HMD) apparatus, an HMD apparatus for resolution adjustment and a device for adjusting resolution of an HMD apparatus." (Li: [0001]) and "A device for adjusting resolution of a Head-Mounted Display (HMD) apparatus, comprising: a communication module; and a processing module configured to: acquire multimedia information to be displayed in an HMD apparatus, determine saliency information of display contents in multimedia information, adjust, according to the saliency information, resolution corresponding to each display content in the multimedia information, and transmit the resolution-adjusted multimedia information to the HMD apparatus." (Li: Claim 21).

Regarding claim 32, the claim is a non-transitory computer storage media claim corresponding to method claim 12. It is similar in scope and is rejected under a similar rationale. Selker further teaches that "Example system 400 includes one or more computer-readable mediums 401, processing system 404, I/O subsystem 406, wireless communications circuitry (e.g., RF circuitry) 408, audio devices (e.g., speaker, microphone) 410, and sensors 411. These components may be coupled by one or more communication buses or signal lines 403." (Selker: [0064] L.18-24).

Claims 15 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Selker as applied to claims 12 and 24, and further in view of LaFayette et al. (US 2021/0358216; IDS).

Regarding claim 15, Li teaches the method of claim 12, wherein executing the one or more particular computational processes on the one or more different regions of the display screen (e.g., Li: [0052] and [0056], quoted above) comprises adapting a level of detail of at least a part of a mesh of a virtual element within the one or more different regions of the display screen (see 15_1 below).

While the combined teaching of Li and Selker does not explicitly teach it, LaFayette teaches (15_1) adapting a level of detail of at least a part of a mesh of a virtual element within the one or more different regions of the display screen (e.g., In particular embodiments, an object's position relative to the user's foveal focus point may determine the level of detail that is appropriate for representing the object. For example, since the person's 410 torso is within the foveal focus point 401, the level of detail of the mesh geometry used to represent the person's 410 torso [may be the highest]. Since the person's 410 head and lower legs are in the second-farthest region 402 from the center of the foveal focus point 401, they may be represented using more simplified mesh geometry. The dog 420 and cat 430 may be represented with even more simplified mesh geometry since they are in the farthest region 403. LaFayette: [0034]. The level of detail of objects is therefore highest at the foveal focus point 401, lower for objects in the second-farthest region 402, and lowest in the farthest region 403. In particular embodiments, the screen coverage size of the object may be additionally used with either or both the approaches described above (i.e., foveal focus point or lens characteristics) to determine the geometry simplification level of the object. For example, in FIG. 4, the screen coverage of the dog 420 is greater than the screen coverage of the cat 430 since more pixels would be needed to display the dog 420. However, if the dog 420 were to walk farther away from the camera or viewer, then the dog 420 would appear smaller and would need fewer pixels on the screen. In particular embodiments, the rendering system may set a threshold screen pixel size. If the dog 420 continues to walk farther away and eventually has a screen-coverage size that is less than the threshold, the rendering system may selectively reduce the mesh geometry of the dog 420, since any degradation in the level of detail would not be observable given how small the dog 420 would appear. LaFayette: [0036] and Fig. 4 [reproduced as an image in the original action]. As the dog walks farther from the camera or viewer, it appears smaller, needs fewer pixels on the screen, and hence needs lower resolution to render its reduced mesh geometry).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of LaFayette into the combined teaching of Li and Selker so that objects in the foveal region are displayed with higher resolution and a higher level of detail than objects displayed in the second-farthest and farthest regions.

Regarding claim 26, the claim is a system claim corresponding to method claim 15. It is similar in scope and is rejected under a similar rationale.

Claims 17-18 and 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Selker as applied to claim 12, and further in view of Marwecki et al. ("Mise-Unseen: Using Eye-Tracking to Hide Virtual Reality Scene Changes in Plain Sight", User Interface Software and Technology, pp. 777-789, October 17, 2019; IDS).

Regarding claim 17, the combined teaching of Li and Selker teaches the method of claim 12, wherein executing the one or more particular computational processes on the one or more different regions of the display screen comprises adjusting a frame rate of image content on the one or more different regions of the display screen (see 17_1 below).

While the combined teaching of Li and Selker does not explicitly teach it, Marwecki teaches (17_1) executing the one or more particular computational processes on the one or more different regions of the display screen comprises adjusting a frame rate of image content on the one or more different regions of the display screen (e.g., Figure 7 shows and describes this application (video contained in auxiliary material). In contrast to the black masks that are commonly used to eliminate motion in the peripheral vision, we maintain a full field of view. We remove peripheral motion by reducing the frame rate outside the fovea to 1 Hz and blending the renderings of foveated and peripheral areas together. We hide the reduced frame rate in the peripheral area by interpolating between frames, feathering the edge to the foveated area, and adding a motion blur. Marwecki: p. 782 c.2 para. 2 and Figure 7 [reproduced as an image in the original action]. The different regions are the foveated and peripheral areas).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Marwecki into the combined teaching of Li and Selker so that, instead of masking out peripheral motion, the frame rate in the peripheral vision is reduced to 1 Hz to blur the area outside the user's fovea.
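Marwecki's scheme amounts to rendering the periphery on a slower clock and hiding the seams. A minimal sketch of the per-frame bookkeeping, assuming a 90 Hz foveal loop (the paper fixes only the 1 Hz peripheral rate; the helper names and the plain linear blend are assumptions, since the paper uses interpolation plus feathering and motion blur):

```python
FOVEAL_HZ = 90      # assumed headset refresh rate (not stated in Marwecki)
PERIPHERAL_HZ = 1   # Marwecki reduces the peripheral frame rate to 1 Hz

PERIOD = FOVEAL_HZ // PERIPHERAL_HZ  # foveal frames per peripheral frame

def rerender_periphery(frame_index: int) -> bool:
    """Re-render the peripheral layer only once per second; frames in
    between reuse the last peripheral render."""
    return frame_index % PERIOD == 0

def peripheral_blend(frame_index: int) -> float:
    """Interpolation weight between the previous and next 1 Hz peripheral
    frames, hiding the reduced rate from the viewer. A plain linear
    weight is an assumption made for this sketch."""
    return (frame_index % PERIOD) / PERIOD

assert rerender_periphery(0) and not rerender_periphery(45)
assert peripheral_blend(45) == 0.5  # halfway between peripheral updates
```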
Regarding claim 18, the combined teaching of Li and Selker teaches the method of claim 12, wherein executing the one or more particular computational processes is based on whether at least a part of the determined region of the display screen falls within a predefined region of the display screen (see 18_1 below).

While the combined teaching of Li and Selker does not explicitly teach it, Marwecki teaches (18_1) at least a part of the determined region of the display screen falls within a predefined region of the display screen (e.g., Figure 8 shows and describes a demonstration. Past work has demonstrated how to change a presentation or personalize an experience using gaze [11,62]. The gallery builds on this idea and hides the changes. This illustrates how adaptive content benefits from our approach, as obvious and unbelievable transitions can reduce immersion. Marwecki: p. 783 c.1 para. 3 and Figure 8 [reproduced as an image in the original action]. The determined region (the painting on the left in (b)) is changed to an impressionist painting (in (c) and on the left of (d)) in accordance with the modern art paintings (interest-driven content)).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Marwecki into the combined teaching of Li and Selker so that a predefined region of the original painting can be changed to a different painting of a different style.

Regarding claims 28-29, the claims are system claims corresponding to method claims 17-18, respectively. They are similar in scope and are rejected under a similar rationale.

Claims 20-23 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Selker as applied to claim 12, and further in view of Publicover et al. (EP 3140719; IDS) and Young (US 2017/0285736; IDS).

Regarding claim 20, the combined teaching of Li and Selker teaches the method of claim 12, comprising: predicting, based on the gaze tracking data, a future time the user is likely to blink (see 20_1 below); and in response to predicting the future time, executing the one or more particular computational processes within a time period of the future time during which the user is likely to blink (see 20_2 below).

While the combined teaching of Li and Selker does not explicitly teach it, Publicover teaches (20_1) predicting, based on the gaze tracking data, a future time the user is likely to blink (e.g., movement of the eye lids and/or eye lashes can be used to anticipate that a blink is about to occur. As a blink is initiated, the system can anticipate that the user will be functionally blind for the duration of a blink (normally from 0.3 to 0.4 seconds). During this time, power can be conserved by reducing frame rate, and/or interactables and/or other objects can be introduced in a manner that does not attract attention. Furthermore, the functional ability to select or activate within the eye-signal language can be placed on "pause." This mode can be used to adjust timing considerations for certain operations. Publicover: [0466] L.1-6).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Publicover into the combined teaching of Li and Selker so that any movement of the eye lids can be taken as the start of a blink.

While the combined teaching of Li, Selker and Publicover does not explicitly teach it, Young teaches (20_2) in response to predicting the future time, executing the one or more particular computational processes within a time period of the future time during which the user is likely to blink (e.g., FIG. 2 shows an example method 200 wherein a system could adjust the compression or transmission of graphics transmitted to a user in ways that take into account saccades and/or blinks by a viewer. In this example, gaze tracking data 202 is obtained as discussed with respect to FIGS. 1A-1B. The eye tracking data may then be analyzed to detect a saccade and/or blink, as indicated at 204. If, at 206, no saccade or blink is detected, normal transmission of image data to the display may take place, as indicated at 210A, followed by presentation of images, as indicated at 212. The normal transmission takes place with normal transmission parameters and/or data compression parameters. If instead a saccade and/or blink is detected at 206, the transmission of image data may be disabled at 210B for a period that accounts for the nature of the saccade/blink, which may potentially include determining the duration of the saccade/blink at 208 through analysis of the gaze tracking data. Determining the saccade/blink duration at 208 may also include predicting when the saccade/blink will end by utilizing historical gaze tracking data of the user. When the system determines that the saccade/blink is ending, normal compression/transmission of the image data resumes, and the resulting images may then be presented at 212. Young: [0040]. During the blink/saccade period, the normal compression/transmission of the image data is therefore disabled).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Young into the combined teaching of Li, Selker and Publicover so that, during the blinking or saccade period, normal compression/transmission is disabled and processing power can be saved for other operations.

Regarding claim 21, the combined teaching of Li, Selker, Publicover and Young teaches the method of claim 20, wherein predicting the future time the user is likely to blink is based on an average elapsed time between occurrences of the user blinking (e.g., Determining the saccade/blink duration at 208 may also include predicting when the saccade/blink will end by utilizing historical gaze tracking data of the user. When the system determines that the saccade/blink is ending, normal compression/transmission of the image data resumes, and the resulting images may then be presented at 212. Young: [0040] L.18-24).

Regarding claim 22, the combined teaching of Li, Selker, Publicover and Young teaches the method of claim 20, wherein predicting the future time the user is likely to blink is based on one or more contractions of one or more facial muscles of the user prior to blinking (e.g., Blinks take even longer periods of time, requiring a complex series of muscle contractions. The minimum time for a blink is about 0.3 to 0.4 seconds. Publicover: [0191] L.6-7; movement of the eye lids and/or eye lashes can be used to anticipate that a blink is about to occur. As a blink is initiated, the system can anticipate that the user will be functionally blind for the duration of a blink (normally from 0.3 to 0.4 seconds). Publicover: [0466] L.1-3. It is obvious that movements of the eye lids and lashes involve contractions of a series of muscles).
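Claim 21's "average elapsed time between occurrences of the user blinking" reduces to a rolling mean over inter-blink intervals. A minimal sketch of that prediction, assuming blink timestamps are available from the gaze tracker; the helper name is illustrative and appears in neither reference:

```python
from statistics import mean

def predict_next_blink(blink_times: list[float]) -> float:
    """Predict the next likely blink time from the average elapsed time
    between past blinks (claim 21's approach, which the examiner reads
    onto Young's use of historical gaze tracking data)."""
    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    return blink_times[-1] + mean(intervals)

# Blinks observed at t = 0.0, 4.1, 8.0 and 12.2 s -> next expected ~16.3 s,
# giving a ~0.3-0.4 s window (per Publicover) for hidden work at that time.
print(round(predict_next_blink([0.0, 4.1, 8.0, 12.2]), 1))  # 16.3
```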
Regarding claim 23, the combined teaching of Li, Selker, Publicover and Young teaches the method of claim 20, wherein predicting the future time the user is likely to blink is based on one or more stimuli associated with image content displayed on the region of the display screen (e.g., interactables that are the target of smooth pursuit eye movements must first be perceived before a motion can be initiated and subsequently maintained at velocities well below maximum values (30° per second) for physiological smooth pursuit. Pursuit objects that initially appear within a region of perception can avoid intervening saccadic movement(s) when placed sufficiently close to (e.g., well within the foveal view region of 1° to 3°) or even within the structure of a target interactable (see FIG. 13). A saccade (taking up unnecessary time) may be forced to take place if, for instance, a pursuit object has moved some distance (e.g., outside the foveal view region of 1° to 3°) away from the selected interactable prior to perception. Thus, the timing of initial display(s), any delays before movement begins, and the rate(s) of movement are all critical for eye signal control using smooth pursuit mechanisms. Timing must take into account the physiology of anticipated eye movements and optimally include self-adaptive components that can be tuned to each device user, including as experience is gained. Publicover: [0245] L.1-10).

Regarding claim 31, the claim is a system claim corresponding to method claim 20. It is similar in scope and is rejected under a similar rationale.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SING-WAI WU, whose telephone number is (571) 270-5850. The examiner can normally be reached 9:00am - 5:30pm (Central Time). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SING-WAI WU/
Primary Examiner, Art Unit 2611
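For reference, the reply-window arithmetic the conclusion describes, applied to this action's Feb 04, 2026 mailing date (see the timeline below). A stdlib-only sketch; it models only the three- and six-month dates, not extension fees, the advisory-action carve-out, or USPTO weekend/holiday rules:

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of the target month.
    Helper written for this sketch; it does not capture every USPTO
    day-counting edge case."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

mailed = date(2026, 2, 4)            # Final Rejection mailing date (timeline)
reply_due = add_months(mailed, 3)    # shortened statutory period: THREE MONTHS
hard_cutoff = add_months(mailed, 6)  # absolute statutory limit: SIX MONTHS

print(reply_due)    # 2026-05-04
print(hard_cutoff)  # 2026-08-04
```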

Prosecution Timeline

Feb 07, 2024
Application Filed
Aug 22, 2025
Non-Final Rejection — §103
Nov 25, 2025
Response Filed
Feb 04, 2026
Final Rejection — §103
Mar 24, 2026
Interview Requested
Mar 31, 2026
Applicant Interview (Telephonic)
Mar 31, 2026
Examiner Interview Summary
Apr 08, 2026
Request for Continued Examination
Apr 10, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597174
METHOD AND APPARATUS FOR DELIVERING 5G AR/MR COGNITIVE EXPERIENCE TO 5G DEVICES
2y 5m to grant • Granted Apr 07, 2026
Patent 12591304
SYSTEMS AND METHODS FOR CONTEXTUALIZED INTERACTIONS WITH AN ENVIRONMENT
2y 5m to grant • Granted Mar 31, 2026
Patent 12586311
APPARATUS AND METHOD FOR RECONSTRUCTING 3D HUMAN OBJECT BASED ON MONOCULAR IMAGE WITH DEPTH IMAGE-BASED IMPLICIT FUNCTION LEARNING
2y 5m to grant • Granted Mar 24, 2026
Patent 12537877
MANAGING CONTENT PLACEMENT IN EXTENDED REALITY ENVIRONMENTS
2y 5m to grant • Granted Jan 27, 2026
Patent 12530797
PERSONALIZED SCENE IMAGE PROCESSING METHOD, APPARATUS AND STORAGE MEDIUM
2y 5m to grant • Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 8%
With Interview: 18% (+10.6%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 189 resolved cases by this examiner. Grant probability derived from career allow rate.
