Prosecution Insights
Last updated: April 19, 2026
Application No. 18/488,419

PERIPHERAL LUMINANCE OR COLOR REMAPPING FOR POWER SAVING

Final Rejection §103
Filed: Oct 17, 2023
Examiner: WANG, YI
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 6 (Final)
Grant Probability: 76% (Favorable)
Predicted OA Rounds: 7-8
Predicted Time to Grant: 2y 7m
Grant Probability With Interview: 91%

Examiner Intelligence

Career Allow Rate: 76% — above average (368 granted / 481 resolved; +14.5% vs TC avg)
Interview Lift: +14.7% — moderate lift, among resolved cases with an interview
Typical Timeline: 2y 7m avg prosecution; 24 applications currently pending
Career History: 505 total applications across all art units
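The headline figures above are internally consistent, which a few lines of arithmetic confirm (a minimal sketch; the 368 / 481 / 505 counts are taken directly from this page):

```python
# Cross-check the examiner career statistics reported above.
granted, resolved, total_apps = 368, 481, 505

allow_rate = granted / resolved      # career allowance rate
pending = total_apps - resolved      # applications not yet resolved

print(f"{allow_rate:.1%}")           # 76.5%, shown on the page rounded to 76%
print(pending)                       # 24, matching "24 currently pending"
```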

Statute-Specific Performance

§101: 5.3% (−34.7% vs TC avg)
§103: 64.1% (+24.1% vs TC avg)
§102: 10.3% (−29.7% vs TC avg)
§112: 11.7% (−28.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 481 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This is in response to applicant's amendment/response filed on 12/31/2025, which has been entered and made of record. Claims 1, 3, 10, 1and 14-15 have been amended. No claim is newly added. Claims 1-16 and 19-22 are pending in the application.

Response to Arguments

Applicant's arguments (Remarks, p. 7-10) with respect to independent claims 1, 10, and 15, and the dependent claims, have been considered but are moot because the arguments do not apply to any of the references being used in the current rejection. Applicant's arguments have been addressed in the detailed rejection below with reference to Keiya. The arguments regarding the dependent claims by virtue of their dependency are moot because the independent claims are not allowable.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-10, 12-17, and 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over Thunström (US 20180224935 A1) in view of Jarvenpaa et al. (US 20210174768 A1), further in view of Zhang et al. (WO 2019067157 A1), and further in view of Keiya et al. (US 20210397340 A1).

Regarding Claim 1, Thunström discloses an electronic device (ABST reciting "a system for presenting graphics on a display device"; Fig. 1) comprising: a display (Fig. 1 showing display device 110); and processing circuitry configured to (Fig. 1 showing graphics processing device 130; ¶33 reciting "Graphics processing device 130 employed by various embodiments of the invention may be for causing an image to be displayed on display device 110. Graphics processing device 130 may modify what image is displayed on display device 110 based at least in part on the gaze point of the user on display device 110, or a change in the gaze point of the user on display device 110, as determined by eye tracking device 120."): prepare first image data having a default foveated region before receiving an indication of a gaze of a user from an eye tracker, and send the first image data to the display to cause presentation of the virtual image content having the default foveated region (¶65 reciting "In FIG. 4, display device 110 is shown, and an initial user gaze point 410 is shown thereon. Prior to any change in initial gaze point 410, embodiments of the invention may provide increased graphics quality in area 420.", where the initial foveated region corresponds to a default foveated region before receiving an indication of a gaze change; Fig. 5 step 510 showing display of an image before receiving information indicative of a gaze point, and ¶74 reciting "At step 510, method 500 may include displaying an image on display device 110."); and prepare a second image frame after the first image frame with dynamic foveation based on a condition being satisfied (Fig. 13 showing a flow for updating a foveated region, i.e., dynamic foveation; at operation 1306, a first frame with a foveated setting is determined. Further, ¶105 reciting "At operation 1310, the computing device determines a second location of the fixation position on the graphical user interface based on the gaze data. The second location is associated with a change to the user gaze. For example, the gaze data indicates that the user's gaze on the graphical user interface changed resulting in a change to the location of the fixation position. The fixation position may be maintained in single position for a period of time for a portion of the change to the user gaze (e.g., until the user changes his or her gaze point by certain amount)." The period of time reads on a condition.)

However, Thunström does not explicitly disclose the default foveated region being based on movement of virtual image content corresponding to the default foveated region, or preparing second image data based on adjusting the size of the default foveated region. It is well known in the art that salient regions are areas with movement.

In addition, Jarvenpaa teaches "an apparatus, method, computer program and system for use in gaze dependent foveated rendering" (¶1). More specifically, ¶77-¶79 teach that a sub-optimal operational condition of the gaze dependent foveated rendering process may be determined by detecting the object of interest, and ¶79 teaches that the object of interest is determined based on movement of image content, reciting "The position of the point/object of interest may be determined from . . . image analysis of content (i.e. to detect areas of moving parts of dynamic image content/video . . .". Further, ¶94 recites "FIG. 10, schematically illustrates an example of a recovery action responsive to a determination of a sub-optimal operational condition, namely moving the position of the foveation region 701 to a position of a point or object of interest 706 of the image 700." In other words, Jarvenpaa teaches determining a default foveated region based on movement of the image content.

In addition, ¶154 recites "The image/content which is the subject of foveated rendering may be any type of suitable content, not least for example: an image, visual content (dynamic or static), audio/visual content, video and 3D content.", and ¶155 recites "Where the content is 3D visual content, the gaze position may correspond to 3D coordinates within a displayed virtual 3D image." In other words, the image content is virtual image content. In addition, Jarvenpaa teaches preparing second image data based on adjusting, at a rate, the size of the default foveated region, reciting "controlling a rate of change of: position, shape, size and/or quality of one or more of the regions." (¶106).

It would have been obvious to one with ordinary skill, before the effective filing date of the claimed invention, to modify the device (taught by Thunström) to prepare a default foveated region based on movement of virtual image content (taught by Jarvenpaa). The suggestions/motivations would have been "to provide improved gaze dependent foveated rendering" (¶3), and to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.

However, Thunström in view of Jarvenpaa does not explicitly disclose a first image frame with static foveation. Zhang teaches switching between utilizing dynamic foveation and static foveation in Fig. 17. Further, ¶73 recites "More specifically, frames 292, 294, and 298 correspond to frames for which eye tracking data was collected, whereas frames 296 and 300 correspond to frames during which eye tracking did not occur.", and ¶74 recites "More specifically, in the illustrated embodiment, the frames 296 and 300 are indicative of static foveation being performed." In other words, the transition from frame 296 to frame 298 reads on a transition from static foveation (the first image frame) to dynamic foveation (the second image frame).

It would have been obvious to one with ordinary skill, before the effective filing date of the claimed invention, to modify the device (taught by Thunström in view of Jarvenpaa) to transition from static foveation to dynamic foveation (taught by Zhang). The suggestions/motivations would have been "mitigating visual artifacts that may occur when dynamic foveation is performed" (¶2), and to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.

However, Thunström in view of Jarvenpaa and Zhang does not explicitly disclose wherein the rate changes based on a type of content being displayed, a size of one or more foveated areas, or any combination thereof. Keiya teaches "a change to a display screen, display of a region" (¶9). More specifically, Keiya teaches a displayed region size change rate being based on a size of one or more regions, reciting "The region position control section 62 obtains a size change rate of the application window by calculating the ratio between a length of the diagonal line of the application window which has been acquired, as a window display data, before the size change and the above-described length of the diagonal line which is acquired after the size change." (¶59).
It would have been obvious to one with ordinary skill, before the effective filing date of the claimed invention, to modify the device (taught by Thunström in view of Jarvenpaa and Zhang) to obtain a region size change rate based on a size of one or more regions (taught by Keiya). The suggestions/motivations would have been to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results.

Regarding Claim 2, Thunström in view of Jarvenpaa, Zhang and Keiya discloses the electronic device of claim 1, wherein the processing circuitry is configured to prepare the second image frame with an adjusted foveated region at least in part by preparing a third image frame configured to be presented between the first image frame and the second image frame, wherein the third image frame comprises one or more additional foveated areas evenly distributed between the default foveated region and the adjusted foveated region. (Jarvenpaa, ¶53 reciting "upon detection of a sub-optimal operational condition, additional foveation regions may be provided [e.g. so as to encompass/cover one or more particular positions in the image, such as both a last known gaze position and a position of a point of interest as illustrated and discussed further below with respect to FIG. 12]". Further, ¶99 reciting "The position of such additional foveation regions may be adjusted to correspond to and/or encompass particular positions in the displayed content, e.g. last known gaze position, centre of display, and positions/objects of interest." The feature "evenly distributed between the default foveated region and the adjusted foveated region" is merely a normal design option which someone with ordinary skill in the art would select, in accordance with circumstances, without the exercise of inventive skill, in order to solve the problem posed. The suggestions/motivations would have been to "enable a user not to visually perceive anything being amiss following the occurrence of a sub-optimal condition nor notice the fact that there has been an error." (¶54).)

Regarding Claim 4, Thunström in view of Jarvenpaa, Zhang and Keiya discloses the electronic device of claim 1, wherein the processing circuitry is configured to prepare the second image frame at least in part by: accessing a luminance function of a plurality of luminance functions based on a spatial frequency of the second image frame; and preparing the second image data based on the luminance function, wherein the second image data is configured to be presented via a peripheral region outside of the default foveated region. (Thunström, ¶74 reciting "At step 530, method 500 may further include causing graphics processing device 130 to modify the image displayed on display device 110 based at least in part on . . . the change in the gaze point of the user on display device 110. Step 530 may include, at step 533, increasing the quality of the image in an area around the gaze point of the user, relative to outside the area. Step 530 may also include, at step 536, decreasing the quality of the image outside an area around the gaze point of the user, relative to inside the area." In addition, ¶36-38 disclosing increasing the quality of the image, including resolution and brightness.)

Regarding Claim 5, Thunström in view of Jarvenpaa, Zhang and Keiya discloses the electronic device of claim 4, wherein the processing circuitry is configured to: determine a target position associated with a gaze of a user during a target frame; and expand the default foveated region of the display based on the target position. (Thunström, ¶69 reciting "the system may determine a gaze point 210 is located outside the sub-area 810 however it may perform no action (such as relocating the area 800) until a predetermined number of gaze points 210 are located outside the sub-area (for example 2, 5, 10, 50). Alternatively, the system could temporarily enlarge area 800 until it is certain the gaze point 210 is located within a certain area. Additionally, predefined time periods may be established to determine if gaze points 210 have moved outside of sub-area 810 for at least those time periods prior to enlarging or changing area 800.")

Regarding Claim 6, Thunström in view of Jarvenpaa, Zhang and Keiya discloses the electronic device of claim 4, wherein the processing circuitry is configured to adjust the default foveated region based on expanding, at the rate, the default foveated region in a direction of a gaze of a user. (Thunström, Figs. 4, 7A, 7B; ¶65 disclosing the foveated region adjusted based on the direction of the saccade. See Claims 4 and 5 rejections for detailed analysis.)

Regarding Claim 7, Thunström in view of Jarvenpaa, Zhang and Keiya discloses the electronic device of claim 1, wherein the default foveated region corresponds to an area of expected interest of the virtual image content corresponding to the first image data. (Thunström, ¶77 reciting "data associated with an image may inform the systems and methods described herein to allow prediction of which areas of an image may likely be focused on next by the user. This data may supplement data provided by eye tracking device 120 to allow for quicker and more fluid adjustment of the quality of the image in areas likely to be focused on by a user. For example, during viewing of a sporting event, a picture-in-picture of an interview with a coach or player may be presented in a corner of the image. Metadata associated with the image feed may inform the systems and methods described herein of the likely importance, and hence viewer interest and likely focus, in the sub-portion of the image.")

Regarding Claim 8, Thunström in view of Jarvenpaa, Zhang and Keiya discloses the electronic device of claim 1, wherein the default foveated region comprises a first foveated region and a second foveated region, wherein the first foveated region is positioned within the second foveated region, and wherein a resolution of the first foveated region tapers from a higher resolution and a higher luminance at an edge of the first foveated region to a lower resolution and a lower luminance at an edge of the second foveated region. (Thunström, Figs. 3A and 3B; ¶58 reciting "the increase in quality of the image may be greatest at the center of the area (i.e., proximate to the gaze point), and decrease towards the edges of the area (i.e., distal to the gaze point), perhaps to match the quality of the image surrounding the area. To demonstrate, FIG. 3A shows how image quality may decrease in a linear or non-linear continuous manner from the center of a gaze area outward, while FIG. 3B shows how image quality may decrease in a stepped manner from the center of a gaze area outward." In addition, ¶36-38 disclosing increasing the quality of the image, including resolution and brightness.)

Regarding Claim 9, Thunström in view of Jarvenpaa, Zhang and Keiya discloses the electronic device of claim 1, wherein the default foveated region occurs away from a center of the display. (Thunström, Fig. 4 showing a default foveated region 410 occurring away from a center of the display.)
Regarding Claim 10, Thunström in view of Jarvenpaa, Zhang and Keiya discloses A non-transitory, tangible, computer-readable medium storing instructions that, when executed by processing circuitry, cause the processing circuitry to: (Thunström, ¶8 reciting “a non-transitory machine readable medium having instructions thereon for presenting graphics on a display device is provided.”) When eye tracking via an eye tracker is not available, prepare a first image frame with static foveation at least in part by: generating first image data having a default foveated region based on a saliency by effect of movement of virtual image content of the first image data to be presented on a display; and transmitting the first image data to the display to cause presentation of the virtual image content that includes the default foveated region; and (Thunström , ¶65 reciting “In FIG. 4, display device 110 is shown, and an initial user gaze point 410 is shown thereon. Prior to any change in initial gaze point 410, embodiments of the invention may provide increased graphics quality in area 420.”, where the initial foveated region corresponds to a default foveated region before receiving an indication of a gaze change. Fig. 5 step 510 showing display an image before receiving information indicative of gaze point, and ¶74 reciting “At step 510, method 500 may include displaying an image on display device 110.” In addition, Jarvenpaa teaches a salient region of an area with movement. Zhang teaches “frames 296 and 300 correspond to frames during which eye tracking did not occur” (¶73) and “the frames 296 and 300 are indicative of static foveation being performed. For example, when the viewer's eyes cannot be tracked” (¶74). See Claim 1 rejections for detailed analysis.) 
when eye tracking via the eye tracker is available, preparing a second image frame after the first image frame with dynamic foveation based on a condition being satisfied at least in part by, generating second image data based on adjusting, at a rate, a size of the default foveated region, wherein the rate is based on a type of content being displayed, a size of one or more foveated areas, or any combination thereof. (See Claim 1 rejections for detailed analysis.) Regarding Claim 12, see Claim 2 rejections for detailed analysis. Regarding Claim 13, Thunström in view of Jarvenpaa, Zhang and Keiya discloses The non-transitory, tangible, computer-readable medium of claim 12, wherein the instructions cause the processing circuitry to determine the rate based on a movement of a gaze of a user from the default foveated region to a second location. (Jarvenpaa, ¶106 reciting “Other parameters of gaze dependent foveated rendering that may be adjusted in response to a determination of a sub-optimal operational condition include controlling a rate of change of: position, shape, size and/or quality of one or more of the regions. The rate of change of such parameters may be dependent on the confidence value of the determined gaze position. For example, a position of the first region may be gradually altered over a period of time to correspond to move to a particular position. A position of the foveation region may gradually (over time) be shifted from a last known gaze position to the display centre or to a closest object of interest.” The suggestions/motivations would have been the same as that of Claim 1 rejections.) Regarding Claim 14, Thunström in view of Jarvenpaa, Zhang and Keiya discloses The non-transitory, tangible, computer-readable medium of claim 13, wherein the instructions cause the processing circuitry to adjust, at the rate, the size of the default foveated region of the display in a direction of the movement of the gaze of the user. 
(See Claim 6 rejections for detailed analysis) Regarding Claim 15, Thunström in view of Jarvenpaa, Zhang and Keiya discloses A method comprising: preparing a first image frame with static foveation at least in part by: determining a salient area of image content likely to draw a focal point of a user gaze due to movement of the image content in the salient area compared to non-movement elsewhere; (Jarvenpaa, ¶79 reciting “The position of the point/object of interest may be determined from . . . image analysis of content (i.e. to detect areas of moving parts of dynamic image content/video . . .”) before receiving an indication of movement of a gaze of a user from an eye tracker, preparing first image data based on the image content having a default foveated region corresponding to the salient area of the image content; and sending the first image data to a display to cause presentation of image content having the default foveated region; and preparing a second image frame after the first image frame with dynamic foveation based on a condition being satisfied at least in part by adjusting, at a rate, a size of default foveated region, wherein the rate is based on a type of content being displayed, a size of one or more foveated areas, or any combination thereof. (See Claim 1 rejections for detailed analysis) Regarding Claim 16, Thunström in view of Jarvenpaa, Zhang and Keiya discloses The method of claim 15, wherein preparing the second image frame with dynamic foveation comprises: determining a rate based on a speed of the gaze of the user, the default foveated region, a type of content being displayed, a size of the display, or any combination thereof; and preparing second image data that adjusts, at the rate, the default foveated region of the display. 
(Jarvenpaa, ¶106 reciting “Other parameters of gaze dependent foveated rendering that may be adjusted in response to a determination of a sub-optimal operational condition include controlling a rate of change of: position, shape, size and/or quality of one or more of the regions. The rate of change of such parameters may be dependent on the confidence value of the determined gaze position. For example, a position of the first region may be gradually altered over a period of time to correspond to move to a particular position. A position of the foveation region may gradually (over time) be shifted from a last known gaze position to the display centre or to a closest object of interest.” In addition, ¶72-74 teaching the confidence value is determined based on a speed of the eye movement. The suggestions/motivations would have been the same as that of Claim 1 rejections.) Regarding Claim 19, Thunström in view of Jarvenpaa, Zhang and Keiya discloses The method of claim 15, wherein preparing the second image frame with dynamic foveation comprises: receiving the indication of the gaze of the user from the eye tracker based on the presentation of the image content; and preparing second image data having the default foveated region adjusted based on the indication of the gaze of the user from the eye tracker, wherein the second image data comprises a first foveated region having a first luminance and a second foveated region having a second luminance. (Thunström , ¶33 reciting “Graphics processing device 130 employed by various embodiments of the invention may be for causing an image to be displayed on display device 110. Graphics processing device 130 may modify what image is displayed on display device 110 based at least in part on the gaze point of the user on display device 110, or a change in the gaze point of the user on display device 110, as determined by eye tracking device 120.” Further, Fig. 
5 step 530 showing adjusting the foveated region based on the indication of the gaze, and ¶74 reciting At step 520, method 500 may also include receiving information from eye tracking device 120 indicative of at least one of a gaze point of a user on display device 110, or a change in the gaze point of the user on display device 110. “At step 530, method 500 may further include causing graphics processing device 130 to modify the image displayed on display device 110 based at least in part on . . . the change in the gaze point of the user on display device 110. Step 530 may include, at step 533, increasing the quality of the image in an area around the gaze point of the user, relative to outside the area. Step 530 may also include, at step 536, decreasing the quality of the image outside an area around the gaze point of the user, relative to inside the area.” In addition, ¶36-38 disclosing increasing the quality of the image including resolution and brightness.) Regarding Claim 20, Thunström in view of Jarvenpaa, Zhang and Keiya discloses The method of claim 19, wherein preparing the second image data with dynamic foveation comprises preparing the second image data having the default foveated region adjusted based on an indication of a direction and speed of movement of the gaze of the user from the eye tracker. (Thunström , ¶63 disclosing determine an anticipated gaze point based on a speed of movement of the gaze. ¶65 disclosing the foveated region adjusted based on the direction of the saccade.) Regarding Claim 21, Thunström in view of Jarvenpaa, Zhang and Keiya discloses The electronic device of claim 1, wherein the electronic device comprises a virtual-reality headset. (Thunström, ¶20 reciting “FIG. 
9 illustrates an example of components of a VR headset, according to an embodiment;” ¶31 reciting “the eye tracking device 120 may be provided integral to, or in addition to, a wearable headset such as a Virtual Reality (VR)”) Regarding Claim 22, Thunström in view of Jarvenpaa, Zhang and Keiya discloses The electronic device of claim 1, wherein the condition being satisfied corresponds to a time elapsed since an indication of motion from an eye tracker satisfied a time threshold. (Thunström, ¶105 reciting “At operation 1310, the computing device determines a second location of the fixation position on the graphical user interface based on the gaze data. The second location is associated with a change to the user gaze. For example, the gaze data indicates that the user's gaze on the graphical user interface changed resulting in a change to the location of the fixation position. The fixation position may be maintained in single position for a period of time for a portion of the change to the user gaze (e.g., until the user changes his or her gaze point by certain amount).” The period of time reads on a condition.) Claim(s) 3 and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Thunström (US 20180224935 A1) in view of Jarvenpaa, Zhang and Keiya, and further in view of Rao et al. (US 10528818 B1). Regarding Claim 3, Thunström in view of Jarvenpaa, Zhang and Keiya discloses The electronic device of claim 1. However, Thunström in view of Jarvenpaa , Zhang and Keiya does not explicitly disclose wherein the default foveated region comprises an only area of the virtual image content that moves in relation to a previous frame. Rao teaches “The system includes a salience module that receives a video stream having one more pairs of frames (each frame having a background and a foreground) and detects salient regions in the video stream to generate salient motion estimates. 
The salient regions are regions that move differently than dominant motion in the pairs of video frames.” (ABST). Further, col. 9, ln. 19-23 recites “Pairs of raw video frames (i.e., video stream 400) are input into the ActInfo Saliency modules 402, which detects salient regions as those that move differently than the dominant motion in the pairs of video frames.” It would have been obvious to one with ordinary skill, before the effective filing date of the claimed invention, to modify the device (taught by Thunström in view of Jarvenpaa, Zhang and Keiya) to determine the salient region comprising the area of the image content that moves between consecutive frames (taught by Rao). The suggestions/motivations would have been “combining ActInfo and SLRD for efficiently and simultaneously modeling a moving background and detecting multiple moving objects does not require foreground-less training sequences, obtains more compact representation of scene than treating each video frame independently, and obtains more accurate segmentation of moving objects, even for low-resolution videos and faraway targets, despite nonlinear nuisances.” (col. 9 ln 67 – col. 10 ln 7), and to apply a known technique to a known device (method, or product) ready for improvement to yield predictable results. Regarding Claim 11, Thunström in view of Jarvenpaa, Zhang and Keiya and Rao discloses The non-transitory, tangible, computer-readable medium of claim 10, wherein the instructions cause the processing circuitry to determine the default foveated region based on determining an area of expected visual interest of the virtual image content, (Thunström, ¶77 reciting “data associated with an image may inform the systems and methods described herein to allow prediction of which areas of an image may likely be focused on next by the user. 
This data may supplement data provided by eye tracking device 120 to allow for quicker and more fluid adjustment of the quality of the image in areas likely to be focused on by a user. For example, during viewing of a sporting event, a picture-in-picture of an interview with a coach or player may be presented in a corner of the image. Metadata associated with the image feed may inform the systems and methods described herein of the likely importance, and hence viewer interest and likely focus, in the sub-portion of the image.”) based on movement of the virtual image content compared to a previous frame (See Claim 3 rejections for detailed analysis). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to YI WANG whose telephone number is (571)272-6022. The examiner can normally be reached 9am - 5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YI WANG/
Primary Examiner, Art Unit 2619
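As background on the Rao teaching quoted in the rejection (salient regions are those that move differently than the dominant motion between pairs of video frames), the idea can be illustrated with a minimal sketch. This is a hypothetical simplification, not code from the Rao reference: per-block mean frame difference stands in for optical flow, and the median block motion stands in for the dominant motion.

```python
import numpy as np

def salient_regions(prev, curr, block=8, thresh=2.0):
    """Flag blocks whose motion deviates from the dominant motion.

    Per-block mean absolute frame difference is used as a crude motion
    magnitude proxy. Blocks whose motion differs from the median
    ("dominant") motion by more than `thresh` median absolute deviations
    are marked salient.
    """
    diff = np.abs(curr.astype(float) - prev.astype(float))
    h, w = diff.shape
    hb, wb = h // block, w // block
    # Average the frame difference within each block x block tile.
    mag = diff[:hb * block, :wb * block].reshape(hb, block, wb, block).mean(axis=(1, 3))
    dominant = np.median(mag)                      # dominant motion of the scene
    mad = np.median(np.abs(mag - dominant)) + 1e-9  # robust spread estimate
    return np.abs(mag - dominant) / mad > thresh    # boolean saliency map
```

On a synthetic frame pair where only one 8×8 patch changes against a static background, only the block containing that patch is flagged; everything moving with the dominant (here, zero) motion is suppressed.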

Prosecution Timeline

Oct 17, 2023: Application Filed
May 17, 2024: Non-Final Rejection — §103
Jun 07, 2024: Interview Requested
Aug 21, 2024: Examiner Interview Summary
Aug 21, 2024: Applicant Interview (Telephonic)
Aug 22, 2024: Response Filed
Sep 07, 2024: Final Rejection — §103
Oct 14, 2024: Interview Requested
Oct 22, 2024: Applicant Interview (Telephonic)
Oct 22, 2024: Examiner Interview Summary
Nov 11, 2024: Response after Non-Final Action
Dec 10, 2024: Request for Continued Examination
Dec 13, 2024: Response after Non-Final Action
Dec 14, 2024: Non-Final Rejection — §103
Feb 19, 2025: Interview Requested
Feb 28, 2025: Examiner Interview Summary
Feb 28, 2025: Applicant Interview (Telephonic)
Mar 19, 2025: Response Filed
May 17, 2025: Final Rejection — §103
Jun 25, 2025: Interview Requested
Jul 03, 2025: Examiner Interview Summary
Jul 03, 2025: Applicant Interview (Telephonic)
Aug 14, 2025: Response after Non-Final Action
Aug 21, 2025: Request for Continued Examination
Aug 22, 2025: Response after Non-Final Action
Sep 29, 2025: Non-Final Rejection — §103
Dec 08, 2025: Applicant Interview (Telephonic)
Dec 08, 2025: Examiner Interview Summary
Dec 31, 2025: Response Filed
Mar 09, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579758: DEVICES, METHODS, AND GRAPHICAL USER INTERFACES FOR INTERACTING WITH VIRTUAL OBJECTS USING HAND GESTURES
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579752: SYSTEM AND METHOD FOR CREATING AND FURNISHING DIGITAL MODELS OF INDOOR SPACES
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579708: CHARACTER DISPLAY METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573009: IMAGE PROCESSING METHOD, IMAGE GENERATING METHOD, APPARATUS, DEVICE, AND MEDIUM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12562084: AUGMENTED REALITY WINDOW
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 76%
With Interview: 91% (+14.7%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 481 resolved cases by this examiner. Grant probability derived from career allow rate.
