Prosecution Insights
Last updated: April 19, 2026
Application No. 19/277,969

VIRTUAL AND AUGMENTED REALITY SYSTEMS AND METHODS

Final Rejection — §101, §102, §103

Filed: Jul 23, 2025
Examiner: VU, KHOA
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Magic Leap Inc.
OA Round: 2 (Final)

Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 1m
Grant Probability with Interview: 84%

Examiner Intelligence

Career Allow Rate: 68% — above average (234 granted / 345 resolved; +5.8% vs TC avg)
Interview Lift: +15.8% (strong), measured across resolved cases with interview
Avg Prosecution: 3y 1m (27 applications currently pending)
Total Applications: 372, across all art units

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§103: 73.3% (+33.3% vs TC avg)
§112: 5.9% (-34.1% vs TC avg)
Tech Center averages shown for comparison are estimates. Based on career data from 345 resolved cases.
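The headline metrics above are simple ratios over the examiner's resolved cases: 234 grants out of 345 resolved cases gives the 68% career allow rate, and the interview lift is the allow-rate gap between interviewed and non-interviewed cases. A minimal sketch of that arithmetic, assuming a hypothetical per-case record format (the field names are illustrative, not this tool's actual schema):

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # resolved by grant (True) or abandonment (False)
    had_interview: bool  # at least one examiner interview on record

def allow_rate(cases):
    """Share of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allow-rate gap between cases with and without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Illustrative check against the headline figure: 234 grants out of
# 345 resolved cases reproduces the reported 68% career allow rate.
cases = [ResolvedCase(granted=(i < 234), had_interview=False) for i in range(345)]
print(f"career allow rate: {allow_rate(cases):.0%}")  # -> 68%
```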

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to amended claims 1, 4, 11, and 14, filed on 01/29/2026, have been considered but are not persuasive; the examiner finds that the amended limitations are taught by the previously introduced references.

In the Remarks, page 9, lines 13-17, Applicant argued that Edgren does not disclose or suggest "receiv[ing] the rendered imagery that includes virtual content to be presented through the display, wherein the rendered imagery further includes control information embedded in the rendered imagery, and wherein the control information is embedded in a portion of the rendered imagery that is separate from the virtual content to be presented through the display," as recited by the amended independent claims. The examiner respectfully disagrees. In paragraph [0018], Edgren discloses "The shadow property may for instance relate to relevant refinements of the graphic visualization to embed a virtual shadow resembling a shadow of a corresponding fictitious physical element subjected to the corresponding surrounding light conditions," and in [0019], "by taking the intensive directed light direction property, and subsequently direction of determined intensive directed light, into consideration when determining the shadow property in the rendering process as suggested by said embodiment, the graphic visualization may be adjusted such that the embedded virtual shadow resemble the shadow of the fictitious physical element should the fictitious physical element have been subjected to the intensive directed light." Edgren thus teaches receiving rendered imagery that includes virtual content (e.g., a virtual shadow) to be presented through the display and that further includes control information embedded in the rendered imagery (e.g., the graphic visualization is adjusted by the embedded virtual shadow to resemble the shadow of the fictitious physical element under the intensive directed light).

In the Remarks, page 10, lines 7-9, Applicant argued that Edgren still fails to teach or suggest that "the control information is embedded in a portion of the rendered imagery that is separate from the virtual content to be presented through the display," as recited by the amended independent claims. The examiner respectfully disagrees. In Fig. 3 and paragraph [0056], Edgren discloses "the adaptation of the graphic visualization 13 may be based on at least the shadow property, such that a virtual shadow of the virtual element 14 is applied in the graphic visualization 13 to resemble a shadow of the fictitious physical element should the fictitious physical element have been subjected to the intensive directed light 17…when the shadow property additionally is based on at least the obstacle distance property and/or the obstacle shape property, be based on the shadow property such that a virtual shadow 42 of the virtual element 14 is applied in the graphic visualization 13 to resemble a shadow 22 of the fictitious physical element should the fictitious physical element have been subjected to the intensive directed light 17." Edgren thus teaches control information embedded in a portion of the rendered imagery (e.g., the shadow property of a virtual shadow 42 (Fig. 3), a finger shadow of the virtual element 14 on a portion of the dashboard in the rendered imagery (Fig. 5)) that is separate from the virtual content (the virtual shadow of the virtual element 14, Fig. 5e) to be presented through the display under the directed light 17 (Fig. 3).

Independent claims 6 and 19 recite elements similar to those of independent claim 1 and are rejected on the same analysis. The new claims are rejected over Edgren et al. (U.S. 2014/0160100 A1) in view of Spring et al. (U.S. 2020/0312026 A1), as set forth below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 19 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 19 recites one or more computer-readable storage media. The broadest reasonable interpretation of a claim drawn to a computer-readable storage medium (also called a machine-readable medium, among other variations) typically covers both non-transitory tangible media and transitory propagating signals per se, in view of the ordinary and customary meaning of computer-readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. 101 as covering non-statutory subject matter. The USPTO recognizes that applicants may have claims directed to computer-readable media that cover signals per se, which the USPTO must reject under 35 U.S.C. 101 as covering both non-statutory and statutory subject matter. A claim drawn to such a computer-readable storage medium that covers both transitory and non-transitory embodiments may be amended to cover only statutory embodiments, and thereby avoid a rejection under 35 U.S.C. § 101, by adding the limitation "non-transitory" to the claim. Such an amendment typically does not raise the issue of new matter, even when the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning that includes signals per se. Applicant's specification, in paragraphs [0132] and [0199], does not exclude the media from being a signal carrier wave; the claim is therefore subject to rejection under § 101. As an additional note, a non-transitory computer-readable medium having executable programming instructions stored thereon is considered statutory, because non-transitory computer-readable media exclude transitory data signals.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-10 and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Edgren (U.S. 2014/0160100 A1).

Regarding Claim 1 (Currently amended): Edgren discloses a virtual or augmented reality display system (Edgren, [0007]: "a user interface system for adapting a graphic visualization of a virtual element on a display"; Edgren teaches a virtual display system) comprising:

a sensor configured to determine one or more characteristics of ambient lighting (Edgren, [0012]: "the user interface system determines surrounding light characteristic of surrounding light associated with the display, the surrounding light from one or several sensors"; and [0015]: "the surrounding light may comprise ambient light"; Edgren teaches a sensor that determines one or more characteristics of ambient lighting, i.e., surrounding light);

a display configured to present rendered imagery (Edgren, [0040]: "adapting a graphic visualization of a virtual element on a display to render a corresponding fictitious physical element, there will be disclosed that rendering may change dynamically with changing surrounding light conditions"; Edgren teaches a display presenting a corresponding fictitious physical element in an image); and

a display controller (Edgren, [0041]: "a display controller node"; Edgren teaches a controller) configured to:

receive the rendered imagery that includes virtual content to be presented through the display, wherein the rendered imagery further includes control information embedded in the rendered imagery (Edgren, [0018] and [0019], quoted above; Edgren teaches receiving rendered imagery that includes virtual content (e.g., a virtual shadow) to be presented through the display and that further includes control information embedded in the rendered imagery, in that the graphic visualization is adjusted by the embedded virtual shadow to resemble the shadow of the fictitious physical element under the intensive directed light), and wherein the control information is embedded in a portion of the rendered imagery that is separate from the virtual content to be presented through the display (Edgren, Fig. 3 and [0056], quoted above; the shadow property of virtual shadow 42 (Fig. 3), a finger shadow of the virtual element 14 on a portion of the dashboard in the rendered imagery (Fig. 5), is separate from the virtual content (the virtual shadow of the virtual element 14, Fig. 5e) presented through the display under the directed light 17 (Fig. 3));

read the control information embedded in the rendered imagery (Edgren, [0043]: "in the determined direction 18 of the intensive directed light. FIG. 3. The obstacle 15 is here represented by a hand, and particularly an index finger, approaching the display 3 in a determined direction 20. The obstacle 15 is positioned at a determined distance 19 from the display 3. A shadow 22 of the obstacle 15 is partly casted on the display 3, and subsequently a part thereof on the graphic visualization 13"; Fig. 3 of Edgren shows reading (applying) the control information embedded in the rendered imagery, e.g., determining the direction 18 of the directed light as the obstacle hand 15, and particularly an index finger, approaches the display 3 at a distance 19; a finger shadow 22 of the obstacle hand 15 is partly cast/embedded on the display 3 in the rendered imagery);

based on the control information, adjust one or more characteristics of the virtual content to match the one or more characteristics of the ambient lighting (Edgren, [0007]: "The user interface system determines surrounding light characteristics of surrounding light associated with the display, and determines light adjusted characteristics of the virtual element based on the surrounding light characteristics"; [0015]: "the surrounding light may comprise ambient light"; and [0042]: "The user interface system 2 may comprise, the processor 10"; Edgren teaches adjusting, based on the control information, characteristics of the virtual content (the shadow of the finger) to match the characteristics of the ambient light (the surrounding/background light)), wherein the display controller is configured to adjust the one or more characteristics of the virtual content after the virtual content has been rendered (Edgren, [0055]: "the adaptation of the graphic visualization 13 may comprise selecting a graphic visualization profile to represent the graphic visualization 13 from a set of predefined graphic visualization profiles. The best-fitting profile, i.e. pre-rendered graphic visualization, constituting the best match is selected from the predefined set, which set for instance is stored in memory 11"; and [0058]: "in Action 403 determines light adjusted characteristics of the virtual element 14 based on the surrounding light characteristics. Then, when the user interface 2 adapts the graphic visualization 13 based on the light adjusted characteristics"; Edgren teaches adjusting characteristics of the virtual object by selecting a predefined graphic visualization profile, i.e., a pre-rendered graphic visualization, after the virtual object has been rendered); and

instruct the display to present the virtual content with the adjusted one or more characteristics (Edgren, [0019], quoted above; Edgren teaches instructing the display to present the virtual content with the adjusted characteristics, the embedded virtual shadow resembling the shadow of the fictitious physical element under the intensive directed light).

Regarding Claim 2: Edgren discloses the virtual or augmented reality display system of Claim 1, wherein the one or more characteristics of the ambient lighting comprise a brightness of the ambient lighting (Edgren, [0015]: "the surrounding light may comprise ambient light"; and [0050]: "light adjusted characteristics of the virtual element 14 based on the surrounding light characteristics. The light adjusted characteristics may comprise an illumination property"; Edgren teaches that the ambient lighting (the surrounding light) includes a brightness (illumination property) of the ambient lighting).

Regarding Claim 3: Edgren discloses the virtual or augmented reality display system of Claim 1, wherein the one or more characteristics of the ambient lighting comprise a hue of the ambient lighting (Edgren, [0015], quoted above; and [0050]: "The light adjusted characteristics may comprise a hue property"; Edgren teaches that the ambient lighting (the surrounding light) includes a hue (hue property) of the ambient lighting).

Regarding Claim 4 (Currently amended): Edgren discloses the virtual or augmented reality display system of Claim 1, wherein the one or more characteristics of the virtual content comprise a brightness of the virtual content (Edgren, [0059]: "Properties such as shadows, illuminations, hues,…of the virtual element 14, which may have effect on different areas of the graphic visualization 13, may have been adapted such that the graphic visualization 13 realistically renders a corresponding fictitious physical element"; Edgren teaches a characteristic of the virtual object (virtual element 14) that includes a brightness (an illumination) of the virtual content).
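For context on the disputed limitation, claims 1-4 describe adjusting already-rendered virtual content so that its brightness and hue match the sensed ambient lighting. Below is a minimal editorial sketch of that idea, not code from the application or from Edgren; the gain-and-tint model and all function names are assumptions.

```python
import numpy as np

def match_ambient(rendered_rgb: np.ndarray,
                  ambient_brightness: float,
                  ambient_tint_rgb: np.ndarray) -> np.ndarray:
    """Post-render adjustment: scale the brightness and tint the hue of
    already-rendered virtual content to match sensed ambient light.

    rendered_rgb:       HxWx3 float array in [0, 1] (rendered virtual content)
    ambient_brightness: sensed ambient level; 1.0 means reference lighting
    ambient_tint_rgb:   per-channel tint of the ambient light
    """
    adjusted = rendered_rgb * ambient_brightness   # brightness match
    adjusted = adjusted * ambient_tint_rgb         # hue/tint match
    return np.clip(adjusted, 0.0, 1.0)

# Example: dim, warm room light (70% brightness with a reddish tint).
frame = np.random.rand(480, 640, 3).astype(np.float32)
out = match_ambient(frame, 0.7, np.array([1.0, 0.9, 0.8]))
```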
Regarding Claim 5 (Currently amended): Edgren discloses the virtual or augmented reality display system of Claim 1, wherein the one or more characteristics of the virtual content comprise a hue of the virtual content (Edgren, [0059], quoted above; Edgren teaches a characteristic of the virtual object (virtual element 14) that includes a hue of the virtual content).

Regarding Claim 6 (Currently amended): Edgren discloses a method (Edgren, [0007]: "a method performed by a user interface system for adapting a graphic visualization of a virtual element on a display"; Edgren teaches a method of a virtual display system) in a virtual or augmented reality display system, the method comprising: receiving the rendered imagery that includes virtual content to be presented through the display, wherein the rendered imagery further includes control information embedded in the rendered imagery, and wherein the control information is embedded in a portion of the rendered imagery that is separate from the virtual content to be presented through the display; reading the control information embedded in the rendered imagery; receiving one or more characteristics of ambient lighting from a sensor of the display system; based on the control information, adjusting, using a processor, one or more characteristics of the virtual content to match the one or more characteristics of the ambient lighting, wherein the one or more characteristics of the virtual content are adjusted after the virtual content has already been rendered; and instructing the display to present the virtual content with the adjusted one or more characteristics. Claim 6 is substantially similar to claim 1 and is rejected based on similar analyses.

Regarding Claim 7: Edgren discloses the method of Claim 6, wherein the one or more characteristics of the ambient lighting comprise a brightness of the ambient lighting. Claim 7 is substantially similar to claim 2 and is rejected based on similar analyses.

Regarding Claim 8: Edgren discloses the method of Claim 6, wherein the one or more characteristics of the ambient lighting comprise a hue of the ambient lighting. Claim 8 is substantially similar to claim 3 and is rejected based on similar analyses.

Regarding Claim 9 (Currently amended): Edgren discloses the method of Claim 6, wherein the one or more characteristics of the virtual content comprise a brightness of the virtual content. Claim 9 is substantially similar to claim 4 and is rejected based on similar analyses.

Regarding Claim 10 (Currently amended): Edgren discloses the method of Claim 6, wherein the one or more characteristics of the virtual content comprise a hue of the virtual content. Claim 10 is substantially similar to claim 5 and is rejected based on similar analyses.

Regarding Claim 19 (New): Edgren discloses one or more computer-readable storage media (Edgren, [0042]: "software and/or firmware, e.g. stored in a memory such as the memory 11, that when executed by the one or more processors such as the processor 10") storing instructions which, when executed, cause at least one processor to perform operations for presenting rendered imagery through a display of a virtual or augmented reality display system, the operations comprising: receiving the rendered imagery that includes virtual content to be presented through the display, wherein the rendered imagery further includes control information embedded in the rendered imagery, and wherein the control information is embedded in a portion of the rendered imagery that is separate from the virtual content to be presented through the display; reading the control information embedded in the rendered imagery; receiving one or more characteristics of ambient lighting from a sensor of the display system; based on the control information, adjusting one or more characteristics of the virtual content to match the one or more characteristics of the ambient lighting, wherein the one or more characteristics of the virtual content are adjusted after the virtual content has been rendered; and instructing the display to present the virtual content with the adjusted one or more characteristics. Claim 19 is substantially similar to claim 1 and is rejected based on similar analyses.

Claim Rejections - 35 USC § 103

Claims 11, 13-15, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Edgren et al. (U.S. 2014/0160100 A1) in view of Spring et al. (U.S. 2020/0312026 A1).

Regarding Claim 11 (New), directed to the method of Claim 6: Edgren does not explicitly teach wherein the control information is embedded in the rendered imagery by replacing a row or a column in a frame of the rendered imagery with the embedded control information. However, Spring teaches this limitation (Spring, [0071]: "as the 3D scene is rendered, any contextual information that has been added to the scene can be shown as icons, links, etc. embedded in the 3D scene such as shown in FIGS. 4E through 4K"; and [0097]: "with reference now to FIG. 4H, a screen shot 450 of an optimized 3D scene rendered with contextual information identifiers 441-443 and their associated frames 451-453 are presented on the display 116"; Spring teaches control information embedded by replacing (adding) a row or a column in a frame (rows and columns in frames 451-453, Fig. 4H) of the rendered imagery). Edgren and Spring are combinable because they are from the same field of endeavor, systems and methods for image processing, and address similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Edgren to embed control information in rows or columns of a frame, as taught by Spring (Spring, Fig. 4H, [0071], [0097]). Doing so may show the user where additional contextual information may be useful and enable the user to point the camera at the desired region to obtain the contextual information (Spring, [0044]).
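Claims 11 and 13 concern carrying control information in a dedicated row or column of the frame, which the display controller reads and strips before presentation (claims 14 and 18 cover the removal step). A rough sketch of how such a scheme could work, offered as an editorial illustration under an assumed byte packing, not as Spring's actual implementation:

```python
import numpy as np

def embed_control_row(frame: np.ndarray, control: bytes) -> np.ndarray:
    """Replace row 0 of an HxWx3 uint8 frame with control bytes.

    The control data rides in the pixel values of the first row, which
    the display controller reads and strips before the frame is shown.
    """
    h, w, c = frame.shape
    payload = np.frombuffer(control, dtype=np.uint8)
    assert payload.size <= w * c, "control data must fit in one row"
    out = frame.copy()
    row = np.zeros(w * c, dtype=np.uint8)
    row[:payload.size] = payload
    out[0] = row.reshape(w, c)
    return out

def read_and_strip_control_row(frame: np.ndarray, n_bytes: int):
    """Read n_bytes of control data from row 0; return (control, imagery)."""
    control = frame[0].reshape(-1)[:n_bytes].tobytes()
    return control, frame[1:]  # remaining rows are the displayable imagery

frame = np.zeros((480, 640, 3), dtype=np.uint8)
tagged = embed_control_row(frame, b"\x01\x07depth=3")  # hypothetical payload
ctrl, imagery = read_and_strip_control_row(tagged, 9)
```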
Regarding Claim 13 (New), directed to the method of Claim 6: Edgren does not explicitly teach wherein the control information is embedded in the rendered imagery by adding an additional row or column in a frame of the rendered imagery, the additional row or column including the embedded control information. However, Spring teaches this limitation (Spring, [0071] and [0097], quoted above; Spring teaches control information embedded by adding a row or a column in a frame (rows and columns in frames 451-453, Fig. 4H) of the rendered imagery). Edgren and Spring are combinable; see the rationale in claim 11.

Regarding Claim 14 (New), directed to the method of Claim 6: Edgren does not explicitly teach further comprising removing the control information from the rendered imagery. However, Spring teaches removing the control information from the rendered imagery (Spring, [0059]: "FIG. 4B. Screen shot 405 includes crosshairs 402, contextual information capture 407, scan start/stop 401, and large ring 408"; and [0060]: "small ring 412 is used to indicate to the user that the motion has been reduced to within the bounds of contextual information capturing device's ability to capture a higher resolution image. Once large ring 408 disappears and small ring 412 appears the user can select contextual information"; Spring teaches removing the control information (e.g., the large ring 408) so that the small ring 412 can be used to capture a higher-resolution image (Figs. 4B, 4C)). Edgren and Spring are combinable; see the rationale in claim 11.

Regarding Claim 15 (New): The combination of Edgren and Spring discloses the virtual or augmented reality display system of Claim 1, wherein the control information is embedded in the rendered imagery by replacing a row or a column in a frame of the rendered imagery with the embedded control information. Claim 15 is substantially similar to claim 11 and is rejected based on similar analyses.

Regarding Claim 17 (New): The combination of Edgren and Spring discloses the virtual or augmented reality display system of Claim 1, wherein the control information is embedded in the rendered imagery by adding an additional row or column in a frame of the rendered imagery, the additional row or column including the embedded control information. Claim 17 is substantially similar to claim 13 and is rejected based on similar analyses.

Regarding Claim 18 (New): The combination of Edgren and Spring discloses the virtual or augmented reality display system of Claim 1, wherein the display controller is further configured to remove the control information from the rendered imagery. Claim 18 is substantially similar to claim 14 and is rejected based on similar analyses.

Claims 12 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Edgren (U.S. 2014/0160100 A1) in view of Spitzer et al. (U.S. 2018/0136720 A1).
Regarding Claim 12 (New), directed to the method of Claim 6: Edgren does not explicitly teach wherein the control information is embedded in the rendered imagery by replacing one or more values in one or more pixels of the rendered imagery with the embedded control information. However, Spitzer teaches this limitation (Spitzer, [0092]: "the rendering device 1402 may transmit the query and receive the indicator in response via a control channel implemented as parameters embedded in a header or other field of a pixel stream"; and [0079]: "FIG. 12, an image region 1202 having a resolution of 10×16 pixels is to be combined with an image region 1204 having a resolution of 6×10 pixels, diagram 1206, the pixels of the first row 1208 (row 0) of the base image region 1202 are extracted, as is a set 1220 of the first 6 pixels from row 0 of the non-base image region 1204, and these pixels are combined to generate the illustrated row 1212 (row 0) of twenty-two pixels for a resulting combined pixel array"; Spitzer teaches control information (a control channel) embedded in the rendered imagery by replacing (adding) values in pixels of the rendered imagery). Edgren and Spitzer are combinable because they are from the same field of endeavor, systems and methods for image processing, and address similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Edgren to replace one or more values in one or more pixels of the rendered imagery with the embedded control information, as taught by Spitzer (Spitzer, Fig. 12, [0079], [0092]). Doing so may efficiently allocate pixel computation processes between the rendering device and the display controller (Spitzer, [0025]).

Regarding Claim 16 (New): The combination of Edgren and Spitzer discloses the virtual or augmented reality display system of Claim 1, wherein the control information is embedded in the rendered imagery by replacing one or more values in one or more pixels of the rendered imagery with the embedded control information. Claim 16 is substantially similar to claim 12 and is rejected based on similar analyses.
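Claim 12's variant embeds control values in individual pixel values rather than in a whole row. One common way to do that is low-order-bit packing; the sketch below assumes that approach purely for illustration (Spitzer describes a control channel embedded in a header or other field of a pixel stream, so the LSB scheme here is the editor's assumption, not Spitzer's method):

```python
import numpy as np

def embed_bits_lsb(frame: np.ndarray, control: bytes) -> np.ndarray:
    """Hide control bytes in the least-significant bits of leading pixel values."""
    flat = frame.copy().reshape(-1)
    bits = np.unpackbits(np.frombuffer(control, dtype=np.uint8))
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(frame.shape)

def extract_bits_lsb(frame: np.ndarray, n_bytes: int) -> bytes:
    """Recover n_bytes of control data from the least-significant bits."""
    bits = frame.reshape(-1)[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Round-trip check with one hypothetical control byte.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
tagged = embed_bits_lsb(frame, b"\x2a")
assert extract_bits_lsb(tagged, 1) == b"\x2a"
```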
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHOA VU, whose telephone number is (571) 272-5994. The examiner can normally be reached 8:00-4:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEE M TUNG/ Supervisory Patent Examiner, Art Unit 2611
/KHOA VU/ Examiner, Art Unit 2611

Prosecution Timeline

Jul 23, 2025
Application Filed
Nov 18, 2025
Non-Final Rejection — §101, §102, §103
Feb 10, 2026
Interview Requested
Feb 18, 2026
Applicant Interview (Telephonic)
Feb 18, 2026
Examiner Interview Summary
Mar 02, 2026
Response Filed
Apr 05, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598266
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12597087
HIGH-PERFORMANCE AND LOW-LATENCY IMPLEMENTATION OF A WAVELET-BASED IMAGE COMPRESSION SCHEME
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12578941
TECHNIQUE FOR INTER-PROCEDURAL MEMORY ADDRESS SPACE OPTIMIZATION IN GPU COMPUTING COMPILER
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12567181
SYSTEMS AND METHODS FOR REAL-TIME PROCESSING OF MEDICAL IMAGING DATA UTILIZING AN EXTERNAL PROCESSING DEVICE
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12548431
CONTEXTUALIZED AUGMENTED REALITY DISPLAY SYSTEM
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 84% (+15.8%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 345 resolved cases by this examiner. Grant probability is derived from the career allow rate.
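The with-interview figure composes directly from the tiles above: the career allow rate plus the observed interview lift. A toy check of that arithmetic (the simple additive model is an assumption for illustration; the tool's actual estimator is not documented here):

```python
base_grant_probability = 0.68  # examiner career allow rate
interview_lift = 0.158         # observed lift on interviewed cases

with_interview = min(base_grant_probability + interview_lift, 1.0)
print(f"grant probability with interview: {with_interview:.0%}")  # -> 84%
```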
