Prosecution Insights
Last updated: April 19, 2026
Application No. 18/225,237

METHODS AND SYSTEMS FOR CONTROLLING MEDIA CONTENT PRESENTATION ON A SMART GLASSES DISPLAY

Final Rejection §103

Filed: Jul 24, 2023
Examiner: RAYAN, MIHIR K
Art Unit: 2622
Tech Center: 2600 — Communications
Assignee: Adeia Guides Inc.
OA Round: 4 (Final)
Grant Probability: 85% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 5m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 85% (above average; 494 granted / 582 resolved; +22.9% vs TC avg)
Interview Lift: +10.7% for resolved cases with interview (moderate lift)
Typical Timeline: 2y 5m avg prosecution; 24 currently pending
Career History: 606 total applications across all art units

Statute-Specific Performance

§101: 3.1% (-36.9% vs TC avg)
§103: 55.7% (+15.7% vs TC avg)
§102: 23.3% (-16.7% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)
Based on career data from 582 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Acknowledgment is made of Applicant's arguments/remarks made in amendment, in which the following is noted: claims 41, 43, 45, 46, 51, 53, 55, and 56 are amended; the rejection of the claims is traversed; and claims 1 – 40, 44, 47, 54, and 57 are cancelled. Claims 41 – 43, 45 – 46, 48 – 53, 55 – 56, and 58 – 60 are currently pending, and an Office action on the merits follows.

Response to Arguments

Applicant's arguments with respect to claims 41 – 43, 45 – 46, 48 – 53, 55 – 56, and 58 – 60 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant argues that Rider (in view of Sako) does not disclose at least wherein the user is associated with a user profile that specifies an action for adjusting a presentation of the media asset on the smart glass display. However, the Office notes that Kuang discloses said limitation, as discussed in the rejection of the claims below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 41, 42, 43, 45, 48 – 53, 55, and 58 – 60 are rejected under 35 U.S.C. 103 as being unpatentable over Rider et al. (Publication No. US 2016/0178905 A1), hereafter Rider, in view of Sako et al. (Publication No. US 2009/0278766 A1), hereafter Sako, further in view of Kuang et al. (Patent No. US 11,069,104 B1), hereafter Kuang.

Re claim 51, Rider discloses a system for controlling a smart glass display (fig 2A, ref 100) comprising: control circuitry (paragraph [23], computing device 100) configured to: cause to be presented a media asset on a portion of the smart glass display at a first transparency level (paragraph [50], when smart glass 225 of FIG. 2A is turned on and the transparency level is correspondingly adjusted according to one embodiment.
In one embodiment and as illustrated, turning on smart glass 225 facilitates background 261 to be fogged, dimmed, or darkened, etc., having influence (e.g., positive influence) in making the foreground having map 253; also see paragraph [53], At block 310, a smart glass at a computing device (e.g., wearable glasses, smart window, etc.) may be turned on and any transparency associated with the smart glass (and thus with the computing device) may be dynamically and correspondingly adjusted and set to an appropriate level);

measure, using a sensor, illuminance information about light projecting onto the smart glass display (paragraph [55], a determination is made as to whether a change in the surrounding light conditions is detected; paragraph [47], in one embodiment, light sensor 227 may be used to detect or sense the surrounding lighting conditions);

determine, using the control circuitry, a geographical location of the user based on the physical location of the user (paragraph [35], In some embodiments, while evaluating the information, condition logic 203 may take into consideration any number and type of predefined thresholds, predetermined criteria, policies, user preferences, voice instructions, gestures, etc., to reach its decision regarding whether the transparency of smart glass 225 is to be adjusted. For example, predefined user preferences may dictate glass transparency levels to be adjusted based on certain hours (such as 8 AM – 5 PM, evenings, sleep hours, etc.), particular locations (e.g., office, in-flight, outdoors, etc.));

and adjust presentation of the media asset on the smart glass display to a second transparency level by modifying the luminosity of the media asset portion of the smart glass, based on the illuminance information and the geographic location of the user (paragraph [55], At decision block 320, a determination is made as to whether a change in the surrounding light conditions is detected … However, if the current transparency level is to be adjusted, in one embodiment, the current transparency level associated with the smart device is dynamically adjusted to a new appropriate level at block 335. At block 340, the process continues with the new transparency level and, further, the process continues with decision block 320; paragraph [35], to reach its decision regarding whether the transparency of smart glass 225 is to be adjusted. For example, predefined user preferences may dictate glass transparency levels to be adjusted based on certain hours (such as 8 AM – 5 PM, evening, sleep hours, etc.), particular locations (e.g., office, in-flight, outdoors, etc.); paragraph [38], upon receiving the instructions, adjustment logic 211 may automatically and dynamically adjust transparency levels of smart glass display 225. For example, in one embodiment, power source 231 may be triggered by adjustment logic 211 to supply additional power to supply light to smart glass 225 to reduce its transparency (such as making smart glass 225 foggier, dirtier, and/or darker [luminosity])).
Rider discloses the claimed invention but is silent on adjusting the presentation to the second transparency level by modifying the luminosity and color mix for the media asset portion of the smart glass display, based at least in part on measured light illuminance exceeding a threshold light illuminance value and the geographic location of the user; and wherein the user is associated with a user profile that specifies an action for adjusting a presentation of the media asset on the smart glass display.

Sako, in analogous art, discloses a display apparatus and display method. More particularly, Sako discloses adjusting a color mix (Sako [0270], blue or red color is enhanced) and adjusting luminosity based at least in part on measured light illuminance exceeding a threshold light illuminance value (Sako Figure 14 and [0254 – 0256], higher/lower than x Lux/y Lux) and the geographic location of the user (Sako Figure 18; [0300 – 0301]). It would have been obvious to modify Rider to include adjusting the presentation to the second transparency level by modifying the luminosity and color mix for the media asset portion of the smart glass display, based at least in part on measured light illuminance exceeding a threshold light illuminance value and the geographic location of the user, as claimed. Those skilled in the art would appreciate the ability to enable appropriate or entertaining display operation to be performed in accordance with an outside-world situation (Sako [0004]).

Further, Kuang discloses a display that uses a light sensor to generate environmentally matched artificial reality content. More particularly, Kuang discloses wherein the user is associated with a user profile that specifies an action for adjusting a presentation of the media asset on the smart glass display (Kuang Col. 5, lines 1 – 5; The personalized color display settings may be stored in a user's profile. During presentation of a virtual object of a particular color, the personalized color display settings are taken into account and the presented color is adjusted accordingly. By customizing generated artificial reality content to environmentally match the ambient lighting and ambient color, the display assembly provides realistic-looking generated content for the user. For example, the display assembly may detect the background or context of the generated content and adjust the display settings for a more natural appearance of the generated content.). It would have been obvious to further modify Rider (in view of Sako) wherein the user is associated with a user profile that specifies an action for adjusting a presentation of the media asset on the smart glass display, as claimed. Those skilled in the art would appreciate providing realistic-looking generated content for the user.

Re claim 52, in the obvious combination, Rider discloses that the first transparency level is determined based on environmental conditions around the smart glass display (paragraph [53], At block 310, a smart glass at a computing device (e.g., wearable glasses, smart window, etc.) may be turned on and any transparency associated with the smart glass (and thus the computing device) may be dynamically and correspondingly adjusted and set to an appropriate level. For example, surrounding light conditions [environmental condition] may change such that it becomes difficult for the user of the wearable glasses to view or read any text and/or graphics being displayed on the screen of the wearable glasses; see also Sako Figure 4).

Re claim 53, in the obvious combination, Rider discloses wherein the environmental conditions include weather conditions and sunlight conditions based at least in part on the geographic location of the smart glasses comprising the smart glass display (Sako [0265][0267], weather conditions and cloudy weather [sunlight conditions]).
Re claim 55, in the obvious combination, Rider discloses wherein the control circuitry is configured to adjust presentation of the media asset on the smart glass display by adjusting a light transporting the media asset on the smart glass display (Rider [0055]).

Re claim 58, in the obvious combination, Rider discloses the smart glass display is disposed within one or more lenses of the smart glasses and supported by a frame (Rider FIG. 2F, lens 275 is the display and is supported by a frame).

Re claim 59, in the obvious combination, Rider discloses that the smart glass display is voice-controlled (paragraph [55], or whether a user has placed a voice command and/or a gesture command to alter the current transparency level).

Re claim 60, in the obvious combination, Rider does not disclose wherein the smart glass display is controlled by movement of an eye pupil. However, Joo further discloses wherein the smart glass display is controlled by movement of an eye pupil (Joo [0122], According to an exemplary embodiment, the user input unit 130 may receive a multiple input. Throughout the specification, a "multiple input" refers to a combination of at least two input methods. For example, the wearable glasses 100 may receive a touch input and a motion input of a user, or may receive a touch input and a sound input of a user. Also, the wearable glasses 100 may receive a touch input and an eyeball input of a user. An eyeball input refers to a user input for adjusting eye blinking, gaze positions, an eyeball movement speed, or the like to control the wearable glasses 100). It would have been obvious to further modify Rider wherein the smart glass display is controlled by movement of an eye pupil, as claimed. Those skilled in the art would appreciate the ability to interact with the user interface in an inconspicuous manner.
Claims 41, 42, 43, 45, 48, 49, and 50 are method claims drawn to the corresponding apparatus claims 50, 51, 52, 53, 55, 58, 59, and 60 by the smart glass display and are rejected for the same reasons as above, since implementing the apparatus would necessitate using the method as claimed.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIHIR K RAYAN, whose telephone number is (571) 270-5719. The examiner can normally be reached Monday – Friday, 9 AM – 5 PM (EST). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patrick Edouard, can be reached at 571-272-7603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MIHIR K RAYAN/
11 March 2026
Primary Examiner, Art Unit 2622

Prosecution Timeline

Jul 24, 2023
Application Filed
Apr 30, 2024
Non-Final Rejection — §103
Jul 26, 2024
Response Filed
Nov 11, 2024
Response after Non-Final Action
Nov 30, 2024
Non-Final Rejection — §103
Mar 03, 2025
Response Filed
Jul 22, 2025
Response after Non-Final Action
Aug 04, 2025
Request for Continued Examination
Aug 12, 2025
Response after Non-Final Action
Aug 22, 2025
Non-Final Rejection — §103
Nov 19, 2025
Applicant Interview (Telephonic)
Nov 20, 2025
Examiner Interview Summary
Nov 24, 2025
Response Filed
Mar 11, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594007
ENERGY TRANSMITTING DEVICE AND SYSTEM TO MONITOR AND TREAT TINNITUS
2y 5m to grant • Granted Apr 07, 2026

Patent 12586294
Systems And Methods For Generating Stabilized Images Of A Real Environment In Artificial Reality
2y 5m to grant • Granted Mar 24, 2026

Patent 12572222
MODULAR VEHICLE HMI
2y 5m to grant • Granted Mar 10, 2026

Patent 12554320
BODY TRACKING METHOD, BODY TRACKING SYSTEM, AND HOST
2y 5m to grant • Granted Feb 17, 2026

Patent 12547244
Gaze-Driven Autofocus Camera for Mixed-reality Passthrough
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 85%
With Interview: 96% (+10.7%)
Median Time to Grant: 2y 5m
PTA Risk: High
Based on 582 resolved cases by this examiner. Grant probability derived from career allow rate.
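The headline projections can be reconciled arithmetically with the examiner's career data shown above. A minimal sketch, assuming the interview lift is applied additively to the career allow rate (the tool's actual methodology is not disclosed):

```python
# Reconstructing the dashboard's headline figures from the career data above.
# Assumption: the +10.7% interview lift is added directly to the allow rate.

granted = 494          # career grants (from "494 granted / 582 resolved")
resolved = 582         # career resolved cases

allow_rate = granted / resolved          # 0.8488... -> displayed as 85%
interview_lift = 0.107                   # +10.7 percentage points

with_interview = allow_rate + interview_lift   # 0.9558... -> displayed as 96%

print(f"Career allow rate: {allow_rate:.0%}")
print(f"With interview:    {with_interview:.0%}")
```

Under this additive assumption, the rounded outputs match the 85% and 96% figures shown on the page.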
