Prosecution Insights
Last updated: April 19, 2026
Application No. 18/734,599

SYSTEMS AND METHODS FOR PRESENTING SUPPLEMENTAL CONTENT IN AUGMENTED REALITY

Status: Non-Final OA (§103)
Filed: Jun 05, 2024
Examiner: MUSHAMBO, MARTIN
Art Unit: 2615
Tech Center: 2600 (Communications)
Assignee: Adeia Guides Inc.
OA Round: 1 (Non-Final)
Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (690 granted / 816 resolved; +22.6% vs TC average, above average)
Interview Lift: +14.1% (moderate; allow rate among resolved cases with an interview vs. without)
Typical Timeline: 2y 5m average prosecution
Career History: 831 total applications across all art units; 15 currently pending
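
A small Python sketch of the arithmetic behind these cards, assuming the "+22.6%" comparison is in percentage points (the report does not state its rounding rules):

# Career allow rate from the raw counts shown above.
granted, resolved = 690, 816
allow_rate = granted / resolved                     # 0.8456 -> displayed as 85%

# The "+22.6% vs TC avg" delta implies this Tech Center baseline.
implied_tc_avg = allow_rate - 0.226                 # ~0.620

print(f"Career allow rate:  {allow_rate:.1%}")      # 84.6%
print(f"Implied TC average: {implied_tc_avg:.1%}")  # 62.0%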

Statute-Specific Performance

§101: 12.7% (-27.3% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 816 resolved cases.
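
As a cross-check, a short sketch (again assuming the deltas are percentage points): subtracting each delta from the examiner's rate recovers the Tech Center baseline, and all four statutes imply the same estimate.

# Recover the implied Tech Center average from each (rate, delta) pair,
# assuming the "vs TC avg" deltas are percentage points.
stats = {
    "§101": (12.7, -27.3),
    "§103": (48.5, +8.5),
    "§102": (23.7, -16.3),
    "§112": (8.6, -31.4),
}
for statute, (rate, delta) in stats.items():
    print(f"{statute}: implied TC average = {rate - delta:.1f}%")
# All four print 40.0%, i.e., the deltas are consistent with a single
# Tech Center baseline estimate of about 40%.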

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 02/20/2026, 01/05/2026, 06/05/2024, and 11/07/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 52-71 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 20190331914 A1), hereinafter Lee, in view of Bittner (US 20130317912 A1).

Claim 52. (New) A method, comprising: receiving an input requesting information related to an object in a field of view of an augmented reality device (Lee, [0097]: input from a user that specifies a region of interest within the user's field of view); analyzing an image comprising at least a portion of the field of view to detect a plurality of distinct objects within the field of view (Lee, [0193]-[0197]: wearable computing device 312 can receive an image of a picture of an eye of a wearer of wearable computing device 312 in order to determine the eye gaze. In some embodiments, wearable computing device 312 can send the image of the eye(s) of the wearer to a server, such as server 122, for the server to determine the eye gaze vector based on received images of the eye(s) of the wearer. Hence, a region of interest can be indicated using an eye gaze vector); based at least in part on the input, identifying an object of the plurality of distinct objects (Lee, [0097], [0197]); identifying metadata associated with the identified object (Lee, [0153], [0154]).

Lee does not explicitly disclose searching a social media platform using the metadata associated with the identified object to identify social media content related to the identified object; and generating, for presentation, the social media content to the augmented reality device.

Bittner discloses searching a social media platform using the metadata associated with the identified object to identify social media content related to the identified object (Bittner, Abstract; [0043]: a determination of a connection between an image and a user is made; [0047]: a restaurant choice is matched against a friend's restaurant preference profile from a database); and generating, for presentation, the social media content to the augmented reality device (Bittner, [0048]: the information is displayed with additional information from the database).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lee with the teachings of Bittner, since both are analogous art in the augmented reality field. One of ordinary skill in the art would have been motivated to combine the teachings of Lee with the teachings of Bittner in order to provide advertisements to a targeted user based on the user's interests and environment.

Claim 62 essentially recites the same limitations as claim 52. Therefore, the rejection of claim 52 is applied to claim 62.

Claims 53 and 63. (New) The method of claim 52, further comprising determining the identified object comprises a reference to an event, wherein the metadata associated with the identified object is identified based at least in part on the reference to the event (Lee, Abstract: the event here is a sharing session). Same rationale as claim 52.

Claims 54 and 64. (New) The method of claim 52, further comprising determining the identified object comprises a reference to a person, wherein the metadata associated with the identified object is identified based at least in part on the reference to the person (Bittner, [0027]-[0034]: if the server is accessing one's information, it is implied that a user was identified. The user's personal information stored in the user information database 106 may be used by the social media scrape module 108 to assist in identifying relevant data during the social media scraping; data regarding the user's likes, interests, hobbies, travel preferences, and the like is collected from the user's pages on the social media websites 112). Same rationale as claim 52.

Claims 55 and 65. (New) The method of claim 52, further comprising determining the identified object is related to a user account associated with the augmented reality device, wherein the metadata associated with the identified object is identified based at least in part on the relationship to the user account (Bittner, [0027]-[0034]: the user's personal information stored in the user information database 106 may be used by the social media scrape module 108 to assist in identifying relevant data during the social media scraping; data regarding the user's likes, interests, hobbies, travel preferences, and the like is collected from the user's pages on the social media websites 112). Same rationale as claim 52.

Claims 56 and 66. (New) The method of claim 52, wherein searching the social media platform using the metadata associated with the identified object to identify social media content related to the identified object comprises comparing the metadata associated with the identified object to metadata associated with social media content of the social media platform (Lee, [0153], [0154]). Same rationale as claim 52.

Claims 57 and 67. (New) The method of claim 52, further comprising determining the identified object is related to a user account associated with the augmented reality device, wherein searching a social media platform comprises searching the user account (Bittner, [0027]-[0029]: the server of FIG. 4 also includes a social media scrape module 108. The social media scrape module accesses social media websites 112 through network 92 and accesses the user's social media data after logging on to the social media websites using the user's social media website usernames and passwords. The social media scrape module 108 then scrapes the user's social media data for information that may be relevant and of interest to businesses and advertisers). Same rationale as claim 52.

Claims 58 and 68. (New) The method of claim 52, wherein the input comprises an indication of an area of the field of view on which a camera of the augmented reality device is focused (Lee, [0242], [0243], [0247]). Same rationale as claim 52.

Claims 59 and 69. (New) The method of claim 52, wherein the input is received through a voice recognition interface (Lee, [0242], [0243], [0247]: determining that the audio input includes at least part of the audio of interest, and generating an indication of a region of interest associated with the at least part of the audio of interest). Same rationale as claim 52.

Claims 60 and 70. (New) The method of claim 52, wherein generating, for presentation, the social media content to the augmented reality device comprises generating, for presentation using speakers of the augmented reality device, audio related to the social media content (Lee, [0008]: an instruction of audio of interest is received at the wearable computing device. Audio input is received at the wearable computing device via one or more microphones. The wearable computing device determines whether the audio input includes at least part of the audio of interest. In response to determining that the audio input includes the at least part of the audio of interest, the wearable computing device generates an indication of a region of interest associated with the at least part of the audio of interest. The wearable computing device displays the indication of the region of interest as part of the computer-generated image). Same rationale as claim 52.

Claims 61 and 71. (New) The method of claim 52, wherein generating, for presentation, the social media content to the augmented reality device comprises generating, for presentation using a display of the augmented reality device, the social media content (Bittner, [0048]: the information is displayed with additional information from the database).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 9998790 B1: Methods and systems are described herein for providing streamlined access to media assets of interest to a user. The method includes determining that a supplemental viewing device, through which a user views a field of view, is directed at a first field of view. The method further involves detecting that the supplemental viewing device is now directed at a second field of view, and determining that a media consumption device is within the second field of view. A first media asset of interest to the user that is available for consumption via the media consumption device is identified, and the supplemental viewing device generates a visual indication in the second field of view. The visual indication indicates that the first media asset is available for consumption via the media consumption device, and the visual indication tracks a location of the media consumption device in the second field of view.

US 9769434 B1: An application server based in a communication network facilitates help desk support for end users that employ a camera-equipped wearable device. Upon receiving a request from a user for assistance with a problem or task, the application server controls the wearable device to collect multimedia information (e.g., images, video, and sounds) associated with an object of the task. The application server then crowdsources the collected multimedia information to a plurality of other remote user devices. Based on helpful information received from users of the remote devices, and/or an analysis performed at the server, the application server categorizes the task. So categorized, the application server can retrieve pertinent assistance data associated with the task and send that information to the user.

US 20130194164 A1: Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and, if the intent to interact is determined, then interacting with the executable object.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARTIN MUSHAMBO, whose telephone number is (571) 270-3390. The examiner can normally be reached Monday through Friday, 8:00 AM to 5:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARTIN MUSHAMBO/
Primary Examiner, Art Unit 2615
03/21/2026
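
For orientation, independent claim 52 reads as a five-step pipeline, and the rejection concedes that Lee covers only the first four steps, reaching to Bittner for the social media search and presentation. The sketch below restates those steps in Python purely as a reading aid; every name in it (detect_objects, select_object, lookup_metadata, search_social_media, ARDevice) is a hypothetical stand-in, not an API from the application or the cited references.

# Hypothetical sketch of the method recited in claim 52, for orientation only.

def detect_objects(frame):
    # Step 2: analyze an image covering at least part of the field of view
    # and detect a plurality of distinct objects.
    return ["menu", "storefront", "street sign"]

def select_object(objects, user_input):
    # Step 3: based at least in part on the input (gaze, voice, or camera
    # focus), identify one object among those detected.
    return user_input.get("target", objects[0])

def lookup_metadata(obj):
    # Step 4: identify metadata associated with the identified object
    # (mapped to Lee [0153]-[0154] in the rejection).
    return {"name": obj, "category": "restaurant"}

def search_social_media(metadata):
    # Step 5a: search a social media platform using the metadata. This is
    # the step the examiner concedes Lee lacks and maps to Bittner.
    return [f"A friend's post about {metadata['name']}"]

class ARDevice:
    # Minimal stand-in for the augmented reality device.
    def capture_frame(self):
        return b"camera image of the field of view"
    def render(self, content):
        # Step 5b: generate the social media content for presentation.
        print("\n".join(content))

def present_supplemental_content(device, user_input):
    # Step 1: receive an input requesting information about an object
    # in the device's field of view, then run the pipeline.
    frame = device.capture_frame()
    objects = detect_objects(frame)
    target = select_object(objects, user_input)
    metadata = lookup_metadata(target)
    posts = search_social_media(metadata)
    device.render(posts)

present_supplemental_content(ARDevice(), {"target": "menu"})

Framed this way, the combination rationale stands or falls on the last two steps, where the examiner relies on Bittner's social media scrape module and its display of database information.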

Prosecution Timeline

Jun 05, 2024
Application Filed
Mar 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12602892
WALLPAPER DISPLAY METHOD AND APPARATUS, AND ELECTRONIC DEVICE
2y 5m to grant · Granted Apr 14, 2026
Patent 12598282
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
2y 5m to grant · Granted Apr 07, 2026
Patent 12586331
SYSTEM AND METHOD FOR CHANGING OVERALL STYLE OF PUBLIC AREA BASED ON VIRTUAL SCENE
2y 5m to grant · Granted Mar 24, 2026
Patent 12579754
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
2y 5m to grant · Granted Mar 17, 2026
Patent 12573146
PRODUCT PLACEMENT SYSTEMS AND METHODS FOR 3D PRODUCTIONS
2y 5m to grant · Granted Mar 10, 2026
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview: 99% (+14.1%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 816 resolved cases by this examiner. Grant probability is derived from the career allow rate.
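
The 99% with-interview projection appears to be the career allow rate plus the interview lift, capped at 100%; a one-line check under that assumption:

# With-interview projection: base allow rate plus interview lift, capped at 1.0.
base, lift = 690 / 816, 0.141
print(f"With interview: {min(base + lift, 1.0):.0%}")   # 99%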
