Prosecution Insights
Last updated: April 18, 2026
Application No. 18/153,746

Method for representing virtual information in a real environment

Final Rejection — §102, §103
Filed: Jan 12, 2023
Examiner: GOOD JOHNSON, MOTILEWA
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 6 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 3y 5m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 73% (608 granted / 831 resolved; +11.2% vs TC avg) — above average
Interview Lift: +14.1% for resolved cases with interview — moderate lift
Avg Prosecution: 3y 5m (35 currently pending)
Total Applications: 866 across all art units

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§102: 24.4% (-15.6% vs TC avg)
§103: 48.8% (+8.8% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 831 resolved cases
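The four "vs TC avg" deltas are internally consistent: each one implies the same Tech Center baseline. A quick arithmetic check (my framing of the figures, not the tool's documented formula):

```python
# Checking the "vs TC avg" deltas against the statute-specific rates.
# Assumption (mine): delta = examiner_rate - tc_average, so the implied
# TC average for each statute is rate - delta.

stats = {            # statute: (examiner rate %, delta vs TC avg %)
    "§101": (8.9, -31.1),
    "§102": (24.4, -15.6),
    "§103": (48.8, +8.8),
    "§112": (11.0, -29.0),
}

implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```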

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application is being examined under the pre-AIA first to invent provisions.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of pre-AIA 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a) the invention was known or used by others in this country, or patented or described in a printed publication in this or a foreign country, before the invention thereof by the applicant for a patent.

Claims 1-3, 5-13 and 15-20 are rejected under pre-AIA 35 U.S.C. 102(a) as being anticipated by Mukai et al., U.S. Patent No. 8,264,584 B2.

Regarding claim 1, Mukai discloses a computing system configured to:

determine a plurality of regions in a real environment, wherein the plurality of regions are defined based on a position and orientation of an electronic device and a distance from the electronic device camera (col. 18, lines 29-30: the image-capturing range information obtaining unit obtains information (range information) on a range captured by the image capturing apparatus; see also figures 7 and 10-12);

determine, based on the position and orientation of the camera and location information for each of a plurality of points of interest (POIs), a first set of the plurality of POIs belonging to a first region of the plurality of regions in the real environment and a second set of the plurality of POIs belonging to a second region of the plurality of regions in the real environment (col. 20, lines 22-27: the landmark-information extracting unit extracts, from the map database, landmark information included in the image captured by the image capturing apparatus, which Examiner interprets as a first set of POIs, and landmark information of the periphery of the image, which Examiner interprets as a second set belonging to a second region, i.e. the periphery region, in the real environment; see also figure 27);

display, on a screen of the electronic device, a view of the real environment fed from a camera of the electronic device and the first set of the plurality of POIs overlayed over the view of the environment at screen positions and perspective corresponding to a pose of the camera as the view of the real environment is captured and the location information for the first set of the plurality of POIs, wherein each of the first set of the plurality of POIs is displayed at a size corresponding to a relative position of the corresponding POI to the electronic device (col. 21, lines 1-5: the display-resolution information detecting unit detects resolution information of the display unit that displays image information captured by the image capturing apparatus and the landmark information in an overlaying manner; col. 25, lines 56-59: the overlaying display processing unit displays the extracted landmark information overlaid on a scene image obtained by the image processing unit; see also figure 38); and

display, on the screen, the second set of the plurality of POIs at screen positions along a margin of the screen display in the view of the real environment, wherein the second set of the plurality of POIs are presented in a uniform size, and wherein the second set of the plurality of POIs are each selectable to present a preview of virtual information of a corresponding POI of the second set of the plurality of POIs (col. 22, lines 12-15: a landmark-displaying capturing mode for displaying landmarks on a screen while obtaining information for capturing an image and the periphery information; col. 30, lines 22-26: in the landmark display area, the landmarks distant from the image capturing apparatus are displayed in the upper portion of the display area, and the landmarks closer to the image capturing apparatus are displayed in the lower portion; col. 31, lines 62-67: selecting a landmark within a landmark display area; once a landmark is selected, a landmark detailed information display area is displayed on the screen; see also figure 43).

Mukai further discloses determining a position and orientation of an electronic device comprising a camera capturing image data of a view of a real environment (col. 17, lines 21-23: the image capturing apparatus includes a position-information obtaining unit and an image-capturing direction obtaining unit; col. 18, lines 19-21: the image-capturing direction obtaining unit includes an image-capturing direction detecting unit that obtains direction information indicating an orientation).

Regarding claim 2, Mukai discloses the system further configured to: detect input at a POI associated with POI content (col. 32, lines 16-19: the user selects a landmark URL on the screen, using an operation key of the image capturing apparatus, with the landmark URL being displayed in the landmark detailed information area, which Examiner interprets as detected input at a POI associated with POI content, i.e. a landmark URL); and in response to the detected input, perform an action related to the POI associated with the POI content (col. 32, lines 19-22: a web browser provided in the image capturing apparatus is activated, the website indicated by the selected URL is read from the Internet, and the website is displayed on the screen).

Regarding claim 3, Mukai discloses wherein the screen comprises a semi-transparent screen (col. 25, lines 44-48: the image capturing apparatus displays a captured scene obtained through a lens of the image capturing apparatus and the landmark information obtained from the map server in an overlaying manner using the overlaying display processing unit, which Examiner interprets as a semi-transparent screen).

Regarding claim 5, Mukai discloses the system further configured to: display additional content corresponding to each POI (col. 2, lines 58-59: a plurality of additional information items each corresponding to an object included in the digital image captured by the imaging unit; col. 2, lines 64-65: display the additional information overlaid).

Regarding claim 6, Mukai discloses the system further configured to: display a visually perceivable relation indication indicative of a relation between each POI and the additional content (figure 38: area 1, area 2, area 3, which Examiner interprets as a visual hint indicative of a relation).

Regarding claim 7, Mukai discloses wherein the additional content comprises at least one of an icon, a graphical object, text, a figure, a video, an annotation, a name, and a description of a corresponding POI (figure 38).

Regarding claim 8, Mukai discloses the system further configured to: capture depth data of the real environment (figure 10).

Regarding claim 9, Mukai discloses wherein the additional content comprises a distance between a position of each POI in the real environment and a camera position when capturing the view of the real environment (col. 33, lines 11-14: the names of landmarks of the subjects are displayed according to each distance from a capturing position to a corresponding one of the subjects).

Regarding claim 10, Mukai discloses wherein each POI comprises a visual directional hint indicative of the position and orientation of the camera relative to the POI, and further, in response to detecting a change in a camera pose, updating the visual directional hint in accordance with the change of the camera pose relative to the POI (col. 33, lines 15-18: when landmarks are displayed as indicated by an arrow, the distances to the respective subjects displayed as the landmarks are longer in order from the first row to the second row; see also figure 47).

Regarding claims 11-13 and 15-20, they are rejected based upon a similar rationale as claims 1-3 and 5-10 above, respectively. Mukai further discloses a method (col. 17, lines 46-48).
Claim Rejections - 35 USC § 103

The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 4 and 14 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Mukai as applied to claims 1 and 11 above, and further in view of Robarts et al., U.S. Patent No. 7,073,129 B1.

Regarding claims 4 and 14, Mukai discloses a motion sensor for detecting a posture and movement of the user, or placed in the display, such that the user may input an operation using their own posture and movement (col. 44, lines 22-25); that although embodiment 2 uses a television as a representative display, the display is not limited to this and may be, for example, a personal computer (col. 44, lines 31-33); and a real image captured using an image capturing apparatus such as a camera, a mobile phone with a camera, or a notebook computer with a camera (col. 51, lines 47-50). However, Mukai fails to specifically disclose wherein the screen comprises a head-mounted display.

Robarts discloses a screen comprising a head-mounted display (figure 1; col. 6, lines 36-39: the computer has body-worn output devices, including a hand-held flat display, an earpiece speaker, and a head-mounted display; col. 26, lines 27-30: displays a layer of information on top of the real-world view).

It would have been obvious to one of ordinary skill in the art before the filing date of the claimed invention to substitute the display disclosed by Mukai with the display disclosed by Robarts, in that the substitution would yield the predictable result of capturing and displaying the landmarks and overlay information using a head-mounted display capturing device.

Response to Arguments

Applicant's arguments filed 01/21/2026 have been fully considered but they are not persuasive. Applicant argues that the cited prior art fails to disclose "display, on a screen of the electronic device, a view of the real environment fed from a camera of the electronic device and the first set of the plurality of POIs overlayed over the view of the real environment at screen positions and perspective corresponding to a pose of the camera as the view of the real environment is captured." Applicant further argues that Mukai fails to disclose a view of the real environment fed from the camera having an overlay because, in the claims, the overlay is placed based on the camera pose. Examiner responds that Mukai discloses an image-capturing position obtaining unit (col. 6, lines 40-41) and an image-capturing direction obtaining unit (col. 6, lines 43-45), and further discloses that the image capturing apparatus displays additional information in a position in a range determined by the image capturing position and the image capturing direction in which the image has been captured (col. 7, lines 34-37).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Motilewa Good-Johnson, whose telephone number is (571) 272-7658. The examiner can normally be reached Monday - Friday, 6am-2:30pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOTILEWA GOOD-JOHNSON/
Primary Examiner, Art Unit 2619
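For orientation, the claim 1 method as characterized in the rejection (near POIs overlaid in perspective and scaled by distance; peripheral POIs listed at uniform size along the screen margin) can be sketched roughly as follows. Every name, the field of view, and the range threshold are hypothetical, taken neither from the application nor from Mukai:

```python
# Illustrative sketch only: split POIs into an in-view set (perspective
# overlay, distance-scaled labels) and a periphery set (uniform-size
# margin list). All values and names are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class POI:
    name: str
    x: float  # meters east of the camera (hypothetical frame)
    y: float  # meters north of the camera

def classify_pois(pois, heading_deg=0.0, fov_deg=60.0, max_range=500.0):
    """Return (in_view, periphery) for a camera pose given by heading_deg."""
    in_view, periphery = [], []
    for poi in pois:
        dist = math.hypot(poi.x, poi.y)
        bearing = math.degrees(math.atan2(poi.x, poi.y))  # 0 deg = north
        off_axis = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
        if dist <= max_range and off_axis <= fov_deg / 2.0:
            scale = max_range / (max_range + dist)  # nearer POI -> larger label
            in_view.append((poi.name, round(scale, 2)))
        else:
            periphery.append(poi.name)  # shown at uniform size in the margin
    return in_view, periphery

overlay, margin = classify_pois(
    [POI("cafe", 0.0, 100.0), POI("museum", 400.0, -50.0)], heading_deg=0.0
)
print(overlay)  # [('cafe', 0.83)]
print(margin)   # ['museum']
```

A change in camera pose simply means calling the classifier again with the new heading, which is the behavior the claims tie to the camera pose.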

Prosecution Timeline

Jan 12, 2023: Application Filed
Sep 27, 2023: Non-Final Rejection — §102, §103
Dec 27, 2023: Response Filed
Mar 12, 2024: Final Rejection — §102, §103
Jun 17, 2024: Request for Continued Examination
Jun 20, 2024: Response after Non-Final Action
Sep 06, 2024: Non-Final Rejection — §102, §103
Dec 30, 2024: Response Filed
Mar 19, 2025: Final Rejection — §102, §103
Jul 24, 2025: Request for Continued Examination
Jul 25, 2025: Response after Non-Final Action
Oct 16, 2025: Non-Final Rejection — §102, §103
Jan 21, 2026: Response Filed
Apr 02, 2026: Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602107 — SYSTEM AND METHOD FOR DETERMINING USER INTERACTIONS WITH VISUAL CONTENT PRESENTED IN A MIXED REALITY ENVIRONMENT (2y 5m to grant; granted Apr 14, 2026)
Patent 12602884 — DISPLAY SYSTEM AND DISPLAY METHOD FOR AUGMENTED REALITY (2y 5m to grant; granted Apr 14, 2026)
Patent 12597218 — EXTENDED REALITY (XR) MODELING OF NETWORK USER DEVICES VIA PEER DEVICES (2y 5m to grant; granted Apr 07, 2026)
Patent 12592047 — Method and Apparatus for Interaction in Three-Dimensional Space, Storage Medium, and Electronic Apparatus (2y 5m to grant; granted Mar 31, 2026)
Patent 12573100 — USER-DEFINED CONTEXTUAL SPACES (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 73%
With Interview: 87% (+14.1%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 831 resolved cases by this examiner. Grant probability derived from career allow rate.
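The headline projections follow directly from the career figures; a minimal sketch, assuming grant probability is just the career allow rate and the with-interview number adds the observed lift:

```python
# Reproducing the projection numbers from the examiner's career data.
# Assumption (mine): grant probability = career allow rate, and the
# with-interview figure = allow rate + observed interview lift.

granted, resolved = 608, 831
allow_rate = granted / resolved          # ~0.732, shown as 73%
interview_lift = 0.141                   # +14.1% lift from interviews

grant_probability = round(allow_rate * 100)                  # 73
with_interview = round((allow_rate + interview_lift) * 100)  # 87

print(grant_probability, with_interview)  # 73 87
```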
