Prosecution Insights
Last updated: April 19, 2026
Application No. 18/121,988

METHOD AND DEVICE FOR PRESENTING CONTENT BASED ON MACHINE-READABLE CONTENT AND OBJECT TYPE

Final Rejection — §103, §112
Filed: Mar 15, 2023
Examiner: ALLEN, NICHOLAS E
Art Unit: 2154
Tech Center: 2100 — Computer Architecture & Software
Assignee: Apple Inc.
OA Round: 4 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 3m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 77%, above average (585 granted / 760 resolved; +22.0% vs TC avg)
Interview Lift: +16.2%, strong (allow rate among resolved cases with an interview vs. without)
Typical Timeline: 3y 3m average prosecution; 68 applications currently pending
Career History: 828 total applications across all art units
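
The arithmetic behind these tiles is straightforward to reproduce. Below is a minimal Python sketch assuming hypothetical per-case records; the `Case` layout and field names are illustrative only, not this product's actual schema, and the 585/760 figure comes from the tile above.

from dataclasses import dataclass

@dataclass
class Case:
    granted: bool          # resolved case ended in a grant
    had_interview: bool    # at least one examiner interview on record

def allow_rate(cases):
    # Share of resolved cases that ended in a grant.
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    # Allow rate with an interview minus allow rate without one.
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Career allow rate from the tile above: 585 granted of 760 resolved.
print(f"{585 / 760:.1%}")  # -> 77.0%
# Consistency check: 760 resolved + 68 pending = 828 total applications.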

Statute-Specific Performance

§101: 22.7% (-17.3% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 16.1% (-23.9% vs TC avg)
§112: 4.7% (-35.3% vs TC avg)
Tech Center average is an estimate. Based on career data from 760 resolved cases.
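
As a worked check of these figures: subtracting each stated delta from its rate backs out the Tech Center baseline, and all four rows imply the same 40% estimate. A short sketch:

# Each (rate, delta) pair shown above implies the same TC baseline.
for statute, rate, delta in [
    ("§101", 0.227, -0.173),
    ("§103", 0.506, +0.106),
    ("§102", 0.161, -0.239),
    ("§112", 0.047, -0.353),
]:
    print(f"{statute}: implied TC avg = {rate - delta:.1%}")  # 40.0% for all four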

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In response to Applicant's claims filed on July 10, 2025, claims 1-2, 5-8, 12-13, 16-18, 20-21, 23-24, 26, and 28 are now pending for examination in the application. The rejection under 35 U.S.C. 112 set forth in the previous office action is hereby withdrawn in view of the amendment filed 07/10/2025.

Response to Arguments

This office action is in response to the amendment filed 07/10/2025. In this action, claims 1-2, 5-8, 12-13, 16-18, 20-21, 23-24, 26, and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Ramnath et al. (US Pub. No. 20200143238) in view of Lee et al. (US Pub. No. 20190331914). The Ramnath et al. reference has been added to address the amendment reciting "in response to a gaze of a user of the device being directed to a first portion of the image including the first object, displaying first virtual content that is related to the machine-readable content."

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1, 16, and 20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. In claims 1, 16, and 20, there is no support for "wherein the first virtual content is obtained based on the first object type and the machine-readable content detected in the first portion of the image ..." or for "different from the first virtual content, wherein the second virtual content is obtained based on the second object type and the machine-readable content detected in the second portion of the image." Dependent claims 2, 5-8, 12-13, 17-18, 21, 23-24, 26, and 28 are also rejected for inheriting the deficiencies of the independent claims from which they depend.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5-8, 12-13, 16-18, 20-21, 23-24, 26, and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Ramnath et al. (US Pub. No. 20200143238) in view of Lee et al. (US Pub. No. 20190331914).

With respect to claim 1, Ramnath et al. teaches a method comprising: at a device comprising a display, an image sensor, one or more processors, and a memory (Paragraph 42 discloses that the artificial reality system providing the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers; see also Fig. 9): capturing, via the image sensor, an image of a physical environment that includes machine-readable content displayed with a first object of a first object type and a second object of a second object type (Paragraph 16 discloses that a computing device may access an image of a real-world environment that may contain one or more depictions of real-world objects that are intended targets for AR effects); in response to a gaze of a user of the device being directed to a first portion of the image including the first object, displaying first virtual content that is related to the machine-readable content (Paragraph 5 discloses that the associated machine learning model may be trained to detect objects of one or more real-world object types (e.g., the object-type "posters" associated with the Cowboy RoboNinja poster in the image) and note specific AR targets), wherein the first virtual content is obtained based on the first object type and the machine-readable content detected in the first portion of the image; and in response to the gaze being directed to a second portion of the image including the second object, displaying second virtual content that is related to the machine-readable content and is different from the first virtual content (Paragraph 17 discloses that the local-feature comparison process may be able to disambiguate the potentially matching AR targets to accurately identify a matching AR target).

Ramnath et al. does not explicitly disclose in response to the gaze being directed to a second portion of the image including the second object. However, Lee et al. teaches in response to the gaze being directed to a second portion of the image including the second object (Paragraph 143 discloses wearable computing device 312 having field of view 316 and gaze direction 318 has indicated region of interest 510 within environment 310). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Ramnath et al. with Lee et al. to include, in response to a gaze of a user of the device being directed to the first/second objects, displaying first/second virtual content that is related to the machine-readable content. This would have facilitated presenting different results using an augmented reality interface. The Ramnath et al. reference as modified by Lee et al. teaches all the limitations of claim 1.

With respect to claim 2, Ramnath et al. teaches the method of claim 1, wherein the machine-readable content includes at least one of text, a one-dimensional barcode, or a two-dimensional barcode (Paragraph 28 discloses that local-feature descriptors for several points of interest (e.g., the point of interest 420) may be extracted for the image portion 410; see also Fig. 4, which is clearly text).

With respect to claim 5, Lee et al. teaches the method of claim 1, further comprising: creating a search query using the machine-readable content and the first object type (Paragraph 126 discloses generating queries for one or more search engines, search tools, databases, and/or other sources that include the text "Canola"); transmitting the search query to a server (Paragraph 67 discloses that the overlay module 113 may query the system 100 for data that comprises the virtual overlays 407, 409, 411); and receiving the first virtual content in response to the search query (Paragraph 67, cited above). The motivation-to-combine statement provided in the rejection of independent claim 1 above, combining the Ramnath et al. and Lee et al. references, is applicable to dependent claim 5.

With respect to claim 6, Ramnath et al. teaches the method of claim 1, further comprising: transmitting the machine-readable content and an indication of the first object type to a server (Paragraph 23 discloses that each of the one or more regions of interest may be sent through a first machine learning model (e.g., a CNN) that has been trained to detect objects of one or more real-world object types (e.g., the object-type "posters" associated with the Cowboy RoboNinja poster in the image 310)); and receiving the first virtual content in response to a search query created by the server using the machine-readable content and the object type (Paragraph 42 discloses that artificial reality content may include completely generated content or generated content combined with captured content (e.g., real-world photographs), and may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer)).

With respect to claim 7, Lee et al. teaches the method of claim 1, wherein displaying the first virtual content comprises: determining a plurality of search queries based on the machine-readable content (see Fig. 4A; Paragraph 124 discloses that, in response to the "Find Objects with Text and Apples" instruction, wearable computing device 312 has found two objects: (1) a canola oil bottle with the text "Canola" and (2) a basket of apples); selecting a particular search query from the plurality of search queries based on the first object type (Paragraph 126 discloses generating queries for one or more search engines, search tools, databases, and/or other sources that include the text "Canola"; upon generating these queries, wearable computing device 312 can communicate the queries as needed and, in response, receives search results based on the queries); transmitting the particular search query to a server (Paragraph 126, cited above); and receiving the first virtual content from the server in response to transmitting the particular search query (Paragraph 126, cited above). The motivation-to-combine statement provided in the rejection of independent claim 1 above is applicable to dependent claim 7.

With respect to claim 8, Ramnath et al. teaches the method of claim 1, wherein the first virtual content includes at least one of first text, first audio, a first set of images, or a first video, and the second virtual content includes at least one of second text, second audio, a second set of images, or a second video (Paragraph 42, cited above for claim 6, disclosing that artificial reality content may include video, audio, haptic feedback, or some combination thereof, presented in a single channel or in multiple channels).

With respect to claim 12, Ramnath et al. teaches the method of claim 1, wherein displaying the first virtual content includes displaying the first virtual content adjacent to the first object and displaying the second virtual content includes displaying the second virtual content adjacent to the second object (Paragraph 4 discloses that a user viewing the Cowboy RoboNinja movie poster on a display of a smartphone may see on the display, near the poster, an avatar of Cowboy RoboNinja).

With respect to claim 13, Ramnath et al. teaches the method of claim 1, wherein displaying the virtual content includes displaying an affordance to perform an action based on the search query (Paragraph 50 discloses allowing users to buy or sell items via the service, interactions with advertisements that a user may perform, or other suitable items or objects).

With respect to claim 16, Ramnath et al. discloses a device comprising: a display; an image sensor; a non-transitory memory; and one or more processors (for each of these elements, Paragraph 42 discloses that the artificial reality system may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers; see also Fig. 9) to: capture, via the image sensor, an image of a physical environment that includes machine-readable content displayed with a first object of a first object type and a second object of a second object type (Paragraph 16, cited above); in response to a gaze of a user of the device being directed to a first portion of the image including the first object, display first virtual content that is related to the machine-readable content (Paragraph 5, cited above), wherein the first virtual content is obtained based on the first object type and the machine-readable content detected in the first portion of the image; and in response to the gaze being directed to a second portion of the image including the second object, display second virtual content that is related to the machine-readable content and is different from the first virtual content (Paragraph 17, cited above). Ramnath et al. does not explicitly disclose in response to the gaze being directed to a second portion of the image including the second object. However, Lee et al. teaches this feature (Paragraph 143, cited above). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Ramnath et al. with Lee et al. to include, in response to a gaze of a user of the device being directed to the first/second objects, displaying first/second virtual content that is related to the machine-readable content, which would have facilitated presenting different results using an augmented reality interface.

With respect to claim 17, it is rejected on grounds corresponding to claim 5 above, because claim 17 is substantially equivalent to claim 5. With respect to claim 18, it is rejected on grounds corresponding to claim 6 above, because claim 18 is substantially equivalent to claim 6.

With respect to claim 20, Ramnath et al. discloses a non-transitory memory storing one or more programs, which, when executed by one or more processors of a device with an image sensor, cause the device to: capture, via the image sensor, an image of a physical environment that includes machine-readable content displayed with a first object of a first object type and a second object of a second object type (Paragraph 16, cited above); in response to a gaze of a user of the device being directed to a first portion of the image including the first object, display first virtual content that is related to the machine-readable content (Paragraph 5, cited above), wherein the first virtual content is obtained based on the first object type and the machine-readable content detected in the first portion of the image; and in response to the gaze being directed to a second portion of the image including the second object, display second virtual content that is related to the machine-readable content and is different from the first virtual content (Paragraph 17, cited above). Ramnath et al. does not explicitly disclose in response to the gaze being directed to a second portion of the image including the second object. However, Lee et al. teaches this feature (Paragraph 143, cited above). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Ramnath et al. with Lee et al. for the same reasons given for claims 1 and 16. The Ramnath et al. reference as modified by Lee et al. teaches all the limitations of claim 20.

With respect to claim 21, Lee et al. teaches the method of claim 1, wherein the first object type corresponds to an edible food item and the second object type is a book (see Fig. 4A; Paragraph 124 discloses that, in response to the "Find Objects with Text and Apples" instruction, wearable computing device 312 has found two objects: (1) a canola oil bottle with the text "Canola" and (2) a basket of apples). The motivation-to-combine statement provided in the rejection of independent claim 1 above, combining the Ramnath et al. and Lee et al. references, is applicable to dependent claim 21.

With respect to claim 23, Ramnath et al. teaches the method of claim 1, wherein the first virtual content includes a virtual character and the second virtual content includes graphical content (Paragraph 4, cited above for claim 12).

With respect to claim 24, it is rejected on grounds corresponding to claim 21 above, because claim 24 is substantially equivalent to claim 21. With respect to claim 26, it is rejected on grounds corresponding to claim 23 above, because claim 26 is substantially equivalent to claim 23.

With respect to claim 28, Ramnath et al. teaches the non-transitory memory of claim 20, wherein the first virtual content is three-dimensional and the second virtual content is two-dimensional (Paragraph 38 discloses that an AR effect that includes a 3D Cowboy RoboNinja avatar may be associated (e.g., in an index of a database) with the matching AR target, which may correspond to a Cowboy RoboNinja movie poster; in this example, when a Cowboy RoboNinja movie poster is detected within an image captured at a client device, the corresponding AR target may be identified and the Cowboy RoboNinja avatar may be rendered on the client device. As another example, and not by way of limitation, when a user points a camera of a client device toward a landmark such as the Eiffel Tower, relevant information or animations may be rendered as AR effects).

Relevant Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US PG-PUB 20150112972 is directed to SYSTEM AND METHOD OF IDENTIFYING VISUAL OBJECTS: Paragraph [0005] discloses identifying and obtaining more information about objects in the captured sequence of images that are likely to be of interest to the user. If the server is successful in doing so, the server may transmit the additional information to the mobile device. The additional information may include information that is inherent to the item captured in the image, such as the product's size if the item is a product, or information that is related but not necessarily inherent to the product, such as a search result obtained by querying a web search engine with the name of the object. The server may use various methods to determine the object within a captured image that is likely to be of greatest interest to the user. One method may include determining the number of images in which an individual object appears. The server may also determine how often related additional information found for one image matches related additional information found for other images. The server may send the additional information to the mobile device.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS E. ALLEN, whose telephone number is (571) 270-3562. The examiner can normally be reached Monday through Thursday, 8:30-6:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Boris Gorney, can be reached at (571) 270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/N.E.A/ Examiner, Art Unit 2154
/BORIS GORNEY/ Supervisory Patent Examiner, Art Unit 2154
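
For convenience, the reply windows described in the conclusion above can be computed mechanically from the Oct 22, 2025 mailing date shown in the timeline below. A small stdlib-only Python sketch; this is illustrative only, since actual due dates can shift for weekends and federal holidays under 37 CFR 1.7:

import calendar
from datetime import date

def add_months(d, n):
    # Calendar-month arithmetic, clamping to the last day of the target month.
    m = d.month - 1 + n
    y, m = d.year + m // 12, m % 12 + 1
    return date(y, m, min(d.day, calendar.monthrange(y, m)[1]))

mailed = date(2025, 10, 22)   # final action mailing date (from the timeline)
print(add_months(mailed, 3))  # 2026-01-22: shortened statutory period expires
print(add_months(mailed, 6))  # 2026-04-22: absolute six-month statutory cutoff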

Prosecution Timeline

Mar 15, 2023
Application Filed
Nov 08, 2023
Response after Non-Final Action
Jan 25, 2024
Non-Final Rejection — §103, §112
May 21, 2024
Interview Requested
May 28, 2024
Applicant Interview (Telephonic)
May 28, 2024
Examiner Interview Summary
Jun 11, 2024
Response Filed
Oct 08, 2024
Final Rejection — §103, §112
Jan 23, 2025
Applicant Interview (Telephonic)
Jan 24, 2025
Examiner Interview Summary
Feb 18, 2025
Request for Continued Examination
Feb 25, 2025
Response after Non-Final Action
Apr 03, 2025
Non-Final Rejection — §103, §112
Jun 29, 2025
Interview Requested
Jul 08, 2025
Examiner Interview Summary
Jul 08, 2025
Applicant Interview (Telephonic)
Jul 10, 2025
Response Filed
Oct 22, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12380068
RECENT FILE SYNCHRONIZATION AND AGGREGATION METHODS AND SYSTEMS
Granted Aug 05, 2025 (2y 5m to grant)
Patent 12339822
METHOD AND SYSTEM FOR MIGRATING CONTENT BETWEEN ENTERPRISE CONTENT MANAGEMENT SYSTEMS
Granted Jun 24, 2025 (2y 5m to grant)
Patent 12321704
COMPOSITE EXTRACTION SYSTEMS AND METHODS FOR ARTIFICIAL INTELLIGENCE PLATFORM
Granted Jun 03, 2025 (2y 5m to grant)
Patent 12271379
CROSS-DATABASE JOIN QUERY
Granted Apr 08, 2025 (2y 5m to grant)
Patent 12259876
SYSTEM AND METHOD FOR A HYBRID CONTRACT EXECUTION ENVIRONMENT
Granted Mar 25, 2025 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 77%
With Interview: 93% (+16.2%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 760 resolved cases by this examiner. Grant probability derived from career allow rate.
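
Per the footnote, the projections reduce to the career numbers above; a one-line Python sketch of that derivation:

base = 585 / 760                       # career allow rate
with_interview = min(base + 0.162, 1)  # add the observed +16.2% interview lift
print(f"{base:.0%} base, {with_interview:.0%} with interview")  # 77% base, 93% with interview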

Free tier: 3 strategy analyses per month