Prosecution Insights
Last updated: April 19, 2026
Application No. 17/689,912

AUGMENTED, VIRTUAL AND MIXED-REALITY CONTENT SELECTION & DISPLAY FOR LABELS

Status: Non-Final OA (§103)
Filed: Mar 08, 2022
Examiner: WILSON, NICHOLAS R
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Techinvest Company Limited
OA Round: 3 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 1y 12m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (467 granted / 537 resolved; above average, +25.0% vs TC avg)
Interview Lift: +12.1% (moderate), across resolved cases with an interview
Avg Prosecution: 1y 12m (fast prosecutor)
Currently Pending: 25
Career History: 562 total applications across all art units
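
For readers tracing the arithmetic, the sketch below shows how the headline figures in this card can be reproduced from the underlying counts. It is a minimal Python sketch: the additive with-interview model and the rounding are assumptions, not the tool's documented method.

# Minimal sketch (assumptions noted above): reproduce the headline examiner metrics.
granted = 467          # granted applications, from the card above
resolved = 537         # resolved applications (granted + abandoned)
interview_lift = 12.1  # percentage-point lift reported for cases with an interview

allow_rate = granted / resolved * 100                     # 467 / 537 ≈ 87.0%
with_interview = min(allow_rate + interview_lift, 100.0)  # assumed additive model ≈ 99.1%

print(f"Career allow rate: {allow_rate:.0f}%")             # 87%
print(f"Projected with interview: {with_interview:.0f}%")  # 99%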

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§103: 41.1% (+1.1% vs TC avg)
§102: 24.0% (-16.0% vs TC avg)
§112: 14.8% (-25.2% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 537 resolved cases
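
The per-statute figures are compared against a single Tech Center average estimate; below is a minimal sketch of that comparison, assuming the constant 40.0% average implied by the reported deltas.

# Minimal sketch (assumption: the TC average estimate is the 40.0% implied by the deltas).
examiner_rates = {"§101": 9.5, "§103": 41.1, "§102": 24.0, "§112": 14.8}  # percent
tc_avg = 40.0  # Tech Center average estimate

for statute, rate in examiner_rates.items():
    print(f"{statute}: {rate:.1f}% ({rate - tc_avg:+.1f}% vs TC avg)")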

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 10/24/2025 have been fully considered but they are not persuasive.

The applicant argues: “Applicant's respectfully submit that Lin is not actually "silent" on this subject, but rather teaches a POSITA not to eliminate recognizing an AR marker or bar code if Lin's objectives are to be achieved. In particular, Lin teaches using such a marker to convey dynamic or static machine or machine sensor values to the display device: As disclosed above, active machine-readable indicia can encode a plurality of machine-readable values. In particular, this can be useful when a sensor for the associated machine detects a fault. In such a circumstance, the machine-readable indicia can activate (from displaying nothing to displaying a value) or change from displaying an "operating correctly" value to displaying a "fault condition" value. Alternatively (or in addition), active machine-readable indicia can continuously update to display data received from one or more sensors for the associated machine. [0030] By contrast, an active machine readable indicium can display a plurality of encoded values. For example, an electrophoretic (such as e-ink) can be used to display an arbitrary image such as a machine-readable indicium. [0029] Furthermore, even in embodiments where the machine sensor data is communicated in other ways, the machine readable indicia is used to determine which machine's sensors to associate sensor data with: Once the set of machine-readable indicia present in the scene has been identified, processing can proceed to step 406, where data from the sensors associated with each identified machine-readable indicium is retrieved and processed. [0043] As shown in Figure 3, the three machines could be identical; and without the marker or bar code, Lin's system would be unable to know which sensor output to display near which machine, which would defeat the entire point of Lin's machine labelling:” (See Applicant's Remarks, page 7.)

The examiner respectfully disagrees. The combination of Lin in view of Border in view of Letellier is recognizing printed indicia using markerless tracking. Recognizing the printed indicia would allow for the system to determine where to place the overlays. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Lin in view of Border with the markerless tracking techniques of Letellier such that the augmentation could occur without materially altering printed material and disturbing the layout and design.

The applicant argues: “The office action provides no explanation of why a POSITA would have declined to implement the key machine indicia feature of Lin's system with resulting functionality loss.” (See Applicant's Remarks, page 8.)

The examiner respectfully disagrees that there would be functionality loss, as detailed above. The combination of Lin in view of Border in view of Letellier is recognizing printed indicia using markerless tracking. Recognizing the printed indicia would allow for the system to determine where to place the overlays.
The applicant argues: “However, this contention does not explain how Letellier's "markerless tracking" technology could be used to distinguish between the three identical three-dimensional machines Lin shows in Figure 3.” (See Applicant's Remarks, page 9.)

The examiner respectfully disagrees. The combination of Lin in view of Border in view of Letellier is recognizing printed indicia using markerless tracking. Recognizing the printed indicia would allow for the system to determine where to place the overlays.

The applicant argues: “Applicant respectfully submits a POSITA would not have looked to Border to modify Lin, but would instead have implemented Lin as-is without using Border's "visual cue" technique. The Office Action apparently agrees: Lin in view of Border is silent to recognizing without recognizing any AR marker or bar code but relies on Letellier for the missing teaching.”

The examiner respectfully disagrees. The combination of Lin in view of Border teaches a flat label object with printed indicia. Lin and Border are analogous since both deal with viewing objects in the augmented reality environment. Lin provided a way of identifying and superimposing a portion of an object based on the identified indicia. Border provided an augmented reality with a restricted region of an object based on user permission and an overlay onto the user's field of view in the environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of Lin with the security checking and measurement techniques of Border such that, when using the augmented reality device, the system will be able to use the identified security feature and dynamically adjust the display content accordingly. The combination of Lin in view of Border in view of Letellier is recognizing printed indicia using markerless tracking. Recognizing the printed indicia would allow for the system to determine where to place the overlays.

The applicant argues: “applicant still respectfully points out that this leads away from combining Lin with Letellier since the resulting system would not be suitable for Lin's goal of distinguishing between different manufacturing machines on a factory floor.” (See Applicant's Remarks, page 10.)

The examiner respectfully disagrees. The combination of Lin in view of Border in view of Letellier is recognizing printed indicia using markerless tracking. Recognizing the printed indicia would allow for the system to determine where to place the overlays. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Lin in view of Border with the markerless tracking techniques of Letellier such that the augmentation could occur without materially altering printed material and disturbing the layout and design.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lin et al. (US 2018/0253876)(Hereinafter referred to as Lin) in view of Border et al. (US 2013/0278631)(Hereinafter referred to as Border) in view of Letellier et al. (“PROVIDING ADITTIONAL COTENT TO PRINT MEDIA USING AUGMENTED REALITY”, IEEE, 2017)(Hereinafter referred to as Letellier). Regarding claim 1, Lin teaches A method of presenting information to a user wearing eyeglass frames (Turning now to FIG. 4, a flowchart illustrating the operation of a method in accordance with embodiments of the invention is depicted and referred to generally by reference numeral 400), the method comprising: enabling capture by a portable camera carried by eyeglass frames (Turning now to FIG. 4, a flowchart illustrating the operation of a method in accordance with embodiments of the invention is depicted and referred to generally by reference numeral 400. The method begins at a step 402 where raw imagery from one or more cameras on HMD 220 for technician 202 is retrieved for processing. In some embodiments where multiple cameras are present, imagery from only a single camera is retrieved for processing. In some embodiments, camera imagery is processed on a processor of HMD 220. In other embodiments, HMD 220 transmits the imagery from the camera(s) to another computer (such as, for example, central controller 210) for processing. See paragraph [0036] figure 4)( Accordingly, technician 202 may be equipped with a head-mounted display (HMD) 220. In some embodiments, head-mounted display may be an optical HMD (also known as an optical see-through HMD) which overlays projected imagery on a partially transparent lens. See paragraph [0025]), of a captured image of an object having indicia disposed thereon (see Lin, fig. 4 step 402); recognizing, with a processor carried by the eyeglass frames (see Lin, fig. 4 step 404)( Turning now to FIG. 4, a flowchart illustrating the operation of a method in accordance with embodiments of the invention is depicted and referred to generally by reference numeral 400. The method begins at a step 402 where raw imagery from one or more cameras on HMD 220 for technician 202 is retrieved for processing. 
In some embodiments where multiple cameras are present, imagery from only a single camera is retrieved for processing. In some embodiments, camera imagery is processed on a processor of HMD 220. In other embodiments, HMD 220 transmits the imagery from the camera(s) to another computer (such as, for example, central controller 210) for processing. See paragraph [0036] figure 4), matching the recognized indicia from the captured image with a record in a database (see Lin, fig. 4 step 406)( In other embodiments, all sensors communicate their data to a central controller and the data from the relevant sensors is retrieved from the central controller. See paragraph [0043]); selecting a media item in response to the matching (see Lin, fig. 4 step 408); and automatically superimposing a selected interactive media item onto an electronic display by the eyeglass frames of the captured image or an image derived therefrom such that the selected interactive media item appears to a user looking through the eyeglass frames to be anchored to the flat label object from different eyeglass frame viewpoints relative to the object (As user moves had or position slightly overlay is still positioned in proximity to the object)(See Lin, fig. 4 step 410)( Processing then proceeds to step 410 where the generated display for each recognized machine-readable indicium is overlain on the display in proximity to the associated indicium or sensor. See paragraph [0046]), but is silent to a flat label object with printed indicia and recognizing without requiring recognition of an AR marker, bar code or other identification marker. However, Border teaches a flat label object with printed indicia at ([0691-0692] specify the eyepiece may be commanded to display certain content based upon sensing a predetermined external visual cue. The visual cue may be an image, an icon, a picture, face recognition, a hand configuration, a body configuration, and the like. The eyepiece may include a visual recognition language translation facility for providing translations for visually presented content, such as for road signs, menus, billboards, store signs, books, magazines, and the like. The visual recognition language translation facility may utilize optical character recognition to identify letters from the content, match the strings of letters to words and phrases through a database of translations.) Lin and Border are analogous since both of them are dealing with viewing object in the augmented reality environment. Lin provided a way of identify and superimposing portion of object based on the identified indicia. Border provided an augmented reality with restricted region of object based on user permission and overlay onto user field of view in the environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made combine the system of Lin with the security checking and measurement techniques of Border such that when using the augmented reality device, system will be able to use identified security feature and dynamically adjust the display content accordingly. Lin in view of Border is silent to recognizing without requiring recognition of an AR marker, bar code or other identification marker. Letellier teaches a markerless tracking augmented reality technique for print media (Our approach is to provide a variety of additional content to users of the seasonal magazine for the Konzerthaus Berlin. 
In order to not disturb the layout and design of the magazine, markerless tracking was used. See page 180, right col., first paragraph). Lin in view of Border and Letellier teach of augmented reality presentation of material and Letellier teaches that by utilizing markerless tracking the system can avoid disturbing the layout and design of the magazine, therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Lin in view of Border with the markerless tracking techniques of Letellier such that the augmentation could occur without materially altering printed material and disturbing the layout and design. Regarding claim 2, Lin in view of Border in view of Letellier teaches The method of claim 1 wherein the superimposing comprises using at least one of augmented reality, mixed reality and virtual reality (Lin; See [0003] discloses a system for providing an augmented reality interface for sensor applications.) (Letellier; Our approach is to provide a variety of additional content to users of the seasonal magazine for the Konzerthaus Berlin. In order to not disturb the layout and design of the magazine, markerless tracking was used. See page 180, right col., first paragraph). Regarding claim 3, Lin in view of Border in view of Letellier teaches the method of claim 1 wherein the recognizing comprises recognizing a two-dimensional label object with print on it (Border; [0691-0692] specify the eyepiece may be commanded to display certain content based upon sensing a predetermined external visual cue. The visual cue may be an image, an icon, a picture, face recognition, a hand configuration, a body configuration, and the like. The eyepiece may include a visual recognition language translation facility for providing translations for visually presented content, such as for road signs, menus, billboards, store signs, books, magazines, and the like. The visual recognition language translation facility may utilize optical character recognition to identify letters from the content, match the strings of letters to words and phrases through a database of translations.). Regarding claim 4, Lin in view of Border in view of Letellier teaches The method of claim 1 wherein the label object has indicia printed thereon, and the recognizing comprises recognizing at least some of the printed indicia ( Border; [0691-0692] specify the eyepiece may be commanded to display certain content based upon sensing a predetermined external visual cue. The visual cue may be an image, an icon, a picture, face recognition, a hand configuration, a body configuration, and the like. The eyepiece may include a visual recognition language translation facility for providing translations for visually presented content, such as for road signs, menus, billboards, store signs, books, magazines, and the like. The visual recognition language translation facility may utilize optical character recognition to identify letters from the content, match the strings of letters to words and phrases through a database of translations.). Regarding claim 5, Lin in view of Border in view of Letellier teaches The method of claim 4 wherein the recognizing includes recognizing characters printed on the label object ( Border; [0691-0692] specify the eyepiece may be commanded to display certain content based upon sensing a predetermined external visual cue. 
The visual cue may be an image, an icon, a picture, face recognition, a hand configuration, a body configuration, and the like. The eyepiece may include a visual recognition language translation facility for providing translations for visually presented content, such as for road signs, menus, billboards, store signs, books, magazines, and the like. The visual recognition language translation facility may utilize optical character recognition to identify letters from the content, match the strings of letters to words and phrases through a database of translations.). Regarding claim 6, Lin in view of Border in view of Letellier teaches The method of claim 5 wherein the label object comprises a patch of printed material attached to an associated item (Border; See [0692] discloses the viewed text is on a sign, a printed document, book, a road sign, a billboard, a menu, and the like.). Regarding claim 7, Lin in view of Border in view of Letellier teaches The method of claim 6 wherein the selected media item comprises a digital overlay that leads to specific action selected from the group consisting of providing specific information; a video, tutorial, or any kind of displayable content (Lin; For example, HMD 220 could be a smartphone configured to capture imagery from a camera on one side and display augmented imagery on the display on the opposite side. See paragraph [0025]) (Border; See [0771] discloses the user may designate a feature for holding the overlay content by interacting with a user interface of the eyepiece. In embodiments, the overlay may present the content on or in proximity to the recognized feature, and further embodiments, the recognized feature may be at least one of an item for purchase, an item on sale, a sign, an advertisement, an aisle, a location in a store, a kiosk, a service counter, a cash register, a television, a screen, shipping cart, and the like.)( Border; The elements described and depicted herein, including in flow charts and block diagrams throughout the figures, imply logical boundaries between the elements. However, according to software or hardware engineering practices, the depicted elements and the functions thereof may be implemented on machines through computer executable media having a processor capable of executing program instructions stored thereon as a monolithic software structure, as standalone software modules, or as modules that employ external routines, code, services, and so forth, or any combination of these, and all such implementations may be within the scope of the present disclosure. Examples of such machines may include, but may not be limited to, personal digital assistants, laptops, personal computers, mobile phones, other handheld computing devices, medical equipment, wired or wireless communication devices, transducers, chips, calculators, satellites, tablet PCs, electronic books, gadgets, electronic devices, devices having artificial intelligence, computing devices, networking equipment, servers, routers and the like. Furthermore, the elements depicted in the flow chart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions. Thus, while the foregoing drawings and descriptions set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context. 
Similarly, it will be appreciated that the various steps identified and described above may be varied, and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context. See paragraph [01292]). Regarding claim 8, Lin in view of Border in view of Letellier teaches The method of claim 1 wherein the superimposing is performed on a handheld display device (Lin; For example, HMD 220 could be a smartphone configured to capture imagery from a camera on one side and display augmented imagery on the display on the opposite side. See paragraph [0025]) ( Border; The elements described and depicted herein, including in flow charts and block diagrams throughout the figures, imply logical boundaries between the elements. However, according to software or hardware engineering practices, the depicted elements and the functions thereof may be implemented on machines through computer executable media having a processor capable of executing program instructions stored thereon as a monolithic software structure, as standalone software modules, or as modules that employ external routines, code, services, and so forth, or any combination of these, and all such implementations may be within the scope of the present disclosure. Examples of such machines may include, but may not be limited to, personal digital assistants, laptops, personal computers, mobile phones, other handheld computing devices, medical equipment, wired or wireless communication devices, transducers, chips, calculators, satellites, tablet PCs, electronic books, gadgets, electronic devices, devices having artificial intelligence, computing devices, networking equipment, servers, routers and the like. Furthermore, the elements depicted in the flow chart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions. Thus, while the foregoing drawings and descriptions set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context. Similarly, it will be appreciated that the various steps identified and described above may be varied, and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context. See paragraph [01292]). 
Regarding claim 9, Lin in view of Border in view of Letellier teaches The method of claim 1 wherein the selected media item comprises a call button (Border; See [0699] discloses the screen of the augmented reality glasses may display a list of options for making the call and the user may gesture using a pointing control device or use any other control technique described herein to select the video calling option on the screen of the augmented reality glasses.). Regarding claim 10, Lin in view of Border in view of Letellier teaches The method of claim 1 further including displaying any or all of the following action buttons in any combination or subcombination: Price Tag, Photo Gallery, Videos, Description, Call, Mail, Shop link, Explanation, Intro, Social Media links, Map, Discount Codes, Reviews, Tutorials, Directions, Test drives, Booking opportunities (Border; See [0723] discloses the eyepiece may provide for an interface to accept wireless streaming media (e.g. video, audio, text messaging, phone call and calendar alerts) from an external facility, such as a smart phone, a tablet, a personal computer, an entertainment device, a portable music and video device, a home theater system, a home entertainment system, another eyepiece, and the like.). Regarding claim 11, Lin teaches A system for presenting information to a user wearing eyeglass frames (Turning now to FIG. 4, a flowchart illustrating the operation of a method in accordance with embodiments of the invention is depicted and referred to generally by reference numeral 400. The method begins at a step 402 where raw imagery from one or more cameras on HMD 220 for technician 202 is retrieved for processing. In some embodiments where multiple cameras are present, imagery from only a single camera is retrieved for processing. In some embodiments, camera imagery is processed on a processor of HMD 220. In other embodiments, HMD 220 transmits the imagery from the camera(s) to another computer (such as, for example, central controller 210) for processing. See paragraph [0036] figure 4)( Accordingly, technician 202 may be equipped with a head-mounted display (HMD) 220. In some embodiments, head-mounted display may be an optical HMD (also known as an optical see-through HMD) which overlays projected imagery on a partially transparent lens. See paragraph [0025]), comprising: a portable camera disposed on the eyeglass frames and configured to capture an image (Turning now to FIG. 4, a flowchart illustrating the operation of a method in accordance with embodiments of the invention is depicted and referred to generally by reference numeral 400. The method begins at a step 402 where raw imagery from one or more cameras on HMD 220 for technician 202 is retrieved for processing. In some embodiments where multiple cameras are present, imagery from only a single camera is retrieved for processing. In some embodiments, camera imagery is processed on a processor of HMD 220. In other embodiments, HMD 220 transmits the imagery from the camera(s) to another computer (such as, for example, central controller 210) for processing. See paragraph [0036] figure 4)( Accordingly, technician 202 may be equipped with a head-mounted display (HMD) 220. In some embodiments, head-mounted display may be an optical HMD (also known as an optical see-through HMD) which overlays projected imagery on a partially transparent lens. See paragraph [0025]) of an object having indicia disposed thereon (see Lin, fig. 
4 step 402); and at least one processor connected to the portable camera, disposed on the eyeglass frames and configured to perform operations ( Turning now to FIG. 4, a flowchart illustrating the operation of a method in accordance with embodiments of the invention is depicted and referred to generally by reference numeral 400. The method begins at a step 402 where raw imagery from one or more cameras on HMD 220 for technician 202 is retrieved for processing. In some embodiments where multiple cameras are present, imagery from only a single camera is retrieved for processing. In some embodiments, camera imagery is processed on a processor of HMD 220. In other embodiments, HMD 220 transmits the imagery from the camera(s) to another computer (such as, for example, central controller 210) for processing. See paragraph [0036] figure 4) (see Lin, fig. 4 step 406)( In other embodiments, all sensors communicate their data to a central controller and the data from the relevant sensors is retrieved from the central controller. See paragraph [0043]);comprising: recognizing the indicia from the captured image (see Lin, fig. 4 step 404) ( Turning now to FIG. 4, a flowchart illustrating the operation of a method in accordance with embodiments of the invention is depicted and referred to generally by reference numeral 400. The method begins at a step 402 where raw imagery from one or more cameras on HMD 220 for technician 202 is retrieved for processing. In some embodiments where multiple cameras are present, imagery from only a single camera is retrieved for processing. In some embodiments, camera imagery is processed on a processor of HMD 220. In other embodiments, HMD 220 transmits the imagery from the camera(s) to another computer (such as, for example, central controller 210) for processing. See paragraph [0036] figure 4); matching the recognized object with a record in a database in response to the recognizing (see Lin, fig. 4 step 406)( In other embodiments, all sensors communicate their data to a central controller and the data from the relevant sensors is retrieved from the central controller. See paragraph [0043]); selecting a media item in response to the matching (see Lin, fig. 4 step 408); and automatically superimposing a selected interactive media item onto an electronic display of the captured image or an image derived therefrom such that the selected interactive media item appears to a user looking through the eyeglass frames to be anchored to the object from different eyeglass frame viewpoints relative to the object, (As user moves had or position slightly overlay is still positioned in proximity to the object)(See Lin, fig. 4 step 410)( Processing then proceeds to step 410 where the generated display for each recognized machine-readable indicium is overlain on the display in proximity to the associated indicium or sensor. See paragraph [0046]), but is silent to a flat label object with printed indicia and recognizing without requiring recognition of an AR marker, bar code or other identification marker. However, Border teaches a flat label object with printed indicia at ([0691-0692] specify the eyepiece may be commanded to display certain content based upon sensing a predetermined external visual cue. The visual cue may be an image, an icon, a picture, face recognition, a hand configuration, a body configuration, and the like. 
The eyepiece may include a visual recognition language translation facility for providing translations for visually presented content, such as for road signs, menus, billboards, store signs, books, magazines, and the like. The visual recognition language translation facility may utilize optical character recognition to identify letters from the content, match the strings of letters to words and phrases through a database of translations.) Lin and Border are analogous since both of them are dealing with viewing object in the augmented reality environment. Lin provided a way of identify and superimposing portion of object based on the identified indicia. Border provided an augmented reality with restricted region of object based on user permission and overlay onto user field of view in the environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made combine the system of Lin with the security checking and measurement techniques of Border such that when using the augmented reality device, system will be able to use identified security feature and dynamically adjust the display content accordingly. Lin in view of Border is silent to recognizing without requiring recognition of an AR marker, bar code or other identification marker. Letellier teaches a markerless tracking augmented reality technique for print media (Our approach is to provide a variety of additional content to users of the seasonal magazine for the Konzerthaus Berlin. In order to not disturb the layout and design of the magazine, markerless tracking was used. See page 180, right col., first paragraph). Lin in view of Border and Letellier teach of augmented reality presentation of material and Letellier teaches that by utilizing markerless tracking the system can avoid disturbing the layout and design of the magazine, therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the system of Lin in view of Border with the markerless tracking techniques of Letellier such that the augmentation could occur without materially altering printed material and disturbing the layout and design. Regarding claim 12, Lin in view of Border in view of Letellier teaches The system of claim 11 wherein the superimposing comprises using at least one of augmented reality, mixed reality and virtual reality (Lin; See [0003] discloses a system for providing an augmented reality interface for sensor applications.) (Letellier; Our approach is to provide a variety of additional content to users of the seasonal magazine for the Konzerthaus Berlin. In order to not disturb the layout and design of the magazine, markerless tracking was used. See page 180, right col., first paragraph). Regarding claim 13, Lin in view of Border in view of Letellier teaches The system of claim 11 wherein the recognition recognizing comprises recognizing a two-dimensional label object with print on it ( Border; [0691-0692] specify the eyepiece may be commanded to display certain content based upon sensing a predetermined external visual cue. The visual cue may be an image, an icon, a picture, face recognition, a hand configuration, a body configuration, and the like. The eyepiece may include a visual recognition language translation facility for providing translations for visually presented content, such as for road signs, menus, billboards, store signs, books, magazines, and the like. 
The visual recognition language translation facility may utilize optical character recognition to identify letters from the content, match the strings of letters to words and phrases through a database of translations.). Regarding claim 14, Lin in view of Border in view of Letellier teaches The system of claim 11 wherein the label object has indicia printed thereon, and the recognizing comprises recognizing at least some of the printed indicia ( Border; [0691-0692] specify the eyepiece may be commanded to display certain content based upon sensing a predetermined external visual cue. The visual cue may be an image, an icon, a picture, face recognition, a hand configuration, a body configuration, and the like. The eyepiece may include a visual recognition language translation facility for providing translations for visually presented content, such as for road signs, menus, billboards, store signs, books, magazines, and the like. The visual recognition language translation facility may utilize optical character recognition to identify letters from the content, match the strings of letters to words and phrases through a database of translations.). Regarding claim 15, Lin in view of Border in view of Letellier teaches The system of claim 14 wherein the recognizing includes recognizing characters printed on the label object ( Border; [0691-0692] specify the eyepiece may be commanded to display certain content based upon sensing a predetermined external visual cue. The visual cue may be an image, an icon, a picture, face recognition, a hand configuration, a body configuration, and the like. The eyepiece may include a visual recognition language translation facility for providing translations for visually presented content, such as for road signs, menus, billboards, store signs, books, magazines, and the like. The visual recognition language translation facility may utilize optical character recognition to identify letters from the content, match the strings of letters to words and phrases through a database of translations.). Regarding claim 16, Lin in view of Border in view of Letellier teaches The system of claim 15 wherein the label object comprises a patch of printed material attached to an associated item (Border; See [0692] discloses the viewed text is on a sign, a printed document, book, a road sign, a billboard, a menu, and the like.). Regarding claim 17, Lin in view of Border in view of Letellier teaches The system of claim 16 wherein the selected media item comprises a digital overlay that leads to specific action selected from the group consisting of providing specific information; a video, tutorial, or any kind of displayable content (Lin; For example, HMD 220 could be a smartphone configured to capture imagery from a camera on one side and display augmented imagery on the display on the opposite side. See paragraph [0025]) (Border; See [0771] discloses the user may designate a feature for holding the overlay content by interacting with a user interface of the eyepiece. 
In embodiments, the overlay may present the content on or in proximity to the recognized feature, and further embodiments, the recognized feature may be at least one of an item for purchase, an item on sale, a sign, an advertisement, an aisle, a location in a store, a kiosk, a service counter, a cash register, a television, a screen, shipping cart, and the like.)( Border; The elements described and depicted herein, including in flow charts and block diagrams throughout the figures, imply logical boundaries between the elements. However, according to software or hardware engineering practices, the depicted elements and the functions thereof may be implemented on machines through computer executable media having a processor capable of executing program instructions stored thereon as a monolithic software structure, as standalone software modules, or as modules that employ external routines, code, services, and so forth, or any combination of these, and all such implementations may be within the scope of the present disclosure. Examples of such machines may include, but may not be limited to, personal digital assistants, laptops, personal computers, mobile phones, other handheld computing devices, medical equipment, wired or wireless communication devices, transducers, chips, calculators, satellites, tablet PCs, electronic books, gadgets, electronic devices, devices having artificial intelligence, computing devices, networking equipment, servers, routers and the like. Furthermore, the elements depicted in the flow chart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions. Thus, while the foregoing drawings and descriptions set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context. Similarly, it will be appreciated that the various steps identified and described above may be varied, and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context. See paragraph [01292]). Regarding claim 18, Lin in view of Border in view of Letellier teaches The system of claim 11 wherein the superimposing is performed on a handheld display device (Lin; For example, HMD 220 could be a smartphone configured to capture imagery from a camera on one side and display augmented imagery on the display on the opposite side. See paragraph [0025]) ( Border; The elements described and depicted herein, including in flow charts and block diagrams throughout the figures, imply logical boundaries between the elements. 
However, according to software or hardware engineering practices, the depicted elements and the functions thereof may be implemented on machines through computer executable media having a processor capable of executing program instructions stored thereon as a monolithic software structure, as standalone software modules, or as modules that employ external routines, code, services, and so forth, or any combination of these, and all such implementations may be within the scope of the present disclosure. Examples of such machines may include, but may not be limited to, personal digital assistants, laptops, personal computers, mobile phones, other handheld computing devices, medical equipment, wired or wireless communication devices, transducers, chips, calculators, satellites, tablet PCs, electronic books, gadgets, electronic devices, devices having artificial intelligence, computing devices, networking equipment, servers, routers and the like. Furthermore, the elements depicted in the flow chart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions. Thus, while the foregoing drawings and descriptions set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context. Similarly, it will be appreciated that the various steps identified and described above may be varied, and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context. See paragraph [01292]). Regarding claim 19, Lin in view of Border in view of Letellier teaches The system of claim 11 wherein the selected media item comprises a call button (Border; See [0699] discloses the screen of the augmented reality glasses may display a list of options for making the call and the user may gesture using a pointing control device or use any other control technique described herein to select the video calling option on the screen of the augmented reality glasses.). Regarding claim 20, Lin in view of Border in view of Letellier teaches The system of claim 11 further including displaying any or all of the following action buttons in any combination or subcombination: Price Tag, Photo Gallery, Videos, Description, Call, Mail, Shop link, Explanation, Intro, Social Media links, Map, Discount Codes, Reviews, Tutorials, Directions, Test drives, Booking opportunities (Border; See [0723] discloses the eyepiece may provide for an interface to accept wireless streaming media (e.g. video, audio, text messaging, phone call and calendar alerts) from an external facility, such as a smart phone, a tablet, a personal computer, an entertainment device, a portable music and video device, a home theater system, a home entertainment system, another eyepiece, and the like.). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wu et al. 
(“Augmented Reality Instruction for Object Assembly based on Markerless Tracking”, IEEE, 2016) generally relates to instructions for assembly based on markerless tracking of objects; see figure 1. Maidi et al. (“Markerless Tracking for Mobile Augmented Reality”, IEEE, 2011): a method based on local invariant descriptors is implemented to extract image feature points for natural fiducial identification; see abstract.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS R WILSON, whose telephone number is (571) 272-0936. The examiner can normally be reached M-F 7:30-5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (572) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NICHOLAS R WILSON/
Primary Examiner, Art Unit 2611
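
For context on the dispute above: the rejection characterizes the Lin/Border/Letellier combination as recognizing printed indicia with markerless tracking and anchoring an overlay to the recognized label. The sketch below is illustrative only; every name in it is hypothetical, and it is not code from the application or from Lin, Border, or Letellier. It simply mirrors the pipeline the claims and the rejection describe: capture an image, recognize printed indicia without an AR marker or bar code, match a database record, select a media item, and anchor the overlay.

# Illustrative sketch only; hypothetical names throughout (see note above).
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Overlay:
    media_id: str
    anchor_xy: Tuple[int, int]  # where the overlay is pinned relative to the label

# Hypothetical record store keyed by the printed label text.
LABEL_DB = {"ACME PUMP 3000": "media/acme_pump_overview"}

def recognize_printed_indicia(frame: str) -> Tuple[str, Tuple[int, int]]:
    """Markerless-recognition stand-in: a real system would run OCR or natural-feature
    matching on the captured frame; here the frame is simply the label text."""
    return frame.strip().upper(), (120, 80)

def superimpose(frame: str) -> Optional[Overlay]:
    text, location = recognize_printed_indicia(frame)    # recognize without any AR marker
    record = LABEL_DB.get(text)                           # match against a database record
    if record is None:
        return None
    return Overlay(media_id=record, anchor_xy=location)   # overlay anchored to the label

print(superimpose("acme pump 3000"))
# Overlay(media_id='media/acme_pump_overview', anchor_xy=(120, 80))

Framed this way, the applicant's argument is that without a dedicated marker the recognition step cannot tell identical machines apart, while the examiner's position is that the recognized printed indicia themselves tell the system where to place each overlay.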

Prosecution Timeline

Mar 08, 2022
Application Filed
Sep 18, 2024
Non-Final Rejection — §103
Mar 19, 2025
Response Filed
Apr 19, 2025
Final Rejection — §103
Aug 25, 2025
Response after Non-Final Action
Oct 24, 2025
Request for Continued Examination
Oct 28, 2025
Response after Non-Final Action
Oct 31, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602869
APPARATUS, SYSTEMS AND METHODS FOR PROCESSING IMAGES
2y 5m to grant • Granted Apr 14, 2026
Patent 12602891
TELEPORTATION SYSTEM COMBINING VIRTUAL REALITY AND AUGMENTED REALITY
2y 5m to grant • Granted Apr 14, 2026
Patent 12579605
INFORMATION PROCESSING DEVICE AND METHOD OF CONTROLLING DISPLAY DEVICE
2y 5m to grant • Granted Mar 17, 2026
Patent 12567215
SYSTEM AND METHOD OF CONTROLLING SYSTEM
2y 5m to grant • Granted Mar 03, 2026
Patent 12561911
3D CAGE GENERATION USING SIGNED DISTANCE FUNCTION APPROXIMANT
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 99% (+12.1%)
Median Time to Grant: 1y 12m
PTA Risk: High
Based on 537 resolved cases by this examiner. Grant probability derived from career allow rate.
