Prosecution Insights
Last updated: April 19, 2026
Application No. 18/087,588

AUGMENTED, VIRTUAL AND MIXED-REALITY CONTENT SELECTION & DISPLAY

Status: Non-Final OA (§103)
Filed: Dec 22, 2022
Examiner: TSWEI, YU-JANG
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Techinvest Company Limited
OA Round: 3 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (376 granted / 447 resolved), +22.1% vs TC avg — above average
Interview Lift: +17.0% in resolved cases with an interview vs. without — a strong effect
Typical Timeline: 2y 5m average prosecution; 44 applications currently pending
Career History: 491 total applications across all art units

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 66.4% (+26.4% vs TC avg)
§102: 5.6% (-34.4% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 447 resolved cases.
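
As a back-of-the-envelope check, the headline figures above can be reproduced from the raw counts. The sketch below does so in Python; the counts come from this page, while the implied Tech Center averages are simply backed out of the stated deltas (an assumption, since the page itself calls them estimates).

```python
# Minimal sketch reproducing the examiner statistics shown above.
# Counts are taken from this page; the TC-average figures are backed out
# of the stated "vs TC avg" deltas and are illustrative assumptions.

granted, resolved = 376, 447
allow_rate = granted / resolved                    # 0.841 -> displayed as "84%"
tc_avg_allow = allow_rate - 0.221                  # implied by "+22.1% vs TC avg"
print(f"Career allow rate: {allow_rate:.1%} "
      f"({allow_rate - tc_avg_allow:+.1%} vs TC avg)")

# Statute-specific rejection shares, with the implied TC averages
# recovered from the deltas listed above.
statutes = {"§101": (0.055, -0.345), "§103": (0.664, +0.264),
            "§102": (0.056, -0.344), "§112": (0.071, -0.329)}
for statute, (rate, delta) in statutes.items():
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg of {rate - delta:.1%})")
```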

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the Amendment filed on 11/12/2025. Claims 1-20 are pending. Claim 1 has been amended.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/12/2025 has been entered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5-6, 11-12, 14, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Miller et al. (US 20140306866 A1, hereinafter Miller) in view of Jiang et al. (US 20170076499 A1, hereinafter Jiang).

Regarding Claim 1, Miller teaches a single user portable display device (Miller, Fig. 2, a user device; Paragraph [0079], "a user 210 may interface one or more digital worlds through a smart phone 220") for automatically recognizing two or three dimensional real world objects (Miller, Paragraph [0159], "the system is configured to recognize a 3D object in the real world") and augmenting or enhancing display of such real world objects with additional presentations superimposed thereon (Miller, Paragraph [0111], "a user wearing a 3D head-mounted display looking up in the sky and seeing a virtual plane <read on additional presentations> flying overhead, superimposed on the real world"), the single user portable display device (Miller, Fig. 2, a user device) comprising:

a portable housing configured to be carried by a single user (Miller, Paragraph [0079], "FIG. 2, a user 210 may interface one or more digital worlds through a smart phone 220"; it is noted the smart phone is carried by a single user), the portable housing having a surface, a touch screen display disposed on the portable housing surface and configured to be viewed and operated by the single user (Miller, Fig. 2; Paragraph [0005], "a user display device comprising a housing frame mountable on a head of a user"; [0081], "if the user device is a smart phone, the user interaction may be implemented by a user contacting a touch screen"; [0045], "an example of objects viewed by a user when the mobile, wearable user device"; [0079], "a user 210 may interface one or more digital worlds through a smart phone 220"; [0129], "use smartphones and/or tablets to display augmented and virtual viewpoints (visual accommodation via magnifying optics, mirrors, contact lenses, or light structuring elements)"; it is noted that since smartphones and/or tablets necessarily have a housing (casing) with surfaces, within which cameras, processors, memory and wireless interfaces are disposed, and they are plainly portable and carried by a single user, the housing is a portable housing), a push button disposed on the housing (Miller, [0178], "a real button on such device may be configured to open a virtual panel which is configured to interact with the actual device and/or other devices, people, or objects"), a camera disposed within the housing (Miller, Paragraph [0005], "a user display device comprising a housing frame mountable on a head of a user, a first pair of cameras coupled to the housing frame"), a memory disposed within the housing (Miller, Paragraph [0086], "local user devices 120…may include software, firmware, memory"), a wireless communications device disposed within the housing (Miller, Paragraph [0072], "the gateway has its own wired and/or wireless connection to data networks for communicating with the servers"), and a processor disposed within the housing and operatively coupled to the touch screen display, the memory, the camera, the push button and the wireless communications device, the processor configured to perform, in response to instructions stored in the memory, operations comprising (Miller, Paragraph [0089], "user device will include a processor for executing program code stored in memory on the device, coupled with a display, and a communications interface"; it is noted the touch screen has a push button):

(a) display, on the touch screen display, an image of a real world object acquired by the camera as the camera is aimed at the real world object (Miller, Paragraph [0016], "a set of points in the captured field-of-view image, and creating a fiducial for at least one physical object in the captured field-of-view image");

(b) recognize at least a portion of the acquired image (Miller, Paragraph [0035], "extracting a set of points in the captured field-of-view image, associating the extracted set of points to a particular object, and recognizing a different object based on the associated set of points of the particular object") [[without requiring a bar code or AR marker or other special marking]] that is not intended to be recognized by humans but instead is designed or intended to be automatically recognized electronically by a machine (Miller, Paragraph [0099], processor 308 to recognize various features and/or shape patterns (captured by the sensors 312) to identify the physical object 402 as a stool); and

(c) in response to recognizing at least a portion of the acquired image, anchoring a presentation to the displayed image of the real world object currently being acquired by the camera (Miller, Paragraph [0023], "The additional virtual world data may be associated with a physical object sensed by the head-mounted user display device. The additional virtual world data may be associated with the display object having a predetermined relationship with the sensed physical object") such that if the housing is moved to change position and/or orientation of the housing relative to the real world object (Miller, Paragraph [0130], with a system such as that depicted in FIGS. 3 and 14, 3-D points may be captured from the environment, and the pose (i.e., vector and/or origin position information relative to the world) of the cameras that capture those images create a virtual copy of the real world), the presentation moves with apparent movement of the image of the real world object on the touch screen display (Miller, Paragraph [0008], "The user display device may further comprise a sensor assembly comprising at least one sensor to sense at least one of a movement of the user"; [0013], "a method comprises tracking a movement of a user's eyes, estimating a depth of focus of the user's eyes based on the tracked eye movement, modifying a light beam associated with a display object based on the estimated depth of focus such that the display object appears in focus"), wherein the presentation provides offering details concerning the recognized acquired image portion (Miller, Paragraph [0099], "recognize various features and/or shape patterns <read on details> (captured by the sensors 312) to identify the physical object <read on image portion> 402 as a stool").

Miller does not explicitly disclose [[recognize at least a portion of the acquired image]] without requiring a bar code or AR marker or other special marking. However, Jiang teaches a touch screen display disposed on the portable housing surface and configured to be viewed and operated by the single user (Jiang, Paragraph [0006], "a mobile device includes a processor and a display coupled to the processor"; Paragraph [0054], "a user interface is used to let user choose (e.g., via a touch"; see also Fig. 15 showing an input camera device with a processor and display) ... recognize at least a portion of the acquired image without requiring a bar code or AR marker or other special marking (Jiang, Paragraph [0026], "some AR systems require the use of markers to which to map the visual objects ... The embodiments disclosed herein avoid the need to include markers in the real word scene. The embodiments disclose various applications and methods for recognition of real-world images without markers."; see also Jiang, Paragraph [0005], "Markers are not used to insert the virtual objects.").

Jiang and Miller are analogous art since both deal with augmented reality image processing on portable user devices. Miller provided a way of superimposing a presentation on a view of a real world object and anchoring the presentation such that it moves with apparent movement of the view when the device moves. Jiang provided a way of providing a mobile augmented reality experience in which the system recognizes real-world images without using markers (e.g., "avoid the need to include markers" and "Markers are not used").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the markerless recognition taught by Jiang into the modified invention of Miller such that, during the augmented reality process, the system recognizes at least a portion of the acquired image without requiring a bar code, AR marker, or other special machine-readable marking, thereby avoiding marker placement and improving usability.

Regarding Claim 3, the combination of Miller and Jiang teaches the invention of Claim 1. Miller further teaches further including at least one sensor disposed in the housing, the sensor configured to sense posture, orientation and/or position of the device (Miller, Paragraph [0081], "User devices may include additional components that enable user interaction such as sensors, wherein the objects and information (including gestures) detected by the sensors"), wherein the at least one sensor comprises an inertial sensor such as an accelerometer and/or a gyrosensor (Miller, Paragraph [0175], "one or more GPS or localizing sensors (206); and/or one or more accelerometers, inertial measurement units, and/or gyros (208)"), and wherein the processor is configured to display, on the touch screen display, different portions of the presentation based on such sensed posture, orientation and/or position (Miller, Paragraph [0176], "Movements to different orientations and locations may be tracked utilizing coarse localization and orientation tools").

Regarding Claim 5, the combination of Miller and Jiang teaches the invention of Claim 1. Miller further teaches wherein the presentation comprises a mixed reality (MR) image that mixes virtual and real scenes (Miller, Paragraphs [0056], [0104], "a user display configuration suitable for virtual and/or augmented reality"; "the user interface presents a virtual world 600 comprised of fully virtual objects 610, and rendered physical objects 620 (renderings of objects otherwise physically present in the scene)").

Regarding Claim 6, the combination of Miller and Jiang teaches the invention of Claim 1. Miller further teaches wherein the presentation comprises an augmented reality (AR) image that augments real scenes with virtual objects (Miller, Paragraph [0104], "the user interface presents a virtual world 600 comprised of fully virtual objects 610, and rendered physical objects 620 (renderings of objects otherwise physically present in the scene)"; [0138], "the system may be configured to share basic elements (walls, windows, desk geometry, etc.) with any user who walks into the room in virtual or augmented reality, and in one embodiment that person's system will be configured to take images from his particular perspective").

Regarding Claim 11, the combination of Miller and Jiang teaches the invention of Claim 1. Miller further teaches wherein the presentation is anchored as if it were glued or otherwise adhered or attached to the real world object (Miller, Paragraph [0159], "the system is configured to recognize a 3D object in the real world, and then augment it. 'Recognition' in this context may mean identifying the 3D object with high enough precision to anchor imagery to the 3D object").

Regarding Claim 12, Miller teaches a single user portable display device for automatically recognizing two or three dimensional real world objects (Miller, Fig. 2, a user device; Paragraph [0079], "a user 210 may interface one or more digital worlds through a smart phone 220"; [0159], "the system is configured to recognize a 3D object in the real world") and augmenting or enhancing views of such real world objects with an additional presentation superimposed thereon (Miller, Paragraph [0111], "a user wearing a 3D head-mounted display looking up in the sky and seeing a virtual plane <read on additional presentations> flying overhead, superimposed on the real world"), the single user portable display device comprising: a display (Miller, Fig. 2, a user device; Paragraph [0079], "a user 210 may interface one or more digital worlds through a smart phone 220"; it is noted the smart phone has a display screen), a push button (Miller, [0178], "a real button on such device may be configured to open a virtual panel which is configured to interact with the actual device and/or other devices, people, or objects"), a camera (Miller, Paragraph [0005], "a user display device comprising a housing frame mountable on a head of a user, a first pair of cameras coupled to the housing frame"), a memory (Miller, Paragraph [0086], "local user devices 120…may include software, firmware, memory"), a wireless communications device (Miller, Paragraph [0072], "the gateway has its own wired and/or wireless connection to data networks for communicating with the servers"), and a processor operatively coupled to the display, the memory, the camera, the push button and the wireless communications device, the processor configured to perform, in response to instructions stored in the memory, operations comprising (Miller, Paragraph [0089], "user device will include a processor for executing program code stored in memory on the device, coupled with a display, and a communications interface"; it is noted the touch screen has a push button):

(a) acquire an image with the camera (Miller, Paragraph [0130], "the cameras that capture those images create a virtual copy of the real world");

(b) recognize at least a portion of the acquired image (Miller, Paragraph [0035], "extracting a set of points in the captured field-of-view image, associating the extracted set of points to a particular object, and recognizing a different object based on the associated set of points of the particular object") [[without requiring a bar code or AR marker or other special machine-readable marking]]; and

(c) in response to recognizing at least a portion of the acquired image (Miller, Paragraph [0035], "extracting a set of points in the captured field-of-view image, associating the extracted set of points to a particular object, and recognizing a different object based on the associated set of points of the particular object"), anchoring a display of a presentation to a view of the real world object (Miller, Paragraph [0023], "The additional virtual world data may be associated with a physical object sensed by the head-mounted user display device. The additional virtual world data may be associated with the display object having a predetermined relationship with the sensed physical object") such that if the single user portable display device is moved to change position and/or orientation of the real world object relative to the display device (Miller, Paragraph [0130], with a system such as that depicted in FIGS. 3 and 14, 3-D points may be captured from the environment, and the pose (i.e., vector and/or origin position information relative to the world) of the cameras that capture those images create a virtual copy of the real world), the presentation moves on the display with movement of the view of the real world object (Miller, Paragraph [0008], "The user display device may further comprise a sensor assembly comprising at least one sensor to sense at least one of a movement of the user"; [0013], "a method comprises tracking a movement of a user's eyes, estimating a depth of focus of the user's eyes based on the tracked eye movement, modifying a light beam associated with a display object based on the estimated depth of focus such that the display object appears in focus"), wherein the presentation provides offering details concerning the recognized acquired image portion (Miller, Paragraph [0099], "recognize various features and/or shape patterns <read on details> (captured by the sensors 312) to identify the physical object <read on image portion> 402 as a stool").

Miller does not explicitly disclose [[recognize at least a portion of the acquired image]] without requiring a bar code or AR marker or other special machine-readable marking. However, Jiang teaches recognize at least a portion of the acquired image without requiring a bar code or AR marker or other special machine-readable marking (Jiang, Paragraph [0026], "some AR systems require the use of markers ... The embodiments disclosed herein avoid the need to include markers in the real word scene. The embodiments disclose various applications and methods for recognition of real-world images without markers."; see also Jiang, Paragraph [0005], "Markers are not used to insert the virtual objects.").

Jiang and Miller are analogous art since both deal with augmented reality image processing on portable user devices. Miller provided a way of superimposing a presentation on a view of a real world object and anchoring the presentation such that it moves with apparent movement of the view when the device moves. Jiang provided a way of providing a mobile augmented reality experience in which the system recognizes real-world images without using markers (e.g., "avoid the need to include markers" and "Markers are not used"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the markerless recognition taught by Jiang into the modified invention of Miller such that, during the augmented reality process, the system recognizes at least a portion of the acquired image without requiring a bar code, AR marker, or other special machine-readable marking, thereby avoiding marker placement and improving usability.

Regarding Claim 14, it recites limitations similar in scope to the limitations of Claim 3 and therefore is rejected under the same rationale.

Regarding Claim 16, it recites limitations similar in scope to the limitations of Claim 5 and therefore is rejected under the same rationale.

Regarding Claim 17, it recites limitations similar in scope to the limitations of Claim 6 and therefore is rejected under the same rationale.

Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Miller et al. (US 20140306866 A1, hereinafter Miller) in view of Jiang et al. (US 20170076499 A1, hereinafter Jiang) as applied to Claim 1 above, and further in view of Spivack et al. (US 20190266404 A1, hereinafter Spivack).

Regarding Claim 2, the combination of Miller and Jiang teaches the invention of Claim 1. Miller further teaches wherein the processor is further configured to change position, orientation and/or perspective of the anchored presentation as the real world object changes position, orientation and/or viewing angle relative to the housing (Miller, Paragraph [0055], "The user-designed code can be detected and tracked, for example, with a mobile device with a camera. Position and orientation within a camera image can be determined automatically, so the user-designed code can be used for tracking an object, such as within an augmented reality application."; [0098], "A key frame may also include information relating to the pose, e.g., the relative position and orientation of the camera in the environment when the image of the key frame was captured."). But Miller does not explicitly disclose doing so in order to provide a photorealistic image in which the anchored presentation appears to be part of the real world object. However, Spivack teaches wherein the processor is further configured to change position, orientation and/or perspective of the anchored presentation as the real world object changes position, orientation and/or viewing angle relative to the housing (Spivack, Paragraphs [0027], [0031], [0141], gyroscope and accelerometer to determine a device's geographic location, altitude, orientation, and position relative to any nearby beacons…when the user moves or they move their device thus causing both non-moving and moving objects in the environment to appear to change position in the camera frame as a result of movement of the user's device. For example, when the user shifts perspective or camera angle and/or distance, the objects in the frame may not actually change position in physical space…utilizing computer vision methods to detect the distance between the virtual object and horizontal/vertical surfaces and/or a distance between the virtual object and various anchor points (e.g., visual anchor points) and/or landmarks in the real world environment) in order to provide a photorealistic image in which the anchored presentation appears to be part of the real world object (Spivack, Paragraph [0027], "with virtual objects (e.g., 2D or 3D computer-generated objects) layered on top, in such a way that the virtual objects appear to be part of the real-world scene"; [0036], "save in compressed form to be able to generate a photorealistic 3D version of the real-world scene").

Spivack and Miller are analogous art since both deal with augmented reality image processing. Miller provided a way of superimposing the virtual object on the real world image and aligning the virtual object with the real world object when moving the device around in the augmented reality environment. Spivack provided a way of superimposing the virtual object on the real world image in the augmented reality environment and generating a photorealistic 3D version of the scene in combination with the real scene. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the photorealistic image generation taught by Spivack into the modified invention of Miller such that, during image processing in the augmented reality environment, the system will be able to provide more accurate and precise merging of virtual and real objects, which increases realism and provides a better user experience when using the system.
Regarding Claim 13, it recites limitations similar in scope to the limitations of Claim 2 and therefore is rejected under the same rationale.

Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Miller et al. (US 20140306866 A1, hereinafter Miller) in view of Jiang et al. (US 20170076499 A1, hereinafter Jiang) as applied to Claim 1 above, and further in view of Miller (US 11205304 B2, hereinafter Miller304).

Regarding Claim 4, the combination of Miller and Jiang teaches the invention of Claim 1. The combination does not explicitly disclose, but Miller304 teaches, comprising an optical flow sensor disposed within the housing, the optical flow sensor providing an optical flow image analysis (Miller304, Column 89, Lines 46-50, "The AR system may, in at least some implementations, advantageously perform optical flow analysis in hardware by finding features") and wherein the processor is configured to display, on the touch screen display, different portions of the presentation based on such optical flow image analysis (Miller304, Column 33, Lines 15-18; Column 90, Lines 5-10, "Component(s) may provide a tactile sensation of pressure and/or texture when touching virtual content…In executing optical flow algorithms and imaging, the AR system identifies an object in a frame and then determines where that object appears in at least one subsequent frame").

Miller304 and Miller are analogous art since both deal with augmented reality image processing. Miller provided a way of superimposing the virtual object on the real world image and aligning the virtual object with the real world object when moving the device around in the augmented reality environment. Miller304 provided a way of superimposing the virtual object on the real world image in the augmented reality environment using optical flow image analysis. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the optical flow analysis taught by Miller304 into the modified invention of Miller such that, during image processing in the augmented reality environment, the system will be able to use optical flow analysis to enhance accuracy, realism and interactivity when adjusting media in the augmented reality environment, which provides a more immersive and engaging user experience.

Regarding Claim 15, it recites limitations similar in scope to the limitations of Claim 4 and therefore is rejected under the same rationale.

Claims 7, 10 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Miller et al. (US 20140306866 A1, hereinafter Miller) in view of Jiang et al. (US 20170076499 A1, hereinafter Jiang) as applied to Claim 1 above, and further in view of Fein et al. (US 20140267409 A1, hereinafter Fein).

Regarding Claim 7, the combination of Miller and Jiang teaches the invention of Claim 1. The combination does not explicitly disclose, but Fein teaches, wherein the presentation is animated (Fein, Paragraph [0101], "modifications or replacements reduce the difficulty of interaction regarding the presentations of interest; (i) restoring (e.g., with animation or other transitions)").

Fein and Miller are analogous art since both deal with augmented reality image processing. Miller provided a way of superimposing the virtual object on the real world image and aligning the virtual object with the real world object when moving the device around in the augmented reality environment. Fein provided a way of superimposing the virtual object on the real world image in the augmented reality environment using an animated process. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the animated process taught by Fein into the modified invention of Miller such that, during image processing in the augmented reality environment, the system will be able to use an animated process when superimposing the virtual object onto the real scene, which allows the user to see changes gradually in real time and provides a more user-friendly, immersive experience when using the augmented reality environment system.

Regarding Claim 10, the combination of Miller and Jiang teaches the invention of Claim 1. The combination does not explicitly disclose, but Fein teaches, wherein the presentation comprises a superimposed virtual shopping cart to provide a virtual or mixed reality composite image (Fein, Paragraph [0072], "imagine a user is viewing an augmented reality scene in a retail store. She will see the real objects in the store (such as books, microwave ovens, and housewares) as well as virtual objects in the augmented reality display (such as product annotations and a shopping cart that follows her wherever she goes)") and the processor is further configured to wirelessly transmit an acceptance of the offering details in response to touch screen manipulation of the superimposed virtual shopping cart (Fein, Paragraphs [0073], [0108], [0152], "Augmented reality system 322 running on or through augmented reality device 1502 may communicate over a network 1504, wirelessly or by hardwired connection… a virtual interface object in an augmented reality display is selected (by a first gesture, voice command, touch, or some other pre-determined method) …a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link (e.g., transmitter, receiver, transmission logic, reception logic, etc.), etc.)").

Fein and Miller are analogous art since both deal with augmented reality image processing. Miller provided a way of superimposing the virtual object on the real world image and aligning the virtual object with the real world object when moving the device around in the augmented reality environment. Fein provided a way of wirelessly superimposing virtual objects, including a shopping cart, on the real world image in the augmented reality environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the wireless transmission taught by Fein into the modified invention of Miller such that, during image processing in the augmented reality environment, the system will be able to use wireless transmission when superimposing the virtual object onto the real scene, which provides easy communication between systems, more flexibility, and a better user experience.

Regarding Claim 18, it recites limitations similar in scope to the limitations of Claim 7 and therefore is rejected under the same rationale.

Claims 8, 9, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Miller et al. (US 20140306866 A1, hereinafter Miller) in view of Jiang et al. (US 20170076499 A1, hereinafter Jiang) as applied to Claim 1 above, and further in view of Hunt (US 20120139847 A1).

Regarding Claim 8, the combination of Miller and Jiang teaches the invention of Claim 1. The combination does not explicitly disclose, but Hunt teaches, wherein the presentation is displayed with controls such as play/stop, fast-forward, rewind and mute that can be pressed on the touch screen display to control the presentation (Hunt, Fig. 5, Element 520, virtual controls on touchpad; Paragraphs [0042], [0050], "Specific zones on the touchpad 105 may serve as dedicated buttons, shown as virtual controls 520… the context-specific user interface 405 may include graphical elements for transport controls, e.g., Play, Stop, Pause, FF (fast forward), RW (rewind), commonly used buttons such as Volume Up/Down and Mute are included").

Hunt and Miller are analogous art since both deal with user interfaces for media data processing. Miller provided a way of superimposing the virtual object on the real world image and aligning the virtual object with the real world object when moving the device around in the augmented reality environment. Hunt provided a way of overlaying virtual control buttons on the screen when dealing with media data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the multi-function virtual buttons taught by Hunt into the modified invention of Miller such that, during image processing in the augmented reality environment, the system will be able to superimpose multi-function media control virtual buttons onto the screen so the user can easily control the media when using the system, which increases the flexibility of the system and at the same time provides a more user-friendly and immersive experience for the user.

Regarding Claim 9, the combination of Miller and Jiang teaches the invention of Claim 1. The combination does not explicitly disclose, but Hunt teaches, wherein the presentation comprises a superimposed control bar displayed on the touch screen display that controls playing of the presentation (Hunt, Fig. 4D; Fig. 5, control bar element; Paragraph [0045], "in the context specific user interface 445 is to include graphical elements for a timeline bar 440 on the display device 120, with a 'now' icon 455 representing the current display point. A user may directly drag (via the cursor 420) the now icon 455 along the timeline bar 440 to move to a different position along the timeline bar 440. A gesture such as a double-tap anywhere in the context-specific user interface 455 may activate the play function and a triple-tap may activate the fast-forward function").

Hunt and Miller are analogous art since both deal with user interfaces for media data processing. Miller provided a way of superimposing the virtual object on the real world image and aligning the virtual object with the real world object when moving the device around in the augmented reality environment. Hunt provided a way of overlaying a virtual control bar on the screen when dealing with media data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the virtual control bar taught by Hunt into the modified invention of Miller such that, during image processing in the augmented reality environment, the system will be able to superimpose a virtual control bar onto the screen so the user can easily control the media when using the system, which increases the flexibility of the system and at the same time provides a more user-friendly and immersive experience for the user.

Regarding Claim 19, it recites limitations similar in scope to the limitations of Claim 8 and therefore is rejected under the same rationale.

Regarding Claim 20, the combination of Miller and Jiang teaches the invention of Claim 1. The combination does not explicitly disclose, but Hunt teaches, wherein the presentation comprises a superimposed control bar that controls playing of audio (Hunt, Fig. 5, Element 520, virtual controls on touchpad; Paragraphs [0042], [0048], [0050], "Specific zones on the touchpad 105 may serve as dedicated buttons, shown as virtual controls 520… the context-specific user interface 405 may include graphical elements for transport controls, e.g., Play, Stop, Pause, FF (fast forward), RW (rewind), commonly used buttons such as Volume Up/Down and Mute are included; select an alternative scene angle and control the CE device 110 to output the alternative scene angle of the digital video and audio content for display on the display device").

Hunt and Miller are analogous art since both deal with user interfaces for media data processing. Miller provided a way of superimposing the virtual object on the real world image and aligning the virtual object with the real world object when moving the device around in the augmented reality environment. Hunt provided a way of overlaying a virtual control bar on the screen to control audio when dealing with media data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the virtual control bar taught by Hunt into the modified invention of Miller such that, during image processing in the augmented reality environment, the system will be able to superimpose a virtual control bar onto the screen so the user can easily control the media when using the system, which increases the flexibility of the system and at the same time provides a more user-friendly and immersive experience for the user.

Response to Arguments

Applicant's arguments with respect to claims 1 and 12, filed on 02/28/2025, with respect to the rejection under 35 USC § 103 have been considered but are moot in view of the new ground(s) of rejection. The disputed limitations are now taught by the combination of the prior art references Miller and Jiang. Claims 2-11 and 13-20 depend directly or indirectly on independent Claims 1 and 12, respectively, and Applicant does not argue anything other than independent Claims 1 and 12. The limitations in those claims are addressed by the combinations previously established, as explained above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20130010068 A1 (Augmented reality system); US 20110090252 A1 (Markerless augmented reality system and method using projective invariant).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUJANG TSWEI, whose telephone number is (571) 272-6669.
The examiner can normally be reached 8:30am-5:30pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YuJang Tswei/
Primary Examiner, Art Unit 2614
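
The limitation at the heart of this §103 rejection is recognizing a real-world object without a bar code or AR marker and anchoring a presentation that follows the object on screen. For readers unfamiliar with the technique, the sketch below shows a generic markerless-AR pattern using OpenCV feature matching and homography estimation. It is purely illustrative: it is not the applicant's claimed implementation nor code from Miller or Jiang, and the file name is a placeholder.

```python
# Illustrative sketch of markerless recognition and overlay anchoring of the
# kind the rejection describes (Miller's point-set recognition combined with
# Jiang's marker-free matching). Generic OpenCV usage, not code from either
# reference or from the application.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Reference view of the known object ("object_reference.jpg" is a placeholder).
reference = cv2.imread("object_reference.jpg", cv2.IMREAD_GRAYSCALE)
ref_kp, ref_desc = orb.detectAndCompute(reference, None)

def anchor_overlay(frame_gray, overlay_corners):
    """Recognize the reference object in a camera frame (no markers) and
    return the overlay corners warped to the object's apparent position.
    overlay_corners: float32 array of shape (N, 1, 2) in reference coords."""
    kp, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:
        return None
    matches = matcher.match(ref_desc, desc)
    if len(matches) < 10:                      # not enough evidence: no anchor
        return None
    src = np.float32([ref_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    # The presentation stays "glued" to the object: as the device moves, the
    # homography re-maps the overlay to the object's new screen position.
    return cv2.perspectiveTransform(overlay_corners, H)
```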
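Similarly, the optical flow analysis cited against claims 4 and 15 (via Miller304) corresponds to a standard sparse-flow pipeline. A minimal Lucas-Kanade sketch, again purely illustrative rather than code from the reference:

```python
# Illustrative sparse optical flow of the kind cited against claims 4 and 15
# (Miller304: the AR system "identifies an object in a frame and then
# determines where that object appears in at least one subsequent frame").
# Generic OpenCV usage, not code from the reference.
import cv2
import numpy as np

def track_object_points(prev_gray, next_gray, points):
    """Follow feature points from one grayscale frame to the next with
    pyramidal Lucas-Kanade optical flow, returning only the points that
    were tracked successfully. points: float32 array of shape (N, 1, 2)."""
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, points.astype(np.float32), None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return points[good], next_pts[good]   # old and new positions of survivors
```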

Prosecution Timeline

Dec 22, 2022: Application Filed
Aug 24, 2024: Non-Final Rejection — §103
Feb 28, 2025: Response Filed
May 07, 2025: Final Rejection — §103
Jun 12, 2025: Examiner Interview Summary
Jun 12, 2025: Applicant Interview (Telephonic)
Nov 12, 2025: Request for Continued Examination
Nov 21, 2025: Response after Non-Final Action
Dec 24, 2025: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579805: AUGMENTED, VIRTUAL AND MIXED-REALITY CONTENT SELECTION & DISPLAY FOR TRAVEL (2y 5m to grant; granted Mar 17, 2026)
Patent 12579838: Perspective Distortion Correction on Faces (2y 5m to grant; granted Mar 17, 2026)
Patent 12567213: COMPUTER VISION AND ARTIFICIAL INTELLIGENCE METHOD TO OPTIMIZE OVERLAY PLACEMENT IN EXTENDED REALITY (2y 5m to grant; granted Mar 03, 2026)
Patent 12567189: RELATIONAL LOSS FOR ENHANCING TEXT-BASED STYLE TRANSFER (2y 5m to grant; granted Mar 03, 2026)
Patent 12561930: PARAMETRIC EYEBROW REPRESENTATION AND ENROLLMENT FROM IMAGE INPUT (2y 5m to grant; granted Feb 24, 2026)
Studying what changed between rejection and allowance in these cases can show how to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 99% (+17.0%)
Median Time to Grant: 2y 5m
PTA Risk: High
Based on 447 resolved cases by this examiner. Grant probability is derived from the career allow rate.
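
For transparency, the projection arithmetic above can be sketched in a few lines. The cap at 99% is an assumption on our part; the page does not state exactly how it combines the allow rate with the interview lift.

```python
# Minimal sketch of the projection arithmetic above, assuming (as the page
# implies) that the with-interview figure is the career allow rate plus the
# interview lift, capped at 99%. The cap is an assumption, not a documented
# formula.
base = 376 / 447                 # career allow rate -> ~84.1%, shown as 84%
lift = 0.17                      # observed interview lift
with_interview = min(base + lift, 0.99)
print(f"Base grant probability:  {base:.0%}")
print(f"With examiner interview: {with_interview:.0%} (+{lift:.1%})")
```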
