Prosecution Insights
Last updated: April 19, 2026
Application No. 14/802,878

EXTERNAL USER INTERFACE FOR HEAD WORN COMPUTING

Non-Final OA · §103, §112
Filed
Jul 17, 2015
Examiner
JANSEN II, MICHAEL J
Art Unit
2626
Tech Center
2600 — Communications
Assignee
Mentor Acquisition One LLC
OA Round
21 (Non-Final)
66%
Grant Probability
Favorable
21-22
OA Rounds
2y 3m
To Grant
86%
With Interview

Examiner Intelligence

Grants 66% — above average
66%
Career Allow Rate
409 granted / 619 resolved
+4.1% vs TC avg
Strong +20% interview lift
Without
With
+20.4%
Interview Lift
resolved cases with interview
Typical timeline
2y 3m
Avg Prosecution
37 currently pending
Career history
656
Total Applications
across all art units
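The headline percentages in this panel are simple ratios over the examiner's resolved docket. A minimal sketch of the arithmetic (the 409/619, 86%, and +20.4% figures are read off the panel above; treating "interview lift" as the allow-rate gap between cases with and without an interview is an assumption about the metric's definition):

```python
# Figures from the examiner intelligence panel above.
granted, resolved = 409, 619

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")  # 66.1%, displayed as 66%

# Assumed definition: interview lift = allow rate with interview minus
# allow rate without. Given the panel's 86% with-interview figure and
# +20.4% lift, the implied without-interview rate is:
implied_without = 86 - 20.4
print(f"Implied without-interview allow rate: {implied_without:.1f}%")  # 65.6%
```

The without-interview baseline landing just under the 66% career average is consistent with interviews occurring in a minority of the resolved cases.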

Statute-Specific Performance

§101
1.3%
-38.7% vs TC avg
§103
46.0%
+6.0% vs TC avg
§102
25.2%
-14.8% vs TC avg
§112
23.2%
-16.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 619 resolved cases
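Each statute row pairs the examiner's rate with its delta against the Tech Center average, so the implied TC baseline is just rate minus delta (assuming delta = examiner rate − TC average, as the "vs TC avg" labels suggest). A quick consistency check over the four rows shown, with figures copied from the panel:

```python
# (examiner rate %, delta vs TC average %) for each statute, from the panel.
stats = {
    "§101": (1.3, -38.7),
    "§103": (46.0, +6.0),
    "§102": (25.2, -14.8),
    "§112": (23.2, -16.8),
}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)  # implied Tech Center average
    print(f"{statute}: examiner {rate}% vs implied TC average {tc_avg}%")
```

Notably, all four rows imply the same ~40% baseline, consistent with the panel using a single Tech Center average estimate rather than per-statute baselines.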

Office Action

§103 §112
DETAILED ACTION

This communication is in response to Application No. 14/802,878, originally filed 07/17/2015. The Request for Continued Examination and Amendment presented on 12/19/2025, which amends claims 1, 6, 10, 15, 21, and 22, cancels claims 5, 7-8, 14, 16, 18-20, 23-26, and 31, and adds new claim 32, is hereby acknowledged. Claims 1-4, 6, 9-13, 15, 17, 21-22, 27-30, and 32 are currently pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/13/2025 has been entered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to claims 1-4, 6, 9-13, 15, 17, 21-22, 27-30, and 32 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Regarding the Park reference: Park is clear in teaching that a protection layer is provided on the display of the mobile device to prevent external users of the HMD, including the user themselves when not wearing the HMD device, from viewing the content of the protected area. The second UI is combined with the first UI in such a way that the user can view the entirety of the content displayed on the external device and touch screen “because the second UI lies over components of the first UI, [such that] the user can readily see the one complete UI without any special efforts, simply by seeing the first and second UIs at the same time.” (Park, [0112]) Furthermore, only a component requiring privacy protection among components being displayed on the external device is displayed on the HMD, while allowing other components to continue being viewed and displayed by the external device. (Park, [0178] and Fig. 13 as an example.) Park teaches on-screen buttons (i.e., items 420, 511, 711, etc.) and teaches virtual keys (i.e., buttons) throughout the disclosure that can be pressed to “reduce portions of the virtual user interface” or perform other actions. The point and purpose of “reducing” a displayed virtual interface would be that it is no longer needed or to simply change the function. A user, in view of the Park disclosure, has the option to bring items for viewing on the HMD or not by simply pressing the button to apply privacy mode or remove the mode. It is rather obvious, and flows naturally from the disclosure, that a user who has completed, for example, entering a password would no longer need privacy mode and would close the privacy window by again pressing the button. This clearly minimizes or “reduces” the onscreen windows of the HMD, thereby freeing up the display screen of the HMD and not impeding the user's line of sight.
Likewise, in the instant application, when a user is done using the quick launch menu (after the second input), the menu is removed/minimized, freeing up display screen space. This therefore allows remaining portions displayed by the external device to be seen with the data displayed by the HMD, and thus “the see-through display is transparent to allow viewing of the entire second user interface and the entire touch screen”. Applicants’ claims are still broad enough to read on Park in this context.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention. The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention. Claims 1-4, 6, 9-13, 15, 17, 21-22, 27-30, and 32 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C.
112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Independent claims 1 and 10 have been amended to similarly recite the features of “mobile device comprising a touch screen and one or more buttons, wherein the mobile device is adapted to communicate with the wearable head device; and one or more processors of the wearable head device and the mobile device, …. receiving, via the one or more buttons of the mobile device, a second input; and in response to receiving the second input, initiating a predefined action on the wearable head device, wherein the predefined action comprises one or more of reducing portions of the virtual user interface on the see-through display and moving portions of the virtual user interface to an edge of the see-through display.” Applicant's asserted support is said to be found in [0100], [0102], [0103], and [0105]-[0107]. The Office, however, notes that paragraphs [0100]-[0103] are directed to a “quick launch interface” while paragraphs [0105]-[0107] are directed to an auto-adaptable graphic user interface with an external user control device (i.e., claimed as a touch screen). The disclosure, overall, discusses the “quick launch interface” in connection with a “pen”-type device that includes physical buttons 222. The disclosure in [0100], [0102], [0103], and [0105]-[0107] does not make it explicitly clear that these are intended to be elements associated with the HWC and utilized together. While the language used suggests it could be, there is no clear discussion or disclosure of how the two are combined, including the remaining language/features claimed in further depending claims. In addition, the claims currently (and for the last 10 rounds of prosecution) have been directed to the auto-adaptable graphic user interface discussed in paragraphs [0105]-[0107]. These paragraphs are silent with respect to the inclusion of buttons and/or a “quick launch interface” or anything similar.
Thus, Applicants’ paragraphs [0100]-[0107] (and other paragraphs) appear to be nothing more than a collection of separate ideas and/or concepts based on the implementation of the buttons on a pen device 200 or a steering wheel 1002 (see Figs. 2-6 and 10) from previous paragraphs, and there is no clear discussion of how these concepts would be combined. Applicant is reminded of 35 USC 112, under which the “specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.” In this case, based on the current record, the Office finds that the claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention; the claims are therefore rejected. As has been noted with similar previous amendments in prior Office actions, Applicant's claims appear to be a combination of features recited in the specification as filed, and while they may individually be supported in their own right, their combination is not. The Office notes that while each element may be individually described in the specification, the deficiency is the lack of adequate description of their combination. See MPEP 2163 II. A. Claims 2-4, 6, 9, 11-13, 15, 17, 21-22, 27-30, and 32, depending from independent claims 1 and 10, inherit the deficiencies of their respective base claims and are rejected under similar rationale.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1-4, 6, 9-13, 15, 17, 21-22, 27-30, and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Kim, U.S. Patent Application Publication No. 2014/0078043 A1 (hereinafter Kim), in view of Park, U.S. Patent Application Publication No. 2014/0139439 A1 (hereinafter Park), further in view of Lamb et al., U.S. Patent Application Publication No. 2012/0284673 A1 (hereinafter Lamb). Consider Claim 1 and similar method claim 10: Kim discloses a system comprising: (Kim, See Abstract.) a wearable head device having a see-through display, wherein: (Kim, [0029], “Especially, the HMD 100 to which an UI is applied includes a display screen 101 and at least one sensor 102. Not only all contents and images are provided the user 10 by the HMD through the display screen 101 but also information about the UI of the exemplary embodiments is provided. Further, the HMD 100 includes at least one sensor 102, detects ambient environmental conditions in the proximity of the HMD 100, and is utilized as an important element to determine a HMD UI to operate such sensor functions. Further, the HMD 100 is able to include a supporting component 103 in order for the user 10 to wear the HMD 100 in the head and an audio outputting unit 104 wearable in the ear.”) the wearable head device is configured to run a computer application, and the see-through display is adapted to present content associated with the computer application; a mobile device comprising a … screen and one or more buttons, wherein the mobile device is adapted to communicate with the wearable head device; and (Kim, [0052], “For example, the virtual keyboard UI method of FIG.
6a displays the virtual keyboard 410 on the surface of the detected object 400 and generates a command that the user directly inputs by touching the virtual keyboard 410. Then, the corresponding object 400 provides the user 10 with the touch feeling so that the user 10 can efficiently use the virtual keyboard 410. Also, the drawing UI method of FIG. 6b is a method, for example, in which a virtual window 420 that can be drawn is displayed on the surface of the detected object 400 and the user 10 generates desired commands by using a pen 430. Then, the corresponding object 400 provides the user 10 with the touch feeling so that the user 10 can efficiently use the pen 430.”) one or more processors of the wearable head device and the mobile device, wherein the one or more processors are adapted to perform: (Kim, [0032], “According to FIG. 2, the HMD 100 of the exemplary embodiment includes a processor 110, a sensor unit 120, a storage unit 130, a communications unit 140, a user input unit 150, a display controller 160, a UI control unit 170.”) running the computer application on the wearable head device; (Kim, [0086], [0090-0092], [0011], “In another aspect of the exemplary embodiments, a UI apparatus comprises a sensor unit detecting whether an object exists in the proximity of the HMD and if the object is detected, the sensor unit senses a distance between the object and the HMD. The apparatus further comprises a processor controlling a User Interface (UI) of the HMD based on a result of the sensor unit. 
The physical UI mode is applied if the detected object is within a predetermined distance from the HMD and the non-physical UI mode is applied if the object is not detected or is not within the predetermined distance from the HMD.”) providing for display, via the see-through display, a virtual user interface associated with the computer application running on the wearable head device; (Kim, [0048-0059], [0064-0067], [0057], “In addition, the location of the virtual keyboard 410 can be determined based on the user's view angle. For example, the processor 110 can control to determine whether the user is using a first view angle, the view angle of the right eye, or a second view angle, the view angle of the left eye, or both. The processor 110 then controls the virtual keyboard 410 so that the virtual keyboard 410 is located at an appropriate point corresponding to the view angle. For example, the appropriate point corresponding to the view angle can be the center point of the corresponding view angle when only one view angle is used or the overlapping point of the corresponding view angles when the both view angles are used.”) transmitting, from the wearable head device to the mobile device, an instruction to display a second user interface on the touch screen; (Kim, [0049], “When a status of the object in the proximity of the HMD is determined as F1(S122), F2(S123), or F3(S124) through the step of S121, the HMD processor 110 selects a HMD UI and operates it by the UI control unit 170. For example, in the case of F1(S122), the aforementioned physical UI mode is applied S131 and in the case of F2 status (S123) or F3 status (S124), the aforementioned non-physical UI mode is applied S132.
These physical and non-physical UI modes can also be referred as Object and Non-object modes, respectively.”) in accordance with the instruction, configuring the mobile device to control the computer application running on the wearable head device via the second user interface; providing for display, on the touch screen, the second user interface associated with the computer application running on the wearable head device, and further providing the second user interface for display on the touch screen such that the see-through display is transparent to allow viewing of the second user interface and the touch screen; receiving a first input for the computer application via the second user interface, wherein said receiving the first input comprises receiving an input on the touch screen; (Kim, [0048-0059], [0064-0067], [0092], “If the HMD processor 110 determines that it is F1 status S522, the HMD processor 110 performs connecting communications through the communications 140. S531. If the communications connection is completed, the HMD processor 110 operates the physical UI mode by using the display in the device through the aforementioned UI control unit 170. S531. That is, the display equipped in the corresponding device can be utilized as the HMD virtual keyboard. On the other hand, if the HMD processor 110 determines that it is F2 status S523 or F3 status S524, the HMD processor 110 operates the non-physical UI mode through the aforementioned UI control unit 170. 
S533.”) Kim teaches the use of an external mobile device; however, Kim does not state (although arguably this is inherent to the device based on Kim at [0092]) a mobile device comprising a touch screen, receiving, via the one or more buttons of the mobile device, a second input; and in response to receiving the second input, initiating a predefined action on the wearable head device, wherein the predefined action comprises one or more of reducing portions of the virtual user interface on the see-through display and moving portions of the virtual user interface to an edge of the see-through display. Park, however, teaches a mobile device comprising a touch screen; and (Park, [0090-0096], [0058], “The plurality of sensors may include a gravity sensor, a geomagnetic sensor, a motion sensor, a gyro sensor, an acceleration sensor, an infrared sensor, an inclination sensor, an ambient light sensor, an altitude sensor, an odor sensor, a temperature sensor, a depth sensor, a pressure sensor, a bending sensor, an audio sensor, a video sensor, a Global Positioning System (GPS) sensor, a touch sensor, etc. These sensors may be included as separate elements in the HMD 300 or incorporated into at least one element in the HMD 300.”) Park additionally teaches a mobile device comprising a touch screen, receiving, via the one or more buttons of the mobile device, a second input; and in response to receiving the second input, initiating a predefined action on the wearable head device, wherein the predefined action comprises one or more of reducing portions of the virtual user interface on the see-through display. (Park, [0039], [0044-0045], [0163], “However, many other components than the keypad and the window may require data protection. Referring to FIG.
13(a), in the case where the portable device is a smart phone 1310, when messages are transmitted and received between users by Social Networking Service (SNS), the smart phone 1310 may provide a component 1311 that displays a message transmission and reception history. Since messages carry personal data in many cases, they often require privacy protection.” [0164], “Therefore, an HMD 1320 may recognize the component 1311 for displaying a message transmission and reception history and provide the component 1311 on a second UI 1321. The smart phone 1310 may display a protection layer 1313 over the component 1311.”) It therefore would have been obvious to those having ordinary skill in the art before the effective filing date of the invention to provide an external device for control and/or communication with a head mounted display device, as seen in both Kim and Park, for the art-recognized purpose of providing a Head Mounted Display User Interface (HMD UI): to provide an optimized HMD UI considering the ambient environmental conditions in the proximity of the HMD. Especially, another object of the exemplary embodiments is to apply the HMD UI differently based on whether a usable object for the HMD UI exists in the proximity of the HMD. One of these recognized benefits is that the HMD can be used in conjunction with various external devices. The HMD is connected to an external device through a network and thus can output content received from the external device. Furthermore, the HMD can receive a user input to the external device or perform an operation in interaction with the external device. (Park, [0010]) Kim in view of Park does not expressly state wherein the predefined action comprises one or more of … moving portions of the virtual user interface to an edge of the see-through display.
Lamb, however, teaches that it was a known technique to those having ordinary skill in the art to provide a quick menu option triggered by a first input and subsequently removed by a second input, and thus teaches a predefined action comprises one or more of … moving portions of the virtual user interface to an edge of the see-through display. (Lamb, [0019], [0044-0046], [0018], “Some example embodiments may provide users with a relatively easy to implement and intuitive interaction mode by which quick access to functionalities that are not necessarily directly associated with a current application or display screen may be provided. In this regard, for example, some embodiments may provide for an ability to access a predefined set of functional elements (e.g., quick launch icons) that cause the launching of a corresponding application or function when respective ones of the predefined set of functional elements are selected. A gesture (e.g., a trigger gesture) may be defined to trigger the display of the quick launch icons under predefined circumstances (trigger conditions). The trigger gesture may include a swipe gesture from an edge portion of the touch screen display (e.g., the bottom edge in one example) toward a middle portion of the touch screen display. As such, the trigger gesture may, in some instances, be dependent upon the length of the swipe gestures. For example, in some cases, the trigger gesture may be defined relative to certain threshold distances or lengths of the swipe gesture. For example, a swipe gesture that exceeds (or is longer than) a certain threshold may trigger a certain function, while a swipe gesture that does not exceed (or is shorter than the threshold) may cause presentation of the quick launch icons. However, in some examples, the trigger gesture may not be directly tied to the length of the swipe gesture, but may instead be tied to a motion delay inserted in connection with the initiation of a swipe gesture of any length.
For example, responsive to detection of any swipe (e.g., from an edge of the display toward a middle portion of the display), the insertion of a motion delay (e.g., of about 300 ms in one example) may complete the trigger gesture. In some cases, the functionalities that are accessible through the quick launch icons (and therefore also in some cases the graphical representation provided for respective ones of the quick launch icons) may be static and/or predefined. However, in other cases, the functionalities may be dynamically determined based on various conditions. Furthermore, in some cases, the functionalities that are accessible via the quick launch icons may depend upon the current device state (e.g., the current application or content being displayed).”) The rationale to support a conclusion that the claim would have been obvious is that a particular known technique was recognized as part of the ordinary capabilities of one skilled in the art. One of ordinary skill in the art would have been capable of applying this known technique to a known device (method, or product) that was ready for improvement and the results would have been predictable to one of ordinary skill in the art. In this particular case, providing a quick launch menu that can be accessed and subsequently removed by a secondary input to free up or clear the display area for viewing of other content would have been obvious in view of Lamb and the results would have been predictable to one of ordinary skill in the art. As a result of utilizing such a technique, users may enjoy improved capabilities with respect to accessing information related to content and other services or applications that may be used in connection with a device.
(Lamb, [0006]) Consider Claim 2 and similar method claim 11: Kim in view of Park in view of Lamb disclose the system of claim 1, wherein the second user interface comprises one or more indications of available computer applications to be operated on the wearable head device, wherein a selection of one of the indications initiates a corresponding computer application on the wearable head device. (Lamb, [0018-0019], [0044-0046], Park, [0076], [0039], “An external device 100 according to the present invention may output various types of content 111. For example, the external device 100 may output a variety of multimedia content including live broadcasting, movies, music, soap operas, Web pages, games, applications, etc.”) Consider Claim 3 and similar method claim 12: Kim in view of Park in view of Lamb disclose the system of claim 1, wherein the second user interface comprises one or more indications, each such indication corresponding to a second computer application of the wearable head device, wherein a selection of one of the indications via the user interface causes the corresponding computer application to present content via the wearable head device. (Lamb, [0018-0019], [0044-0046], Park, [0080], [0081], “In another embodiment of the present invention, the external device may extract attribute information about content being displayed on the external device and may switch to another mode according to the extracted attribute information. The content may include any data displayable on the external device as well as a photo and a video. The attribute information about the content may be pre-stored in the storage unit of the external device. Further, the attribute information about the content may include information indicating whether data protection is needed. 
This attribute information may be set by the user.”) Consider Claim 4 and similar method claim 13: Kim in view of Park in view of Lamb disclose the system of claim 1, wherein the second user interface comprises a text entry user interface corresponding to the computer application. (Lamb, [0018-0019], [0044-0046], Kim, [0048-0059], [0064-0067], [0092], “If the HMD processor 110 determines that it is F1 status S522, the HMD processor 110 performs connecting communications through the communications 140. S531. If the communications connection is completed, the HMD processor 110 operates the physical UI mode by using the display in the device through the aforementioned UI control unit 170. S531. That is, the display equipped in the corresponding device can be utilized as the HMD virtual keyboard. On the other hand, if the HMD processor 110 determines that it is F2 status S523 or F3 status S524, the HMD processor 110 operates the non-physical UI mode through the aforementioned UI control unit 170. S533.”) Consider Claim 6 and similar method claim 15: Kim in view of Park in view of Lamb disclose the system of claim 1, wherein the one or more processors are further adapted to communicate one or more signals to the wearable head device based on the first input received via the second user interface. (Lamb, [0018-0019], [0044-0046], Kim, [0048-0059], [0064-0067], [0092], “If the HMD processor 110 determines that it is F1 status S522, the HMD processor 110 performs connecting communications through the communications 140. S531. If the communications connection is completed, the HMD processor 110 operates the physical UI mode by using the display in the device through the aforementioned UI control unit 170. S531. That is, the display equipped in the corresponding device can be utilized as the HMD virtual keyboard.
On the other hand, if the HMD processor 110 determines that it is F2 status S523 or F3 status S524, the HMD processor 110 operates the non-physical UI mode through the aforementioned UI control unit 170. S533.”) Consider Claim 9 and similar method claim 17: Kim in view of Park in view of Lamb disclose the system of claim 1, wherein the one or more processors are further adapted to perform: providing for display, via the see-through display, an application control element comprising a cursor. (Park, [0154], “Therefore, an HMD 1130 of the present invention may recognize a component of the smart TV 1110 and provide the recognized component on a second UI 1131, as illustrated in FIG. 11(b). The second UI 1131 may be laid over the component of the first UI being displayed on the smart TV 1110. The smart TV 1110 may display a protection layer over the component of the first UI overlaid with the second UI 1131.”) Consider Claim 21 and similar method claim 22: Kim in view of Park in view of Lamb disclose the system of claim 1, wherein: the mobile device is configured to operate in a standalone mode or in a control mode, wherein the mobile device is configured to control the wearable head device when operating in the control mode; and the mobile device is configured to select one of the standalone mode and the control mode in response to the first input received via the second user interface. (Kim, [0083], “According to the fourth embodiment of the exemplary embodiments, the HMD UI mode determination process includes the steps of object location determination S420, view angle determination S430, digital device detection and location determination S440 and HMD UI mode determination S450. Once the HMD UI mode determination process begins S410, the HMD processor 110 detects an object in the proximity of the HMD by the object sensor 121 and determines the location of the object S421.
After determining the step of S421, the processor 110 determines the relationship between the HMD and the object as one of the aforementioned F1, F2, and F3 statuses. For example, it is called F1 status when an object is detected and the detected object stays within distance in which physical feedback is possible, shown in S422. In addition, it is called F2 status when an object is detected and the detected object stays not within distance in which physical feedback is possible, shown in S423. Lastly, it is called F3 status when an object does not exist in the proximity of the HMD, shown in S424.”) Consider Claim 27 and similar method claim 29: Kim in view of Park in view of Lamb disclose the system of claim 1, wherein the second user interface comprises a representation of a physical input device. (Kim, [0052], “For example, the virtual keyboard UI method of FIG. 6a displays the virtual keyboard 410 on the surface of the detected object 400 and generates a command that the user directly inputs by touching the virtual keyboard 410. Then, the corresponding object 400 provides the user 10 with the touch feeling so that the user 10 can efficiently use the virtual keyboard 410.” Park, [0164], “Therefore, an HMD 1320 may recognize the component 1311 for displaying a message transmission and reception history and provide the component 1311 on a second UI 1321. The smart phone 1310 may display a protection layer 1313 over the component 1311.”) Consider Claim 28 and similar method claim 30: Kim in view of Park in view of Lamb disclose the system of claim 27, wherein the physical input device is associated with the computer application. (Kim, [0048-0059], [0064-0067], [0092], “If the HMD processor 110 determines that it is F1 status S522, the HMD processor 110 performs connecting communications through the communications 140. S531. 
If the communications connection is completed, the HMD processor 110 operates the physical UI mode by using the display in the device through the aforementioned UI control unit 170. S531. That is, the display equipped in the corresponding device can be utilized as the HMD virtual keyboard. On the other hand, if the HMD processor 110 determines that it is F2 status S523 or F3 status S524, the HMD processor 110 operates the non-physical UI mode through the aforementioned UI control unit 170. S533.”)

Consider Claim 32: Kim in view of Park in view of Lamb disclose the system of claim 1, further comprising one or more sensors, wherein the one or more processors are adapted to further perform: determining, via the one or more sensors, an action of a user of the wearable head device, wherein the predefined action is initiated further in accordance with the determined action. (Park, [0080] “In accordance with an embodiment of the present invention, the external device displays a mode switching icon 420 so that its mode may be switched according to a user input. That is, upon receipt of a user input requesting data protection from the user, for example, upon touch of the icon 420, the external device may switch to the protection mode.” Lamb, [0019], [0044-0046], [0018], “Some example embodiments may provide users with a relatively easy to implement and intuitive interaction mode by which quick access to functionalities that are not necessarily directly associated with a current application or display screen may be provided. In this regard, for example, some embodiments may provide for an ability to access a predefined set of functional elements (e.g., quick launch icons) that cause the launching of a corresponding application or function when respective ones of the predefined set of functional elements are selected. A gesture (e.g., a trigger gesture) may be defined to trigger the display of the quick launch icons under predefined circumstances (trigger conditions).
The trigger gesture may include a swipe gesture from an edge portion of the touch screen display (e.g., the bottom edge in one example) toward a middle portion of the touch screen display. As such, the trigger gesture may, in some instances, be dependent upon the length of the swipe gestures. For example, in some cases, the trigger gesture may be defined relative to certain threshold distances or lengths of the swipe gesture. For example, a swipe gesture that exceeds (or is longer than) a certain threshold may trigger a certain function, while a swipe gesture that does not exceed (or is shorter than the threshold) may cause presentation of the quick launch icons. However, in some examples, the trigger gesture may not be directly tied to the length of the swipe gesture, but may instead be tied to a motion delay inserted in connection with the initiation of a swipe gesture of any length. For example, responsive to detection of any swipe (e.g., from an edge of the display toward a middle portion of the display), the insertion of a motion delay (e.g., of about 300 ms in one example) may complete the trigger gesture. In some cases, the functionalities that are accessible through the quick launch icons (and therefore also in some cases the graphical representation provided for respective ones of the quick launch icons) may be static and/or predefined. However, in other cases, the functionalities may be dynamically determined based on various conditions. Furthermore, in some cases, the functionalities that are accessible via the quick launch icons may depend upon the current device state (e.g., the current application or content being displayed).”)

Conclusion

Prior art made of record and not relied upon which is still considered pertinent to applicant's disclosure is cited in a current or previous PTO-892.
The prior art cited in a current or previous PTO-892 reads upon the applicant's claims in part, in whole, and/or gives a general reference to the knowledge and skill of persons having ordinary skill in the art before the effective filing date of the invention. Applicant, when responding to this Office action, should consider not only the cited references applied in the rejection but also any additional references made of record. In the response to this Office action, the Examiner respectfully requests that support be shown for any new or amended claims. More precisely, indicate support for any newly added language or amendments by specifying page, line numbers, and/or figure(s). This will assist the Office in compact prosecution of this application. The Office has cited particular columns, paragraphs, and/or line numbers in the applied rejection of the claims above for the convenience of the applicant. Citations are representative of the teachings in the art and are applied to the specific limitations within each claim; however, other passages and figures may apply. Applicant, in preparing a response, should fully consider the cited reference(s) in their entirety and not only the cited portions, as other sections of the reference may expand on the teachings of the cited portion(s). Applicant's representatives are reminded of 37 CFR 1.4(d)(2)(ii), which states: “A patent practitioner (§ 1.32(a)(1)), signing pursuant to §§ 1.33(b)(1) or 1.33(b)(2), must supply his/her registration number either as part of the S-signature, or immediately below or adjacent to the S-signature. The number (#) character may be used only as part of the S-signature when appearing before a practitioner's registration number; otherwise the number character may not be used in an S-signature.” When an unsigned or improperly signed amendment is received, the amendment will be listed in the contents of the application file, but not entered.
The examiner will notify applicant of the status of the application, advising him or her to furnish a duplicate amendment properly signed or to ratify the amendment already filed. In an application not under final rejection, applicant should be given a two-month time period in which to ratify the previously filed amendment (37 CFR 1.135(c)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL J JANSEN II, whose telephone number is (571) 272-5604. The examiner can normally be reached Monday-Friday, 9am-4pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Temesghen Ghebretinsae, can be reached at 571-272-3017. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Michael J Jansen II/
Primary Examiner, Art Unit 2626
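For readers tracing the combination, the two mechanisms the rejection cites most often, Kim's F1/F2/F3 proximity statuses selecting between the physical and non-physical UI modes, and Lamb's swipe-length trigger gesture, can be sketched roughly as follows. This is an illustrative reading only: the function names and the 120-pixel threshold are our own assumptions, not anything taken from the references or the prosecution record.

```python
# Illustrative sketch of the cited Kim and Lamb passages (our reading only).
# Function names and the pixel threshold are assumptions, not from the record.

def hmd_status(object_detected: bool, within_feedback_range: bool) -> str:
    """Kim [0083]: classify the HMD/object relationship (steps S421-S424)."""
    if object_detected and within_feedback_range:
        return "F1"  # S422: object close enough for physical feedback
    if object_detected:
        return "F2"  # S423: object present but out of feedback range
    return "F3"      # S424: no object in the proximity of the HMD

def select_ui_mode(status: str) -> str:
    """Kim [0092]: F1 runs the physical UI (e.g. a virtual keyboard on the
    detected object or device display); F2/F3 fall back to the non-physical UI."""
    return "physical" if status == "F1" else "non-physical"

def classify_edge_swipe(length_px: float, threshold_px: float = 120.0) -> str:
    """Lamb [0018]: a swipe longer than the threshold triggers a function,
    while a shorter swipe presents the quick-launch icons (120 px is our guess)."""
    return "trigger_function" if length_px > threshold_px else "quick_launch_icons"

# Usage mirroring the cited flow:
assert select_ui_mode(hmd_status(True, True)) == "physical"
assert select_ui_mode(hmd_status(False, False)) == "non-physical"
```

Note that Lamb also describes an alternative trigger tied to an inserted motion delay (about 300 ms) rather than swipe length; that variant is omitted here for brevity.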

Prosecution Timeline

Jul 17, 2015
Application Filed
Apr 01, 2017
Non-Final Rejection — §103, §112
Jul 07, 2017
Response Filed
Oct 03, 2017
Final Rejection — §103, §112
Apr 11, 2018
Request for Continued Examination
Apr 20, 2018
Response after Non-Final Action
May 11, 2018
Non-Final Rejection — §103, §112
Nov 07, 2018
Response Filed
Jan 02, 2019
Final Rejection — §103, §112
Jun 07, 2019
Request for Continued Examination
Jul 17, 2019
Response after Non-Final Action
Aug 28, 2019
Non-Final Rejection — §103, §112
Dec 03, 2019
Response Filed
Jan 03, 2020
Final Rejection — §103, §112
May 08, 2020
Request for Continued Examination
May 17, 2020
Response after Non-Final Action
Jul 31, 2020
Non-Final Rejection — §103, §112
Nov 05, 2020
Response Filed
Jan 11, 2021
Final Rejection — §103, §112
Mar 15, 2021
Response after Non-Final Action
Apr 14, 2021
Request for Continued Examination
Apr 16, 2021
Response after Non-Final Action
Jun 02, 2021
Non-Final Rejection — §103, §112
Oct 04, 2021
Response Filed
Oct 26, 2021
Final Rejection — §103, §112
Dec 28, 2021
Response after Non-Final Action
Jan 20, 2022
Response after Non-Final Action
Jan 20, 2022
Applicant Interview (Telephonic)
Feb 01, 2022
Request for Continued Examination
Feb 03, 2022
Response after Non-Final Action
Feb 25, 2022
Non-Final Rejection — §103, §112
Jun 01, 2022
Response Filed
Aug 17, 2022
Final Rejection — §103, §112
Oct 21, 2022
Response after Non-Final Action
Nov 21, 2022
Request for Continued Examination
Nov 28, 2022
Response after Non-Final Action
Dec 02, 2022
Non-Final Rejection — §103, §112
Mar 07, 2023
Response Filed
May 08, 2023
Final Rejection — §103, §112
Jun 28, 2023
Response after Non-Final Action
Jul 11, 2023
Response after Non-Final Action
Jul 27, 2023
Applicant Interview (Telephonic)
Jul 27, 2023
Examiner Interview Summary
Aug 09, 2023
Request for Continued Examination
Aug 15, 2023
Response after Non-Final Action
Aug 16, 2023
Non-Final Rejection — §103, §112
Oct 25, 2023
Examiner Interview Summary
Oct 25, 2023
Applicant Interview (Telephonic)
Nov 01, 2023
Applicant Interview (Telephonic)
Nov 01, 2023
Examiner Interview Summary
Nov 07, 2023
Response Filed
Dec 19, 2023
Final Rejection — §103, §112
Feb 13, 2024
Response after Non-Final Action
Feb 27, 2024
Response after Non-Final Action
Mar 22, 2024
Request for Continued Examination
Mar 26, 2024
Response after Non-Final Action
Apr 05, 2024
Non-Final Rejection — §103, §112
Jun 12, 2024
Examiner Interview Summary
Jun 12, 2024
Applicant Interview (Telephonic)
Jul 11, 2024
Response Filed
Oct 01, 2024
Final Rejection — §103, §112
Nov 21, 2024
Response after Non-Final Action
Dec 23, 2024
Request for Continued Examination
Dec 27, 2024
Response after Non-Final Action
Apr 18, 2025
Non-Final Rejection — §103, §112
Jul 18, 2025
Response Filed
Sep 17, 2025
Final Rejection — §103, §112
Nov 13, 2025
Response after Non-Final Action
Dec 19, 2025
Request for Continued Examination
Jan 02, 2026
Response after Non-Final Action
Feb 23, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586323
AUGMENTED PHOTO CAPTURE
2y 5m to grant Granted Mar 24, 2026
Patent 12586505
GRAYSCALE COMPENSATION METHOD, APPARATUS AND SYSTEM, DISPLAY DRIVING METHOD, APPARATUS AND SYSTEM, AND CHIP AND MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12555508
Display Apparatus and Luminance Adjustment Method Therefor
2y 5m to grant Granted Feb 17, 2026
Patent 12555524
DISPLAY DEVICE AND METHOD OF PREDICTING DETERIORATION OF DISPLAY PANEL
2y 5m to grant Granted Feb 17, 2026
Patent 12554324
EYE TRACKING SYSTEM, EYE TRACKING METHOD, AND EYE TRACKING PROGRAM
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

21-22
Expected OA Rounds
66%
Grant Probability
86%
With Interview (+20.4%)
2y 3m
Median Time to Grant
High
PTA Risk
Based on 619 resolved cases by this examiner. Grant probability derived from career allow rate.
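The headline figures above appear to follow from simple arithmetic over the career data shown in the examiner header (409 granted of 619 resolved, with a 20.4% interview lift). A minimal sketch, assuming the grant probability is the raw allow ratio and the interview lift is simply additive:

```python
# Assumed derivation of the projection figures (simple ratios; our reading,
# not a documented methodology of this page).
granted = 409            # granted cases (from the examiner header)
resolved = 619           # resolved cases
interview_lift = 0.204   # lift observed in resolved cases with an interview

grant_probability = granted / resolved               # ~0.66
with_interview = grant_probability + interview_lift  # ~0.86

print(f"{grant_probability:.0%} base, {with_interview:.0%} with interview")
# prints: 66% base, 86% with interview
```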
