Prosecution Insights
Last updated: April 18, 2026
Application No. 18/319,955

METHOD, DEVICE AND MEDIUM FOR DISPLAYING OPERATION INTERFACE ON INTERFACE DISPLAYING AREA

Final Rejection — §103, §112
Filed: May 18, 2023
Examiner: TRAN, TUYETLIEN T
Art Unit: 2179
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 6 (Final)
Grant Probability: 67% (Favorable)
OA Rounds: 7-8
To Grant: 3y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67%, above average (429 granted / 637 resolved; +12.3% vs TC avg)
Interview Lift: +33.0%, a strong lift (resolved cases with an interview vs. without)
Typical Timeline: 3y 10m average prosecution; 22 applications currently pending
Career History: 659 total applications across all art units
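
The allow rate and interview lift above are simple ratios over resolved cases. A minimal sketch of how they could be reproduced from raw outcomes, assuming a hypothetical per-case record (the field names and schema are illustrative, not this tool's actual data model):

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # True if the application was allowed
    had_interview: bool  # True if an examiner interview was held

def allow_rate(cases: list) -> float:
    """Share of resolved cases that ended in a grant (e.g., 429/637 ≈ 67.3%)."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list) -> float:
    """Allow-rate gap between cases with and without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)
```

On this reading, the "+33.0% interview lift" is the difference in grant share between the interviewed and non-interviewed subsets of resolved cases, not a causal estimate of what an interview adds.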

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Deltas compare against Tech Center average estimates • Based on career data from 637 resolved cases
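The report does not define the per-statute percentages. One plausible reading, sketched below, treats each figure as the share of this examiner's resolved cases citing that statute, with the delta taken against a Tech Center baseline; the grouping rule and all names here are assumptions for illustration.

```python
from collections import Counter

def statute_shares(case_statutes: list) -> dict:
    """case_statutes holds one list of cited statutes per resolved case,
    e.g. [["103", "112"], ["103"], ...]. Returns the share per statute."""
    total = len(case_statutes)
    counts = Counter(s for cited in case_statutes for s in set(cited))
    return {s: n / total for s, n in counts.items()}

def delta_vs_tc(shares: dict, tc_avg: dict) -> dict:
    """Signed gap vs. the Tech Center average, e.g. 51.5% - 40.0% = +11.5%."""
    return {s: shares[s] - tc_avg.get(s, 0.0) for s in shares}
```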

Office Action

§103, §112
DETAILED ACTION

This action is responsive to the following communication: the amendment filed on 03/12/2026. This action is made final. Claims 1-4, 7, 13, 15-17, 22-24, and 27-30 are pending in the case. Claims 1, 13, and 22 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claims 23-24 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which they depend. Claim 23 recites limitations that are already recited in claim 1, upon which it depends. Claim 24 is rejected as incorporating the deficiency of claim 23, upon which it depends. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7, 13, 15-17, 22-24, and 27-30 are rejected under 35 U.S.C. 103 as being unpatentable over Starner et al. (US 2015/0268799 A1; hereinafter Starner) in view of Davis et al. (US 2019/0121522 A1; hereinafter Davis).

As to claim 1, Starner discloses an interface displaying method (see ¶ 0005-0006), comprising:

in response to an interface displaying instruction, recognizing a preset human body part (see Fig. 3A and ¶ 0059; the user may direct where the projection of the laser pattern of objects 300 is displayed based on a gaze direction of the user (e.g., a direction that the user is looking). In the example shown in Fig. 3A, the user is looking at a hand 302, and thus the laser pattern of objects 300 may be projected onto the hand 302. In this example, the hand 302 may be considered a display hand, and the laser projection of objects 300 is a number keypad);

determining an interface displaying area on the recognized preset human body part (see ¶ 0109; a location of the hand can be determined; as another example, the processor may be configured to determine an area of the surface, and to instruct the projector to provide the virtual input device onto the area of the surface, such as by widening or narrowing the projection to accommodate an available area; thus, the projection may initially be of a first coverage area, and may subsequently be modified to that of a second coverage area that is less than or greater than the first coverage area (e.g., start off wide and decrease, or start off narrow and increase));

generating and displaying an operation interface according to the interface displaying area (see ¶ 0109; instruct the projector to provide the virtual input device onto the area of the surface. See Figs. 3A-5 and ¶ 0062-0064; the virtual input device is displayed/projected onto the hand 302); and

receiving a clicking operation for the operation interface from a user on the preset human body part (see Figs. 5-6 and ¶ 0065-0066; receiving an input on a virtual input device; a user may use an opposite hand to provide an input by selecting one of the objects of the laser pattern of objects 308. See Figs. 11A-11C and ¶ 0092; selection and/or clicking operation).

Starner does not appear to teach performing an interactive operation in response to the clicking operation for the operation interface. However, Davis is relied upon for teaching these limitations. Specifically, Davis teaches a method for displaying an interface on a user's body part (see Figs. 4A-4B and ¶ 0587), comprising:

generating and displaying an operation interface according to the interface displaying area (see ¶ 0581, 0587; the device detects the user's hands and, based on predetermined or user input, projects 308 a user interface on the user's hand. See Figs. 7A-9C; user interfaces are displayed onto the user's hand/wrist);

receiving a clicking operation for the operation interface from a user on the preset human body part (see ¶ 0587; the user then uses their other hand to manipulate the AGUI user interface using gestures or by manipulating the interface projected on their hand; in this case, the user has positioned their palm so that the AGUI device is able to clearly project a display image on it. See Figs. 10A-10B and ¶ 0592; the user can select a menu item using the other hand. ¶ 0770; the user may touch a virtual control to make a selection); and

performing an interactive operation in response to the clicking operation for the operation interface (see Figs. 10A-10B and ¶ 0592; when the AGUI device detects that the finger has touched the SEARCH menu control 520, it instructs the running application program to display a sub-menu 505. The AGUI device is aware of the size and placement of the user's hand and that it is in the field of view of the user, and selects the user's palm and fingers to display the sub-menu 505 comprising a large display surface 534 and further menu controls 532).

Starner teaches determining the interface displaying area on the preset human body part (see ¶ 0109; determine the location of the surface). Starner does not explicitly disclose that the determination comprises: recognizing a plurality of human body key points of the preset human body part; and determining the interface displaying area corresponding to the plurality of human body key points. However, Davis teaches determining an interface displaying area on the preset human body part comprising: recognizing a plurality of human body key points of the preset human body part; and determining the interface displaying area corresponding to the plurality of human body key points (see ¶ 0581-0582, 0721, 0725, 0735; the mapping system and the intelligence surrounding it identify zones corresponding to a central palm area and finger areas that extend out from the palm area. ¶ 0722; mapping the surface consists of determining the 3-dimensional spatial location of each point of the projected pattern that can be captured in the image. See Figs. 6A-6C and ¶ 0588; the AGUI system maps out the position of the body parts, detects that the user's fingers are together, and determines that it can project a large display surface 330 on the palm of the hand, including fingers. ¶ 0747; processed mapping data recognizes that finger zones include sub-zones separated by joints, and the device can choose to further divide selection content onto specific parts of fingers).

Both references are directed to a method of displaying a user interface onto an identified body part of a user; therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the user interface disclosed in Starner to include the adaptive graphic user interface of Davis, such that the user can make a selection on the displayed interface and perform an interactive operation associated with the selected element, as claimed. One of ordinary skill in the art would have been motivated to make such a combination because of the overlapping subject matter (i.e., displaying a user interface on a user's hand), and because the advantage described in Davis is to allow the user to interface with the system in a more convenient way (Davis: see ¶ 0005) and to better interact with the system (Davis: see ¶ 0561).

As to claim 2, the rejection of claim 1 is incorporated. Starner and Davis further teach wherein, before responding to the interface displaying instruction, the method comprises: detecting a current gesture action of a user; and, if the current gesture action is determined to be a preset gesture action, obtaining the interface displaying instruction (Davis: ¶ 0587, 0581; the device detects the user's hands and, based on predetermined or user input, projects 308 a user interface on the user's hand; the user has positioned their palm so that the AGUI device is able to clearly project a display image on it. ¶ 0597; gestures may be employed by the user to indicate the exact position and size of the area of the table to be used as a display). Thus, combining Starner and Davis would meet the claimed limitations for the same reasons as set forth in claim 1.

As to claim 3, the rejection of claim 1 is incorporated. Starner and Davis further teach wherein the generating and displaying an operation interface according to the interface displaying area comprises: recognizing area size information of the interface displaying area; and generating and displaying the operation interface according to the area size information (Starner: see ¶ 0052, 0108; the display 202 is positioned and sized such that images being displayed appear to be overlaid upon or to "float" in a user's view of the physical world, thus providing an experience in which computer-generated information can be merged with the user's perception of the physical world. To do so, on-board computing system 204 may be configured to analyze video footage that is captured by the video camera 208 to determine what graphics should be displayed and how the graphics should be displayed (e.g., location on the display, size of the graphics, etc.). Davis: see ¶ 0581. ¶ 0036, 0227; illustrating how an AGUI system adapts display content to the shape and size of displays). Thus, combining Starner and Davis would meet the claimed limitations for the same reasons as set forth in claim 1.

As to claim 4, the rejection of claim 3 is incorporated. Starner and Davis further teach wherein the generating and displaying the operation interface according to the area size information comprises: determining scaling ratio information according to the area size information (Starner: see ¶ 0052, 0108; the display 202 is positioned and sized such that images being displayed appear to be overlaid upon or to "float" in a user's view of the physical world, thus providing an experience in which computer-generated information can be merged with the user's perception of the physical world. To do so, on-board computing system 204 may be configured to analyze video footage that is captured by the video camera 208 to determine what graphics should be displayed and how the graphics should be displayed (e.g., location on the display, size of the graphics, etc.). Davis: see ¶ 0581. ¶ 0036, 0227; illustrating how an AGUI system adapts display content to the shape and size of displays. ¶ 0565; the AGUI system may adapt a graphic user interface or other multimedia content to the active display device and/or surface by receiving precise size, shape, pixel, or other dimensional and display specifications from the device and then adapting and optimizing the graphic user interface or other multimedia content for the AGUI display device and/or surface); and scaling, according to the scaling ratio information, a preset standard operation interface to generate and display the operation interface (Davis: see ¶ 0612; the interface adapts in real time; the display content may also be reorganized or resized to fit the updated available surface. ¶ 0725, 0739; content may also dynamically adjust in size, spatial position, and orientation within each visual display sub-area or zone in relation to the spatial position and viewing angle of one or more viewing parties. ¶ 0734; the application consults rules to determine whether this should cause scaling or positioning of the display content to change to compensate for a change in order to maintain a steady display). Thus, combining Starner and Davis would meet the claimed limitations for the same reasons as set forth in claim 1.

As to claim 7, the rejection of claim 1 is incorporated. Starner and Davis further teach wherein the determining the interface displaying area corresponding to the plurality of human body key points comprises: determining an area surrounded by the reference bounding box to be the interface displaying area; or determining, in a reference bounding box and according to a preset shape, an area surrounded by a maximum bounding box to be the interface displaying area; or determining, in the reference bounding box, an area with a preset shape and a preset size to be the interface displaying area (Starner: see ¶ 0052, 0108; the display 202 is positioned and sized such that images being displayed appear to be overlaid upon or to "float" in a user's view of the physical world, thus providing an experience in which computer-generated information can be merged with the user's perception of the physical world. To do so, on-board computing system 204 may be configured to analyze video footage that is captured by the video camera 208 to determine what graphics should be displayed and how the graphics should be displayed (e.g., location on the display, size of the graphics, etc.). Davis: see ¶ 0581, 0036, 0227; illustrating how an AGUI system adapts display content to the shape and size of displays. ¶ 0565; the AGUI system may adapt a graphic user interface or other multimedia content to the active display device and/or surface by receiving precise size, shape, pixel, or other dimensional and display specifications from the device and then adapting and optimizing the graphic user interface or other multimedia content for the AGUI display device and/or surface. ¶ 0612; the interface adapts in real time; the display content may also be reorganized or resized to fit the updated available surface. ¶ 0725, 0739; content may also dynamically adjust in size, spatial position, and orientation within each visual display sub-area or zone in relation to the spatial position and viewing angle of one or more viewing parties. ¶ 0734; the application consults rules to determine whether this should cause scaling or positioning of the display content to change to compensate for a change in order to maintain a steady display. ¶ 0581; the mapping system and the intelligence surrounding it identify zones corresponding to a central palm area and finger areas that extend out from the palm area). Thus, combining Starner and Davis would meet the claimed limitations for the same reasons as set forth in claim 1.

As to claim 23, the rejection of claim 1 is incorporated. Starner and Davis further teach wherein determining the interface displaying area on the preset human body part comprises: recognizing a plurality of human body key points of the preset human body part; and determining the interface displaying area corresponding to the plurality of human body key points (Starner: see ¶ 0109; a location of the hand can be determined; as another example, the processor may be configured to determine an area of the surface, and to instruct the projector to provide the virtual input device onto the area of the surface, such as by widening or narrowing the projection to accommodate an available area; thus, the projection may initially be of a first coverage area, and may subsequently be modified to that of a second coverage area that is less than or greater than the first coverage area (e.g., start off wide and decrease, or start off narrow and increase). Davis: see ¶ 0581-0582, 0721, 0725, 0735; the mapping system and the intelligence surrounding it identify zones corresponding to a central palm area and finger areas that extend out from the palm area. ¶ 0722; mapping the surface consists of determining the 3-dimensional spatial location of each point of the projected pattern that can be captured in the image. See Figs. 6A-6C and ¶ 0588; the AGUI system maps out the position of the body parts, detects that the user's fingers are together, and determines that it can project a large display surface 330 on the palm of the hand, including fingers. ¶ 0747; processed mapping data recognizes that finger zones include sub-zones separated by joints, and the device can choose to further divide selection content onto specific parts of fingers). Thus, combining Starner and Davis would meet the claimed limitations for the same reasons as set forth in claim 1.

As to claim 24, the rejection of claim 1 is incorporated. Starner and Davis further teach wherein determining the interface displaying area corresponding to the plurality of human body key points comprises: determining edge human body key points of the plurality of human body key points; connecting the edge human body key points to obtain a reference bounding box; and determining the interface displaying area according to the reference bounding box (Davis: see Figs. 8A-8C and ¶ 0590-0591; identify surface area 338. ¶ 0721; the size and shape of each surface identified through light and optical depth mapping, imaging, device or sensor networking, and spatial positioning, and the shape, edges, and boundaries of each mapped and identified display surface, may be further calculated in relation to their position within a 2-dimensional or 3-dimensional grid of squares. Each mapped surface may then be organized into zones of one or more visual display sub-areas, and interactive graphic content may then be assigned to each visual display sub-area or zone of visual display sub-areas based on the size, shape, and type of object and/or display surface and the type of content being displayed). Thus, combining Starner and Davis would meet the claimed limitations for the same reasons as set forth in claim 1.

As to claims 13, 15-17, 27, and 28, these claims are directed to an electronic device comprising a processor and a memory for storing executable instructions for the processor, wherein the processor is configured to read the executable instructions from the memory and execute them to implement the interface displaying method claimed in claims 1, 2-4, 23, and 24, respectively; they are therefore rejected under a similar rationale (Starner: see Fig. 1 and ¶ 0036-0045).

As to claims 22, 29, and 30, these claims are directed to a non-transitory computer readable storage medium storing a computer program configured to perform the interface displaying method recited in claims 1, 23, and 24, respectively; they are therefore rejected under a similar rationale (Starner: see Fig. 1 and ¶ 0007, 0067).

Response to Arguments

Applicant's arguments filed 03/12/2026 have been fully considered, but they are not persuasive.

Applicant argued that claim 1 requires the device to affirmatively recognize the preset human body part (e.g., by recognizing human body key points using a pre-trained neural network model), that Starner's gaze-directed projection mechanism does not involve recognizing key points on a human body part, nor does it determine an interface displaying area based on such key points, and that this key-point-based recognition and area determination is a specific technical approach that is entirely absent from Starner (see Remarks, pages 8-9). In response, the examiner respectfully disagrees and notes that the features upon which applicant relies (i.e., recognizing human body key points using a pre-trained neural network model) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993) (emphasis added). In this case, all that is recited in claim 1 is "recognizing a plurality of human body key points of the preset human body part and determining the interface displaying area corresponding to the plurality of human body key points". The claim does not recite any specific technical algorithm for recognizing a plurality of body key points. In addition, the disputed features are rejected based on the combined teaching of Starner and Davis; one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

Applicant argued that Davis's zone-mapping approach is a different technical paradigm from the claimed key-point-based recognition and interface area determination (see Remarks, page 9). In response, the examiner respectfully disagrees and notes that claim 1 only requires a broad mechanism for recognizing a plurality of human body key points of the preset human body part; claim 1 does not recite a pre-trained neural network model being used in the recognizing mechanism. In this case, Davis discloses in Figs. 6A-6C and ¶ 0588 that the AGUI system maps out the position of the body parts, detects that the user's fingers are together, and determines that it can project a large display surface 330 on the palm of the hand, including fingers. Also, in ¶ 0747, Davis discloses that processed mapping data recognizes that finger zones include sub-zones separated by joints, and that the device can choose to further divide selection content onto specific parts of fingers. The finger zones that include sub-zones separated by joints read on the recognized human body key points, since finger zones are key points of the hand body part.

Applicant argued that there is no motivation to combine Starner and Davis (see Remarks, page 10). In response, the examiner respectfully disagrees and notes that "as long as some motivation or suggestion to combine the references is provided by the prior art taken as a whole, the law does not require that the references be combined for the reasons contemplated by the inventor." In re Beattie, 974 F.2d 1309, 1312 (Fed. Cir. 1992) (citations omitted). In addition, the examiner recognizes that obviousness can only be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so, found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), and In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992). For at least these reasons, the examiner maintains that the combined teaching of Starner and Davis renders obvious the disputed features.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUYETLIEN T TRAN, whose telephone number is (571) 270-1033. The examiner can normally be reached M-F, 8:00 AM - 8:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Irete (Fred) Ehichioya, can be reached at 571-272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TUYETLIEN T TRAN/
Primary Examiner, Art Unit 2179
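
Stepping back from the legal argument, the disputed limitations describe a concrete pipeline: recognize key points on a body part, connect the edge key points into a reference bounding box (claim 24), and scale a preset standard interface by a ratio derived from the area size (claim 4). The sketch below illustrates one way such steps could work; the axis-aligned box, the standard-interface dimensions, and all names are illustrative assumptions, not the application's disclosed algorithm or either reference's implementation.

```python
# Illustrative sketch only; key points are assumed to be 2D pixel
# coordinates from some hand-tracking model (source unspecified here).

def reference_bounding_box(key_points):
    """Connect the edge key points into an axis-aligned reference box
    (one simple reading of claim 24's 'edge human body key points')."""
    xs = [x for x, _ in key_points]
    ys = [y for _, y in key_points]
    return min(xs), min(ys), max(xs), max(ys)  # (left, top, right, bottom)

def scaled_interface(box, std_w=200.0, std_h=300.0):
    """Scale a preset 'standard operation interface' to fit inside the box,
    preserving aspect ratio (one reading of claim 4's scaling ratio)."""
    left, top, right, bottom = box
    ratio = min((right - left) / std_w, (bottom - top) / std_h)
    return std_w * ratio, std_h * ratio

palm_points = [(120, 80), (200, 95), (180, 210), (110, 190), (150, 140)]
box = reference_bounding_box(palm_points)
print(box, scaled_interface(box))  # (110, 80, 200, 210) and a ~0.43x fit
```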

Prosecution Timeline

May 18, 2023
Application Filed
Dec 14, 2023
Non-Final Rejection — §103, §112
Mar 04, 2024
Response Filed
Jun 14, 2024
Final Rejection — §103, §112
Sep 18, 2024
Request for Continued Examination
Sep 20, 2024
Response after Non-Final Action
Sep 25, 2024
Non-Final Rejection — §103, §112
Dec 26, 2024
Response Filed
Feb 19, 2025
Final Rejection — §103, §112
Apr 24, 2025
Response after Non-Final Action
Aug 18, 2025
Request for Continued Examination
Aug 28, 2025
Response after Non-Final Action
Dec 10, 2025
Non-Final Rejection — §103, §112
Mar 12, 2026
Response Filed
Apr 07, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12602153
SIGNAL TRACKING AND OBSERVATION SYSTEM AND METHOD
2y 5m to grant • Granted Apr 14, 2026
Patent 12586104
OBJECT DISPLAY METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER READABLE STORAGE MEDIUM
2y 5m to grant • Granted Mar 24, 2026
Patent 12585376
SYSTEMS AND METHODS OF REDUCING OBSTRUCTION BY THREE-DIMENSIONAL CONTENT
2y 5m to grant • Granted Mar 24, 2026
Patent 12585377
SYSTEM AND METHOD FOR HANDLING OVERLAPPING OBJECTS IN VISUAL EDITING SYSTEMS
2y 5m to grant • Granted Mar 24, 2026
Patent 12573257
DIGITAL JUKEBOX DEVICE WITH IMPROVED USER INTERFACES, AND ASSOCIATED METHODS
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 67%
With Interview: 99% (+33.0%)
Median Time to Grant: 3y 10m
PTA Risk: High
Based on 637 resolved cases by this examiner. Grant probability derived from career allow rate.
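
The 99% with-interview projection is consistent with simply adding the +33.0% interview lift to the 67% baseline and capping the result; whether the tool actually uses this additive rule is an assumption.

```python
def with_interview_probability(base: float, lift: float, cap: float = 0.99) -> float:
    """Assumed additive rule: 0.673 + 0.330 = 1.003, capped to 0.99."""
    return min(base + lift, cap)

print(with_interview_probability(0.673, 0.330))  # 0.99
```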
