Prosecution Insights
Last updated: April 19, 2026
Application No. 19/112,806

INTERACTION METHOD, APPARATUS AND DISPLAY DEVICE

Non-Final OA (§102, §112)
Filed: Mar 18, 2025
Examiner: ZHENG, XUEMEI
Art Unit: 2629
Tech Center: 2600 — Communications
Assignee: Goertek Inc.
OA Round: 1 (Non-Final)
Grant Probability: 85% (Favorable)
OA Rounds: 1-2
To Grant: 2y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (above average; 598 granted / 707 resolved; +22.6% vs TC avg)
Interview Lift: +14.0% (moderate lift, across resolved cases with an interview)
Avg Prosecution: 2y 1m (fast prosecutor; 23 currently pending)
Total Applications: 730 across all art units (career history)

Statute-Specific Performance

§101: 1.0% (-39.0% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§102: 23.0% (-17.0% vs TC avg)
§112: 25.8% (-14.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 707 resolved cases.

Office Action

§102 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

The amendment filed on 3/18/2025 has been entered. In the amendment, Applicant amended claims 1-10. Currently, claims 1-10 are pending.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the feature "the display image of the first device corresponds to a display interface of the second device" in claim 2 must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Interpretation

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "first acquiring module", "transforming module", "generating module" and "first determining module" in claim 6; "second acquiring module" and "executing module" in claim 7; and "communication module" in claim 10.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitations:

first acquiring module: Fig. 10, first acquiring module 1001 (no structure disclosed); [00111] (no structure disclosed)
transforming module: Fig. 10, transforming module 1002 (no structure disclosed); [00112] (no structure disclosed)
generating module: Fig. 10, generating module 1003 (no structure disclosed); [00113] (no structure disclosed)
first determining module: Fig. 10, first determining module 1004 (no structure disclosed); [00114] (no structure disclosed)
second acquiring module: [00116] (no structure disclosed)
executing module: [00117] (no structure disclosed)
communication module: Fig. 11, communication module 1103 (no structure disclosed)

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 6, the elements "first acquiring module", "transforming module", "generating module" and "first determining module" have been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, for each of "first acquiring module", "transforming module", "generating module" and "first determining module", there is no clear supporting structural or material description in the specification performing the corresponding function (see the "Claim Interpretation" section for identified relevant portions). For rebuttal of this rejection, Applicant must point out supporting portion(s) of the specification that cover the corresponding structure that achieves the claimed function, and equivalents thereof. Applicant may identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action. Claims 7-9 are rejected because they depend on claim 6.

Further regarding claim 7, the elements "second acquiring module" and "executing module" have been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, for each of "second acquiring module" and "executing module", there is no clear supporting structural or material description in the specification performing the corresponding function (see the "Claim Interpretation" section for identified relevant portions). For rebuttal of this rejection, Applicant must point out supporting portion(s) of the specification that cover the corresponding structure that achieves the claimed function, and equivalents thereof. Applicant may identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.

Regarding claim 10, the element "communication module" has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, for "communication module", there is no clear supporting structural or material description in the specification performing the corresponding function (see the "Claim Interpretation" section for the identified relevant portion). For rebuttal of this rejection, Applicant must point out supporting portion(s) of the specification that cover the corresponding structure that achieves the claimed function, and equivalents thereof.
Applicant may identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-10 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ohashi et al. (WO 2022176450; a machine translation is used in this examination).

Regarding claim 1, Ohashi teaches an interaction method (abstract), comprising: acquiring posture information (Fig. 9: steps S409, S412; Fig. 12: posture information interpreted as orientation information of operating terminal 20; page 16, 2nd paragraph: "Next, the operating terminal 20 … transmits the position and orientation information of the operating terminal 20 and the operation information to the AR display device 10 (step S409)"; page 16, 3rd paragraph: "Next, the AR display device 10 determines processing for the virtual object according to the position/orientation information and the operation information of the operation terminal 20 (step S412)"; page 17, 7th paragraph: "as shown in FIGS. 12 and 13, the position of the virtual object 51b in the selected state can be changed by changing the orientation of the operation terminal 20 up, down, left, right, or moving it horizontally while the user keeps tapping"; Examiner's Note: the orientation/posture information of the operating terminal 20 must have been acquired by operating terminal 20 in order for the operating terminal 20 to achieve transmission of the position and orientation information to AR display device 10) of a second device (Figs. 1-5, 7, 9-18: operating terminal 20, which may be a dedicated controller, a wearable device worn on a user's hand or foot, or a smart phone (a general-purpose communication terminal) according to the section "Operation terminal 20" on page 4), wherein the current posture information is configured to characterize a first direction (Figs. 3 and 12: first direction interpreted as orientation of operating terminal 20) representing an orientation of the second device; transforming the first direction to obtain a second direction (Figs. 3 and 12: second direction represented by indication image L, which is inherently determined by transforming from the orientation of operation terminal 20); generating a virtual identifier (Figs. 3 and 12: indication image L) according to the second direction, wherein the virtual identifier extends along the second direction and points to a display image (Figs. 3 and 12: display image including virtual objects 50a-50d) of a first device (Fig. 1: AR display device 10); and determining an intersection point (Figs. 3, 12: exemplary intersection point where indication image L intersects with a virtual object) between the virtual identifier and the display image within a preset first coordinate system (page 4, last paragraph: "the AR display device 10 or the operation terminal 20 transmits to the position estimation server 30 the captured image of the surroundings (or the information of the feature points extracted from the captured image), geographic coordinate information (latitude/longitude/altitude information) and attitude information (for example, azimuth), and the corresponding virtual space position coordinate information (xyz coordinates) and attitude information (for example, rotation matrix) can be obtained"; page 5, 3rd paragraph: "virtual object related information (virtual object ID, 3D image data, placed geographic coordinate information and posture information, sound data, movement of the virtual object, etc. defined scripts, etc.)", "information on virtual objects is also handled as information on locations (virtual objects are fixed at locations in real space by geographic coordinate information)"), and displaying a preset icon (Figs. 3, 12: highlighted virtual object) at the intersection point.

Regarding claim 2, Ohashi further teaches the method according to claim 1, further comprising activating a wireless streaming (page 7, section "Configuration example": "The AR display device 10 and the operation terminal 20 are connected for communication by wire or wirelessly, and transmit and receive data"), where the display image of the first device corresponds to a display interface of the second device (Fig. 3).

Regarding claim 3, Ohashi further teaches the method according to claim 1, wherein after said "determining an intersection point between the virtual identifier and the display image within a preset first coordinate system, and displaying a preset icon at the intersection point", the method further comprises: acquiring first location information (Fig. 3: location information of virtual object 50c; Fig. 12: location information of virtual object 50b) of the intersection point upon receiving a control instruction (Figs. 3, 12: control instruction from operating terminal 20); and executing an interaction event (Fig. 12: exemplary interaction event of moving virtual object 50b) triggered by the intersection point according to the first location information.

Regarding claim 4, Ohashi further teaches the method according to claim 1, wherein said "acquiring posture information of a second device" comprises: acquiring posture variation information (Fig. 12: posture variation of operating terminal 20) and initial posture information (Figs. 3, 12: starting posture information of operating terminal 20 for selecting a virtual object) of the second device, wherein the initial posture information is configured to characterize posture information of the second device being in a preset initial direction (Figs. 3, 12: preset initial direction according to a virtual object to be selected), and the posture variation information is configured to characterize posture information of variations of the second device relative to the initial direction (Fig. 12: target location for selected virtual object); and determining the posture information according to the posture variation information and the initial posture information (Fig. 12).

Regarding claim 5, Ohashi further teaches the method according to claim 1, wherein said "transforming the first direction to obtain a second direction" comprises: transforming the first direction according to a preset transformation relationship to obtain the second direction (Fig. 3: necessary preset transformation relationship involved to derive indication image L corresponding to orientation of operating terminal 20); wherein the preset transformation relationship is a correspondence between a coordinate system (Fig. 1: position estimation server 30 provides position information (geographic coordinate information) of the current location of AR display device 10) of the first device in use and a coordinate system (Fig. 1: position estimation server 30 provides position information (geographic coordinate information) of the current location of operating terminal 20) of the second device in use.

Claim 6 is rejected for substantially the same rationale applied to claim 1 (Note: "first acquiring module", "transforming module", "generating module" and "first determining module" are not provided with specific structures by Applicant). Claim 7 is rejected for substantially the same rationale applied to claim 2 (Note: "second acquiring module" and "executing module" are not provided with specific structures by Applicant). Claim 8 is rejected for substantially the same rationale applied to claim 4 (Note: "first acquiring module" is not provided with specific structures by Applicant). Claim 9 is rejected for substantially the same rationale applied to claim 5 (Note: "transforming module" is not provided with specific structures by Applicant).

Regarding claim 10, Ohashi further teaches a display device (Figs. 1, 4: AR display device 10), comprising: a communication module (Fig. 4: communication unit 110); a memory (Fig. 4: storage section 160) for storing executable computer instructions; and a processor (Fig. 4: control section 120), communicatively coupled to the communication module and the memory, for executing the interaction method according to claim 1 under the executable computer instructions stored in the memory; wherein the communication module is configured for establishing a communication connection with an electronic device (Figs. 1, 4: operating terminal 20).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 2024/0029377 by O'Leary et al. teaches, in Figs. 7A-7N, three-dimensional environments including virtual objects that are controlled by input device 7024. US Patent No. 11,861,136 by Faulkner et al. teaches, in Figs. 5A1-5A48, displaying a view of at least a portion of a simulated three-dimensional space and a view of a user interface object located within the simulated three-dimensional space, the user interface object being a representation of a computing device that has a non-immersive display environment that provides access to a plurality of different applications, and a pose of the user interface object in the simulated three-dimensional space corresponding to a pose of the input device in a physical space surrounding the input device. CN 115617164 A by the same Applicant discloses a related technique to this instant application. CN 115576419 A by the same Applicant discloses a related technique to this instant application.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XUEMEI ZHENG, whose telephone number is (571) 272-1434. The examiner can normally be reached Monday-Friday, 9:30 am-6:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benjamin Lee, can be reached at 571-272-2963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center, and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/XUEMEI ZHENG/
Primary Examiner, Art Unit 2629

Prosecution Timeline

Mar 18, 2025
Application Filed
Jan 24, 2026
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12596441: Chinese Character Input Method, System and Keyboard (granted Apr 07, 2026; 2y 5m to grant)
Patent 12572318: SYSTEMS AND METHODS FOR DYNAMICALLY SHARING MEDIA BASED ON CONTACT PROXIMITY, GROUP PARTICIPATION, OR EVENT (granted Mar 10, 2026; 2y 5m to grant)
Patent 12563939: DISPLAY SUBSTRATE AND DISPLAY DEVICE (granted Feb 24, 2026; 2y 5m to grant)
Patent 12554140: POSITIONING, STABILISING, AND INTERFACING STRUCTURES AND SYSTEM INCORPORATING SAME (granted Feb 17, 2026; 2y 5m to grant)
Patent 12554136: COLOR CORRECTION FOR XR DISPLAY (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview: 99% (+14.0%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 707 resolved cases by this examiner. Grant probability derived from career allow rate.
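The headline figures above can be reproduced from the raw career counts shown in the Examiner Intelligence card (598 granted / 707 resolved, +14.0 point interview lift). A minimal sketch of that arithmetic follows; the rounding behavior and the 100% cap are assumptions, since the tool's exact formulas are not disclosed:

```python
# Sketch: derive the dashboard's grant-probability figures from raw counts.
granted = 598            # career grants (598 granted / 707 resolved)
resolved = 707           # career resolved cases
interview_lift = 14.0    # percentage-point lift with an interview

# Career allow rate: 100 * 598 / 707 = 84.58..., rounded to 85
allow_rate = round(100 * granted / resolved)

# Interview-adjusted probability, capped at 100 (assumed additive model)
with_interview = min(allow_rate + interview_lift, 100)

print(f"Grant probability: {allow_rate}%")          # Grant probability: 85%
print(f"With interview: {with_interview:.0f}%")     # With interview: 99%
```

Note the additive-lift model is an assumption; the lift could equally be applied only to the subset of cases that had an interview.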
