Prosecution Insights
Last updated: April 19, 2026
Application No. 19/245,218

INTERACTION METHOD, AUTOSTEREOSCOPIC DEVICE, AND STORAGE MEDIUM

Status: Non-Final OA (§102)
Filed: Jun 20, 2025
Examiner: MISHLER, ROBIN J
Art Unit: 2628
Tech Center: 2600 — Communications
Assignee: Lenovo (Beijing) Limited
OA Round: 1 (Non-Final)
Grant Probability: 69% (Favorable)
OA Rounds: 1-2
To Grant: 2y 5m
With Interview: 75%

Examiner Intelligence

Career Allow Rate: 69%, above average (488 granted / 707 resolved; +7.0% vs TC avg)
Interview Lift: +5.9% among resolved cases with interview (a moderate lift)
Avg Prosecution: 2y 5m typical timeline; 28 applications currently pending
Total Applications: 735 across all art units
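
As a quick sanity check, the headline figures above are reproducible with simple arithmetic. A minimal Python sketch, assuming the dashboard's "Grant Probability" is just grants divided by resolved cases and "With Interview" adds the reported lift (the vendor's actual model is not published):

```python
# Reproduces the displayed figures from the raw counts above.
# Assumption: grant probability = career allow rate, and the
# interview-adjusted figure simply adds the reported +5.9% lift.

granted, resolved = 488, 707

career_allow_rate = granted / resolved               # 0.6902...
interview_lift = 0.059                               # reported lift among resolved cases with interview
with_interview = career_allow_rate + interview_lift  # 0.7492...

print(f"Career allow rate: {career_allow_rate:.1%}")  # 69.0% -> shown as 69%
print(f"With interview:    {with_interview:.1%}")     # 74.9% -> shown as 75%
```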

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§103: 56.4% (+16.4% vs TC avg)
§102: 35.2% (-4.8% vs TC avg)
§112: 4.6% (-35.4% vs TC avg)
Deltas are relative to Tech Center average estimates. Based on career data from 707 resolved cases.
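
Each row pairs an examiner rate with a delta versus the Tech Center average, so the implied TC baselines can be back-calculated. A small sketch (figures copied from above; the subtraction is an assumption about how the deltas are computed, not vendor code) shows that all four deltas imply the same 40.0% baseline, consistent with the "estimate" label:

```python
# Back-calculates the implied Tech Center average for each statute:
# implied_tc_avg = examiner_rate - delta_vs_tc_avg.
# Assumption: deltas are simple differences in percentage points.

statute_stats = {            # statute: (examiner rate, delta vs TC avg)
    "101": (0.019, -0.381),
    "103": (0.564, +0.164),
    "102": (0.352, -0.048),
    "112": (0.046, -0.354),
}

for statute, (rate, delta) in statute_stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")

# Every row prints "implied TC avg 40.0%", suggesting a flat
# estimated baseline rather than per-statute TC data.
```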

Office Action (Non-Final, §102)

DETAILED ACTION

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Klug (US 2008/0231926).

Regarding claim 1, Klug discloses an interaction method, applied to an autostereoscopic device (see 110, fig. 1 and para. 19), comprising: acquiring first indication information (location and movement of glove 140 in para. 42, 29) sent by an accessory device (accessory device including glove 140 and camera/sensor 102 in fig. 1 and see para. 56, 59, wherein the camera senses the movement of the glove) interacting with the autostereoscopic device (para. 39), wherein the first indication information includes at least one of following information: left-right movement information in a horizontal direction, up-down movement information in a vertical direction, or forward-backward movement information in a depth direction (para. 42, 59, 29; wherein an object is moveable in all directions including up, down, left, right, forward and backward); and based on the first indication information, adjusting a display position of a target application (222, fig. 2-3) in a three-dimensional display mode of the autostereoscopic device (para. 59, 62, 30; wherein gesture inputs move the 3D objects on the display).

Regarding claim 2, Klug discloses wherein adjusting the display position of the target application in the three-dimensional display mode of the autostereoscopic device includes: based on the first indication information, determining that the accessory device makes at least one of following movements: an up-down movement in the vertical direction, a left-right movement in the horizontal direction, or a forward-backward movement in the depth direction (para. 42, 59); and performing at least one of following adjustments in the three-dimensional display mode: adjusting a vertical up-down position of at least one target application of the target application (para. 62), adjusting a display viewing angle of at least one target application of the target application (para. 62), or adjusting a depth-wise front-back position of at least one target application of the target application (para. 59, 62).

Regarding claim 3, Klug discloses further comprising: determining that the accessory device is triggered to rotate (see circling gesture in para. 62), or a user touches and slides on a surface of the accessory device; and performing at least one of following adjustments in the three-dimensional display mode: adjusting a vertical up-down position of at least one target application of the target application (para. 59), adjusting a display viewing angle of at least one target application of the target application (para. 62), or adjusting a depth-wise front-back position of at least one target application of the target application (para. 62, 59).

Regarding claim 4, Klug further discloses further comprising: determining an application matching a gaze point of a user as the target application; or when it is determined that a gesture of the user matches a first preset gesture (para. 47, 76), determining the target application based on distance information between the accessory device and each application displayed on an interface in the three-dimensional display mode (para. 75-76); or in response to a target instruction sent by the accessory device (para. 75-76), determining the target application based on the distance information between the accessory device and each application displayed on the interface in the three-dimensional display mode (para. 75-76), wherein the target instruction is generated by the user touching a preset touch area of the accessory device (para. 75-76, 31).

Regarding claim 5, Klug discloses further comprising: using a camera device of the autostereoscopic device to obtain the gaze point of the user or the gesture of the user; and/or acquiring sensor data information of the accessory device to determine the gesture of the user based on the sensor data information (para. 31, 75-76).

Regarding claim 6, Klug discloses further comprising: acquiring second indication information sent by the accessory device (e.g. a two handed gesture in para. 62, 29); and when it is determined based on the second indication information that the accessory device makes a forward-backward movement in the depth direction (para. 62, 76, 59), performing a zoom-in operation or a zoom-out operation on the target application (see zoom in para. 62).

Regarding claim 7, Klug discloses further comprising: when it is determined that the gesture of the user matches a second preset gesture (para. 62, 29), displaying a target model corresponding to the target application in three dimensions (para. 66; wherein an object is grabbed and moved); acquiring third indication information of the accessory device (see release in para. 66); and based on the third indication information, operating the target model corresponding to the target application (para. 66, wherein the object is released).

Regarding claim 8, Klug discloses wherein operating the target model corresponding to the target application includes: determining that the gesture of the user matches a third preset gesture (para. 66, 62), and when it is determined, based on the third indication information (para. 66), that the accessory device makes an up-down movement in the vertical direction, performing touch interaction on the target model (para. 66); determining that the gesture of the user matches a fourth preset gesture (para. 62), and when it is determined, based on the third indication information (para. 62), that movement information of the accessory device matches a preset rotation movement information (para. 62), performing a rotation operation on the target model (para. 62); and determining that the gesture of the user matches a fifth preset gesture (para. 62, 66), and when it is determined, based on the third indication information (para. 62, 66), that the accessory device makes a left-right movement in the horizontal direction (para. 62, 66), performing a zoom-in or zoom-out operation on the target model accordingly (para. 62, 66; wherein e.g. any described gesture inputs can trigger any described object transformations).

Regarding claim 9, Klug discloses further comprising: in response to the target instruction sent by the accessory device (para. 66, 62), exiting a three-dimensional display of the target model (para. 66, 62; wherein the 3D object is de-selected or released).

Regarding claim 10, Klug discloses further comprising: acquiring fourth indication information sent by the accessory device (para. 62, 66); and based on the fourth indication information, adjusting an interface display in the three-dimensional display mode (para. 62, 66, 30).

Regarding claim 11, Klug discloses before acquiring the first indication information sent by the accessory device interacting with the autostereoscopic device, further comprising: in response to a target instruction sent by the accessory device (para. 45, 30), entering the three-dimensional display mode (fig. 2), wherein the target instruction is generated by a user touching a preset touch area of the accessory device (see gun pose in para. 45, used for object selection).

Claims 12-19 are rejected for the same reasons stated for claims 1-8, respectively. See above rejections. Claim 20 is rejected for the same reasons stated for claim 1. See above rejection.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBIN J MISHLER whose telephone number is (571)270-7251. The examiner can normally be reached 8:00-5:00 M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, NITIN PATEL, can be reached at (571)272-7677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROBIN J MISHLER/
Primary Examiner, Art Unit 2628
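
For readers tracking the claim mapping above, here is a minimal Python sketch of the claim 1 method as the rejection characterizes it: acquire movement information sent by an accessory device, then adjust a target application's display position in the 3D display mode. All names and types are hypothetical; this paraphrases the quoted claim language and is not any party's implementation:

```python
# Hypothetical illustration of the two steps of claim 1; names are
# invented for clarity and do not come from the application or Klug.

from dataclasses import dataclass

@dataclass
class IndicationInfo:
    """First indication information: at least one of left-right (dx),
    up-down (dy), or forward-backward/depth (dz) movement."""
    dx: float = 0.0
    dy: float = 0.0
    dz: float = 0.0

@dataclass
class TargetApplication:
    x: float  # horizontal position
    y: float  # vertical position
    z: float  # depth position in the 3D display mode

def adjust_display_position(app: TargetApplication,
                            info: IndicationInfo) -> TargetApplication:
    """Second step: adjust the display position based on the first
    indication information (cf. Klug's gesture-driven object moves)."""
    return TargetApplication(app.x + info.dx, app.y + info.dy, app.z + info.dz)

# First step: acquire indication information sent by the accessory
# device (in Klug, camera-sensed glove movement, para. 42, 59).
info = IndicationInfo(dx=0.1, dz=-0.2)
moved = adjust_display_position(TargetApplication(0.0, 0.0, 1.0), info)
```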

Prosecution Timeline

Jun 20, 2025: Application Filed
Feb 23, 2026: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598714: LOCKING STRUCTURE AND ELECTRONIC DEVICE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596453: TOUCH SENSOR AND A METHOD FOR DETECTING A USER'S TOUCH (granted Apr 07, 2026; 2y 5m to grant)
Patent 12585351: TOUCH DISPLAY DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12568688: DISPLAY DEVICE AND TILED DISPLAY DEVICE (granted Mar 03, 2026; 2y 5m to grant)
Patent 12567184: VIDEO PROCESSING METHOD, APPARATUS AND DEVICE (granted Mar 03, 2026; 2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 75% (+5.9%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 707 resolved cases by this examiner; grant probability is derived from the career allow rate.

Free tier: 3 strategy analyses per month