Prosecution Insights
Last updated: April 19, 2026
Application No. 19/217,930

INTERACTING WITH A MACHINE USING GESTURES IN FIRST AND SECOND USER-SPECIFIC VIRTUAL PLANES

Non-Final OA — §102, §103, §DP
Filed: May 23, 2025
Examiner: WILSON, DOUGLAS M
Art Unit: 2622
Tech Center: 2600 — Communications
Assignee: Sim Ip Hxr LLC
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability with Interview: 91%

Examiner Intelligence

Career Allow Rate: 75% (320 granted / 427 resolved) — +12.9% vs TC avg, above average
Interview Lift: +16.1% on resolved cases with interview — a strong lift
Typical Timeline: 2y 9m average prosecution; 25 applications currently pending
Career History: 452 total applications across all art units

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§103: 56.5% (+16.5% vs TC avg)
§102: 22.5% (-17.5% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 427 resolved cases.
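Read as examiner rate minus Tech Center average, the four deltas above are mutually consistent. The minimal Python sketch below recovers the implied baseline; the subtraction formula is an assumption about how the tool computes the delta, not a documented method:

```python
# Hypothetical reconstruction: assume delta = examiner rate - TC average.
rates  = {"§101": 1.9,   "§103": 56.5, "§102": 22.5,  "§112": 14.4}   # examiner, %
deltas = {"§101": -38.1, "§103": 16.5, "§102": -17.5, "§112": -25.6}  # vs TC avg

for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]  # implied Tech Center average
    print(f"{statute}: examiner {rate:.1f}% vs implied TC avg {tc_avg:.1f}%")
# Every implied average comes out to 40.0%, i.e., one common baseline estimate.
```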

Office Action

§102 §103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 2-21 are pending.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claim 14 is rejected on the ground of nonstatutory double patenting as being unpatentable over Claim 14 of U.S. Patent No. 9,916,009.
Although the claims at issue are not identical, they are not patentably distinct from each other because MPEP 804.II.B.2 states: A nonstatutory double patenting rejection is appropriate where a claim in an application under examination claims subject matter that is different, but not patentably distinct, from the subject matter claimed in a prior patent or a copending application. The claim under examination is not patentably distinct from the reference claim(s) if the claim under examination is anticipated by the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 1052, 29 USPQ2d 2010, 2015-16 (Fed. Cir. 1993).

Examined Application — U.S. Patent Application 19/217,930, Claim 14:

A system comprising: a processor coupled to memory, the memory being loaded with computer instructions that, upon execution on the processor, cause the processor to implement operations comprising: detecting a first state of a first portion of an object based, at least in part, on at least one of a position of a first plane in a three-dimensional (3D) space or a position of the first portion of the object; detecting a second state of a second portion of the object based, at least in part, on at least one of a position of a second plane in the 3D space or a position of the second portion of the object; the first plane being defined based, at least in part, on positional information of the first portion of the object; the second plane being defined based, at least in part, on positional information of the second portion of the object; and determining a first input from the object based, at least in part, on at least one of the first state or the second state.

Reference — U.S. Patent 9,916,009, Claim 14:

A system, including: an image-capture device including at least one camera; an image analyzer coupled to the at least one camera, the image analyzer being configured to: track using a 3D sensor user movements including sensing positional information of a portion of a hand in a monitored region of space monitored by the 3D sensor; using the sensed positional information of the portion of the hand, define a plurality of distinct user-specific virtual planes, including at least a first user-specific virtual plane defined in space relative to a position of, and corresponding to, a first finger of the hand and a second user-specific virtual plane defined in space relative to a position of, and corresponding to, a second finger of the hand, in the monitored region of space; detect a first finger state of the first finger relative to the corresponding first user-specific virtual plane and a second finger state of the second finger relative to the corresponding second user-specific virtual plane, wherein a finger state for a finger relative to the corresponding user-specific virtual plane defined for the finger is one of: the finger moving closer or further away from the corresponding user-specific virtual plane, and the finger moving on or against the corresponding user-specific virtual plane; determine an input gesture made by the portion of the hand based on the first finger state and the second finger state; using the sensed positional information of the hand, determine from a plurality of zones defined for the monitored region, a zone in which the portion of the hand is present at the time the first finger state and the second finger state are detected; interpret the input gesture as a command using the input gesture and the zone determined from the position of the hand; and provide the command to a machine for executing an action appropriate to the command.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 2-5, 7-11, and 13-21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Iwamura (US Patent 6,498,628).

Regarding Claims 2, 14, and 18 (New), Iwamura teaches a method, a system, and a non-transitory computer readable storage medium, comprising: a processor [fig. 19 @16] coupled to memory [fig. 19 @14], the memory being loaded with computer instructions that, upon execution on the processor, cause the processor to implement operations comprising [col 6 lines 60-63, “This hand pattern recognition requires so small an amount of calculations that software can handle it”]: detecting [col 7 lines 2-9, “The image 25 is stored in RAM 14 … when the hand 20 is located, CPU 16 compares the image in the located region with the stored hand pattern and finds the position where both of them match best”] a first state [hand recognized in the x-y plane] of a first portion of an object [user hand (fig. 20 @20)] based, at least in part, on at least one of a position of a first plane [fig. 20 @x-y plane] in a three-dimensional (3D) space [col 8 lines 38-40, “First, the user 18 does the predetermined hand motion, for example, a diagonal motion. As mentioned above, by checking the active region, the hand 20 is located in the X-Y plane”] or a position of the first portion of the object [alternate limitation not addressed], the first plane [x-y plane] being defined based, at least in part, on positional information of the first portion [hand image detected by fig. 16 @13] of the object [col 7 lines 2-9]; detecting a second state [hand movement in z-axis stopped] of a second portion [hand image detected by fig. 20 @31] of the object based, at least in part, on at least one of a position of a second plane [Y-Z plane] in the 3D space or a position of the second portion of the object [alternate limitation not addressed], the second plane [Y-Z plane] being defined based, at least in part, on positional information of the second portion [hand image detected by fig. 20 @31] of the object [col 8 lines 41-43, “CPU 16 monitors the video signal from camera 31 and locates the hand position in the Y-Z plane”]; and determining a first input [col 8 lines 49-54, “in the case that the hand 20 moves toward the CRT 9, the active region moves from the position 200 to the position 202 in FIG. 22. In this way, CPU 16 can also obtain the hand motion on the Z-axis”] from the object [user hand] based, at least in part, on at least one of the first state or the second state [col 8 lines 60-63, “This Z-axis motion is detected by CPU 16 as a selection of that button icon and the CPU 16 executes the task associated with the button icon”].
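Both the examined claim and Iwamura's cited disclosure turn on the same mechanic: classify a tracked portion of an object by its position and motion relative to a plane that is itself defined from the object's positional information, then combine the per-portion states into an input. The following is a minimal Python sketch of that plane-state logic; every name, threshold, and coordinate is hypothetical, chosen only to illustrate the claim language, not taken from the application or the reference.

```python
import numpy as np

def define_plane(anchor, normal):
    """A user-specific virtual plane: an anchor point plus a unit normal."""
    n = np.asarray(normal, dtype=float)
    return np.asarray(anchor, dtype=float), n / np.linalg.norm(n)

def signed_distance(point, plane):
    """Signed distance of a tracked point (e.g., a fingertip) from the plane."""
    anchor, normal = plane
    return float(np.dot(np.asarray(point, dtype=float) - anchor, normal))

def finger_state(prev_pt, curr_pt, plane, touch_eps=0.01):
    """Classify the state recited in the claims: moving closer to / away from
    the plane, or moving on/against it (within a small tolerance)."""
    d_prev, d_curr = signed_distance(prev_pt, plane), signed_distance(curr_pt, plane)
    if abs(d_curr) <= touch_eps:
        return "on_plane"
    return "approaching" if abs(d_curr) < abs(d_prev) else "receding"

# Two planes, each defined from positional information of a different portion.
plane1 = define_plane(anchor=[0.0, 0.0, 0.30], normal=[0, 0, 1])
plane2 = define_plane(anchor=[0.05, 0.0, 0.30], normal=[0, 0, 1])

s1 = finger_state([0.0, 0.0, 0.40], [0.0, 0.0, 0.35], plane1)
s2 = finger_state([0.05, 0.0, 0.31], [0.05, 0.0, 0.305], plane2)
gesture = (s1, s2)              # e.g., ("approaching", "on_plane")
print("first input:", gesture)  # the combined states determine the input
```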
Regarding Claim 3 (New), Iwamura teaches the method of Claim 2, comprising interpreting the first input to be a command to zoom-in [col 8 line 64 to col 9 line 2, “Z-axis motion may also be used for zooming … CPU 16 sends a command to OSD 5 or video decoder 6 and makes them zoom in the image on the CRT 9”].

Regarding Claim 4 (New), Iwamura teaches the method of Claim 2, comprising interpreting the first input to be a command to zoom-out [col 8 line 64 to col 9 line 2, “When the hand 20 goes away from CRT 9, the image is zoomed out”].

Regarding Claim 5 (New), Iwamura teaches the method of Claim 2, comprising interpreting the first input [z-axis motion] to be a pressure command [pressing an icon as a simulation of pushing a button is construed as a pressure command; col 1 lines 65-67, “… the motion detector may detect a hand movement akin to pushing in the icon as one would push in a button”].

Regarding Claims 7, 15, and 19 (New), Iwamura teaches the method of Claim 2, the system of Claim 14, and the non-transitory computer readable storage medium of Claim 18, wherein the operations comprise: personalizing a plane based, at least in part, on at least one of a 3D vector or a depth from a movement along an axis extending from the object to a display [col 8 lines 49-54, “in the case that the hand 20 moves toward the CRT 9, the active region moves from the position 200 to the position 202 in FIG. 22. In this way, CPU 16 can also obtain the hand motion on the Z-axis”]; and defining a thickness of the plane [the Z-axis distance represents distance from the surface of the display, and the detected Z-axis value is the thickness of the X-Y plane] based, at least in part, on at least one of the 3D vector or the depth.

Regarding Claim 8 (New), Iwamura teaches the method of Claim 2, comprising revising a position of at least one of the first plane [x-y plane] or the second plane [alternate limitation not addressed] based, at least in part, on a detected state [the detected z-axis position determines the location of the x-y plane].

Regarding Claim 9 (New), Iwamura teaches the method of Claim 2, comprising shifting a location [changing the hand location on the z-axis redefines the location of the x-y plane] of at least one of the first plane [x-y plane] or the second plane [alternate limitation not addressed].

Regarding Claims 10, 16, and 20 (New), Iwamura teaches the method of Claim 2, the system of Claim 14, and the non-transitory computer readable storage medium of Claim 18, wherein the operations comprise: determining an interpretation of the first input [movement in z-axis location] based, at least in part, on a hover zone [construed as the area above menu button icon 22] in which the first portion of the object is located; and interpreting at least one of a position or a motion of the first portion of the object based, at least in part, on the hover zone [col 4 line 61 – col 5 line 15, “… the CPU 16 locks the hand motion and an on screen cursor 24 follows it … When the cursor 24 comes to a menu button icon 22 the user 18 wants, the user 18 stops and holds his or her hand 20 there a couple of seconds. The CPU 16 of the TV recognizes this action as the equivalent of a "button push" and executes the function the button icon 22 indicates”].
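The Iwamura citations for claims 3-5 and 10/16/20 reduce to two simple interpretation rules: Z-axis motion toward or away from the display maps to zoom-in or zoom-out, and dwelling over a button icon for a couple of seconds is treated as a button push. Below is a hedged sketch of those rules, with all thresholds, coordinates, and function names assumed for illustration only:

```python
def interpret_z_motion(z_prev, z_curr, dead_band=0.02):
    """Map Z-axis hand motion to a command, per Iwamura's cited disclosure:
    toward the display -> zoom in, away from it -> zoom out."""
    dz = z_curr - z_prev
    if dz < -dead_band:
        return "zoom_in"    # hand moves toward the CRT
    if dz > dead_band:
        return "zoom_out"   # hand moves away from the CRT
    return None             # within the dead band: no zoom command

def dwell_press(cursor_pos, icon_region, dwell_seconds, hold_threshold=2.0):
    """Treat holding the cursor over a button icon for ~2 s as a button push
    (the hover-zone behavior cited for claims 10, 16, and 20)."""
    x, y = cursor_pos
    x0, y0, x1, y1 = icon_region
    inside = x0 <= x <= x1 and y0 <= y <= y1
    return inside and dwell_seconds >= hold_threshold

print(interpret_z_motion(0.50, 0.40))                          # "zoom_in"
print(dwell_press((0.3, 0.7), (0.25, 0.65, 0.35, 0.75), 2.5))  # True
```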
Regarding Claim 11 (New), Iwamura teaches the method of Claim 2, comprising: interpreting a motion of at least one of the first portion of the object [z-axis movement] or the second portion of the object [alternate limitation not addressed] as an input; and determining from the input, an input command to at least one of (i) an application or (ii) an operating system [col 8 line 64 to col 9 line 2, “Z-axis motion may also be used for zooming … CPU 16 sends a command to OSD 5 or video decoder 6 and makes them zoom in the image on the CRT 9”].

Regarding Claims 13, 17, and 21 (New), Iwamura teaches the method of Claim 2, the system of Claim 14, and the non-transitory computer readable storage medium of Claim 18, wherein the operations comprise determining a second input [col 4 line 61 – col 5 line 15, “… the CPU 16 locks the hand motion and an on screen cursor 24 follows it … When the cursor 24 comes to a menu button icon 22 the user 18 wants, the user 18 stops and holds his or her hand 20 there a couple of seconds. The CPU 16 of the TV recognizes this action as the equivalent of a "button push" and executes the function the button icon 22 indicates”] from the object based, at least in part, on the second state [hand movement in z-axis stopped].

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Iwamura in view of Tsukahara (US 2011/0267310). All reference is to Iwamura unless otherwise indicated.
Regarding Claim 6 (New), Iwamura teaches the method of Claim 2. Iwamura does not teach determining, from the positional information of at least one of the first portion of the object or the second portion of the object, a speed of the object, and determining a recognition sensitivity based on the speed. Tsukahara teaches determining, from positional information of at least one of the first portion of the object or the second portion of the object, a speed of the object and determining a recognition sensitivity based on the speed [¶0089, “The adjustment of the scanning interval of the detection electrodes 10x and 10y may not necessarily be based only on the distance information (capacitance value) of the finger. For example, the scanning interval of the detection electrode may be adjusted on the basis of approaching speed (rate of change of Z positional coordinate) or the like of the finger”]. Before the application was filed, it would have been obvious to one of ordinary skill in the art to incorporate the concept of adjusting detection sensitivity based on a speed of the detected object, as taught by Tsukahara, into the method taught by Iwamura in order to improve the accuracy of the detected position.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Iwamura in view of Wang (US 2012/0280900). All reference is to Iwamura unless otherwise indicated.

Regarding Claim 12 (New), Iwamura teaches the method of Claim 2. Iwamura does not teach storing information related to the first input, wherein the information related to the first input identifies a spatial trajectory, and comparing the stored information related to the first input and information related to a second input. Wang teaches storing information related to the first input, wherein the information related to the first input identifies a spatial trajectory [¶0013, “The gesture recognition system may be configured to identify, from the received image signal, a motion vector associated with the foreground object's change of position between subsequent image frames and to derive therefrom the translational movement”]; and comparing the stored information related to the first input and information related to a second input [second frame of user hand image]. Before the application was filed, it would have been obvious to one of ordinary skill in the art to incorporate the concept of calculating the motion vector of an object and comparing successive image frame motion vectors, as taught by Wang, into the method taught by Iwamura in order to determine a user gesture based on object movement in a single plane.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Douglas Wilson, whose telephone number is (571) 272-5640. The examiner can normally be reached 1100-1800 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patrick Edouard, can be reached at 571-272-7603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
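Each secondary reference contributes one small numeric step: Tsukahara adjusts the electrode scanning interval (i.e., recognition sensitivity) from the finger's approach speed, and Wang derives a motion vector from an object's change of position between successive frames so trajectories can be stored and compared. Here is a rough sketch of both steps; all constants, names, and the interval formula are assumptions for illustration, not taken from either reference:

```python
import numpy as np

def scan_interval(z_prev, z_curr, dt, base=10.0, gain=50.0, floor=1.0):
    """Tsukahara-style sensitivity: scan faster (smaller interval, in ms)
    when the finger approaches more quickly. Constants are illustrative."""
    approach_speed = max(0.0, (z_prev - z_curr) / dt)  # positive when approaching
    return max(floor, base - gain * approach_speed)

def motion_vector(pos_prev, pos_curr):
    """Wang-style trajectory step: the object's displacement between
    subsequent image frames, stored for comparison against a later input."""
    return np.asarray(pos_curr, dtype=float) - np.asarray(pos_prev, dtype=float)

print(scan_interval(0.50, 0.45, dt=0.033))     # shorter interval while approaching
first_input = motion_vector([100, 120], [112, 118])
second_input = motion_vector([98, 119], [110, 117])
print(np.allclose(first_input, second_input))  # compare stored vs new trajectory
```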
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Douglas Wilson/Primary Examiner, Art Unit 2622

Prosecution Timeline

May 23, 2025
Application Filed
Mar 18, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596431
VIRTUAL REALITY CONTENT DISPLAY SYSTEM AND VIRTUAL REALITY CONTENT DISPLAY METHOD
Granted Apr 07, 2026 — 2y 5m to grant
Patent 12596279
ACTIVE MATRIX SUBSTRATE AND A LIQUID CRYSTAL DISPLAY
Granted Apr 07, 2026 — 2y 5m to grant
Patent 12583317
INPUT DEVICE FOR A VEHICLE
Granted Mar 24, 2026 — 2y 5m to grant
Patent 12585480
USE OF GAZE TECHNOLOGY FOR HIGHLIGHTING AND SELECTING DIFFERENT ITEMS ON A VEHICLE DISPLAY
Granted Mar 24, 2026 — 2y 5m to grant
Patent 12579947
DISPLAY DEVICE
Granted Mar 17, 2026 — 2y 5m to grant
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75% (91% with interview, +16.1%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 427 resolved cases by this examiner. Grant probability derived from career allow rate.
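The headline figures follow from the career counts shown earlier. The sketch below assumes the interview lift is added directly to the baseline allow rate; this reproduces the displayed numbers but is only a guess at the tool's method:

```python
# Reproduce the dashboard's headline probabilities from the career counts.
granted, resolved = 320, 427
baseline = granted / resolved        # 0.749... -> displayed as 75%
interview_lift = 0.161               # +16.1 percentage points (assumed additive)
with_interview = baseline + interview_lift
print(f"baseline: {baseline:.0%}, with interview: {with_interview:.0%}")
# baseline: 75%, with interview: 91%
```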
