Prosecution Insights
Last updated: April 19, 2026
Application No. 18/975,575

Methods and Apparatuses for Applying Free Space Inputs for Surface Constrained Controls

Status: Non-Final OA (§102, §103)
Filed: Dec 10, 2024
Examiner: JOHNSON, GERALD
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: West Texas Technology Partners LLC
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 78% (499 granted / 641 resolved; +7.8% vs TC avg, above average)
Interview Lift: +9.2% (moderate lift, measured over resolved cases with an interview)
Avg Prosecution: 2y 7m (typical timeline)
Currently Pending: 33
Total Applications: 674 (career history, across all art units)
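The figures above are simple ratios over the examiner's resolved cases. Purely as a reading aid, here is a minimal Python sketch of that arithmetic; the counts are the ones shown above, while the implied Tech Center average and the without-interview rate are back-computed from the reported deltas (assumptions, not figures from the underlying dataset):

```python
# Back-of-the-envelope reconstruction of the examiner stats shown above.
# Counts come from the panel; everything derived is an approximation.

granted, resolved = 499, 641

career_allow_rate = granted / resolved            # ~0.778, displayed as 78%

# "+7.8% vs TC avg" implies a Tech Center average near 70%.
implied_tc_avg = career_allow_rate - 0.078

# "87% With Interview" and a "+9.2% Interview Lift" imply a
# no-interview allow rate near 78%.
with_interview = 0.87
without_interview = with_interview - 0.092

print(f"career allow rate:  {career_allow_rate:.1%}")
print(f"implied TC average: {implied_tc_avg:.1%}")
print(f"without interview:  {without_interview:.1%}")
```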

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 52.9% (+12.9% vs TC avg)
§102: 28.6% (-11.4% vs TC avg)
§112: 4.5% (-35.5% vs TC avg)
Based on career data from 641 resolved cases; deltas are measured against the Tech Center average estimate.
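The per-statute deltas are plain differences against the Tech Center baseline. Reversing each reported delta puts that baseline at exactly 40.0% for all four statutes, which suggests the dashboard's "Tech Center average estimate" is a single flat figure; the short sketch below treats that as an assumption:

```python
# Per-statute rates from the panel above. The 40% TC baseline is inferred
# by reversing the displayed deltas, not taken from any published figure.
examiner_rates = {"§101": 5.7, "§103": 52.9, "§102": 28.6, "§112": 4.5}
tc_avg = 40.0  # assumed flat Tech Center baseline

for statute, rate in examiner_rates.items():
    delta = rate - tc_avg
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```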

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 10, 11 and 17-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Maciocci et al. (Pub. No.: US 2012/0249741).

Consider claim 10, A mixed-reality device (paragraph [0205], Fig. 16, head mounted device 1600 providing a mixed reality scene (see paragraph [0086])), comprising: one or more processors communicatively coupled to: a display device that is at least partially transparent and is configured to display a virtual object (paragraph [0092], Fig. 5A, head mounted displays may be replaced by projectors and paragraph [0122], display coupled to a processor of the head mounted device 10; paragraph [0072], head mounted display is semitransparent and may display the generated virtual object); a sensor (paragraph [0061], camera); and a touch device (paragraph [0205], Fig. 16, mobile device 1605 to include smartphones and tablet computers (see paragraph [0058]), known in the art to include a touch device); and a memory device coupled to the one or more processors and storing instructions executable by the one or more processors to: generate virtual object data for displaying a virtual object by the display device (paragraph [0112], the processor within or coupled to the head mounted device 10 may store the spatial data in memory and may build a three-dimensional map of objects and surfaces); output the virtual object data to the display device (paragraph [0074], pico projector may be used to project a virtual object 14 onto the selected anchor surface 16); receive, from the sensor, interaction data corresponding to a free-space interaction by a hand of a user with the virtual object (paragraph [0091], the user may move with his/her arms and hands in a manner that appears to interact with the object 14b in order to close it or remove it from the display); generate touch input data based on the interaction data (paragraph [0071], rendering into the display images of the user's hands, particularly when the user is moving his or her hands as part of a control gesture); and output the touch input data to the touch device (paragraphs [0206] and [0207], Fig. 16, digital data may be communicated to a wireless device and the digital data may be communicated along a wireless link to the control system 1610 operable on the mobile device 1605).

Consider claim 17, Maciocci discloses a method, comprising: generating virtual object data for displaying a virtual object by a mixed-reality device (paragraph [0205], Fig. 16, head mounted device 1600 providing a mixed reality scene (see paragraph [0086])); outputting the virtual object data for display by the mixed-reality device (paragraph [0086], the head mounted devices 10 may transmit to each other three-dimensional virtual object models and/or data sets for rendering on their respective displays); receiving, by a sensor (paragraph [0061], camera) of the mixed-reality device, interaction data corresponding to a free-space interaction by a hand of a user with the virtual object (paragraph [0091], the user may move with his/her arms and hands in a manner that appears to interact with the object 14b in order to close it or remove it from the display); determining an instruction associated with the free-space interaction, wherein the instruction is further associated with a touch input of a touch device; and executing the instruction (paragraph [0078], Fig. 1, processor may recognize an input command via a detected gesture (e.g., a finger pointing to a point in space) to place the virtual object 14 as free floating in free space).

Consider claim 11, Maciocci discloses wherein: the touch input data is associated with a surface-constrained input recognizable by the touch device; and the surface-constrained input corresponds to contact with a physical surface of the touch device (paragraph [0124], Fig. 5A, user control block 515 may gather user control inputs to the system, for example audio commands, gestures, and input devices (e.g., keyboard, mouse). In an embodiment, the user control block 515 may include or be configured to access a gesture dictionary to interpret user body part movements identified by the scene manager 510).

Consider claim 18, Maciocci discloses wherein the free-space interaction with the virtual object is analogous to a surface-constrained interaction with the touch device, the surface-constrained interaction associated with the touch input (paragraph [0124], Fig. 5A, user control block 515 may gather user control inputs to the system, for example audio commands, gestures, and input devices (e.g., keyboard, mouse). In an embodiment, the user control block 515 may include or be configured to access a gesture dictionary to interpret user body part movements identified by the scene manager 510).

Consider claim 19, Maciocci discloses wherein the virtual object data indicates a rectangular and planar boundary for the virtual object (paragraph [0262], the processor may display the anchored virtual object as a rectangular virtual display on a surface).

Consider claim 20, Maciocci discloses wherein the interaction with the virtual object comprises a motion of the hand along a plane formed by the rectangular and planar boundary based on the virtual object data (paragraph [0069], Figs. 1, 2, user interactions with the virtual object).

Consider claim 21, Maciocci discloses wherein the virtual object data indicates a three-dimensional volume for the virtual object (paragraph [0069], Figs. 1, 2, the virtual object 14 may be any virtual object 14, including, for example, text, graphics, images and 3D shapes).

Consider claim 22, Maciocci discloses wherein the interaction with the virtual object comprises a motion of the hand along a surface of the three-dimensional volume (paragraph [0069], Figs. 1, 2, user interactions with the virtual object).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over Maciocci in view of Itani (Pub. No.: US 2014/0115520).

Consider claim 1, Maciocci discloses a head-mounted display device (paragraph [0205], Fig. 16, head mounted device 1600) communicatively coupled to a touch device (paragraph [0205], Fig. 16, mobile device 1605 to include smartphones and tablet computers (see paragraph [0058]), known in the art to include a touch device, communicating with the head mounted device 1600), the head-mounted display device comprising: an image projector that projects an image (paragraph [0074], Fig. 1, pico projector included within the head mounted device 10); a display surface that displays the image projected by the image projector (paragraph [0062], Fig. 1, pico-projector may enable projection of images onto surfaces); one or more sensors (paragraph [0061], camera) approximate to the display, wherein the one or more sensors are configured to detect a gesture or posture of a hand of a user wearing the head-mounted display device (paragraph [0068], user inputs which may be made through gestures that may be imaged by the camera and identify surfaces); one or more processors communicatively coupled to the image projector and the one or more sensors (paragraph [0117], Fig. 5A, scene sensor 500 coupled to processor 507, wherein processor 507 includes cameras (see paragraph [0118]); paragraph [0092], head mounted displays may be replaced by projectors and paragraph [0122], display coupled to a processor of the head mounted device 10); a wireless communication device communicatively coupled to the one or more processors (paragraph [0115], Fig. 5A, processor 507 associated with wireless communications); and one or more memory devices communicatively coupled to the one or more processors and storing instructions executable by the one or more processors (paragraph [0112], the processor within or coupled to the head mounted device 10 may store the spatial data in memory and may build a three-dimensional map of objects and surfaces) to: display a virtual object via the image projector and the passive display surface (paragraph [0074], pico projector may be used to project a virtual object 14 onto the selected anchor surface 16); sense, by the one or more sensors, an interaction by the hand of the user with the virtual object, wherein the hand of the user is visible to the user through the passive display surface (paragraph [0091], the user may move with his/her arms and hands in a manner that appears to interact with the object 14b in order to close it or remove it from the display); generate input data based on the interaction by the hand of the user with the virtual object (paragraph [0071], rendering into the display images of the user's hands, particularly when the user is moving his or her hands as part of a control gesture); and transmit the input data to the touch device via the wireless communication device (paragraphs [0206] and [0207], Fig. 16, digital data may be communicated to a wireless device and the digital data may be communicated along a wireless link to the control system 1610 operable on the mobile device 1605). Maciocci does not specifically disclose the display surface is a passive display surface. Itani discloses the display surface is a passive display surface (paragraph [0052], viewed surface is essentially a passive screen). Therefore, in order to provide suitable options to the display's design, it would have been obvious to one having ordinary skill in the design, at the time of invention, to have applied the same technique as suggested by Itani in providing a display surface that is a passive display surface, see teaching found in Itani, paragraph [0052].

Consider claim 2, the combination of Maciocci and Itani discloses the instructions further executable by the one or more processors to determine whether the interaction by the hand of the user with the virtual object is analogous to a touch input recognizable by the touch device (paragraph [0069], recognizable gestures may be stored or organized in the form of a gesture dictionary accessible by head mounted devices. Such a gesture dictionary may store movement data or patterns for recognizing gestures that may include pokes, pats, taps, pushes, guiding, flicks, turning, rotating, grabbing and pulling, two hands with palms open for panning images, drawing (e.g., finger painting), forming shapes with fingers (e.g., an "OK" sign), and swipes).

Consider claim 3, the combination of Maciocci and Itani discloses wherein the touch input recognizable by the touch device comprises a surface-constrained input on a touch surface of the touch device (paragraph [0069], recognizable gestures may be stored or organized in the form of a gesture dictionary accessible by head mounted devices. The gesture dictionary may be accomplished on, in close proximity to, or addressing the direction of (in relation to the user) the apparent location of a virtual object in a generated display).

Consider claim 4, the combination of Maciocci and Itani discloses wherein the touch input recognizable by the touch device comprises: a touch down motion of an end-effector on a screen of the touch device (paragraph [0069], recognizable gestures may be stored or organized in the form of a gesture dictionary accessible by head mounted devices. Such a gesture dictionary may store movement data or patterns for recognizing gestures that may include taps).

Consider claim 5, the combination of Maciocci and Itani discloses wherein the touch input recognizable by the touch device comprises a swipe motion on the screen of the touch device (paragraph [0069], recognizable gestures may be stored or organized in the form of a gesture dictionary accessible by head mounted devices. Such a gesture dictionary may store movement data or patterns for recognizing gestures that may include swipes and paragraph [0314], Fig. 2, projected image 14a may be a virtual touch screen 14a and may also include a virtual input device on the virtual object).

Consider claim 6, the combination of Maciocci and Itani discloses wherein the touch input recognizable by the touch device comprises a peripheral device input (paragraph [0124], Fig. 5A, user control block 515 may gather user control inputs to the system, for example audio commands, gestures, and input devices (e.g., keyboard, mouse). In an embodiment, the user control block 515 may include or be configured to access a gesture dictionary to interpret user body part movements identified by the scene manager 510).

Consider claim 7, the combination of Maciocci and Itani discloses wherein the peripheral device input comprises a keystroke on a keyboard coupled to the touch device (paragraph [0124], Fig. 5A, user control block 515 may gather user control inputs to the system, for example audio commands, gestures, and input devices (e.g., keyboard, mouse). In an embodiment, the user control block 515 may include or be configured to access a gesture dictionary to interpret user body part movements identified by the scene manager 510).

Consider claim 8, the combination of Maciocci and Itani discloses wherein the peripheral device input comprises a mouse click on a mouse coupled to the touch device (paragraph [0124], Fig. 5A, user control block 515 may gather user control inputs to the system, for example audio commands, gestures, and input devices (e.g., keyboard, mouse). In an embodiment, the user control block 515 may include or be configured to access a gesture dictionary to interpret user body part movements identified by the scene manager 510).

Consider claim 9, the combination of Maciocci and Itani discloses the instructions further executable by the one or more processors to determine a virtual boundary for the virtual object, wherein the interaction by the hand with the virtual object comprises a gesture or a posture of the hand at or within the virtual boundary (paragraph [0069], user elects to display the virtual object 14 anchored to a designated anchor surface and the projection of virtual objects positioned at/on designated locations within the surrounding environment can create the experience of virtual reality and enable user interactions with the virtual object).

Claims 12-16 are rejected under 35 U.S.C. 103 as being unpatentable over Maciocci in view of Cheong et al. (Pub. No.: US 2015/0293592).

Consider claim 12, Maciocci does not specifically disclose wherein, to generate the touch input data, the instructions are further executable by the one or more processors to determine a bounce acceleration of a finger of the hand, the bounce acceleration indicated by the interaction data. Cheong discloses wherein, to generate the touch input data, the instructions are further executable by the one or more processors to determine a bounce acceleration of a finger of the hand, the bounce acceleration indicated by the interaction data (paragraph [0187], classifying a moving speed according to a touch position change). Therefore, in order to provide for a state of the input signal, it would have been obvious to one having ordinary skill in the design, at the time of invention, to have applied the same technique as suggested by Cheong wherein, to generate the touch input data, the instructions are further executable by the one or more processors to determine a bounce acceleration of a finger of the hand, the bounce acceleration indicated by the interaction data, see teaching found in Cheong, paragraph [0187].

Consider claim 13, the combination of Maciocci and Cheong discloses wherein the bounce acceleration is based on a rapid reversal of a motion of the finger indicated by the interaction data (Cheong, paragraph [0187], classifying a change in touch input signal according to a time).

Consider claim 14, the combination of Maciocci and Cheong discloses wherein the touch input data is indicative of: a first touch device instruction associated with a first bounce acceleration; or a second touch device instruction associated with a second bounce acceleration, wherein: the second bounce acceleration is different from the first bounce acceleration; and the second touch device instruction is different from the first touch device instruction (Cheong, paragraph [0187], classifying a moving speed according to a touch position change).

Consider claim 15, Maciocci does not specifically disclose wherein the interaction data is indicative of: a touch down motion wherein a finger of the hand moves downwardly from a first plane to a second plane; a liftoff motion wherein the finger moves upwardly from the second plane to the first plane or a third plane; or a swipe motion wherein the finger moves along the second plane from a first point to a second point. Cheong discloses wherein the interaction data is indicative of: a touch down motion wherein a finger of the hand moves downwardly from a first plane to a second plane (paragraph [0187], touch down input); a liftoff motion wherein the finger moves upwardly from the second plane to the first plane or a third plane (paragraph [0187], touch release input); or a swipe motion wherein the finger moves along the second plane from a first point to a second point (paragraph [0187], touch drag input). Therefore, in order to provide for a state of the input signal, it would have been obvious to one having ordinary skill in the design, at the time of invention, to have applied the same technique as suggested by Cheong wherein, to generate the touch input data, the instructions are further executable by the one or more processors to determine a bounce acceleration of a finger of the hand, the bounce acceleration indicated by the interaction data, see teaching found in Cheong, paragraph [0187].

Consider claim 16, the combination of Maciocci and Cheong discloses wherein, to generate the touch input data, the instructions are further executable by the one or more processors to: determine a threshold distance range for the finger to move along the second plane; and generate the touch input data in response to the finger moving along the second plane within the threshold distance range (Maciocci, paragraph [0282], the haptic support module 170 may perform a control to output a specific haptic information based haptic feedback in a specified distance or a specific area before a touch drag event or a hovering position movement event enters a specific area).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GERALD JOHNSON whose telephone number is (571)270-7685. The examiner can normally be reached Monday-Friday 8am-5pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carey Michael, can be reached at (571)270-7235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Gerald Johnson/
Primary Examiner, Art Unit 3797
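The rejected claims map free-space hand motion onto surface-constrained touch events: a touch-down when the fingertip moves downward onto the virtual object's plane, a liftoff when it rises back off, a swipe when it travels along the plane within a threshold distance range (claims 15-16), and a "bounce acceleration" read from a rapid reversal of finger motion (claims 12-14). Purely as a reading aid, here is a minimal Python sketch of that classification; every name and threshold is hypothetical, and nothing here is taken from Maciocci, Cheong, or the application's own disclosure:

```python
from dataclasses import dataclass

# Hypothetical fingertip sample: signed height above the virtual object's
# plane plus position along it. All thresholds below are made up.
@dataclass
class Sample:
    t: float       # timestamp (s)
    height: float  # signed distance above the virtual plane (m)
    x: float       # position along the plane (m)

CONTACT_EPS = 0.01                 # "touching" when within 1 cm of the plane
SWIPE_MIN, SWIPE_MAX = 0.02, 0.30  # claim-16-style threshold distance range
BOUNCE_ACCEL = 5.0                 # reversal magnitude (m/s^2) treated as a bounce

def classify(prev: Sample, curr: Sample, v_prev: float) -> tuple[str, float]:
    """Map one step of free-space motion to a touch-device event name.

    Returns (event, vertical_velocity); v_prev is the previous step's
    vertical velocity, used to detect a rapid reversal ("bounce").
    """
    dt = curr.t - prev.t
    v = (curr.height - prev.height) / dt          # vertical velocity
    was_touching = abs(prev.height) <= CONTACT_EPS
    is_touching = abs(curr.height) <= CONTACT_EPS

    # Rapid reversal of vertical motion -> bounce acceleration (claims 12-14).
    if v_prev < 0 < v and (v - v_prev) / dt >= BOUNCE_ACCEL:
        return "bounce", v
    if not was_touching and is_touching and v < 0:
        return "touch_down", v                    # claim 15: downward onto plane
    if was_touching and not is_touching and v > 0:
        return "lift_off", v                      # claim 15: upward off plane
    if was_touching and is_touching:
        travel = abs(curr.x - prev.x)
        if SWIPE_MIN <= travel <= SWIPE_MAX:      # claim 16: threshold range
            return "swipe", v
    return "none", v
```

A real pipeline would smooth the sensor samples and debounce these events before forwarding them as touch input data over the wireless link; the sketch only shows the state transitions the claims recite.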

Prosecution Timeline

Dec 10, 2024: Application Filed
Nov 14, 2025: Non-Final Rejection under §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599438: DETERMINING END POINT LOCATIONS FOR A STENT (2y 5m to grant; granted Apr 14, 2026)
Patent 12575735: DEVICES, SYSTEMS, AND METHODS FOR TUMOR VISUALIZATION AND REMOVAL (2y 5m to grant; granted Mar 17, 2026)
Patent 12575746: BLOOD PRESSURE MEASUREMENT DEVICE AND METHOD FOR MEASURING BLOOD PRESSURE (2y 5m to grant; granted Mar 17, 2026)
Patent 12581413: POWER SAVING MECHANISMS IN NR (2y 5m to grant; granted Mar 17, 2026)
Patent 12569156: DEVICE FOR MICROWAVE FIELD DETECTION (2y 5m to grant; granted Mar 10, 2026)
Based on this examiner's 5 most recent grants; studying what changed in those prosecutions can show how to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 87% (+9.2%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 641 resolved cases by this examiner; grant probability is derived from the career allow rate.
