Prosecution Insights
Last updated: April 19, 2026
Application No. 18/128,127

VIRTUAL OBJECT INTERACTION IN AUGMENTED REALITY

Status: Non-Final OA (§103)
Filed: Mar 29, 2023
Examiner: GOOD JOHNSON, MOTILEWA
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Rockwell Automation Technologies Inc.
OA Round: 3 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 5m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 73% (above average; 608 granted / 831 resolved; +11.2% vs TC avg)
Interview Lift: +14.1% (moderate; allow rate with vs. without an interview, among resolved cases)
Typical Timeline: 3y 5m average prosecution; 35 applications currently pending
Career History: 866 total applications across all art units
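For quick verification, a minimal Python sketch of how these headline figures relate. The inputs are the numbers on this panel; the no-interview baseline is inferred from the printed lift, not a value shown here.

```python
# Minimal sketch relating the headline examiner statistics above.
granted, resolved = 608, 831

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # 73.2% -> shown as 73%

# An 87% allow rate with an interview and a +14.1% lift imply a
# no-interview baseline of about 72.9% among resolved cases (an
# inferred value, slightly below the career-wide rate).
with_interview, lift = 0.870, 0.141
print(f"No-interview baseline: {with_interview - lift:.1%}")  # 72.9%
```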

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 48.8% (+8.8% vs TC avg)
§102: 24.4% (-15.6% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 831 resolved cases
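Each printed delta is the statute's figure minus an estimated Tech Center baseline. A short sketch (an inference from the printed numbers only, not a published value) recovers that baseline, which works out to a flat 40% for every statute:

```python
# Recovering the implied Tech Center baseline from the printed deltas.
# Keys and values are copied from the panel above.
examiner = {"101": 0.089, "103": 0.488, "102": 0.244, "112": 0.110}
delta    = {"101": -0.311, "103": 0.088, "102": -0.156, "112": -0.290}

implied_tc = {s: round(examiner[s] - delta[s], 3) for s in examiner}
print(implied_tc)  # {'101': 0.4, '103': 0.4, '102': 0.4, '112': 0.4}
```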

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/02/2025 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-7, 9, 11-13, 15 and 17-23 are rejected under 35 U.S.C. 103 as being unpatentable over Schwarz et al., U.S. Patent Number 10,234,935 B2 (hereinafter Schwarz '935), in view of Tang et al., U.S. Patent Publication Number 2020/0225830 A1, and further in view of Pinchon et al., U.S. Patent Number 11,294,475 B1.

Regarding claim 1, Schwarz '935 discloses a method, comprising:

receiving, via at least one processor, image data comprising one or more virtual objects (col. 5, lines 62-63, receive optical or motion data for environmental analysis and gesture detection or motion data; col. 17, lines 33-34, information received from virtual object rendering component; FIGS. 2A-6C, 11, 12);

determining, via the at least one processor, a distance between a user extremity and the one or more virtual objects based on the image data (col. 17, lines 35-35, a distance between the user and the intended target identified at block 910 is determined; col. 17, lines 41-45, calculated distance between an extension of the user (e.g., the user's hand, foot, or a hand-held object); FIG. 2B);

detecting, via the at least one processor, a first gesture of the user extremity based on the image data, wherein the first gesture comprises a pinch between a thumb of the user extremity and a finger of the user extremity (col. 11, lines 20-21, detect a recognized gesture (e.g., a pinch); FIGS. 3B, 4C, 5B, and 6B);

generating, via the at least one processor, a virtual sphere to be displayed within or on a virtual object of the one or more virtual objects, wherein the virtual sphere binds to the virtual object while being displayed within or on the virtual object (col. 11, lines 26-28, FIGS. 4A-4D and 6A-6C, a virtual object can be rendered having control points that facilitate interaction therewith; col. 12, line 66 - col. 13, line 1, a rendered virtual object representing a soda can having eight control points configured thereon);

determining, via the at least one processor, a position of crosshairs to be displayed on or within the virtual object based on a line of direction that corresponds to the thumb of the user extremity and the finger of the user extremity (col. 14, line 15, positions the gaze crosshairs);

determining, via the at least one processor, that the distance is less than a threshold (col. 14, lines 52-56, because the soda can is rendered at a relative distance that is within a predefined threshold distance (e.g., less than the user's arm length), the interaction mediating component of FIG. 1 can select the most appropriate interaction methodology for interacting with the soda can; col. 14, lines 62-63, the user has reached out his right hand to directly interact with a control point of the soda can);

detecting, via the at least one processor, movement of the position of the user extremity based on the image data (col. 15, lines 5-7, the user has performed a rotation gesture (e.g., a pinch and rotate) corresponding to the control point associated with the soda can); and

adjusting, via the at least one processor, a position of the virtual object based on the movement of the position of the virtual sphere and the position of the user extremity, wherein the adjustment of the position of the virtual object comprises a rotational movement relative to the virtual sphere (col. 15, lines 7-9, this natural interaction has caused a modification of the soda can, particularly a rotation along an axis thereof; FIGS. 5A-6C; col. 14, lines 64-67, the user has made a pinch gesture with his right hand, whereby his fingers pinch the closest control point of the soda can to effectuate a modification (e.g., a rotation) of the soda can).

However, it is noted that Schwarz '935 discloses a virtual object that can be rendered having control points that facilitate interaction therewith (FIGS. 4A-4D and 6A-6C; col. 12, line 66 - col. 13, line 1, a rendered virtual object representing a soda can having eight control points configured thereon), and that the user makes a pinch gesture with his right hand while he positions the gaze position crosshairs on the control point of the soda can (col. 14, lines 14-16). Schwarz '935 fails to disclose determining, via the at least one processor, a position of the virtual sphere to be displayed on or within the virtual object based on a line of direction that corresponds to the thumb of the user extremity and the finger of the user extremity; and locking, via the at least one processor, the position of the virtual sphere to a position of the user extremity in response to determining that the distance is less than the threshold.
Tang discloses a method, comprising:

determining, via the at least one processor, a distance between a user extremity and the one or more virtual objects based on the image data (paragraph 0021, determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user; it will be appreciated that the distance may be with respect to various parts of the user);

detecting, via the at least one processor, a first gesture of the user extremity based on the image data, wherein the first gesture comprises a pinch between a thumb of the user extremity and a finger of the user extremity (paragraph 0022, the user may be able to perform a predetermined gesture, such as pointing at the virtual object, pinching, swiping, and so forth, for example, in order to select the virtual object; FIG. 1);

generating, via the at least one processor, a virtual sphere to be displayed within or on a virtual object of the one or more virtual objects, wherein the virtual sphere binds to the virtual object while being displayed within or on the virtual object (FIG. 4B; paragraph 0027, generate a virtual handle in the far interaction mode with respect to the virtual ray; a spherical node is generated as the virtual handle when the ray intersects the virtual object; paragraph 0053, in this aspect, additionally or alternatively, the virtual interaction object may be one of a plurality of objects that the processor may be configured to generate, and the plurality of virtual interaction objects may include at least one of a pinchable object and handles associated with the virtual object; FIG. 3B);

locking, via the at least one processor, the position of the virtual sphere to a position of the user extremity in response to determining that the distance is less than the threshold (paragraph 0026, generate a virtual ray from a hand of the user, the virtual ray locked with respect to movement of the hand of the user); and

detecting, via the at least one processor, movement of the position of the user extremity based on the image data (paragraph 0026, as the user moves her hand, the virtual ray moves as directed by the user's hand).

It is noted that both Schwarz and Tang fail to disclose determining, via the at least one processor, a position of the virtual sphere to be displayed on or within the virtual object based on a line of direction that corresponds to the thumb of the user extremity and the finger of the user extremity.

Pinchon discloses generating, via the at least one processor, a virtual sphere to be displayed within or on a virtual object of the one or more virtual objects, wherein the virtual sphere binds to the virtual object while being displayed within or on the virtual object (FIG. 7); and determining, via the at least one processor, a position of the virtual sphere to be displayed on or within the virtual object based on a line of direction that corresponds to the thumb of the user extremity and the finger of the user extremity (col. 16, lines 8-15, monitor for ray casting triggers such as ray postures, gestures or actions (e.g., bringing finger and thumb together); can cast the ray, of the specified length, from the control point to an interaction point, along a line connecting the original point and the control point; col. 16, lines 27-30, can have different tools at the end of the short ray to manipulate the target object; the tools can adjust, expand, highlight, or manipulate the target object; FIG. 7).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to create, in the alternative, the crosshairs as disclosed by Schwarz as a spherical node generated as the virtual handle when the ray intersects the virtual object, as disclosed by Tang, so that the processor may be configured to generate a plurality of virtual interaction objects including at least one of a pinchable object and handles associated with the virtual object for direct interaction. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the position of the virtual sphere to be displayed on or within the virtual object based on a line of direction that corresponds to the thumb of the user extremity and the finger of the user extremity, as disclosed by Pinchon, as an alternative to the crosshairs to indicate manipulation and selection of the target.

Regarding claim 3, Schwarz '935 discloses wherein adjusting the position of the virtual object comprises moving the virtual sphere associated with the virtual object in a three-dimensional coordinate plane with respect to the movement of the position of the user extremity (FIGS. 6B-6C).

Regarding claim 4, Schwarz '935 discloses comprising: generating, via the at least one processor, a pointer cursor visualization in the image data based on the user extremity in response to the distance being above the threshold; and adjusting, via the at least one processor, the position of the virtual object based on an additional movement of the pointer cursor visualization (col. 13, lines 41-51, the user has distanced himself away from the soda can; the rendered virtual object representing the soda can having eight control points configured thereon; the HMD can determine that the soda can is now positioned (i.e., rendered) for perception beyond the reach of the user's right hand, or in other words, beyond the predefined threshold distance; col. 13, lines 52-56, based on the determined change in position, the interaction mediating component can select the most appropriate interaction methodology for interacting with the soda can, as indicated by the gaze position crosshairs).

Regarding claim 5, Schwarz '935 discloses wherein the first gesture corresponds to a command for adjusting the position of the virtual object (FIG. 5C).

Regarding claim 6, Schwarz '935 discloses comprising detecting a second gesture associated with the user extremity based on the image data (FIG. 6C).

Regarding claim 7, Schwarz '935 discloses comprising disassociating the virtual sphere from the virtual object in response to detecting the second gesture (col. 16, lines 37-55, each of the varying hand characteristics can be processed as contextual information for determining whether a particular rendered virtual object is an intended target and/or whether a particular interaction methodology is most appropriate; the user is making a fist with his fingers, which can be recognized by the HMD; the failure of a fully-extended hand can be deduced in contextual information to determine that the user has not indicated a definite desire to engage a particular rendered virtual object).

Regarding claim 21, Schwarz '935 discloses comprising: generating, via the at least one processor, a pointer cursor visualization based on the user extremity in response to the distance being above an additional threshold (col. 3, lines 24-30, ray casting, by way of example, is a feature typically employed by various hyper-natural interaction methodologies for interacting with distant objects; with ray casting, a virtual light ray of sorts, projected from a user's hand or head for example, can enable a user to interact with objects that are far away or out of arm's reach; col. 4, lines 22-30, user's relative proximity to the intended target; a calculated distance between the user, or an extension of the user, and the intended target; a comparison of the relative distance can be made against a threshold distance; determine whether a particular interaction methodology is most appropriate for the intended target; col. 4, lines 46-47, can employ any implemented natural interaction methodology (e.g., direct gesture interaction with object control points)). Tang discloses generating, via the at least one processor, a pointer cursor visualization (paragraph 0026, generate a virtual ray from the hand of the user).

Regarding claim 22, Schwarz '935 discloses comprising: determining, via the at least one processor, that the pointer cursor visualization is not intersecting the virtual object; and querying, via the at least one processor, a range from the second position of the user extremity for an interaction, wherein the interaction comprises the range being within the threshold distance and the intersection of the pointer cursor visualization with the virtual object (col. 12, lines 16-32, each soda can is positioned at different distances D1, D2 and D3 relative to the position of the user; the user's left arm is depicted as being partially extended and having at least enough reach to naturally interact with the nearest positioned soda can; a natural interaction for interacting with the first soda can would be the most appropriate; the other soda cans are perceived as being positioned beyond the user's reach; instead, a hyper-natural interaction methodology to interact with the more distant soda cans 220b, 220c may provide a better interactive experience; col. 12, lines 46-56, determine which object is most proximate; the intended target can be the rendered virtual object that is determined to intersect with a gaze position; the gaze position can be adjustably positioned by the user to intersect with a particular rendered virtual object).

Regarding claim 23, Schwarz '935 discloses comprising: detecting, via the at least one processor, a second gesture of the user extremity or an additional user extremity; and adjusting, via the at least one processor, the position of the virtual object to a position of the additional user extremity in response to detecting the second gesture (col. 11, lines 20-21, detect a recognized gesture (e.g., a pinch and adjust) performed directly on a control point).

Regarding claims 9 and 11-13, they are rejected based upon similar rationale as claims 1, 3, 4 and 7 above, respectively. Schwarz '935 further discloses a tangible, non-transitory, computer-readable medium configured to store instructions executable by at least one processor in a computing device (col. 20, lines 1-7).

Regarding claims 15 and 17-20, they are rejected based upon similar rationale as claims 1, 3-5 and 7 above, respectively. Schwarz '935 further discloses a system (1202, HMD device), comprising: an image sensor configured to capture image data; and a processor configured to perform operations.
Claims 8 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Schwarz '935, Tang and Pinchon as applied to claim 7 above, and further in view of Schwarz et al., U.S. Patent Number 10,140,776 B2 (hereinafter Schwarz '776).

Regarding claim 8, it is noted that Schwarz '935 discloses (col. 16, lines 37-55) that each of the varying hand characteristics can be processed as contextual information for determining whether a particular rendered virtual object is an intended target and/or whether a particular interaction methodology is most appropriate; the user is making a fist with his fingers, which can be recognized by the HMD; the failure of a fully-extended hand can be deduced in contextual information to determine that the user has not indicated a definite desire to engage a particular rendered virtual object. However, Schwarz '935 fails to disclose wherein the second gesture corresponds to the user extremity releasing the virtual object.

Schwarz '776 discloses wherein the second gesture corresponds to the user extremity releasing the virtual object (col. 15, lines 35-39, the target control point can track movement of the control object until a release gesture is detected; col. 19, lines 39-41, the gesture being any other state other than a pinch gesture, the gesture management module can determine a release of the gesture).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to include in the contextual information as disclosed by Schwarz '935, as the user's intention to not engage a virtual object, the target control point tracking movement of the control object until a release gesture is detected, as disclosed by Schwarz '776, so that for a gesture in any state other than a pinch gesture, the gesture management module can determine a release of the gesture.

Regarding claim 14, it is rejected based on similar rationale as claim 8 above.

Response to Arguments

Applicant's arguments, see pages 10-11, filed 03/05/2025, with respect to the rejection of claims 1-20 under § 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made under § 103 over Schwarz '935 and Tang in view of Pinchon, and over Schwarz '935, Tang and Pinchon in view of Schwarz '776.

Applicant argues that Pinchon merely discloses "the interaction point 904 is displayed at the target object, without displaying the origin point, the control point, or the ray" (col. 16, lines 24-25), and that the prior art fails to disclose a position of the virtual sphere to be displayed on or within the virtual object based on a line of direction that corresponds to the thumb of the user extremity and the finger of the user extremity. The Examiner responds that Pinchon discloses monitoring for ray casting triggers such as ray postures, gestures or actions (e.g., bringing finger and thumb together), and casting the ray, of the specified length, from the control point to an interaction point, along a line connecting the original point and the control point (col. 16, lines 8-15); different tools can be provided at the end of the short ray to manipulate the target object, and the tools can adjust, expand, highlight, or manipulate the target object (col. 16, lines 27-30; FIG. 7).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Motilewa Good-Johnson, whose telephone number is (571) 272-7658. The examiner can normally be reached Monday - Friday, 6am-2:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOTILEWA GOOD-JOHNSON/
Primary Examiner, Art Unit 2619
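As a reading aid only, here is a minimal, self-contained Python sketch of the interaction flow the rejection attributes to claim 1: detect a thumb-finger pinch, bind a virtual sphere to the target object, lock the sphere to the hand once the hand is within a threshold distance, and rotate the object relative to the sphere as the hand moves, falling back to a ray/pointer mode at distance. All names (Hand, VirtualObject, Sphere, update, REACH_THRESHOLD) are the editor's illustrative stand-ins, not code from the application or any cited reference.

```python
# Hypothetical sketch of the claim-1 flow as mapped in the rejection.
from dataclasses import dataclass
import math

REACH_THRESHOLD = 0.6  # meters; stand-in for "less than arm's length"

Vec = tuple[float, float, float]

def dist(a: Vec, b: Vec) -> float:
    return math.dist(a, b)

@dataclass
class Hand:
    thumb_tip: Vec
    finger_tip: Vec

    def is_pinching(self) -> bool:
        # "First gesture": thumb and finger brought together.
        return dist(self.thumb_tip, self.finger_tip) < 0.02

    def pinch_point(self) -> Vec:
        # Midpoint of the thumb-finger line of direction.
        return tuple((t + f) / 2 for t, f in zip(self.thumb_tip, self.finger_tip))

@dataclass
class VirtualObject:
    center: Vec
    yaw: float = 0.0  # accumulated rotation, radians

@dataclass
class Sphere:
    position: Vec = (0.0, 0.0, 0.0)
    locked: bool = False

def update(hand: Hand, obj: VirtualObject, sphere: Sphere, hand_dyaw: float) -> str:
    """One tick of the claimed interaction, per the element mapping above."""
    if not hand.is_pinching():
        sphere.locked = False
        return "idle"

    # Bind the sphere to the object; a fuller model would place it where
    # the thumb-finger line of direction intersects the object.
    sphere.position = obj.center

    if dist(hand.pinch_point(), obj.center) < REACH_THRESHOLD:
        # Near interaction: lock the sphere to the hand and rotate the
        # object relative to the sphere as the hand moves.
        sphere.locked = True
        obj.yaw += hand_dyaw
        return "direct-rotate"

    # Far interaction (cf. claims 4 and 21): ray / pointer-cursor fallback.
    return "ray-cast"

hand = Hand(thumb_tip=(0.30, 0.0, 0.0), finger_tip=(0.31, 0.0, 0.0))
obj = VirtualObject(center=(0.50, 0.0, 0.0))
print(update(hand, obj, Sphere(), hand_dyaw=0.1))  # -> direct-rotate
```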

Prosecution Timeline

Mar 29, 2023
Application Filed
Dec 02, 2024
Non-Final Rejection — §103
Feb 26, 2025
Examiner Interview Summary
Feb 26, 2025
Applicant Interview (Telephonic)
Mar 05, 2025
Response Filed
May 29, 2025
Final Rejection — §103
Jul 16, 2025
Interview Requested
Aug 04, 2025
Response after Non-Final Action
Sep 02, 2025
Request for Continued Examination
Sep 03, 2025
Response after Non-Final Action
Jan 08, 2026
Non-Final Rejection — §103
Mar 26, 2026
Interview Requested
Apr 01, 2026
Examiner Interview Summary
Apr 01, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602107
SYSTEM AND METHOD FOR DETERMINING USER INTERACTIONS WITH VISUAL CONTENT PRESENTED IN A MIXED REALITY ENVIRONMENT
2y 5m to grant • Granted Apr 14, 2026
Patent 12602884
DISPLAY SYSTEM AND DISPLAY METHOD FOR AUGMENTED REALITY
2y 5m to grant • Granted Apr 14, 2026
Patent 12597218
EXTENDED REALITY (XR) MODELING OF NETWORK USER DEVICES VIA PEER DEVICES
2y 5m to grant • Granted Apr 07, 2026
Patent 12592047
Method and Apparatus for Interaction in Three-Dimensional Space, Storage Medium, and Electronic Apparatus
2y 5m to grant • Granted Mar 31, 2026
Patent 12573100
USER-DEFINED CONTEXTUAL SPACES
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 87% (+14.1%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 831 resolved cases by this examiner. Grant probability derived from career allow rate.
