DETAILED ACTION
1. This Office Action is responsive to the claims filed in Application No. 19/085,332 on December 2, 2025. Claims 1-20 are pending and have been examined.
America Invents Act
2. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Restriction Requirement
3. Applicant's election with traverse of Species 1, Figure 6, and Claims 9-20 in the reply filed on December 2, 2025 is acknowledged. The traversal is on the ground(s) that Figures 6 and 7 (the two cited species) are not distinct. This is not found persuasive because the Examiner has already identified the differences between the cited species, Figures 6 and 7, and explained why there is a serious burden in examining both. As noted, Claim 1 aligns more closely with Figure 7, whereas Claims 9 and above align more closely with Figure 6. Because this is a species requirement, it is Applicant's burden to explain how or why the figures themselves are indistinct from one another; Applicant has not done so and instead compares the claims. Further, Applicant does not appear to dispute the Examiner's mapping of the claims to the figures. The traversal is therefore directed to irrelevant aspects of the requirement.
The requirement is still deemed proper and is therefore made FINAL.
Information Disclosure Statement
4. The information disclosure statements (IDS) filed on April 7, 2025 (two statements were filed on that date) and December 2, 2025 have been received. The information disclosure statements are being considered by the examiner.
Allowable Subject Matter
5. Claims 11, 12, 15 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim 11 recites details of additional cameras, additional/second fields of view, and specifically what is done in those fields of view with the additional motion of the graphics input tool. This level of detail is not taught by the prior art. Claim 12 depends from Claim 11.
Claim 15 recites details of individual candidate objects, values of characteristics, and individual measures of similarity. This level of detail is not taught by the prior art.
Claim 18 recites, among other details, aspects of a mesh collider, which are not taught by the prior art.
Claim Rejections - 35 USC § 102
6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
7. Claims 9, 10, 13, 14 and 16 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Erivantcev et al. ( US 2022/0291753 A1 ).
Erivantcev teaches in Claim 9:
A computing apparatus comprising:
one or more processors; and memory storing instructions that, when executed by the one or more processors ( [0241]-[0242] disclose the microprocessor is coupled to the cache memory and [0247] discloses the execution of code/instructions ), cause the computing apparatus to perform operations comprising:
obtaining camera data that corresponds to one or more images of a field of view captured by a camera ( Figure 2, [0076] discloses using an optical input device 133 to project content on the field of view of the surrounding area in front of the user. [0188] and [0427] disclose the use of the optical input device, a camera, etc. to capture a field of view. Furthermore, please see Figure 20, [0179], which details block 201 );
analyzing the camera data to determine an augmented reality (AR) graphics display surface included in the field of view, the AR graphics display surface being a two-dimensional surface having at least a minimum length and at least a minimum width ( Figure 2, [0076] discloses a display screen 116 projected by the AR glasses on the field of view of the surrounding area in front of the user. The projected screen has a two-dimensional surface with a length and width, as shown );
analyzing the camera data to detect a graphics input tool included in the field of view ( Figure 20, [0181] discloses monitoring the user’s eye gaze direction vector to determine geometry data, which are the objects in the virtual reality content with which the user is allowed to interact, commands to operate the objects, and gestures usable to invoke the respective commands. Please note [0164], etc., which disclose various context factors, and [0178], which discloses a sensor manager 103 to control an application 105 that can run on a computing device, server system, etc. Respectfully, these are examples of capturing the user’s field of view and determining aspects such as context, application specifics, etc., to determine the appropriate display data in the user’s field of view. Figure 2, [0075], [0043] disclose motion input module 121 (read as a graphics input tool) and Figure 3, [0078] discloses motion of input module 121 within the field of view to determine inputs );
determining, based on the camera data, motion of the graphics input tool while the graphics input tool is within a threshold distance of the AR graphics display surface ( Figure 4, [0082]-[0084] disclose a motion input of the motion input module 121 to select a window and a subsequent command to operate the window, using swipe gestures, etc., which are examples of a path of motion. In general, the motion is tracked, i.e., path tracking, as shown in Figures 4-8, etc. By using such inputs, aspects of interest, such as motions, can be determined. As Figures 4-8 show, the inputs are only accepted/processed when the input module 121 is within a threshold distance of the projected display screen 116 ); and
causing display of a user interface overlaid on the AR graphics display surface, the user interface including an AR graphic that corresponds to the motion of the graphics input tool ( Figures 4-7, [0082]+ disclose motion-type gestures of the motion input module 121 which allow the user to interact with the displayed contents. Such gestures result in changes to the displayed contents. Examples are detailed in Figure 4, [0124]-[0127], which detail swipe gestures on the module 121 that can cause interactions with main menus, etc. (menus have shape and/or contours, etc.). Furthermore, please note the combination below as well ).
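For illustration only, and not as a characterization of Applicant's claimed implementation or of Erivantcev's disclosure, the sequence of claimed operations mapped above might be sketched as follows. All function names, types, and constants below are hypothetical assumptions.

```python
# Illustrative sketch only; helper callables and constants are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional

MIN_LENGTH_M = 0.20   # assumed minimum surface length
MIN_WIDTH_M = 0.15    # assumed minimum surface width
THRESHOLD_M = 0.05    # assumed tool-to-surface distance threshold

@dataclass
class Surface:
    length: float
    width: float
    distance_to: Callable  # returns distance from a 3D point to the surface

def process_frame(camera_frame, detect_surface, detect_tool, render_overlay) -> Optional[object]:
    """One pass of the claimed operations: find a qualifying AR display
    surface, find the graphics input tool, and overlay an AR graphic that
    follows the tool while it stays within the distance threshold."""
    surface = detect_surface(camera_frame)
    if surface is None or surface.length < MIN_LENGTH_M or surface.width < MIN_WIDTH_M:
        return None  # no qualifying two-dimensional display surface in view

    tool_position = detect_tool(camera_frame)
    if tool_position is None:
        return None  # graphics input tool not detected in the field of view

    if surface.distance_to(tool_position) <= THRESHOLD_M:
        # Tool motion near the surface drives the overlaid AR graphic.
        return render_overlay(surface, tool_position)
    return None
```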
Erivantcev teaches in Claim 10:
The computing apparatus of claim 9, wherein the memory stores additional instructions that, when executed by the one or more processors, causes the computing apparatus to perform additional operations comprising:
determining that the field of view of the camera has changed from a first field of view that includes the AR graphics display surface to a second field of view in which the AR graphics display surface is absent; and causing the user interface to be modified by removing the AR graphic from the user interface in response to the field of view of the camera changing from the first field of view to the second field of view. ( Respectfully, Erivantcev ([0011]), etc., teaches using cameras to determine the field of view, which determines the context mode activated by the user, [0076]. The inputs from the motion input module can be interpreted differently by the sensor manager. To clarify, depending on what is or is not detected within the field of view (read as absent), the AR experience is adjusted (read as removing) )
Erivantcev teaches in Claim 13:
The computing apparatus of claim 10, wherein the memory stores additional instructions that, when executed by the one or more processors, causes the computing apparatus to perform additional operations comprising:
determining that the field of view of the camera has changed from the second field of view back to the first field of view; and causing the user interface to be modified by adding the AR graphic back into the user interface in response to the field of view of the camera changing from the second field of view to the first field of view. ( The same reasoning above is also applicable here. Erivantcev teaches adjusting the reality experience based on what is captured in the field of view. For elements within the field of view, the graphical interface is adjusted (read as re-displayed, etc.) )
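As an illustrative aid only, the remove/re-add behavior recited in Claims 10 and 13 can be sketched as a simple state update; the function and parameter names below are hypothetical.

```python
# Illustrative sketch only (hypothetical names).
def update_overlay(surface_in_view: bool, graphic_visible: bool) -> bool:
    """Remove the AR graphic when the display surface leaves the field of
    view; add it back when the surface re-enters the field of view."""
    if not surface_in_view and graphic_visible:
        return False   # first FOV -> second FOV: surface absent, remove graphic
    if surface_in_view and not graphic_visible:
        return True    # second FOV -> first FOV: surface present again, re-add graphic
    return graphic_visible
```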
Erivantcev teaches in Claim 14:
One or more computer-readable storage media storing computer-readable instructions that, when executed by one or more processors ( [0023] discloses a computing device interacting with a controlled device in a VR/AR/MR/XR setting. [0241] discloses a microprocessor. [0241]-[0242] disclose the microprocessor is coupled to the cache memory and [0247] discloses the execution of code/instructions ), cause the one or more processors to perform operations comprising:
obtaining camera data that corresponds to one or more images of a field of view captured by a camera ( Figure 2, [0076] discloses using an optical input device 133 to project content on the field of view of the surrounding area in front of the user. [0188] and [0427] disclose the use of the optical input device, a camera, etc. to capture a field of view. Furthermore, please see Figure 20, [0179], which details block 201 );
analyzing the camera data to determine an augmented reality (AR) graphics display surface included in the field of view, the AR graphics display surface being a two-dimensional surface having at least a minimum length and at least a minimum width ( Figure 2, [0076] discloses a display screen 116 projected by the AR glasses on the field of view of the surrounding area in front of the user. The projected screen has a two-dimensional surface with a length and width, as shown );
analyzing the camera data to detect a graphics input tool included in the field of view ( Figure 20, [0181] discloses monitoring the user’s eye gaze direction vector to determine geometry data, which are the objects in the virtual reality content with which the user is allowed to interact, commands to operate the objects, and gestures usable to invoke the respective commands. Please note [0164], etc., which disclose various context factors, and [0178], which discloses a sensor manager 103 to control an application 105 that can run on a computing device, server system, etc. Respectfully, these are examples of capturing the user’s field of view and determining aspects such as context, application specifics, etc., to determine the appropriate display data in the user’s field of view. Figure 2, [0075], [0043] disclose motion input module 121 (read as a graphics input tool) and Figure 3, [0078] discloses motion of input module 121 within the field of view to determine inputs );
determining, based on the camera data, motion of the graphics input tool while the graphics input tool is within a threshold distance of the AR graphics display surface ( Figure 4, [0082]-[0084] disclose a motion input of the motion input module 121 to select a window and a subsequent command to operate the window, using swipe gestures, etc., which are examples of a path of motion. In general, the motion is tracked, i.e., path tracking, as shown in Figures 4-8, etc. By using such inputs, aspects of interest, such as motions, can be determined. As Figures 4-8 show, the inputs are only accepted/processed when the input module 121 is within a threshold distance of the projected display screen 116 ); and
causing display of a user interface overlaid on the AR graphics display surface, the user interface including an AR graphic that corresponds to the motion of the graphics input tool ( Figures 4-7, [0082]+ disclose motion-type gestures of the motion input module 121 which allow the user to interact with the displayed contents. Such gestures result in changes to the displayed contents. Examples are detailed in Figure 4, [0124]-[0127], which detail swipe gestures on the module 121 that can cause interactions with main menus, etc. (menus have shape and/or contours, etc.). Furthermore, please note the combination below as well ).
Erivantcev teaches in Claim 16:
The one or more computer-readable storage media of claim 14, storing additional computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform additional operations comprising:
determining, based on at least one of an orientation of the camera or a location of the camera, a direction of a gaze of a user of a device that includes the camera; and determining, based on the direction of the gaze of the user, the AR graphics display surface. ( [0179] discloses a camera monitoring the eye gaze of a user. The projected display is based on the user’s gaze )
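As an illustration only, and assuming a conventional camera-pose representation, the gaze-based surface determination recited in Claim 16 might be sketched as follows; the names and the forward-axis convention are assumptions.

```python
# Illustrative sketch only; hypothetical gaze-to-surface selection.
import numpy as np

def gaze_direction(camera_rotation: np.ndarray) -> np.ndarray:
    """Approximate the user's gaze as the camera's forward axis, derived
    from the camera orientation (a 3x3 rotation matrix)."""
    forward = camera_rotation @ np.array([0.0, 0.0, -1.0])
    return forward / np.linalg.norm(forward)

def pick_display_surface(camera_position, camera_rotation, surface_centers):
    """Choose the candidate surface most closely aligned with the gaze ray
    from the camera location."""
    gaze = gaze_direction(camera_rotation)
    def alignment(center):
        to_surface = np.asarray(center) - np.asarray(camera_position)
        return float(np.dot(gaze, to_surface / np.linalg.norm(to_surface)))
    return max(surface_centers, key=alignment)
```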
Claim Rejections - 35 USC § 103
8. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
9. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
10. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Erivantcev et al. ( US 2022/0291753 A1 ), as applied to Claim 14, and further in view of Gavriliuc et al. ( US 2017/0371432 A1 ).
Erivantcev teaches in Claim 17:
The one or more computer-readable storage media of claim 14, storing additional computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform additional operations comprising:
receiving one or more application programming interface (API) calls for an object tracking service to track movement of objects included in the field of view; determining, by an object detection service, one or more objects of interest included in the field of view ( [0034] discloses that 121 can generate inputs of interest, and [0043], [0179] disclose a sensor manager which can access camera monitoring of the user’s eye gaze to determine video inputs, as well as other types of inputs. By using such inputs, aspects of interest, such as motions, can be determined. Please note the combination below for more details on the object(s) of interest. To clarify, when using a camera and determining eye gaze, or motion in general, a series of video frames is used ); and
causing, by the object tracking service, the one or more objects of interest to be labeled such that movement of the one or more objects of interest is tracked across a plurality of video frames captured by the camera ( Please see above with respect to the plurality of video frames ); but
Erivantcev does not explicitly teach “the one or more objects of interest to be labeled such that movement of the one or more objects of interest…”.
Initially, Erivantcev teaches in [0003] of various types of reality, including virtual reality, augmented reality, mixed reality and/or extended reality and one of ordinary skill in the art realizes these often include physical objects mixed with virtual objects.
To emphasize, in the same field of endeavor, reality systems, Gavriliuc teaches a projected augmented reality system, ( Gavriliuc, Figure 5, [0045] ). Here, virtual objects/image-based content, such as 510A, 510B and 510C, as well as associated text-based content/labels, such as 520A, 520B, 520C and 520D, can be generated in a setting which also has physical elements, such as physical dresser 532. Using the physical items as a baseline, other elements, such as lamp 510A, can be placed on top of the dresser, enhancing the experience. To clarify, within the augmented reality system, the user can see and interact with virtual objects relative to physical objects (part of the display surface shown to the user). These objects, both physical and virtual, have shapes/contours, etc., which adjust as the user interacts with the environment, such as by using mixed-input pointing device 506 (akin to Erivantcev). Furthermore, Gavriliuc teaches in Figure 5 a plurality of objects of interest, such as which items are labeled with messages, as detailed in [0045], etc. Respectfully, the user can interact with a number of objects in the virtual space, and it is a design choice which object is determined to be an object of interest, including the handheld device that both Erivantcev and Gavriliuc teach. One of ordinary skill in the art would realize that the combination teaches determining objects of interest, and such an object being/including the handheld device is simply one of many options, given the capability to do so. Gavriliuc also teaches using depth cameras to determine where the user is gazing in order to determine the objects of interest with which to interact (such as writing a message, as shown in Figure 5).
Therefore, it would have been obvious to one of ordinary skill in the art, at the effective filing date of the invention, to implement the mixing of physical objects with virtual objects, as taught by Gavriliuc, with the motivation that this enhances the user experience by allowing the user to interact with physical objects in real time while also having projected content, ( Gavriliuc, [0045] ).
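As an illustrative aid only, a generic object-tracking service of the kind recited in Claim 17, which labels objects of interest so that their movement can be followed across video frames, might be sketched as follows; the class and method names are hypothetical and do not reflect the APIs of Erivantcev or Gavriliuc.

```python
# Illustrative sketch only; all names are hypothetical.
import itertools

class ObjectTrackingService:
    """Assigns a persistent label to each object of interest so its
    movement can be followed across a sequence of video frames."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.tracks = {}  # label -> list of (frame_index, bounding_box)

    def label(self, bounding_box):
        """Register a newly detected object of interest and return its label."""
        label = f"object-{next(self._ids)}"
        self.tracks[label] = [bounding_box]
        return label

    def update(self, frame_index, label, bounding_box):
        """Record the labeled object's position in a later frame."""
        self.tracks[label].append((frame_index, bounding_box))

# Example: track one detected object across three frames.
service = ObjectTrackingService()
lamp = service.label(bounding_box=(10, 10, 50, 80))
for i, box in enumerate([(12, 11, 52, 81), (15, 13, 55, 83)], start=1):
    service.update(i, lamp, box)
```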
11. Claims 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Erivantcev et al. ( US 2022/0291753 A1 ), as applied to Claim 14, and further in view of Armstrong-Muntner ( US 2014/0078109 A1 ).
As per Claim 19:
Erivantcev does not explicitly teach of “determining, using one or more graphics input tool state models, a state of the graphics input tool, the state of the graphics input tool indicating one or more characteristics of markings produced in response to the motion of the graphics input tool or indicating that marking functionality of the graphics input tool has been activated.”
However, in the same field of endeavor, input devices, Armstrong-Muntner teaches various modes (read as tool state models) for a stylus 360, ( Armstrong-Muntner, Figures 7A, 9A, etc., [0068] ). In response to the different modes, different outputs occur on the screen as the user interacts with it (indicating that marking functionality has been activated).
Therefore, it would have been obvious to one of ordinary skill in the art, at the effective filing date of the invention, to implement the various tool state models, as taught by Armstrong-Muntner, with the motivation that different output modes and writing characteristics can be achieved on the display, ( Armstrong-Muntner, Figures 7A/9A, etc. ).
As per Claim 20:
Erivantcev does not explicitly teach “wherein the state of the graphics input tool corresponds to an amount of bending of a tip of the graphics input tool and indicates an additional width of the markings produced by the motion of the graphics input tool.”
Initially: Respectfully, Erivantcev teaches handheld aspects which can provide input and a way to interact with the virtual space, notably focusing on the path of motion, (see Erivantcev, Figure 3, etc.). Notably, having a tip which bends as it interacts with virtual objects, or with physical objects as well, is well known in the art, and analyzing the amount of deformation and its impact on the path/shape of the marking are related concepts, just as they are for the physical tips of paint brushes, etc., (see Applicant’s [0078] detailing this issue). Respectfully, the Examiner asserts Official Notice that this is well known.
To expand on the Official Notice, please note Armstrong-Muntner teaches using a stylus 104, ( Armstrong-Muntner, Figures 7-9, [0062] ). Notably, as the user presses on the stylus, the nib may bend or flex, and the amount of bending changes depending on the amount of pressure. This results in a difference in the output line on the graphical interface, as shown in the various figures. Respectfully, this aspect of soft tips/nibs which actuate based on force/pressure is well known, and Armstrong-Muntner reinforces this well-known concept to provide features to the interface, ( Armstrong-Muntner, [0003] ).
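As an illustration of the well-known relationship noted above, a simple mapping from the amount of tip bending to marking width might be sketched as follows; the constants and names are hypothetical assumptions for illustration only.

```python
# Illustrative sketch only; constants are hypothetical.
BASE_WIDTH_PX = 2.0        # assumed stroke width with no bending
WIDTH_PER_DEGREE_PX = 0.5  # assumed additional width per degree of bend
MAX_BEND_DEG = 30.0        # assumed maximum measurable bend

def stroke_width(tip_bend_degrees: float) -> float:
    """More bending of the soft tip (i.e., more pressure) yields a wider
    marking, analogous to pressing harder on a brush or felt nib."""
    bend = max(0.0, min(tip_bend_degrees, MAX_BEND_DEG))
    return BASE_WIDTH_PX + WIDTH_PER_DEGREE_PX * bend
```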
Conclusion
12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS P JOSEPH whose telephone number is (571)270-1459. The examiner can normally be reached Monday - Friday 5:30 - 3:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENNIS P JOSEPH/Primary Examiner, Art Unit 2621