Prosecution Insights
Last updated: April 19, 2026
Application No. 17/941,293

3D CURSOR FUNCTIONALITY FOR AUGMENTED REALITY CONTENT IN MESSAGING SYSTEMS

Latest event: Non-Final OA (§103; nonstatutory double patenting)
Filed: Sep 09, 2022
Examiner: BADER, ROBERT N.
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Snap Inc.
OA Round: 7 (Non-Final)
Grant Probability: 44% (Moderate)
OA Rounds: 7-8
To Grant: 3y 1m
With Interview: 70%

Examiner Intelligence

Career Allow Rate: 44% of resolved cases (173 granted / 393 resolved; -18.0% vs TC avg)
Interview Lift: +26.4% among resolved cases with interview (strong)
Typical Timeline: 3y 1m avg prosecution; 32 currently pending
Career History: 425 total applications across all art units

Statute-Specific Performance

§101: 9.9% (-30.1% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 393 resolved cases.

Office Action

Rejections: §103 (obviousness); §DP (nonstatutory double patenting)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/17/26 has been entered.

Information Disclosure Statement

The information disclosure statement filed 2/11/26 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information referred to therein has not been considered.

One of the IDS filed on 2/11/26 refers to a video published on YouTube. The entered publication document for said video does not meet Applicant's burden for documenting a video; e.g., MPEP 609.04(a) II, final paragraph, indicates "For example, published information, such as the visual output of a software program or a video, may be submitted only if reduced to writing, such as in the form of screen shots and/or a transcript." Applicant's document shows a single image capture and provides no transcript or written description meeting the requirement of a reduction to writing for the published video content. The IDS entry has been placed in the application file, but the video citation lacking sufficient documentation has not been considered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 9-16, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2013/0265220 A1 (hereinafter Fleischmann) in view of U.S. Patent Application Publication 2020/0226814 A1 (hereinafter Tang) in view of “InReach: Navigating and Manipulating 3D Models using Natural Body Gestures in a Remote Collaboration Setup” by Anette Lia Freiin von Kapri (hereinafter Kapri) in view of U.S. Patent 11,017,611 B1 (hereinafter Mount) in view of U.S. Patent Application Publication 2010/0050120 A1 (hereinafter Ohazama) in view of U.S. Patent Application Publication 2018/0210628 A1 (hereinafter McPhee).
Regarding claim 1, the limitations “A method, comprising: detecting, using one or more hardware processors, a location and a position of a representation of a finger in a set of frames captured by a camera of a client device … generating, using the one or more hardware processors … a first virtual object based at least in part on the location and the position of the representation of the finger” are taught by Fleischmann (Fleischmann, e.g. abstract, paragraphs 16-65, discloses a system for 3D user interfaces including a 3D tracking module for tracking the user’s hands, fingers, and head, e.g. paragraphs 16, 21-23, 25-27, 36-39, 55-57, as well as presenting a 3D display volume including virtual object(s) representing the user’s hand and fingers, e.g. paragraphs 19, 20, 24, 28-33, 40, 41, 44-55. That is, the 3D tracking module performs the claimed detection using camera(s), and the display system generates the virtual hand representation object based on the detection, i.e. the claimed virtual object.)

The limitation “the first virtual object being included in video frame data … the video frame data being displayed to the user on a display of the client device, the display of the client device being positioned to be in front of the face of the user captured by the camera of the client device” is taught by Fleischmann (Fleischmann, e.g. paragraphs 28-33, discusses different display system embodiments, including, e.g. figures 1, 2, 4, paragraphs 28-30, 32, where the user(s) is(are) facing a client device display displaying the 3D display volume including the virtual hand object, corresponding to the claimed user/display arrangement, as well as other embodiments such as a user wearing an HMD, e.g. paragraph 28.)

The limitations (addressed out of order) “detecting, using the one or more hardware processors, a first collision event corresponding to a first collider of the first virtual object intersecting with a second collider of [a] second virtual object; modifying, in response to the first collision event, and using the one or more processors, a set of [display parameters] of the second virtual object to a second set of [display parameters], the second set of [display parameters] being different to the set of [display parameters]” are taught by Fleischmann (Fleischmann, e.g. paragraphs 44-54, describes a variety of scenarios wherein the user’s movement of their hand and fingers, reflected in the movement of the virtual hand object, results in an intersection/collision with a second virtual object, causing the system to modify the second virtual object in response, e.g. modified transparency, paragraphs 44, 47, modified position, paragraphs 46, 51, and modified selection, paragraph 54, corresponding to the claimed detection of a collision event of the virtual hand object with a second virtual object and in response modifying display parameters of the second virtual object.)
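For orientation, the collider-intersection pattern the rejection maps onto Fleischmann can be sketched in a few lines. This is an illustrative sketch only, assuming axis-aligned box colliders and a generic display-parameter dictionary; none of the names come from Fleischmann, the claims, or any cited reference.

```python
# Minimal sketch of a collision event between two colliders triggering a
# change to the second object's display parameters. AABB colliders and
# the "opacity" parameter are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AABBCollider:
    min_xyz: tuple
    max_xyz: tuple

    def intersects(self, other: "AABBCollider") -> bool:
        # Two axis-aligned boxes collide iff they overlap on every axis.
        return all(
            self.min_xyz[i] <= other.max_xyz[i] and other.min_xyz[i] <= self.max_xyz[i]
            for i in range(3)
        )

@dataclass
class VirtualObject:
    collider: AABBCollider
    display_params: dict = field(default_factory=dict)

def handle_collision(first: VirtualObject, second: VirtualObject) -> bool:
    """On a collision event, swap in a second, different parameter set."""
    if first.collider.intersects(second.collider):
        second.display_params = {**second.display_params, "opacity": 0.5}
        return True
    return False

cursor = VirtualObject(AABBCollider((0, 0, 0), (1, 1, 1)))
target = VirtualObject(AABBCollider((0.5, 0.5, 0.5), (2, 2, 2)), {"opacity": 1.0})
assert handle_collision(cursor, target) and target.display_params["opacity"] == 0.5
```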
The limitations “A method, comprising: detecting, using one or more hardware processors, a location and a position of a representation of a finger in a set of frames captured by a camera of a client device … generating, using the one or more hardware processors and in response to receiving user input … a first virtual object based at least in part on the location and the position of the representation of the finger, the first virtual object extending from the representation of the finger; the first virtual object being included in video frame data … the video frame data being displayed to the user … detecting, using the one or more hardware processors, a first collision event corresponding to a first collider of the first virtual object intersecting with a second collider of [a] second virtual object; in response to the first collision event, modifying, using the one or more processors, a set of [display parameters] of the second virtual object to a second set of [display parameters], the second set of [display parameters] being different to the set of [display parameters]” are not explicitly taught by Fleischmann (As discussed above, Fleischmann’s system includes a virtual hand object, representing the detected movement of the user’s hand, which intersects/collides with second virtual object(s) to cause a change in the display parameters of the second virtual objects. Further, with respect to Tang, discussed further below, it was noted that Fleischmann’s 3D interaction techniques are equally compatible with the claimed user facing the display arrangement and HMD display embodiments. However, Fleischmann does not explicitly suggest a virtual hand representation including a virtual object extending from the virtual hand/finger representation object used to change the display parameters of the second virtual object.)

However, these limitations are taught by Tang (Tang, e.g. abstract, paragraphs 12-75, describes an augmented reality system for targeting and manipulating virtual objects displayed in the real-world environment using an HMD, including a 3D tracking module using cameras to track the user’s hands/fingers, e.g. paragraphs 16-22, 29-34, and further, e.g. paragraphs 23-28, 35-46, generating a virtual cursor object, e.g. paragraph 36, by casting a ray based on a portion of the user’s hand, which may be the user’s finger, e.g. paragraph 39, i.e. the claimed first virtual object extending from the representation of the user’s finger, which may be presented in response to user input, e.g. Tang, paragraph 42, indicates the ray may be hidden or displayed based on the user’s semantic gestures. Further, Tang, e.g. paragraphs 43-58, teaches that the user controls the cursor position to collide with control point(s) on virtual objects displayed in the scene in order to target and select the control point(s) for manipulation, where manipulation may involve changing the dimensions of the virtual object, e.g. by scaling or stretching, as in paragraphs 52-53. That is, there is an initial/first location/position of the user’s finger which places the first/cursor virtual object at a collision point with the second virtual object, targeting the object’s control point and allowing the user to select the control point for manipulation, followed by the user performing a gesture indicating a type of modification that may include changing the second virtual object’s dimensions to a second different set of dimensions.)
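A finger-anchored cursor ray of the kind attributed to Tang can likewise be sketched. The choice of landmarks and the sphere test for a control point are assumptions for illustration, not Tang's disclosed implementation.

```python
# Illustrative sketch: define a cursor ray from two tracked points (e.g.
# a knuckle and the fingertip) and test it against a spherical control
# point on a target object. Landmark names are assumptions.
import math

def ray_from_landmarks(origin, fingertip):
    """Normalize the direction from a body landmark through the fingertip."""
    d = [f - o for f, o in zip(fingertip, origin)]
    norm = math.sqrt(sum(c * c for c in d))
    return origin, [c / norm for c in d]

def ray_hits_sphere(origin, direction, center, radius):
    # Hit if the sphere center lies within `radius` of the ray.
    oc = [c - o for c, o in zip(center, origin)]
    t = max(0.0, sum(a * b for a, b in zip(oc, direction)))  # closest approach
    closest = [o + t * d for o, d in zip(origin, direction)]
    dist2 = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist2 <= radius ** 2

origin, direction = ray_from_landmarks((0, 0, 0), (0, 0, 1))
print(ray_hits_sphere(origin, direction, center=(0.05, 0, 3), radius=0.1))  # True
```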
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fleischmann’s 3D user interface system to include Tang’s raycasting virtual object manipulation techniques in order to provide the user with additional virtual object manipulation tools, i.e. Fleischmann discloses exemplary virtual object manipulation tools using the tracked hands/fingers of a user, e.g. paragraphs 44-54, including selecting and moving objects, and Tang discloses raycasting based techniques for analogous manipulation including selection and movement, but additionally rotation, scaling, stretching, deletion, creation, etc., e.g. Tang, paragraph 28, which would be useful to Fleischmann’s exemplary user. It is additionally noted that Fleischmann, e.g. paragraph 45, indicates it would be beneficial for the user to be able to perform the manipulation without requiring the user’s hands to be in the primary interaction space, and Tang’s raycasting techniques allow the user to manipulate objects using their hands/fingers at a relative distance, e.g. Tang, paragraph 15, indicates the user’s hands may be at their sides, and as in figure 6, the user’s hands do not need to directly intersect the object to perform the manipulations. In Fleischmann’s modified system, the user would be able to access Tang’s raycasting interface by using a semantic gesture input as in Tang, paragraph 42, analogous to Fleischmann’s indirect manipulation example of paragraph 44, wherein the virtual representation of the user’s hand would include the virtual ray extending from the representation of the user’s finger, as in Tang, paragraph 39, used to manipulate the display parameters of second virtual object(s) by intersecting/colliding the virtual ray with the control point(s) of the virtual object and performing gestures to move the control point and modify the second virtual object’s display parameters, including the dimensions thereof, e.g. Tang, paragraphs 52, 53. Finally, as noted above, Fleischmann’s 3D interaction techniques are equally compatible with the claimed user facing the display arrangement and HMD display embodiments, such that although Tang’s disclosure is directed to an HMD embodiment of a 3D user interface using hand/finger tracking analogous to Fleischmann’s HMD display embodiment, one of ordinary skill in the art would have understood that Tang’s raycasting virtual object manipulation techniques are equally compatible with Fleischmann’s user facing the display embodiment(s) and HMD embodiment, i.e. Fleischmann uses the same 3D user tracking module for all embodiments. The limitation “the first virtual object being included in video frame data including a particular representation of a face of a user captured by the video camera, the video frame data being displayed to the user on a display of the client device, the display of the client device being positioned to be in front of the face of the user captured by the camera of the client device” is not explicitly taught by Fleischmann (As noted above, Fleischmann, e.g. paragraphs 28-33, discusses different display system embodiments, including, e.g. figures 1, 2, 4, paragraphs 28-30, 32, where the user(s) is(are) facing a client device display displaying the 3D display volume including the virtual hand object, corresponding to the claimed user/display arrangement. While Fleischmann’s 3D display volume is displayed to the user on the display facing the user, e.g. 
figures 1, 2, and is based on the video data captured by the cameras, i.e. the 3D tracking module captures images of the user using cameras integrated in the display, Fleischmann does not teach that the 3D display volume includes a representation of the face of the user captured by the camera, only representations of the user’s hands/fingers, e.g. paragraph 41.) However, this limitation is taught by Kapri (Kapri, e.g. sections 1, 3-6, discloses the InReach system for mixed-reality remote collaboration, which includes a 3D display volume comprising manipulatable virtual objects and virtual representations of the entire body of each of the participating users, e.g. figure 21. Kapri’s InReach system is analogous to the mixed-reality interfaces of Fleischmann and Tang in several ways, including tracking the users’ hands and fingers with cameras, e.g. sections 5, 5.1, 5.2, virtual object parameters manipulated in response to intersection/collision of the representation(s) of the user’s hand(s) with the virtual objects, e.g. section 3.2, figure 22, and detecting gestures of the hands as input, e.g. sections 3.2, 5.2, figure 35. Kapri further teaches, e.g. sections 3, 3.1, 5.1, 5.4, figure 1, that the users are rendered into the 3D display space as 3D avatar models mirroring the pose of the user, and textured with the image data captured by the cameras, i.e. the claimed video frame data including the virtual object(s) representing the user, including their face, and displayed on a display facing the user’s front. Kapri teaches the mirrored 3D avatar technique has advantages with respect to remote collaboration, e.g. sections 1.1, 1.2, including that the user does not have to split their attention, can use their body to point to and manipulate the data with bare hands and natural gestures, and can see themselves in relation to the data.)

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fleischmann’s 3D user interface system, including Tang’s raycasting virtual object manipulation techniques, to support Kapri’s mirrored 3D avatar technique as an alternative virtual representation of the user(s) tracked by Fleischmann’s user interface system, in order to enable improved remote collaboration techniques as taught by Kapri, e.g. sections 1.1, 1.2, within Fleischmann’s system. In Fleischmann’s modified system, when used for a remote collaboration application, the 3D display volume would be displayed using Kapri’s mirrored 3D avatar technique, i.e. the 3D display volume would be mirrored, and the 3D virtual avatar model representing the tracked pose of the user, and textured with the captured video data thereof, would be placed in the mirrored 3D display volume for rendering, corresponding to the claimed arrangement where the user(s) is(are) facing a client device display displaying the 3D display volume including the mirrored virtual 3D avatar model representing the user, including the representation of the face of the user captured by the camera. Further, the user of Fleischmann’s modified system could choose to use Tang’s raycasting object manipulation techniques during a remote collaboration session using Kapri’s mirrored 3D avatar technique, corresponding to the claimed generating step as a whole, i.e. the first virtual ray/object extending from the representation of the user’s finger is included in the video frame data, i.e.
the rendered/displayed 3D display volume, the video frame data includes the representation of the face of the user captured by the camera, i.e. the mirrored 3D avatar model textured with captured image data, and the video frame data is displayed on the display of the client device comprising the camera(s) and facing the user captured in the images, e.g. Fleischmann, figures 1, 2.

The limitations “the video frame data being displayed with the representation of the finger below the particular representation of the face, and at least a portion of the first virtual object, that extends from the representation of the finger, overlapping with the particular representation of the face” are taught by Fleischmann in view of Tang and Kapri (As discussed above, in Fleischmann’s modified system, when used for a remote collaboration application, the 3D display volume would be displayed using Kapri’s mirrored 3D avatar technique, i.e. the 3D display volume would be mirrored, and the 3D virtual avatar model representing the tracked pose of the user, and textured with the captured video data thereof, would be placed in the mirrored 3D display volume for rendering, corresponding to the claimed arrangement where the user(s) is(are) facing a client device display displaying the 3D display volume including the mirrored virtual 3D avatar model representing the user, including the representation of the face of the user captured by the camera. Further, the user could choose to use Tang’s raycasting object manipulation techniques during a remote collaboration session using Kapri’s mirrored 3D avatar technique, corresponding to the claimed generating step, i.e. the first virtual ray/object extending from the representation of the user’s finger is included in the video frame data, and the video frame data includes the representation of the face of the user captured by the camera. Tang, e.g. paragraphs 34-39, teaches that the virtual ray/object may be defined based on any two detected components of the user, where one of the components is the user’s fingertip, and the other may be the shoulder, elbow, palm, wrist, knuckle, etc. as in paragraph 34, such that the determined virtual ray/object may pass through the space in front of the user’s face, and by extension, in the 3D display volume including the mirrored 3D avatar model representing the face of the user, the virtual ray/object extending from the representation of the user’s finger would appear to overlap the representation of the face.)

The limitation (addressed out of order) “generating a first set of virtual objects, each of the first set of virtual objects being positioned equidistance from at least one other virtual object in the first set of virtual objects, the first set of virtual objects including a first particular virtual object and a second virtual object … receiving a selection input corresponding to selection of one of the first set of virtual objects; and applying, in response to the selection input, the selected virtual object as augmented reality content to the video frame data” is not explicitly taught by Fleischmann in view of Tang (It is noted that Tang, e.g. paragraph 28, indicates that the user may delete or create new virtual objects, which further suggests the possibility of loading a stored previously created object, as is common with virtual editing systems, i.e.
selecting a stored/previously created object for inclusion in the augmented reality display, corresponding to the claimed selection of one virtual object for application as augmented reality content to the video frame data. While Tang does not address generating a set of stored/previously created virtual objects, per se, or placing them in the scene positioned equidistant from at least one other virtual object in the set, Mount teaches that a menu for loading virtual objects into a virtual object scene being edited by the user may present a set of virtual objects positioned equidistant from each other in a 2D grid.) However, this limitation is taught by Mount (Mount, e.g. abstract, cols 2-50, describes a virtual scene manipulation system wherein the user may view an augmented or virtual reality scene having virtual objects, e.g. col 2, line 46 - col 4, line 6, and use ray casting based targeting/selection/manipulation operations by targeting the grab points on the virtual objects, e.g. col 9, lines 11-36, cols 13-22. Further, Mount, e.g. cols 30-31, 38, figures 13, 14, 19, teaches that a set of virtual objects which the user may select for adding to the virtual scene may be presented as a 2D grid of objects, where the objects may be grabbed to be placed in the scene as in figure 19.)

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fleischmann’s 3D user interface system, including Tang’s raycasting virtual object manipulation techniques, supporting Kapri’s mirrored 3D avatar technique, to include Mount’s two-dimensional grid display of virtual objects for adding to the virtual scene as part of Tang’s raycasting virtual object manipulation in order to allow Fleischmann’s modified system to load pre-existing virtual objects in addition to creating and modifying virtual objects, which one of ordinary skill in the art would recognize is a conventional feature of virtual editing systems. In the modified system, Mount’s two-dimensional grid display of virtual objects includes a plurality of virtual objects, i.e. the claimed first particular virtual object and second virtual object, and the user would be able to select/grab virtual objects from the two-dimensional grid display of virtual objects for inclusion in the augmented reality scene, corresponding to the claimed selection of one of the first set of virtual objects, and in response applying the selected virtual object as augmented reality content to the video frame data.
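The equidistant grid layout attributed to Mount reduces to simple arithmetic; a minimal sketch, with grid dimensions and spacing chosen arbitrarily for illustration:

```python
# Sketch of an equidistant 2D grid of selectable virtual objects: each
# object sits a fixed spacing from its row/column neighbors.
def grid_positions(n_objects, columns, spacing):
    """Return (x, y) positions so grid neighbors are exactly `spacing` apart."""
    return [
        (spacing * (i % columns), spacing * (i // columns))
        for i in range(n_objects)
    ]

positions = grid_positions(n_objects=6, columns=3, spacing=1.5)
# Adjacent objects in a row or column are equidistant:
assert positions[1][0] - positions[0][0] == 1.5
assert positions[3][1] - positions[0][1] == 1.5
```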
The limitations “the first particular virtual object and the second virtual object overlapping the particular representation of the face; detecting, using the one or more hardware processors, a first collision event corresponding to a first collider of the first virtual object intersecting with a second collider of the second virtual object; in response to the first collision event, modifying, using the one or more hardware processors, a set of dimensions of the second virtual object to a second set of dimensions to provide visual feedback indicating targeting of the second virtual object, the second set of dimensions being different to the set of dimensions … the second virtual object overlapping a first portion of the particular representation of the face, and the first particular virtual object overlapping a second portion of the particular representation of the face, wherein the second virtual object remains within the first set of virtual objects during the modifying” are partially taught by Fleischmann in view of Tang, Kapri, and Mount (As discussed above, Tang, in addition to using the virtual ray for virtual object manipulation, teaches that the virtual ray can be used to target a virtual object for selection, e.g. paragraphs 45, 47, 48, 49, analogous to Mount, e.g. cols 30-31, 38, figures 13, 14, 18, 19, teaching that the user selects one of the virtual objects in the set of grid objects in part by intersecting the virtual object with a virtual ray, such that in the modified system, the user may open the virtual object grid menu and target one of the virtual objects in the virtual object grid by moving their finger to a second location/position causing the virtual ray to intersect one of the virtual objects in the virtual object grid, i.e. the claimed first collision event of the first virtual object intersecting the second virtual object. Further, as discussed above with respect to Kapri, in Fleischmann’s modified system, when used for a remote collaboration application, the 3D display volume would be displayed using Kapri’s mirrored 3D avatar technique and the user could choose to use Tang’s raycasting object manipulation techniques during a remote collaboration session using Kapri’s mirrored 3D avatar technique, such that the virtual objects in the virtual object grid would be rendered as overlaying the representation of the user’s face in at least some instances, i.e. depending on the relative positioning of the user’s avatar and the virtual object grid, the first particular virtual object and second virtual object could overlap different portions of the representations of the user’s face. While Tang, e.g. paragraphs 47, 49, teaches that visual feedback may be provided to the user by modifying the appearance of the targeted object, i.e. modifying the targeted/second virtual object in response to the targeting/first collision event and rendering/displaying a second scene wherein the targeted/second virtual object is displayed in a modified manner to provide visual feedback indicating targeting of the targeted/second object in comparison to the first scene, Tang does not explicitly teach that the visual feedback modification of the targeted/second virtual object is a change in the dimensions of the targeted/second virtual object. Further, while Mount, e.g.
figure 19, shows that after selection, the selected virtual object may be displayed with modified dimensions, figure 19 shows that the selected virtual object is replicated outside of the virtual object grid, rather than modified within the virtual object grid.) However, this limitation is taught by Ohazama (Ohazama, e.g. abstract, paragraphs 28-40, figures 2-5, 10, describes a user interface for selecting from a group of displayed icons, wherein visual feedback is provided to the user when one of the icons is indicated by moving the cursor over the icon, analogous to Tang’s visual feedback of virtual objects targeted by the virtual ray, where indicated icons are animated with a “growing” effect by scaling the icon to a larger size, i.e. in response to being indicated the icon provides visual feedback by modifying the display dimensions of the icon. It is additionally noted that, analogous to Mount’s virtual object grid, Ohazama’s icons are presented equidistantly from neighboring icons, as shown in the figures.)

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fleischmann’s 3D user interface system, including Tang’s raycasting virtual object manipulation techniques, supporting Kapri’s mirrored 3D avatar technique, including Mount’s two-dimensional grid display of virtual objects, to use Ohazama’s indicated icon growing visual feedback effect to provide visual feedback to the user using the virtual ray to target virtual objects in Mount’s virtual object grid because Tang, e.g. paragraph 47, teaches that the appearance of the targeted object may be modified to provide visual feedback, and Ohazama’s growing effect is an analogous appearance alteration applied in an analogous indicating/targeting operation of a user interface. In the modified system, the modification of the targeted/second virtual object in response to the targeting/first collision event would be a modification to the dimensions of the targeted/second object as taught by Ohazama, and by extension, in the rendered/displayed second scene the targeted/second virtual object is displayed with modified/increased dimensions in comparison to the first scene, providing visual feedback indicating the targeting of the targeted/second object.

The limitation “the second virtual object with the second set of dimensions overlapping the first particular virtual object” is taught by Fleischmann in view of Mount and Ohazama (Ohazama, e.g. paragraph 31, figure 3, indicates that the maximum size for growing an indicated object is predetermined, i.e. the scaling is between the size of a small image 104 and the size of a large image 105. Ohazama does not teach, or otherwise suggest, that the size of the large image should be selected to prevent overlapping neighboring icons, such that one of ordinary skill in the art, in implementing Fleischmann’s modified system as discussed above, could define the large size for modifying the dimensions of the targeted/second object as taught by Ohazama to be large enough to overlap with neighboring virtual objects in Mount’s virtual object grid, i.e. as claimed, the second virtual object with the second set of dimensions overlapping the first particular virtual object.)
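The grow/shrink targeting feedback attributed to Ohazama can be sketched as an animation toward a predetermined large size and back again; the sizes, step fraction, and iteration counts below are illustrative assumptions.

```python
# Sketch of targeting feedback: a targeted object animates from a small
# size toward a predetermined large size, and back when targeting moves
# elsewhere. Linear interpolation is an assumed animation curve.
def animate_scale(current, target_size, step=0.1):
    """Move the current scale a fraction of the way toward the target."""
    return current + (target_size - current) * step

SMALL, LARGE = 1.0, 1.6  # assumed predetermined min/max sizes

scale = SMALL
for _ in range(30):               # object becomes targeted: grow
    scale = animate_scale(scale, LARGE)
for _ in range(30):               # targeting moves to another object: shrink
    scale = animate_scale(scale, SMALL)
print(round(scale, 3))  # back near the original size
```

Note that if the assumed LARGE size exceeds the grid spacing, the enlarged object would overlap its neighbors, which is the overlap scenario the rejection draws from combining Ohazama's predetermined large size with Mount's grid.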
The limitations “detecting, using the one or more processors, a second location and a second position of the representation of the finger in a second set of frames captured by the camera of the client device, the second location and the second position being different from the location and the position … detecting based at least in part on the second location and the second position, using the one or more hardware processors, a second collision event corresponding to a collider of the first virtual object intersecting with a third collider of a third virtual object from the first set of virtual objects that are positioned equidistance from at least one other virtual object from the set of virtual objects; modifying, in response to the second collision event and using the one or more hardware processors, a set of dimensions of the third virtual object, from the first set of virtual objects, to a third set of dimensions to provide visual feedback indicating targeting of the third virtual object, the third set of dimensions being different to the set of dimensions; wherein the third virtual object remains within the first set of virtual objects during the modifying; rendering, using the one or more hardware processors, the third virtual object based on the third set of dimensions within a third scene, the third scene comprising a modified scene from a second scene, the third virtual object being rendered as overlaying the particular representation of the face; and providing, using the one or more hardware processors, for display the rendered third virtual object within the third scene” are taught by Fleischmann in view of Tang, Kapri, Mount, and Ohazama (As discussed above, in Fleischmann’s modified system, the user would be able to access Tang’s raycasting interface, analogous to Fleischmann’s indirect manipulation example of paragraph 44, wherein the virtual representation of the user’s hand would include the virtual ray extending from the user’s finger, as in Tang, paragraph 39, wherein Mount’s two-dimensional grid display of virtual objects for adding to the virtual scene can be accessed as part of Tang’s raycasting virtual object manipulation interface, and Ohazama’s indicated icon growing visual feedback effect is used to provide visual feedback by “growing” the virtual objects in Mount’s virtual object grid which are targeted by the virtual ray. That is, analogous to the above mapping of the virtual ray/second virtual object intersection event identifying the second virtual object as the targeted/indicated virtual object and triggering the “growing” effect which modifies the dimensions of the targeted/indicated/second virtual object to provide visual feedback indicating targeting of the second virtual object, the user could subsequently move their finger to a third location, causing the virtual ray to intersect a third virtual object in Mount’s virtual object grid, causing the third virtual object to be targeted/indicated, and triggering the “growing” effect which modifies the dimensions of the targeted/indicated/third virtual object to provide visual feedback indicating targeting of the third virtual object, i.e. the claimed detecting the second location/position of the finger, detecting the second collision event based on the first virtual object intersecting a collider of a third virtual object in the set of virtual objects, and rendering the third virtual object with modified dimensions providing visual feedback indicating the targeting in response to the second collision. 
Further, as discussed above with respect to Kapri, in Fleischmann’s modified system, when used for a remote collaboration application, the 3D display volume would be displayed using Kapri’s mirrored 3D avatar technique and the user could choose to use Tang’s raycasting object manipulation techniques during a remote collaboration session using Kapri’s mirrored 3D avatar technique, such that the targeted third virtual object, as well as the other virtual objects in the virtual object grid, would be rendered as overlaying the representation of the user’s face in at least some instances, i.e. depending on the relative positioning of the user’s avatar and the virtual object grid, the first particular virtual object, the second virtual object, and the third virtual object could overlap different portions of the representations of the user’s face.)

The limitation “modifying, in response to detecting that the second location and the second position, the set of dimensions of the second virtual object from the second set of dimensions back toward the set of dimensions” is taught by Fleischmann in view of Ohazama (Ohazama, e.g. paragraphs 31, 42-44, as discussed above, teaches that the “growing” effect includes animating an increase in size of the indicated icon from a small size to a large size, i.e. in Fleischmann’s modified system, the targeted/indicated second virtual object is animated by enlarging the dimensions of the second virtual object to provide visual feedback indicating targeting/selection. Further, Ohazama, e.g. paragraphs 42-44, teaches that when the icon is no longer indicated, it is animated with a “shrinking” effect to reduce the dimensions from the large image size to the small image size, such that when the second location/position are detected, indicating the change of targeting/indication from the second virtual object to the third virtual object, the dimensions of the second virtual object would be reduced back toward the original/first set of dimensions, providing visual feedback that the second object is no longer targeted/indicated.)

The limitations (addressed out of order) “A method, comprising: detecting, using one or more hardware processors, a location and a position of a representation of a finger in a set of frames captured by a camera of a client device executing a messaging application; generating, using the one or more hardware processors and in response to receiving user input within the messaging application, a first virtual object based at least in part on the location and the position of the representation of the finger … receiving a selection input corresponding to selection of one of the first set of virtual objects; and applying, in response to the selection input, the selected virtual object as augmented reality content to the video frame data for inclusion in a message within the messaging application.” are not explicitly taught by Fleischmann, Tang, Kapri, and Mount (As discussed above, in Fleischmann’s modified system, the user would be able to access Tang’s raycasting interface by using a semantic gesture input as in Tang, paragraph 42, i.e. generating the first virtual object in response to receiving a user input. Further, as discussed above, Tang, e.g.
paragraph 28, teaches that virtual objects may be added to the set of virtual objects being displayed, and Mount discloses a two-dimensional grid menu for selecting virtual objects for inclusion in an augmented/virtual reality scene, such that in Fleischmann’s modified system, Mount’s two-dimensional grid display of virtual objects includes the plurality of virtual objects, i.e. the claimed first particular virtual object and second virtual object, and the user would be able to select/grab virtual objects from the two-dimensional grid display of virtual objects for inclusion in the augmented reality scene, corresponding to the claimed selection of one of the first set of virtual objects, and in response applying the selected virtual object as augmented reality content to the video frame data. Finally, while, as discussed above, the user of Fleischmann’s modified system could choose to use Tang’s raycasting object manipulation techniques during a remote collaboration session using Kapri’s mirrored 3D avatar technique, Kapri does not describe using messaging, per se, as part of the remote collaboration session, i.e. Kapri’s disclosure is focused on visual/spatial aspects, and only briefly discusses transfer of audio data between the collaboration systems, e.g. section 5.5. Fleischmann, Tang, Mount, and Ohazama also do not discuss augmented reality messaging applications, per se.) However, this limitation is taught by McPhee (McPhee, e.g. abstract, paragraphs 11-61, describes a system for implementing messaging application(s) using multiple client devices, e.g. paragraphs 16-20, where the users may annotate or modify media content by adding virtual object overlays, e.g. paragraphs 29, 30, where the media content may be a live video stream captured through a camera and displayed on the client device, e.g. paragraphs 42, 60, 61, i.e. analogous to Kapri’s remote collaboration sessions, McPhee’s messaging application supports real-time video streaming/display as one type of supported media. Further, McPhee, e.g. paragraphs 41-44, figures 5A, 5B, teaches that a user can add virtual objects to the three-dimensional real-world scene by selecting the virtual objects from a gallery/menu of virtual objects and indicating a location to place the virtual object, i.e. analogous to dragging/dropping a virtual object from Mount’s two-dimensional grid display of virtual objects into the augmented reality scene as in Fleischmann’s modified system, McPhee teaches that in an augmented reality messaging application, the user can add virtual objects to the augmented reality scene by selecting a virtual object from a gallery/menu and indicating a location in the augmented reality scene for placing the selected virtual object.) Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fleischmann’s 3D user interface system, including Tang’s raycasting virtual object manipulation techniques, supporting Kapri’s mirrored 3D avatar technique, including Mount’s two-dimensional grid display of virtual objects, using Ohazama’s indicated icon growing visual feedback effect, to operate as an interface with McPhee’s augmented reality messaging application(s), i.e.
as noted above, the user of Fleischmann’s modified system could choose to use Tang’s raycasting object manipulation techniques during a remote collaboration session using Kapri’s mirrored 3D avatar technique, and the remote collaboration session using Kapri’s mirrored 3D avatar technique could be activated within McPhee’s augmented reality messaging application, thereby gaining the benefits of McPhee’s augmented reality messaging application, e.g. McPhee, paragraphs 25-40, and the benefits of Kapri’s mirrored 3D avatar technique, i.e. enabling improved remote collaboration techniques as taught by Kapri, e.g. sections 1.1, 1.2. In Fleischmann’s modified system, Kapri’s mirrored 3D avatar technique, as well as Tang’s ray-casting based selection using Mount’s two-dimensional grid of visual objects and Ohazama’s icon growing visual feedback, could be activated within McPhee’s augmented reality messaging application by the user of the application.

Regarding claim 2, the limitations “rendering the first virtual object within a first scene; rendering the second virtual object based on the second set of dimensions within the second scene, the second scene comprising a modified scene from the first scene; and providing, using the one or more hardware processors, for display the rendered second scene” are taught by Fleischmann in view of Tang, Mount, and Ohazama (As discussed in the claim 1 rejection above, Fleischmann, e.g. paragraphs 28, 40, 41, 44, teaches that the system renders images of the 3D display space for display on the display device, wherein the 3D display space, over time, includes the respective first, second, and third scenes having the virtual object(s) with changing dimensions. That is, before the first targeting/indication operation, a first scene is displayed including the first virtual ray/object extending from the representation of the user’s finger and the unmodified second virtual object in the virtual object grid, corresponding to the claimed first scene, and after the first targeting/indication operation, a second scene is displayed including the first virtual ray/object extending from the representation of the user’s finger and the targeted/indicated modified second virtual object, corresponding to the claimed second scene.)

Regarding claims 3 and 4, the limitations “wherein the first scene comprises a first representation of a real world scene and the first virtual object” and “the second scene comprises the second representation of the real world scene and the second virtual object” are taught by Fleischmann in view of Tang and Kapri (As discussed in the claim 1 rejection above, the user of Fleischmann’s modified system could choose to use Tang’s raycasting object manipulation techniques during a remote collaboration session using Kapri’s mirrored 3D avatar technique, corresponding to the claimed generating step as a whole, i.e. the rendered/displayed 3D display volume of all three scenes includes the first virtual ray/object extending from the representation of the user’s finger, the mirrored 3D avatar model textured with captured image data representing the user in the real world scene, and the second virtual object(s) manipulated by the user, corresponding to the claim 3 and 4 requirements that the respective first and second scenes comprise a representation of the real world scene at that instant along with the first virtual ray/object and the second virtual object(s) being modified by the intersection/collisions.)
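The scene composition discussed for claims 2-5 amounts to drawing virtual object data over real-world video frame data; a toy sketch, with the frame represented as a nested list of pixel labels purely for illustration:

```python
# Sketch of compositing a scene: camera pixels (real world video frame
# data) with virtual object data rendered on top. The label-based frame
# representation is an illustrative stand-in for real image buffers.
def composite(frame, overlays):
    """Return a new scene: camera pixels with virtual pixels drawn over them."""
    scene = [row[:] for row in frame]
    for (x, y), label in overlays.items():
        scene[y][x] = label
    return scene

camera_frame = [["real"] * 4 for _ in range(3)]   # stand-in for video frame data
virtual_data = {(1, 1): "ray", (2, 1): "object"}  # stand-in for virtual object data
scene_1 = composite(camera_frame, virtual_data)
print(scene_1[1])  # ['real', 'ray', 'object', 'real']
```

Rendering a second scene with a modified object then amounts to compositing the same frame stream with an updated overlay set, which is the before/after relationship the claim 2 mapping describes.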
Regarding claim 5, the limitation “wherein the first scene comprises real world video frame data and virtual object data, the virtual object data comprising information utilized for rendering the first virtual object or the second virtual object” is taught by Fleischmann in view of Tang, Kapri, and Mount (As discussed in the claim 1 rejection above, the user of Fleischmann’s modified system could choose to use Tang’s raycasting object manipulation techniques for selecting virtual objects from Mount’s virtual object grid, during a remote collaboration session using Kapri’s mirrored 3D avatar technique, corresponding to the claimed generating step as a whole, i.e. the rendered/displayed 3D display volume of all three scenes includes the first virtual ray/object extending from the representation of the user’s finger, the mirrored 3D avatar model textured with captured image data representing the user in the real world scene, and the second virtual object(s) in the virtual object grid being targeted/indicated by the user, corresponding to the claim requirement that the scene comprises real world video frame data (the 3D avatar model textured with captured image data), and virtual object data comprising information utilized for rendering the first virtual object (the virtual ray) or the second object (the second virtual object(s) targeted/indicated by the user).)

Regarding claim 6, the limitation “moving the first virtual object to the second location and the second position based on a change in the location and the position of the representation of the finger to the second location and the second position” is taught by Fleischmann in view of Tang (As discussed in the claim 1 rejection above, in Fleischmann’s modified system, the user would be able to access Tang’s raycasting interface, analogous to Fleischmann’s indirect manipulation example of paragraph 44, wherein the virtual representation of the user’s hand would include the virtual ray extending from the user’s finger, as in Tang, paragraph 39, used to manipulate the display parameters of second virtual object(s) by intersecting/colliding the virtual ray with the control point(s) of the virtual object and performing gestures to move the control point and modify the second virtual object’s display parameters, including the dimensions thereof, e.g. Tang, paragraphs 52, 53. Both Fleischmann, e.g. paragraphs 36-39, and Tang, paragraphs 16-22, 29-34, teach using the system’s cameras to track the location/position of the user’s hands and finger bones, i.e. detecting first and second locations/positions of the user’s finger, where Tang further teaches with respect to the raycasting object manipulation techniques, e.g. paragraphs 23-28, 35-46, that the system generates the first virtual ray/object extending from the representation of the user’s finger, e.g. paragraphs 43-58, and that the user controls the ray position to collide with control point(s) on virtual objects displayed in the scene. That is, in response to a change in the detected location/position of the finger from the first location/position to the second location/position, the first virtual ray/object extending from the representation of the user’s finger will move from the first location/position to the second location/position.)

Regarding claim 9, the limitation “wherein modifying the set of dimensions of the second virtual object comprises enlarging the set of dimensions in at least one dimension” is taught by Fleischmann in view of Ohazama (Ohazama, e.g.
paragraphs 31, 42-44, as discussed in the claim 1 rejection, teaches that the “growing” effect includes animating an increase in size of the indicated icon from a small size to a large size, i.e. in Fleischmann’s modified system, the targeted/indicated second virtual object is animated by enlarging the dimensions of the second virtual object.)

Regarding claim 10, the limitation “wherein modifying the set of dimensions of the second virtual object comprises reducing the set of dimensions in at least one dimension” is taught by Fleischmann in view of Ohazama (Ohazama, e.g. paragraphs 31, 42-44, as discussed in the claim 1 rejection, teaches that the “growing” effect includes animating an increase in size of the indicated icon from a small size to a large size, i.e. in Fleischmann’s modified system, the targeted/indicated second virtual object is animated by enlarging the dimensions of the second virtual object. Further, Ohazama, e.g. paragraphs 42-44, teaches that when the icon is no longer indicated, it is animated with a “shrinking” effect to reduce the dimensions from the large image size to the small image size, such that the modifying of the set of dimensions of the second virtual object also comprises reducing the set of dimensions. It is noted that this corresponds to Applicant’s disclosed example of modifying dimensions of indicated objects in the grid of equidistant virtual objects, e.g. paragraph 127, figure 10, wherein object 1014, corresponding to the claimed second virtual object in the virtual object grid, is initially increased in size when indicated and then reduced back to its original size when the user moves the claimed first virtual object to intersect the claimed third virtual object 1034.)

Regarding claims 11 and 20, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 1 above, with Fleischmann further teaching implementation using a processor executing stored program instructions, e.g. paragraphs 56-62. Regarding claim 12, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 2 above. Regarding claim 13, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 3 above. Regarding claim 14, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 4 above. Regarding claim 15, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 5 above. Regarding claim 16, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 6 above. Regarding claim 19, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 9 above.

Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2013/0265220 A1 (hereinafter Fleischmann) in view of U.S. Patent Application Publication 2020/0226814 A1 (hereinafter Tang) in view of “InReach: Navigating and Manipulating 3D Models using Natural Body Gestures in a Remote Collaboration Setup” by Anette Lia Freiin von Kapri (hereinafter Kapri) in view of U.S. Patent 11,017,611 B1 (hereinafter Mount) in view of U.S. Patent Application Publication 2010/0050120 A1 (hereinafter Ohazama) in view of U.S.
Patent Application Publication 2018/0210628 A1 (hereinafter McPhee) as applied to claims 6 and 16 above, and further in view of “Precise and Rapid Interaction through Scaled Manipulation in Immersive Virtual Environments” by Scott Frees, et al. (hereinafter Frees).

Regarding claim 7, the limitation “wherein moving the first virtual object to the second location and the second position is restricted to a single axis of movement and the moving does not move the first virtual object along a second axis” is implicitly taught by Fleischmann in view of Tang, Mount, and Ohazama (As discussed in the claim 1 rejection above, in Fleischmann’s modified system, the user would be able to access Tang’s raycasting interface, analogous to Fleischmann’s indirect manipulation example of paragraph 44, wherein the virtual representation of the user’s hand would include the virtual ray extending from the user’s finger, as in Tang, paragraph 39, used to target/indicate the second/third virtual object(s) of the virtual object grid by intersecting/colliding the virtual ray with the virtual object(s), triggering the increase in second/third virtual object size using Ohazama’s “growing” effect. Both Fleischmann, e.g. paragraphs 36-39, and Tang, paragraphs 16-22, 29-34, teach using the system’s cameras to track the location/position of the user’s hands and finger bones, i.e. detecting first and second locations/positions of the user’s finger, where Tang further teaches with respect to the raycasting object manipulation techniques, e.g. paragraphs 23-28, 35-46, that the system generates the first virtual ray/object extending from the representation of the user’s finger, e.g. paragraphs 43-58, and that the user controls the ray position to collide with control point(s) on virtual objects displayed in the scene. While not explicitly discussed by Tang, a user could move their finger along a single axis of movement from the first location/position to the second location/position, e.g. the user could be resting their arm on a table, making fine control such as the claimed 1-dimensional movement more convenient. Further, the targeted/indicated second and third virtual objects in Mount’s grid of virtual objects provide visual feedback using Ohazama’s “growing” effect, i.e. the movement from the first location/position to the second location/position would change from targeting/growing a second virtual object in the set of virtual objects to targeting/growing a third virtual object in the set of virtual objects. In the interest of compact prosecution, Frees is cited for teaching the PRISM technique for enabling the user to precisely control movement along each dimension by adjusting their hand velocity along each dimension.) However, this limitation is taught by Frees (Frees, abstract, sections 1, 3-6, describes the PRISM technique, Precise and Rapid Interaction through Scaled Manipulation, for direct hand interactions in virtual environments. Frees, section 3, explains that the technique works by scaling movement of the hand/cursor separately in each dimension depending on the velocity of the hand in the respective dimension, where velocities below a minimum velocity threshold negate movement in the respective dimension, thereby allowing the user to limit movement of the hand/cursor to a single axis, and by extension, achieve increased precision of control, analogous to the above-noted scenario where the user could be resting their arm on a table, making fine control such as the claimed 1-dimensional movement more convenient. Frees, section 4.4, indicates that users agreed that the PRISM technique was an improvement over direct manipulation techniques.)
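The PRISM mechanism Frees describes, per-axis scaling of hand movement by that axis's velocity with sub-threshold movement negated, can be sketched directly; the thresholds and scaling curve below are illustrative assumptions, not Frees' actual constants.

```python
# Sketch of PRISM-style per-axis scaled manipulation: each axis of a
# hand displacement is scaled by its own velocity, and movement below a
# minimum velocity threshold is dropped entirely, confining slow,
# deliberate motion to a single axis.
def prism_step(hand_delta, dt, min_vel=0.02, scale_vel=0.5):
    """Scale each axis of a hand movement by that axis's velocity."""
    cursor_delta = []
    for d in hand_delta:
        v = abs(d) / dt
        if v < min_vel:            # below threshold: negate movement on this axis
            cursor_delta.append(0.0)
        elif v < scale_vel:        # slow: scale movement down for precision
            cursor_delta.append(d * (v / scale_vel))
        else:                      # fast: pass movement through 1:1
            cursor_delta.append(d)
    return cursor_delta

# Slow drift on y and z is suppressed; deliberate x movement passes through,
# yielding the single-axis restriction at issue in claim 7.
print(prism_step(hand_delta=[0.04, 0.0005, 0.0003], dt=0.033))  # [0.04, 0.0, 0.0]
```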
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fleischmann’s 3D user interface system, including Tang’s raycasting virtual object manipulation techniques, supporting Kapri’s mirrored 3D avatar technique, including Mount’s two-dimensional grid display of virtual objects, using Ohazama’s indicated icon growing visual feedback effect, operating as an interface with McPhee’s augmented reality messaging application(s), to include Frees’ PRISM technique for improved control of the virtual ray. In the modified system, as noted above, the PRISM technique would be used to scale movement of the hand/cursor separately in each dimension depending on the velocity of the hand in the respective dimension, where velocities below a minimum velocity threshold negate movement in the respective dimension, thereby allowing the user to limit movement of the hand/cursor to a single axis between the first and second position, as claimed.

Regarding claim 17, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 7 above.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-7, 9-17, 19, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 12,456,263 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because instant independent claims 1, 11, and 20 recite the combined limitations of patented independent claims 1, 10, and 18 and instant dependent claims 2, 9, 12, and 19, except for the patented independent-claim limitations requiring rendering the first object in a first scene and modifying the first scene to render the second scene, which are recited in instant dependent claims 2 and 12; this indicates that patented independent claims 1, 10, and 18 are within the scope of the corresponding instant independent claims. Further, instant dependent claims 3, 4, 5, 9, 10, 13, 14, 15, and 19 recite the same limitations as respective patented dependent claims 3, 4, 5, 7, 8, 12, 13, 14, and 17, and instant dependent claims 6, 7, 16, and 17 include, as the first claimed movement of claims 7 and 17, the movement recited in patented dependent claims 6 and 15, indicating those patented claims are within the scope of the corresponding instant claims.

Response to Arguments

Applicant's arguments, see page 12, filed 2/17/26, with respect to the rejection(s) of claim(s) 1-7, 9-17, 19, and 20 under 35 U.S.C. 103(a) in view of Fleischmann, Tang, Kapri, Mount, Ohazama, and Scott have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Fleischmann, Tang, Kapri, Mount, Ohazama, McPhee, and Scott.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT BADER, whose telephone number is (571) 270-3335. The examiner can normally be reached 11-7, M-F. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROBERT BADER/
Primary Examiner, Art Unit 2611
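The PRISM rationale in the rejection above describes a concrete algorithm: cursor motion is scaled independently on each axis according to the hand's velocity on that axis, and any axis whose velocity falls below a minimum threshold is frozen. The Python sketch below illustrates that behavior as the office action characterizes it; the function name, the threshold values, and the linear scaling between thresholds are illustrative assumptions, not Frees' actual constants or formulation.

# Illustrative sketch only. MIN_VELOCITY and SCALING_CONSTANT are assumed
# placeholder values; Frees' PRISM paper defines its own thresholds.
MIN_VELOCITY = 0.02      # m/s: below this, the axis is treated as noise
SCALING_CONSTANT = 0.5   # m/s: at or above this, motion maps 1:1

def prism_offset(hand_velocity, dt):
    """Per-frame cursor displacement from a (vx, vy, vz) hand velocity.

    Each axis is handled independently, so slow drift on two axes is
    zeroed out while deliberate motion on the third passes through,
    letting the user constrain the cursor to a single axis.
    """
    offset = []
    for v in hand_velocity:
        speed = abs(v)
        if speed < MIN_VELOCITY:          # unintended drift: no motion
            offset.append(0.0)
        elif speed < SCALING_CONSTANT:    # precise mode: scaled-down motion
            offset.append(v * (speed / SCALING_CONSTANT) * dt)
        else:                             # fast motion: direct 1:1 mapping
            offset.append(v * dt)
    return tuple(offset)

# A slow, mostly-x hand movement yields motion along x only:
print(prism_offset((0.30, 0.01, 0.015), dt=1 / 60))   # approx. (0.003, 0.0, 0.0)

Note that the per-axis independence is what produces the single-axis constraint the claims recite; a single global speed threshold would not.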

Prosecution Timeline

Sep 09, 2022
Application Filed
Mar 16, 2024
Non-Final Rejection — §103, §DP
Jun 20, 2024
Response Filed
Jun 26, 2024
Final Rejection — §103, §DP
Oct 01, 2024
Request for Continued Examination
Oct 02, 2024
Response after Non-Final Action
Nov 20, 2024
Non-Final Rejection — §103, §DP
Feb 26, 2025
Response Filed
Mar 03, 2025
Final Rejection — §103, §DP
Jun 02, 2025
Applicant Interview (Telephonic)
Jun 02, 2025
Examiner Interview Summary
Jun 06, 2025
Request for Continued Examination
Jun 10, 2025
Response after Non-Final Action
Jun 12, 2025
Non-Final Rejection — §103, §DP
Sep 16, 2025
Response Filed
Oct 14, 2025
Final Rejection — §103, §DP
Feb 04, 2026
Applicant Interview (Telephonic)
Feb 04, 2026
Examiner Interview Summary
Feb 17, 2026
Request for Continued Examination
Feb 22, 2026
Response after Non-Final Action
Mar 12, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586334
SYSTEMS AND METHODS FOR RECONSTRUCTING A THREE-DIMENSIONAL OBJECT FROM AN IMAGE
2y 5m to grant · Granted Mar 24, 2026
Patent 12586335
SYSTEMS AND METHODS FOR RECONSTRUCTING A THREE-DIMENSIONAL OBJECT FROM AN IMAGE
2y 5m to grant · Granted Mar 24, 2026
Patent 12541916
METHOD FOR ASSESSING THE PHYSICALLY BASED SIMULATION QUALITY OF A GLAZED OBJECT
2y 5m to grant · Granted Feb 03, 2026
Patent 12536728
SHADOW MAP BASED LATE STAGE REPROJECTION
2y 5m to grant · Granted Jan 27, 2026
Patent 12505615
GENERATING THREE-DIMENSIONAL MODELS USING MACHINE LEARNING MODELS
2y 5m to grant · Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

7-8
Expected OA Rounds
44%
Grant Probability
70%
With Interview (+26.4%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 393 resolved cases by this examiner. Grant probability derived from career allow rate.
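The 70% figure appears to be simple arithmetic on the two numbers above; a minimal check in Python, assuming (as the note suggests) that the interview lift is additive to the baseline grant probability:

# Assumption: "with interview" = baseline grant probability + additive lift.
baseline = 0.44    # grant probability (career allow rate)
lift = 0.264       # interview lift, in probability points
print(f"{baseline + lift:.0%}")   # -> 70%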
