Prosecution Insights
Last updated: April 19, 2026
Application No. 18/317,802

KINEMATIC JOINT AND CHAIN PHYSICS TOOLS FOR AUGMENTED REALITY

Non-Final OA §103
Filed: May 15, 2023
Examiner: STATZ, BENJAMIN TOM
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 3 (Non-Final)
Grant Probability: 0% (At Risk)
OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 0%

Examiner Intelligence

Grants only 0% of cases
Career Allow Rate: 0% (0 granted / 2 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 35 total applications across all art units, 33 currently pending
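The headline figures above reduce to simple ratios. A minimal sketch of the arithmetic, assuming the Tech Center average allowance rate is 62.0% (a value back-derived here from the "-62.0% vs TC avg" figure, not stated directly on this page):

```python
# Career allow rate from the resolved-case counts shown above.
granted = 0
resolved = 2
allow_rate = 100.0 * granted / resolved  # 0.0%

# Delta versus the Tech Center average (62.0% is an assumption,
# inferred from the "-62.0% vs TC avg" figure on this page).
tc_avg = 62.0
delta_vs_tc = allow_rate - tc_avg  # -62.0

print(f"Career Allow Rate: {allow_rate:.1f}% ({delta_vs_tc:+.1f}% vs TC avg)")
```

With only 2 resolved cases, these percentages carry very wide error bars, which is why the page's own caption notes the small sample.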

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§103: 65.2% (+25.2% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 13.3% (-26.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 2 resolved cases

Office Action

§103
DETAILED ACTION

This office action is responsive to applicant’s amendment/response filed 12/09/2025. Claims 1-20 are currently pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant claims the benefit of US Provisional Application No. 63/493,296, filed 03/30/2023. Claims 1-20 have been afforded the benefit of this filing date.

Response to Arguments

Applicant’s arguments, see pages 10-11, filed 12/09/2025, with respect to the rejection(s) of claim(s) 1, 11, and 12 under 35 U.S.C. 103 have been fully considered and are persuasive. The amendments to the independent claims, including a more detailed description of the data structure of the chain model, have overcome the previous rejection. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Kanyuk et al. (US 20090309882 A1) and Zohar et al. (US 20230074826 A1), which teach the additional limitations included in the amended claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-4, 7-13, and 16-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yu et al. (US 20100289807 A1, hereinafter "Yu") in view of Kanyuk et al. 
(US 20090309882 A1, hereinafter "Kanyuk") and Zohar et al. (US 20230074826 A1, hereinafter "Zohar"). Regarding claim 1, Yu teaches: A computing device for kinematic simulation, the computing device comprising: a display (fig. 2, para. 0034 “The touch screen display 50 may be embodied as any known touch screen display.”); and a processor coupled to a storage system that stores instructions, which, upon execution by the processor (fig. 2, para. 0034 “the touch screen interface 54 may be embodied in software as instructions that are stored in the memory device 58 and executed by the processor 52”), cause the processor to: present a simulation interface comprising a plurality of graphical control elements, each graphical control element configured to adjust at least one physical parameter (fig. 3, para. 0047 “The touch screen interface (e.g., touch screen interface 54) of the apparatus may include a softness/rigidity icon 71, a frangibility icon 73, an elasticity icon 75, a fluidity icon 77, a hinges icon 79, a more icon 81 and an intensity adjustment icon 83 having a scroll bar 82. The icons represent different physical attributes that may be applied to one or more graphical elements to adjust the graphical elements (e.g., graphical objects). The icons 71, 73, 75, 77, 79 and 81 may be controlled and operated by the physical attribute controller 65 and icon 83 may be controlled by the degree control module 74. The scroll bar 82 of the icon 83 may adjust the intensity of a selected physical attribute.”); receive a selection from a user using the plurality of graphical control elements (fig. 3, para. 0048 “Each of the icons of FIG. 3 may be selected and activated by a stylus, finger, pen, pencil or any other pointing device coming into contact with a respective portion of the touch screen display (e.g., touch screen display 50) in a manner sufficient to register a touch.”); update a model based on the received selection (fig. 3, para. 
0051 “upon selection of one or more of the icons 71, 73, 75, 77, 79 and 81, the physical attribute controller 65 is capable of assigning corresponding physical attributes to a selected graphical element (e.g., graphical element 85)”); and display and animate the updated model on the display using a physics engine during a kinematic motion simulation (fig. 4-6, para. 0052-0063 describe the ability to perform interactive animated demonstrations of the updated physical properties of the graphical element, para. 0066 “The graphical object having the transferred physical attributes and properties may be used in an animation generated by the animations module 67. The animation may be part of a video clip, video game or any other entertainment media consisting of sequential sequences of images”), wherein the simulation interface enables adjustment of parameters for individual elements within the model (para. 0038 and 0051 teach that parameters are only assigned to objects which are currently selected). 
Yu does not explicitly teach the following limitations: that the computing device performs kinematic joint simulation, or that the processor is caused to present a chain simulation interface, update a chain model and display and animate the updated chain model; or that the interface enables adjustment of parameters within the chain model, or wherein the chain model is stored as a data structure comprising: a plurality of rigid body mesh models, each rigid body mesh model comprising: mesh geometry data; a first set of physical parameters; and information describing a relationship with a different rigid body mesh model; and a plurality of spring elements, each spring element comprising: a second set of physical parameters; and information describing two rigid body mesh models connected by the spring element; or wherein the chain model is arranged in a tree configuration such that edges of the tree configuration are represented by the plurality of spring elements, to thereby form a rigged skeleton, where movement of a parent rigid body mesh model during the kinematic motion simulation generates forces in at least one adjoining spring element that in turn induces movement of a dependent rigid body mesh model. Kanyuk teaches a chain model as part of a kinematic joint simulation (fig. 4), with parameters adjustable via an interface ([0029] “In certain aspects, these controls and others are provided to animators or other users as part of a graphical user interface. In general, filters can be applied to any of a translation control, rate control and/or acceleration control (including rotation, rotational rate and rotational acceleration controls).”), wherein the chain model is stored as a data structure ([0030] “FIG. 4 illustrates an embodiment for propagating reaction forces down nodes of a hierarchical or tree type structure, such as limbs of a character. 
As shown, a translational acceleration control (Tz) is used to control rotational reaction (e.g., about the x-axis) for various nodes of a structure or skeleton.”) comprising: a plurality of rigid body models ([0018] “…in one embodiment, a crowd pipeline is extended to incorporate rigid body dynamics (e.g., using ODE (the Open Dynamics Engine)), and implement a fuzzy logic-based spring physics system in a 3D animation system such as Massive, which was adapted to drive skeletal chains and key-frame animated motion cycles.”, suggesting that the skeletal elements of fig. 4 are represented as rigid bodies), each rigid body model comprising: a first set of physical parameters ([0030] “Many robots or other types of characters have limbs that would naturally react to forces differently from the body as a whole. Thus, in one embodiment, different signals and/or filters (e.g., width and magnitude/weighting) are applied to different body parts to adjust for different masses of the body parts.”); and information describing a relationship with a different rigid body model (fig. 4 shows connections between elements of a skeletal model; [0030] “FIG. 4 illustrates an embodiment for propagating reaction forces down nodes of a hierarchical or tree type structure, such as limbs of a character.”); and a plurality of spring elements ([0017] “In certain aspects, agents reactions to physical forces are modeled as springs.”; [0030] “Also, to model this, in one aspect, fuzzy logic springs (similar to the above springs) are applied to individual joint rotations…”), each spring element comprising: a second set of physical parameters ([0030] “…with varying amplitudes and frequencies.”); and information describing two rigid body models connected by the spring element (fig. 
4 shows joints represented as springs, where an individual joint connects two skeletal elements); wherein the chain model is arranged in a tree configuration such that edges of the tree configuration are represented by the plurality of spring elements, to thereby form a rigged skeleton, where movement of a parent rigid body model during the kinematic motion simulation generates forces in at least one adjoining spring element that in turn induces movement of a dependent rigid body model (fig. 4; [0030] “FIG. 4 illustrates an embodiment for propagating reaction forces down nodes of a hierarchical or tree type structure, such as limbs of a character. As shown, a translational acceleration control (Tz) is used to control rotational reaction (e.g., about the x-axis) for various nodes of a structure or skeleton. For example, for each successive node, larger filters, smaller amplitudes and/or longer delays can be used to process the signal as the skeletal chain is traversed. This causes the limbs to have overlapping and delayed motion relative to the root node of the agent/character.”). Yu and Kanyuk are analogous to the claimed invention because they are in the same field of producing a 3D physical/kinematic simulation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yu, comprising a graphical interface and physical simulation with modifiable properties, to incorporate the teachings of Kanyuk to include hierarchical structures consisting of rigid bodies connected with springs, including the ability to adjust the parameters of each element within the hierarchical structure. The motivation would have been to add the ability to more realistically simulate linked objects such as humanoid limbs, as taught by Kanyuk. 
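[Editor's sketch, not part of the Office Action: the claimed chain-model data structure discussed above, rigid body mesh models joined by spring elements arranged as edges of a tree, can be illustrated roughly as follows. All class and field names are invented for illustration and do not appear in the application.]

```python
from dataclasses import dataclass

@dataclass
class RigidBodyMesh:
    # A tree node: mesh geometry data, its own physical parameters,
    # and a relationship to a different rigid body mesh model (its parent).
    vertices: list                          # mesh geometry data (illustrative)
    params: dict                            # first set of physical parameters
    parent: "RigidBodyMesh | None" = None   # relationship to another body

@dataclass
class Spring:
    # A tree edge: its own parameter set plus the two bodies it connects.
    params: dict                            # second set, e.g. stiffness, damping
    a: "RigidBodyMesh"
    b: "RigidBodyMesh"

@dataclass
class ChainModel:
    # Tree configuration: bodies are nodes, springs are the edges, so
    # moving a parent body stretches a spring, which in turn applies
    # force to, and moves, the dependent body during simulation.
    bodies: list
    springs: list

# A minimal two-link chain (a "rigged skeleton" with one joint).
root = RigidBodyMesh(vertices=[], params={"mass": 1.0})
child = RigidBodyMesh(vertices=[], params={"mass": 0.5}, parent=root)
chain = ChainModel(bodies=[root, child],
                   springs=[Spring({"stiffness": 50.0, "damping": 0.9}, root, child)])
```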
The combination of Yu in view of Kanyuk does not explicitly teach that the rigid body models are mesh models, each rigid body mesh model comprising: mesh geometry data. Zohar teaches a physical simulation involving connections between mesh models comprising mesh geometry data and the interaction of the forces applied to them (fig. 9, [0058] “An external mesh deformation system 224 deforms an external mesh based on changes to a body mesh of an object depicted in an image or video in real time and based on an external force simulation model. In an example, the external mesh deformation system 224 deforms a first portion of the external mesh that is attached to or that overlaps a portion of the body mesh based on movement information associated with the body mesh. The external mesh deformation system 224 deforms a second portion of the external mesh that does not overlap any portion of the body mesh (e.g., that dangles freely in the air but is attached to the first portion) based on an external force simulation model.”). Zohar and the combination of Yu in view of Kanyuk are analogous to the claimed invention because they are in the same field of producing a 3D physical/kinematic simulation. Additionally, the use of mesh models to visually represent 3D graphical objects is well known in the art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yu in view of Kanyuk with the teachings of Zohar to visually depict the rigid bodies in the humanoid skeletal structure taught by Kanyuk using mesh models. The motivation would have been to incorporate them into the visual display of Yu. Regarding claim 2, the combination of Yu in view of Kanyuk and Zohar teaches: The computing device of claim 1, wherein the selection (Yu fig. 
3, [0048]) includes values for physical parameters of the spring elements (Yu [0049] teaches adjusting parameters that give a graphical element “spring-like properties”; Kanyuk [0030] “…to model this, in one aspect, fuzzy logic springs (similar to the above springs) are applied to individual joint rotations, with varying amplitudes and frequencies.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yu in view of Kanyuk and Zohar with the additional teachings of Kanyuk to add the ability to set the physical parameters of the spring elements in the simulated chain structures using the graphical interface. The motivation would have been to simulate the movement of humanoid characters with different sizes and/or weights. Regarding claim 3, the combination of Yu in view of Kanyuk and Zohar teaches: The computing device of claim 1, wherein the plurality of graphical control elements comprises a graphical control element configured to adjust a physical parameter selected from the group consisting of: stiffness, dampening, elasticity, and inertia (Yu para. 0047 “The touch screen interface (e.g., touch screen interface 54) of the apparatus may include a softness/rigidity icon 71, a frangibility icon 73, an elasticity icon 75, a fluidity icon 77, a hinges icon 79, a more icon 81 and an intensity adjustment icon 83 having a scroll bar 82. The icons represent different physical attributes that may be applied to one or more graphical elements to adjust the graphical elements (e.g., graphical objects). The icons 71, 73, 75, 77, 79 and 81 may be controlled and operated by the physical attribute controller 65 and icon 83 may be controlled by the degree control module 74. The scroll bar 82 of the icon 83 may adjust the intensity of a selected physical attribute”). 
Regarding claim 4, the combination of Yu in view of Kanyuk and Zohar teaches: The computing device of claim 1, further comprising a camera, wherein the chain model is displayed and animated on the display in real-time as an overlay on video data received from the camera (Zohar fig. 8, para. 0072 “As described above, augmentation data includes AR content items, overlays, image transformations, AR images, AR logos or emblems, and similar terms that refer to modifications that may be applied to image data (e.g., videos or images). This includes real-time modifications, which modify an image as it is captured using device sensors (e.g., one or multiple cameras) of a client device 102 and then displayed on a screen of the client device 102 with the modifications”, para. 0111 “The AR effect module 519 can receive a user input that selects a given AR graphic (e.g., an AR purse, AR necklace, AR cloth arm band, AR belt, and so forth) to add in real time to an underlying image or video. The external mesh module 530 can access a database and search the database for an external mesh associated with the given AR graphic. The external mesh module 530 can obtain placement information for the external mesh. The placement information can specify where to place the AR graphic in the image or video in relation to or relative to the real-world object and which portions of the AR graphic are deformed based on movement of the real-world object and which other portions of the AR graphic are deformed based on external force simulation (e.g., wind, collision, physics, gravity, and so forth)”). Yu, Kanyuk, and Zohar are all analogous to the claimed invention because they are in the same field of producing a 3D physical/kinematic simulation, including 3D models being affected by motion or external forces. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yu in view of Kanyuk and Zohar to incorporate the additional teachings of Zohar to add augmented reality capabilities to allow the animated physical simulation to be overlaid onto video footage in real time. The motivation would have been to create new applications for the invention, including being used in AR video games or instant messaging systems (Zohar para. 0002). Regarding claim 7, the combination of Yu in view of Kanyuk and Zohar teaches: The computing device of claim 1, wherein the plurality of graphical control elements comprises a graphical control element for indicating an anchor object to which a rigid body mesh model in the plurality of rigid body mesh models is relatively positioned (Zohar: User is able to select their choice of AR graphic to attach to a real-world object/person, where each AR graphic can be associated with a particular positioning; user may also select which real-world objects/people the AR graphic is attached to: para. 0029 “The camera displays one or more real-time images or a video to a user along with one or more icons or identifiers of one or more AR experiences. The user can select a given one of the identifiers to launch the corresponding AR experience or perform a desired image modification”, para. 0037 “The messaging client 104 can receive a user selection of an AR graphic to add to the image or video. The messaging client 104 can obtain an external mesh associated with the AR graphic. The AR graphic can represent a fashion accessory or other item that has a first portion attached to a depicted object, such as a person, and a second portion that hangs freely or dangles from the first portion of the depicted object”, para. 
0077 “A second mesh (also referred to as an external mesh) can be associated with an AR graphic or effect to be applied to the real-world object. The second mesh can be associated with placement information that specifies how the second mesh is placed or positioned in 3D space relative to the first mesh”, para. 0083 “In some examples, individual bodies/persons, among a group of multiple bodies/persons, may be individually modified, or such modifications may be individually toggled by tapping or selecting the individual body/person or a series of individual bodies/persons displayed within the graphical user interface”). Yu, Kanyuk, and Zohar are all analogous to the claimed invention because they are in the same field of producing a 3D physical/kinematic simulation, including 3D models being affected by motion or external forces. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yu in view of Kanyuk and Zohar to incorporate the additional teachings of Zohar to add augmented reality features including the ability to “attach” simulated 3D models to real-life objects in an image or video. The motivation would have been to create new applications for the invention, including being used in AR video games or instant messaging systems (Zohar para. 0002). Regarding claim 8, the combination of Yu in view of Kanyuk and Zohar teaches: The computing device of claim 7, further comprising a camera, wherein a location of the anchor object is determined by applying a trained machine learning model to video data received from the camera (Zohar, machine learning is used to generate a 3D model from a video to which an AR graphic can be attached: para. 0103 “The set of input data is obtained from one or more database(s) (FIG. 3) during the training phases and is obtained from an RGB camera of a client device 102 when an AR/VR application is being used”, para. 
0104 “The machine learning technique module 512 extracts one or more features from the given training image or video to estimate a 3D body mesh of the person(s) or user(s) depicted in the image or video”, para. 0111 “The external mesh module 530 can access a database and search the database for an external mesh associated with the given AR graphic. The external mesh module 530 can obtain placement information for the external mesh.”, para. 0112 “the placement information can specify an edge or body part of the 3D body mesh corresponding to the real-world graphic (e.g., a left arm, a right arm, a head, and so forth) that is attached to or overlaps the first portion(s) of the external mesh”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yu in view of Kanyuk and Zohar to incorporate the additional teachings of Zohar to use machine learning to implement the AR features discussed in claim 7. The motivation would have been to implement the most effective method of automating the operation of these features. Regarding claim 9, the combination of Yu in view of Kanyuk and Zohar teaches: The computing device of claim 7, wherein the anchor object is a body part of a person selected from the group consisting of: hand, face, eye, ear, nose, mouth, torso, arm, leg, foot, and buttocks of a user (Zohar para. 0037 “The messaging client 104 can receive a user selection of an AR graphic to add to the image or video. The messaging client 104 can obtain an external mesh associated with the AR graphic. The AR graphic can represent a fashion accessory or other item that has a first portion attached to a depicted object, such as a person, and a second portion that hangs freely or dangles from the first portion of the depicted object. 
For example, the AR graphic can represent earrings which are attached to a person's ears or other body part and have a portion that dangles freely in the air. For example, the AR graphic can represent a belt which wraps around a waist of a person's body or other body part and has a portion that dangles freely in the air. For example, the AR graphic can represent a hair tie which wraps around hair of a person's body or other body part and has a portion that dangles freely in the air. For example, the AR graphic can represent a fantasy item or object that is attached to a person's body or other body part (such as an AR tail, extra limbs, extra head, long fur, and so forth) and has a portion that dangles freely in the air. The AR graphic can represent bunny ears that are attached to the person at one end and dangle freely at another end. The AR graphic can represent a purse or handbag which has a strap that overlaps or is attached to a body of a person depicted in the image or video and that has a container portion (the purse) that dangles freely from the strap”, para. 0073 “Data and various systems using AR content items or other such transform systems to modify content using this data can thus involve detection of real-world objects (e.g., faces, hands, bodies, cats, dogs, surfaces, objects, etc.), tracking of such objects as they leave, enter, and move around the field of view in video frames, and the modification or transformation of such objects as they are tracked… In other examples, tracking of points on an object may be used to place an image or texture (which may be 2D or 3D) at the tracked position.”, para. 0111 “The AR effect module 519 can receive a user input that selects a given AR graphic (e.g., an AR purse, AR necklace, AR cloth arm band, AR belt, and so forth) to add in real time to an underlying image or video”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yu in view of Kanyuk and Zohar to incorporate the additional teachings of Zohar to focus the invention on attaching virtual personal accessories. The motivation would have been to apply it towards generating personalized graphics for social media, as taught by Zohar. Regarding claim 10, the combination of Yu in view of Kanyuk and Zohar teaches: The computing device of claim 7, wherein the chain model is an item selected from the group consisting of: earring, necklace, pendant, bracelet, necktie, wig, crown, hat, and hood (Zohar para. 0037 “For example, the AR graphic can represent earrings which are attached to a person's ears or other body part and have a portion that dangles freely in the air”, para. 0038 “similar techniques can be applied to any other AR items or article of clothing or fashion item, such as a dress, pants, shorts, skirts, jackets, t-shirts, blouses, glasses, jewelry, earrings, bunny ears, a hat, ear muffs, and so forth”, para. 0111 “The AR effect module 519 can receive a user input that selects a given AR graphic (e.g., an AR purse, AR necklace, AR cloth arm band, AR belt, and so forth) to add in real time to an underlying image or video”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yu in view of Kanyuk and Zohar to incorporate the additional teachings of Zohar to focus the invention on attaching virtual personal accessories. The motivation would have been to apply it towards generating personalized graphics for social media, as taught by Zohar. 
Regarding claims 11, 12, 13, 16, 17, 18, and 19, they are rejected using the same references, rationale, and motivation to combine as claims 1, 3, 4, 7, 8, 9, and 10 respectively because their limitations substantially correspond to the limitations of their respectively associated claims. Regarding claim 20, Yu teaches a mobile device for kinematic joint simulation (fig. 3, para. 0046 “FIG. 3 illustrates an exemplary embodiment of an apparatus having a touch screen interface and touch screen display according to an exemplary embodiment. In one exemplary embodiment, the apparatus of FIG. 3 may be employed on a mobile terminal (e.g., mobile terminal 10) capable of communication with other devices via a network.”). The remainder of the claim limitations substantially correspond to the limitations of claims 1-4; therefore, claim 20 is rejected using the same references, rationale, and motivation to combine as claims 1-4. Claim(s) 5 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Yu (US 20100289807 A1) in view of Kanyuk (US 20090309882 A1) and Zohar (US 20230074826 A1) as applied to claims 1 and 11 above, and further in view of Breton et al. (US 20100122243 A1, hereinafter "Breton"). Regarding claim 5, the combination of Yu in view of Kanyuk and Zohar teaches: The computing device of claim 1, but does not explicitly teach: wherein the plurality of graphical control elements comprises a graphical control element comprising a plurality of presets, wherein each preset corresponds to a predetermined selection for at least one of the plurality of graphical control elements. Breton teaches: wherein the plurality of graphical control elements comprises a graphical control element comprising a plurality of presets (fig. 3; [0025] “Each material class 110 may include a material definition 106 and one or more material presets 116”, para. 
0044 “The user may then select a specific preset 116 to apply to a construct within the graphics scene by selecting the thumbnail 126 associated with the preset 116 with a mouse or otherwise indicating with another user input device 140”), wherein each preset corresponds to a predetermined selection for at least one of the plurality of graphical control elements (fig. 5; [0025] “The material definition 106 includes provides a template having a global set of appearance attributes, referred to herein as "material parameters," that can be shared by a set of similar materials. A preset 116 is an "instance" of the material class 110, where each material parameter of the material class 110 is assigned a specific value… Additionally, the values of the material parameters associated with the preset 116 may be modified by a user of the rendering application 104, thereby adjusting the result of rendering the preset 116”). Breton and the combination of Yu in view of Kanyuk and Zohar are considered to be analogous to the claimed invention because they are in the same field of producing 3D graphical models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yu in view of Kanyuk and Zohar to incorporate the teachings of Breton to include a library of presets to allow the user to set multiple graphical parameters to predefined values simultaneously. The motivation would have been to improve convenience for the user, letting them quickly access various common settings, or granting a less experienced user access to groups of settings they know will work well. Regarding claim 14, it is rejected using the same references, rationale, and motivation to combine as claim 5 because its limitations substantially correspond to the limitations of claim 5. Claim(s) 6 and 15 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Yu (US 20100289807 A1) in view of Kanyuk (US 20090309882 A1) and Zohar (US 20230074826 A1) as applied to claims 1 and 11 above, and further in view of Hardman (Force Fields and Jump Pads, 2020). Regarding claim 6, the combination of Yu in view of Kanyuk and Zohar teaches: The computing device of claim 1, as well as a plurality of graphical control elements and a chain model as previously discussed in claim 1, and calculation of forces acting upon the chain model as also previously discussed in claim 1. The combination of Yu in view of Kanyuk and Zohar does not explicitly teach: wherein the plurality of graphical control elements comprises a graphical control element configured to indicate a force applied on the chain model and at least one graphical control element for indicating whether the force applied is in local space, world space, or relative to an object. Hardman teaches methods for specifying a force applied on a model (pg. 546 “We can change the force mode, the amount of force applied…”) and for indicating whether the force applied is in local space, world space (pg. 546 “…and whether the force is applied locally (Space.Self) or in world space (Space.World)”), or relative to an object (pg. 546 “We can make the force field affect only the player, only Rigidbodies, or both.”), in the context of programming physics interactions for a 3D video game. Hardman and the combination of Yu in view of Kanyuk and Zohar are analogous to the claimed invention because they are in the same field of applying physics to simulated 3D objects. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention of Yu in view of Kanyuk and Zohar to incorporate the teachings of Hardman to add additional graphical control elements to allow the user to control the generation of a force acting on the chain model, as well as to specify the relative coordinate system in which the force is applied. The motivation would have been to allow for more precise testing of the physical/kinematic simulation, expanding upon the interactive testing of Yu (fig. 4-6, para. 0052-0063), which is not quantifiable.

Regarding claim 15, it is rejected using the same references, rationale, and motivation to combine as claim 6 because its limitations substantially correspond to the limitations of claim 6.

References Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Bertails et al. ("Adaptive Wisp Tree: a multiresolution control structure for simulating dynamic clustering in hair motion." SCA '03: Proceedings of the 2003 ACM SIGGRAPH/Eurographics symposium on Computer animation (26 July 2003), pp. 207-213. https://dl.acm.org/doi/10.5555/846276.846305) teaches the use of a hierarchical tree structure comprising nodes connected with either simulated springs or rigid links.

Choe et al. ("Simulating complex hair with robust collision handling". SCA '05: Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer animation (29 July 2005), pp. 153-160. https://doi.org/10.1145/1073368.1073389) teaches a chain model for simulating the motion of hair, which includes rigid bodies connected by springs. Choe also teaches the possibility of incorporating the hierarchical tree structure of Bertails.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN STATZ whose telephone number is (571)272-6654. 
The examiner can normally be reached Mon-Fri 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at (571)272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BENJAMIN TOM STATZ/
Examiner, Art Unit 2611

/TAMMY PAIGE GODDARD/
Supervisory Patent Examiner, Art Unit 2611
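[Editor's sketch, not part of the Office Action: the Hardman teaching cited for claims 6 and 15, applying a force either in local space (Space.Self) or world space (Space.World), amounts to rotating a locally specified force by the body's orientation before applying it. A minimal 2D illustration, with all function and parameter names invented:]

```python
import math

def apply_force(velocity, force, heading=0.0, space="world", dt=1.0, mass=1.0):
    """Return the updated (vx, vy) after applying a force for one step.
    In 'local' space the force is first rotated by the body's heading
    (radians) into world coordinates; in 'world' space it is used as-is."""
    fx, fy = force
    if space == "local":
        c, s = math.cos(heading), math.sin(heading)
        fx, fy = fx * c - fy * s, fx * s + fy * c
    vx, vy = velocity
    return (vx + fx / mass * dt, vy + fy / mass * dt)

# A forward thrust (+x in local space) on a body heading 90 degrees
# pushes it along world +y, whereas the same force in world space
# pushes it along world +x.
v_world = apply_force((0.0, 0.0), (1.0, 0.0), space="world")
v_local = apply_force((0.0, 0.0), (1.0, 0.0), heading=math.pi / 2, space="local")
```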

Prosecution Timeline

May 15, 2023: Application Filed
May 24, 2025: Non-Final Rejection (§103)
Aug 12, 2025: Response Filed
Oct 22, 2025: Final Rejection (§103)
Dec 09, 2025: Request for Continued Examination
Jan 07, 2026: Response after Non-Final Action
Jan 23, 2026: Non-Final Rejection (§103) (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
