Prosecution Insights
Last updated: April 19, 2026
Application No. 18/690,577

VIRTUAL PROP DISPLAY METHOD AND APPARATUS

Status: Final Rejection (§103)
Filed: Mar 08, 2024
Examiner: AHMAD, NAUMAN UDDIN
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: Shanghai Hode Information Technology Co. Ltd.
OA Round: 2 (Final)

Grant Probability: 78% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 8m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allow Rate: 78% (28 granted / 36 resolved), +15.8% vs TC avg (above average)
Interview Lift: +19.8% on resolved cases with an interview vs without (strong, roughly +20%)
Typical Timeline: 2y 8m average prosecution; 31 applications currently pending
Career History: 67 total applications across all art units
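The headline allow rate above follows directly from the career counts; a one-line sanity check (a sketch, with the percent-rounding convention assumed):

```python
# Sanity check on the dashboard's headline figure: 28 granted out of
# 36 resolved cases. The rounding convention is an assumption.
granted, resolved = 28, 36
headline = f"{granted / resolved:.0%}"
print(headline)  # prints "78%", matching the Career Allow Rate above
```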

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§102: 4.1% (-35.9% vs TC avg)
§112: 15.8% (-24.2% vs TC avg)

Comparisons are against a Tech Center average estimate; based on career data from 36 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action is in response to Applicant's amendment filed 01/23/2026, which has been entered and made of record. Claims 1, 3, 5-7, 9-11, 13-14, 16 and 18-22 have been amended. No claim has been newly added. Claims 1-20 are pending in the application. Claims 4 and 17 have been cancelled. Applicant's amendments to the Claims have overcome each and every objection previously set forth in the Non-Final Office Action mailed October 27, 2025.

Response to Arguments

Applicant's arguments with respect to claims 1, 13 and 14 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument (the arguments are directed to newly amended limitations, which are addressed by new prior art presented in this Office Action).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6, 9-11, 13-14, 19-20, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (U.S. Patent Application Publication No. 2020/0094138), hereinafter referenced as Chen, in view of SEYREK PIERRE et al. (U.S. Patent Application Publication No. 2024/0029347), hereinafter referenced as SEYREK PIERRE, Mourkogiannis et al. (U.S. Patent Application Publication No. 2021/0409535), hereinafter referenced as Mourkogiannis, and Kezele et al. (U.S. Patent Application Publication No. 2016/0012643), hereinafter referenced as Kezele.

Regarding claim 1, Chen teaches a method of displaying virtual props, comprising: (the abstract teaches a "game picture display method and apparatus, a storage medium and an electronic device"); one of ordinary skill in the art would understand that a game picture comprises virtual props;

receiving a video stream, and recognizing a target video frame from the video stream (paragraph 54 and fig. 1, step 102 teach "camera module is called to perform image capture on a user according to the human body information capturing instruction to obtain a color image and a depth image of a continuous frame" and paragraph 55 teaches "camera module may be used to capture an image at a first shooting frequency, and detect whether a character (i.e., a user) appears in the image in real time"); the continuous frames captured in real time show a video stream, and the target video frame is the image captured of the user, which is consistent with the definition in applicant's disclosure on page 22, lines 7-8: "character image in the video frame is determined as the target video frame";

determining skeleton point information based on parsing the target video frame (paragraph 56 and fig. 1, step 103 teaches "During the image capture, skeleton point information of the user is determined according to the color image and the depth image."); the target is still the user/character as described above, and the skeleton point information is determined/generated by analyzing/parsing the frame;

wherein the preset pose information indicates a preset pose among the plurality of preset poses (paragraph 123 mentions a "preset starting posture", and the plurality follows from the combination below); and

displaying the 3D virtual prop in the target video frame based on the anchor matrix (paragraph 74 teaches "the game picture is displayed in the first window"); the game picture is inclusive of the prop (which would have to be displayed in a location derived from the anchor matrix of the below reference, Kezele, so that the prop is accurately displayed in the correct location).

However, Chen fails to explicitly teach generating target skeleton point information by converting the skeleton point information into three-dimensional (3D) skeleton point information, wherein the 3D skeleton point information comprises 3D coordinates of skeleton points; determining a virtual prop based on a data table in response to determining that the target skeleton point information conforms to preset pose information, wherein the virtual prop comprises a 3D virtual prop, wherein the data table comprises information associated with a plurality of virtual props and information indicative of a plurality of preset poses corresponding to the plurality of virtual props; obtaining information indicative of the 3D virtual prop and anchor information of the 3D virtual prop; and computing an anchor matrix based on the anchor information of the 3D virtual prop and the 3D coordinates of the skeleton points.

However, SEYREK PIERRE teaches generating target skeleton point information by converting the skeleton point information into three-dimensional (3D) skeleton point information (SEYREK PIERRE, paragraph 7 teaches "plurality of 2D skeletons are generated, and the calculation of an estimated 3D position for nodes in a 2D skeleton comprises selecting one or more pairs of 2D images obtained from respective pairs of the plurality of cameras, and for which respective 2D skeletons have been generated, and for selected pairs of 2D images, calculating 3D positions for nodes in a 3D skeleton for corresponding pairs of nodes in the corresponding 2D skeletons"); this shows converting the 2D to-be-processed skeleton point information to 3D skeleton point information (converted to target skeleton point information); wherein the 3D skeleton point information comprises 3D coordinates of skeleton points (SEYREK PIERRE, paragraph 12 teaches "determine the 3D coordinates of the nodes of a 3D skeleton from the estimated 3D positions of the nodes of the one or more 2D skeletons" and paragraph 69 teaches "after the 3D coordinates have been determined for each node in the 3D skeleton, the 3D skeleton as a whole can be generated"); this shows 3D coordinates of skeleton points included in the 3D skeleton point information.

SEYREK PIERRE is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of converting skeleton information into 3D. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Chen's invention with the skeleton-conversion-to-3D techniques of SEYREK PIERRE to improve recognition rates or make them more efficient (SEYREK PIERRE, paragraph 72). This would be due to the additional detail available in the third dimension.
However, the combination of Chen and SEYREK PIERRE fails to teach determining a virtual prop based on a data table in response to determining that the target skeleton point information conforms to preset pose information, wherein the virtual prop comprises a 3D virtual prop, wherein the data table comprises information associated with a plurality of virtual props and information indicative of a plurality of preset poses corresponding to the plurality of virtual props; obtaining information indicative of the 3D virtual prop and anchor information of the 3D virtual prop; and computing an anchor matrix based on the anchor information of the 3D virtual prop and the 3D coordinates of the skeleton points.

However, Mourkogiannis teaches determining a virtual prop based on a data table in response to determining that the target skeleton point information conforms to preset pose information (Mourkogiannis, paragraph 69 teaches "the avatar characteristics table 318 is configured to store attribute(s) in association with the avatar poses. For example, a particular avatar pose may be associated with one or more predefined words using metadata labels, designations, and the like. For example, the predefined words include a name of objects associated with the avatar pose"); this shows virtual objects associated with specific predefined/preset avatar poses, meaning the object would be determined from the characteristics table in correspondence to the specific pose. Additionally, this would also be in response to the target skeleton point information conforming to the preset pose information, because Chen, paragraph 123, teaches "The server determines a limb posture corresponding to the two-dimensional coordinates returned at a current moment, and judges whether the limb posture matches a preset starting posture", which shows conforming skeleton point information (or target skeleton point information from SEYREK PIERRE) to preset pose information that would have a virtual prop associated with it (per the above Mourkogiannis citation of an avatar pose associated with objects) drawn from the table of Mourkogiannis;

wherein the data table comprises information associated with a plurality of virtual props and information indicative of a plurality of preset poses corresponding to the plurality of virtual props (Mourkogiannis, paragraph 69 teaches "table 318 further provides for stores a set of available avatar poses. The set of avatar poses represent different types of predefined activities that a user of the client device 102 may be participating in. The avatar characteristics table 318 may store multiple avatar poses that are available for each predefined activity. Examples of predefined activities include, but are not limited to: reading, working, studying, socializing, eating/drinking (e.g., in general, or with respect to a specific type of food/drink), exercising, playing a sport (e.g., in general, or with respect to a specific type of sport), playing a game"); this shows preset/predefined poses being stored in the table, which would also contain the specific virtual props/objects relating to the activity that each pose corresponds to.

Mourkogiannis is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of associating virtual objects with poses and storing them in a table. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Chen and SEYREK PIERRE with the virtual object table techniques of Mourkogiannis to allow a user to create, edit and/or otherwise maintain a personalized avatar (Mourkogiannis, paragraph 41). This would add versatility and enhance user engagement.
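The claimed data table, a plurality of preset poses each mapped to a virtual prop, with the prop determined once a detected pose conforms, might look like the following minimal sketch. All pose names, prop names, and anchor labels here are hypothetical, not drawn from Mourkogiannis or the application.

```python
# Illustrative sketch of the claimed data table: preset poses mapped
# to 3D virtual props. Every entry below is a made-up example.
PROP_TABLE = {
    "arms_raised":   {"prop": "crown_3d", "anchor": "head"},
    "hands_on_hips": {"prop": "cape_3d",  "anchor": "shoulders"},
    "t_pose":        {"prop": "wings_3d", "anchor": "spine"},
}

def lookup_prop(detected_pose):
    """Return the prop entry for a detected preset pose, or None when
    the pose conforms to no entry in the table."""
    return PROP_TABLE.get(detected_pose)
```

A conforming pose thus determines both the prop to display and the anchor used downstream; a non-conforming pose yields no prop.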
However, the combination of Chen, SEYREK PIERRE and Mourkogiannis fails to explicitly teach wherein the virtual prop comprises a 3D virtual prop, and obtaining information indicative of the 3D virtual prop (although Chen, paragraph 159, teaches "the limb motion condition of the user may be analyzed according to the skeleton point information, related characters or props in an online game are controlled according to the limb motion condition") and anchor information of the 3D virtual prop; and computing an anchor matrix based on the anchor information of the 3D virtual prop and the 3D coordinates of the skeleton points.

However, Kezele teaches wherein the virtual prop comprises a 3D virtual prop, and obtaining information indicative of the 3D virtual prop and anchor information of the 3D virtual prop (Kezele, paragraph 66 teaches "virtual 3D object…coordinates of expected virtual 3D objects placement are assumed known from perceptual alignment (anchoring) with real 3D reference objects of known coordinates", paragraph 62 teaches "position of real world objects in the tracker coordinate system is obtained through a tracking mechanism that can be based on different types of tracking sensors (i.e. sensors and/or trackers), as listed above. Information from the tracking sensors is combined into a composite image as provided by a virtual camera. In essence, the virtual camera defines the field-of-view that defines the view of at least the virtual object provided in the HMD" and paragraph 133 teaches "position of the reference (anchoring) object is known through processing the data obtained from the tracking-sensor, in real-time."); this shows obtaining the position/information of the 3D virtual object/prop, as well as the anchor position/information, from tracking;

computing an anchor matrix based on the anchor information of the 3D virtual prop and the 3D coordinates of the skeleton points (Kezele, paragraph 112 teaches "calibration matrices. These matrices are consequently used to anchor the 3D virtual object to the 3D real, reference object."); this shows a matrix acting as an anchor matrix, and it is based on the position/information of the anchor of the 3D virtual object/prop, which sits at a 3D coordinate of the skeleton points when viewed in combination with the references above, since in that combination the prop sits on such a point.

Kezele is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of anchoring 3D digital props to certain locations. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Chen, SEYREK PIERRE and Mourkogiannis with the anchoring techniques of Kezele to ensure the calibration procedure is fast, simple, and user-friendly (Kezele, paragraph 112). This would ensure a more well-coordinated program/system.

Regarding claim 6, the combination of Chen, SEYREK PIERRE, Mourkogiannis, and Kezele teaches wherein the anchor matrix indicates coordinates of displaying the 3D virtual prop in the target video frame (Kezele, paragraph 13 teaches "modeling left and right calibration matrices (with intrinsic and extrinsic parameters modeled separately in separate calibration matrices) that define 3D-to-2D point correspondences between 3D coordinates of a real, reference object in a defined world coordinate system, and the 2D position of a corresponding virtual object in the left and right projected images of the OST HMD. The 3D pose of the virtual object that results from stereoscopic projection of the left and right 2D images that comprise the 3D virtual object is assumed known in the world coordinate system (due to prior 3D anchoring of the virtual object to its corresponding 3D reference object)"); this shows the aforementioned matrix indicates the coordinates of where to display the 3D virtual prop in the target video frame.
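The anchor-matrix computation recited in claims 1 and 6 admits a minimal sketch: a homogeneous transform built from a skeleton point's 3D coordinates and a per-prop anchor offset. The translation-only matrix form and the function name are illustrative assumptions; neither the claims nor Kezele fix this particular form.

```python
import numpy as np

def anchor_matrix(skeleton_xyz, offset_xyz):
    """Build a 4x4 homogeneous transform placing a 3D prop at a
    skeleton point plus a per-prop anchor offset. Rotation and scale
    are omitted for brevity in this sketch."""
    m = np.eye(4)
    m[:3, 3] = np.asarray(skeleton_xyz, dtype=float) + np.asarray(offset_xyz, dtype=float)
    return m
```

Multiplying this matrix by the prop's local origin in homogeneous coordinates yields the display coordinates of the prop in the frame, which is the role claim 6 assigns to the anchor matrix.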
The same motivations used in claim 1 apply here in claim 6.

Regarding claim 9, the combination of Chen, SEYREK PIERRE, Mourkogiannis, and Kezele teaches wherein the recognizing a target video frame in the video stream comprises: determining a preset recognition rule (Chen, paragraph 46 and fig. 1, step 101 teaches "101: A human body information capturing instruction sent by a server is received."); the human body information capturing instruction acts as a preset recognition rule, because one of ordinary skill in the art would understand that the instruction first recognizes/detects the human body and then captures it so that the right thing is captured, and this is a rule because capture occurs once the body is detected; and recognizing the target video frame based on the preset recognition rule, wherein the target video frame is a video frame in the video stream that conforms to the preset recognition rule (Chen, paragraph 54 and fig. 1, step 102 teach "camera module is called to perform image capture on a user according to the human body information capturing instruction to obtain a color image and a depth image of a continuous frame" and paragraph 55 teaches "camera module may be used to capture an image at a first shooting frequency, and detect whether a character (i.e., a user) appears in the image in real time"); the continuous frames captured in real time show a video frame that is processed in subsequent steps, and this is based on the user-detection rule from the step 101 instruction, meaning the target video frame here also conforms to the rule of having the user recognized.

Regarding claim 10, the combination of Chen, SEYREK PIERRE, Mourkogiannis, and Kezele teaches further comprising: parsing the target video frame to obtain a set of skeleton point information (Chen, paragraph 56 and fig. 1, step 103 teaches "During the image capture, skeleton point information of the user is determined according to the color image and the depth image." and paragraph 124 teaches "the server needs to parse the packed data first, restore two-dimensional coordinates of 17 skeleton points, and then determine the limb posture of the user according to a relative coordinate position between the skeleton points"); skeleton point information being obtained during capture shows parsing the target video frame, and a set of 17 points is obtained; wherein the set of skeleton point information comprises information indicative of locations of skeleton points in the target video frame (Chen, paragraph 182 teaches "skeleton point information mainly includes related information of the skeletal points, such as two-dimensional coordinates or spatial coordinates of the skeleton points, where the skeletal points may be key nodes of the human skeleton, which may be artificially set. For example, the skeleton point may include a head node representing the head, shoulder nodes representing the left and right shoulders respectively, a waist node representing the waist, and the like."); the skeleton information provides the skeleton points listed (with locations such as head, shoulder, etc.); determining to-be-processed skeleton point information among the set of skeleton point information based on the preset pose information (Chen, paragraph 124 teaches "When the limb posture is the same as the preset starting posture, it is indicated that the user is ready, and the game can be started"); the limb posture shows skeleton point information (among the set of skeleton point information) being determined/used (under the condition that it is the same as the preset starting posture), and this depends on/is based on the preset starting posture/preset pose information; and generating the target skeleton point information by converting the to-be-processed skeleton point information into 3D skeleton point information (Chen, paragraph 126 teaches "server may first judge whether the coordinate data is complete. If it is incomplete, such as missing the two-dimensional coordinates of a certain skeleton point, the missing skeleton points may be subjected to position prediction", and SEYREK PIERRE, paragraph 7 teaches "plurality of 2D skeletons are generated, and the calculation of an estimated 3D position for nodes in a 2D skeleton comprises selecting one or more pairs of 2D images obtained from respective pairs of the plurality of cameras, and for which respective 2D skeletons have been generated, and for selected pairs of 2D images, calculating 3D positions for nodes in a 3D skeleton for corresponding pairs of nodes in the corresponding 2D skeletons"); the position prediction shows generating the skeleton point information, and this also shows converting the 2D to-be-processed skeleton point information to 3D skeleton point information.

The same motivations used in claim 2 apply here in claim 10.

Regarding claim 11, the combination of Chen, SEYREK PIERRE, Mourkogiannis, and Kezele teaches wherein the method further comprises: in response to determining that the target skeleton point information does not conform to the preset pose information, iterating the recognizing the target video frame in the video stream (Chen, paragraph 96 teaches "In response to judging that the limb posture does not match the preset starting posture, two-dimensional coordinates of skeleton points returned at a next moment are acquired as two-dimensional coordinates of skeleton points at a current moment,"); the limb/target skeleton point information not matching the preset starting pose means the two do not conform, and in response to that, the next moment means iterating to the next frame in the video.

Regarding claim 13, the computing device of claim 13 recites similar limitations as method claim 1, and thus is rejected under similar rationale. In addition, Chen, fig. 10 teaches device 500 with memory 501 and processor 506 to execute instructions.

Regarding claim 14, the non-transitory computer-readable storage medium of claim 14 recites similar limitations as method claim 1, and thus is rejected under similar rationale. In addition, Chen, claim 20 teaches a "non-transitory computer-readable storage medium, having a plurality of instructions stored therein, the instructions being adapted to be loaded by a processor to perform".

Regarding claim 19, the computing device of claim 19 recites similar limitations as method claim 6, and thus is rejected under similar rationale.

Regarding claim 20, the computing device of claim 20 recites similar limitations as method claim 10, and thus is rejected under similar rationale.

Regarding claim 22, the non-transitory computer-readable storage medium of claim 22 recites similar limitations as method claim 10, and thus is rejected under similar rationale.

Claims 2-3, 7, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Chen, SEYREK PIERRE, Mourkogiannis, and Kezele as applied to claims 1 and 13 above, and further in view of Han et al. (U.S. Patent Application Publication No. 2020/0160613), hereinafter referenced as Han.

Regarding claim 2, the combination of Chen, SEYREK PIERRE, Mourkogiannis, and Kezele fails to teach wherein the method further comprises: calculating at least one of a target pose proportion or a target pose angle based on the target skeleton point information; and determining whether the target skeleton point information conforms to the preset pose information based on the at least one of the target pose proportion or the target pose angle.
However, Han teaches wherein the method further comprises: calculating at least one of a target pose proportion or a target pose angle based on the target skeleton point information (Han, paragraph 145 teaches "an error function for minimizing the distance between the template avatar and the 3D scan model of the user, and β is a weight therefor. E.sub.s is a function for minimizing the sum of the rotation angles of joints,"); here the scan model acts as the target skeleton point information from Chen, since it is scanned in, and the calculation of the angle is based on it, since there is an error function for minimizing the distance between the target skeleton point information/3D scan model and the template avatar/preset pose information; and determining whether the target skeleton point information conforms to the preset pose information based on the at least one of the target pose proportion or the target pose angle (Han, paragraph 205 teaches "apparatus for generating a 3D avatar may change the pose of the template avatar using the point-to-point correspondence and the skeleton information of the template avatar such that the pose of the template avatar matches the pose of the 3D scan model, thereby performing skeleton-based human pose registration." and paragraph 71 teaches "registration module 122 may optimize the joint angle between bones and the coordinates of joints in order to match the pose of the template avatar with the pose of the 3D scan model of the user."); this point-to-point correspondence is from the error function for minimizing distance (and rotation angles, so the matching occurs), and the two poses matching means a determination would be made that the target skeleton point information/3D scan model conforms to the preset pose information/template avatar; paragraph 71 specifically reiterates that the optimization and correspondence are done based on the joint angles.
Han is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of matching reference/template and imaged/scanned poses using information on bones, such as angles. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Chen, SEYREK PIERRE, Mourkogiannis, and Kezele with the pose angle and preset pose techniques of Han to improve the accuracy and completeness of a 3D avatar in which the body shape of a user is reflected (Han, paragraph 61). This means a better user experience due to more accurate avatar (and prop) creation.

Regarding claim 3, the combination of Chen, SEYREK PIERRE, Mourkogiannis, Kezele and Han teaches further comprising determining target skeleton vectors based on the target skeleton point information (Han, paragraph 173 teaches "head circumference, a chest measurement, or the like, may be calculated using a plane defined with a point, corresponding to a position defined in the skeleton information of the 3D avatar, and with a normal vector defined with the orientation of the skeleton. Also, the ratio of the length of an arm to the length of a leg, the ratio of a chest measurement to a hip circumference, and the like may be calculated using the skeleton information"); taking the length of an arm and the length of a leg shows first determining two target skeleton vectors [consistent with the definition in applicant's disclosure, para. [0073]: "The target skeleton vector is a skeleton length between skeleton points obtained through calculation based on the target skeleton point information"], and this is a subsequent step to/based on the target skeleton point information;

calculating at least one of a target pose proportion or a target pose angle based on the target skeleton vectors (Han, paragraph 173 teaches "the ratio of the length of an arm to the length of a leg, the ratio of a chest measurement to a hip circumference, and the like may be calculated using the skeleton information, and the volume of the user avatar may be determined using the vertex information"); the ratio of arm to leg shows a target pose proportion, calculated in a subsequent step to/based on the target skeleton vectors (thus using information from them);

determining whether the target pose proportion conforms to pose proportion information (Han, paragraph 173 teaches "the body shape information of the 3D avatar may be automatically calculated based on the skeleton information of the 3D avatar, whereby consistent body shape information of each user may be maintained. Also, the body shape information of the 3D avatar may be used to track changes in the body shape of the user over time."); for a consistent shape to be maintained, it would have to be determined that the target pose proportion (the ratio of leg to arm from above) conforms to the pose proportion information (changes in the body shape of the user over time and lengths derived from such);

and determining whether the target pose angle conforms to pose angle information (Han, paragraph 82 teaches "local non-rigid registration module 123 adjusts the position of each of the vertices of the template avatar M using an affine transformation matrix, thereby minimizing the distance from the 3D scan model D of the user", paragraph 83 teaches "local non-rigid registration module 123 may define an error function as shown in Equation (6)" and paragraph 84 teaches "Equation (6)...an error function for minimizing the distance between the template avatar and the 3D scan model of the user, and β denotes a weight therefor. E.sub.s denotes a requirement for maintaining the basic shape of the template avatar."; figs. 7-10 also visualize this); the affine transformation shows the angle being adjusted, and this is done for minimizing distance/aligning, meaning it would have to be determined when the target pose angle (the template avatar rotation/angle from the affine transformation) conforms to the pose angle information (the 3D scan model's angle/rotation). In addition, figs. 7-10 visualize this, since rotating and transforming (as in the affine transformation) includes the angles, and that is the minimization occurring; when the poses align, it is determined that the angle conforms to the pose angle information (the pose angle has to be conforming according to the figures).

The same motivations used in claim 2 apply here in claim 3.
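The claim 2/3 logic, deriving skeleton vectors, then a pose proportion and a pose angle, then checking conformance against preset values, can be sketched as follows. The joint names, the arm/leg proportion choice, and the tolerances are illustrative assumptions, not taken from Han or the application.

```python
import math

def skeleton_vector(a, b):
    """Vector from skeleton point a to skeleton point b (3D tuples)."""
    return tuple(bi - ai for ai, bi in zip(a, b))

def _norm(v):
    return math.sqrt(sum(c * c for c in v))

def pose_conforms(points, preset, prop_tol=0.1, angle_tol=15.0):
    """Compute a limb-length proportion and a joint angle from target
    skeleton vectors and compare each against preset pose information.
    All names and tolerances are hypothetical examples."""
    arm = skeleton_vector(points["shoulder"], points["wrist"])
    leg = skeleton_vector(points["hip"], points["ankle"])
    proportion = _norm(arm) / _norm(leg)  # target pose proportion
    cos_a = sum(x * y for x, y in zip(arm, leg)) / (_norm(arm) * _norm(leg))
    # Clamp before acos to guard against floating-point overshoot.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))  # target pose angle
    return (abs(proportion - preset["proportion"]) <= prop_tol
            and abs(angle - preset["angle"]) <= angle_tol)
```

Both checks must pass for the target skeleton point information to be treated as conforming to the preset pose.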
Regarding claim 7, the combination of Chen, SEYREK PIERRE, Mourkogiannis, Kezele and Han teaches wherein before the receiving the video stream, the method further comprises: determining the at least one of the pose angle information or the pose proportion information (Han, paragraph 106 teaches "the avatar generation module 130 may automatically measure the volume of the body, the ratio between the body parts, and the circumferences of the body parts from the consistent body shape based on the skeleton."); ratio between body parts being measured shows pose proportion information determined and this happens before receiving the to-be-processed video stream because this is done using techniques such as in Han paragraphs 11-14, such as “matching the 3D scan model and the previously stored template avatar” (Han, paragraph 11), which means previously stored information would come prior to the current video stream to-be-processed is received (since previously stored is already stored from last time); and generating the preset pose information based on the at least one of the pose angle information or the pose proportion information (paragraph 26 teaches "processors may change the pose of the template avatar using the point-to-point correspondence and the skeleton information of the template avatar such that the pose of the template avatar matches the pose of the 3D scan model.", paragraph 81 teaches "adjusts the position of each of the vertices of the template avatar using an affine transformation matrix for each vertex", paragraph 139 teaches "template avatar may include the correlation between each predefined bone and each predefined vertex." 
and paragraph 140 teaches "because rigging is applied to the template avatar, a change in the pose of bones may cause a change in the pose of the template avatar."); since template avatar is preset pose information as aforementioned in claim 1, this shows generating preset pose information (the changed template avatar) based on pose angle (affine transformation) as well as pose proportion information (correlation between each bone and vertices). The same motivations used in claim 2 apply here in claim 7. Regarding claim 16, the computing device claim 16 recites similar limitations as method claim 3, and thus is rejected under similar rationale. Claim(s) 5, 8, 18, 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Chen, SEYREK PIERRE, Mourkogiannis, and Kezele as applied to claims 1 and 13-14 above, and further in view of Cheng et al. (U.S. Patent Application Publication No. 2022/0322775), hereinafter referenced as Cheng. Regarding claim 5, the combination of Chen, SEYREK PIERRE, Mourkogiannis, and Kezele fails to teach further comprising: determining the anchor information of the 3D virtual prop based on a preset virtual prop anchor of the 3D virtual prop and the target skeleton point information, wherein the preset virtual prop anchor of the 3D virtual prop indicates a center point of the 3D virtual prop in the preset pose, and wherein the anchor information of the 3D virtual prop comprise skeleton point location information and offset information for display of a virtual prop anchor of the 3D virtual prop in the target video frame; However, Cheng teaches further comprising: determining the anchor information of the 3D virtual prop based on a preset virtual prop anchor of the 3D virtual prop and the target skeleton point information, (Cheng, paragraph 167 teaches "the trunk part overwriting unit 1702 performs the overwriting...or in a manner that the fourth preset boundary and the fourth preset boundary center of the 
to-be-tried-on apparel data of the trunk part of the two-dimensional human body basic posture coincide respectively with the fourth preset boundary and the fourth preset boundary center of the original apparel data of the trunk part of the two-dimensional human body posture"); this shows the boundary center of the clothes to be tried on aligning with the boundary center of the trunk of the posture, which, when viewed in combination, shows determining the anchor information of the virtual prop/clothes as the center, based on preset center information and trunk/skeleton point information;

wherein the preset virtual prop anchor of the 3D virtual prop indicates a center point of the 3D virtual prop in the preset pose (Cheng, paragraph 167 teaches "center of the to-be-tried-on apparel data of the trunk part of the two-dimensional human body basic posture coincide respectively with the fourth preset boundary and the fourth preset boundary center of the original apparel data of the trunk part of the two-dimensional human body posture"); this shows the center point of the clothing/virtual prop in the pose serving as the preset virtual prop anchor;

and wherein the anchor information of the 3D virtual prop comprise skeleton point location information and offset information for display of a virtual prop anchor of the 3D virtual prop in the target video frame (Cheng, paragraph 60 teaches "joint points are calibrated on the human body posture model, and a human body boundary range is delimited on the human body posture model, so as to obtain a two-dimensional human body posture of the fitted human body containing position information of skeleton points, i.e. the data of the projection of the three-dimensional human body posture on the second plane." and paragraph 97 teaches "two-dimensional human body basic posture containing the to-be-tried-on apparel data is adjusted to be consistent with a posture of the arm part"); this shows that the aforementioned anchor would be the skeleton point location, and the adjustment shows the offset information for the anchor where the clothes/virtual prop is attached; this also correlates to the trunk part from paragraph 104 being the preset virtual prop anchor, because the total center is the center and the arm part is rotated for the offset/adjustment.

Cheng is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of using a center point as an anchor for a virtual object on a pose of a human. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Chen, SEYREK PIERRE, Mourkogiannis, and Kezele with the anchor techniques of Cheng to further improve the authenticity of virtual fitting (Cheng, paragraph 167).

Regarding claim 18, computing device claim 18 recites similar limitations as method claim 5, and is thus rejected under similar rationale.

Regarding claim 21, non-transitory computer-readable storage medium claim 21 recites similar limitations as method claim 5, and is thus rejected under similar rationale.
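To make the claim 5 anchor mapping concrete, the scheme at issue (a prop anchor expressed as a skeleton point location plus an offset, used to place the prop's center in the target video frame) can be sketched as follows. This is a minimal hypothetical illustration; the joint names, the 2D pixel coordinates, and the dictionary layout are the editor's assumptions, not taken from the claims or the cited references:

```python
# Hypothetical sketch: place a virtual prop in a video frame by anchoring
# its preset center point to a target skeleton point plus a preset offset.

def place_prop(skeleton_points, anchor_cfg):
    """skeleton_points: joint name -> (x, y) pixel location in the frame.
    anchor_cfg: preset anchor info: which skeleton point the prop center
    is bound to, and the center's offset relative to that point."""
    jx, jy = skeleton_points[anchor_cfg["joint"]]
    dx, dy = anchor_cfg["offset"]
    # Prop anchor location in the target frame = joint location + offset.
    return (jx + dx, jy + dy)

skeleton = {"trunk_center": (320, 240), "left_wrist": (180, 300)}
cfg = {"joint": "trunk_center", "offset": (0, -20)}  # center 20 px above trunk
print(place_prop(skeleton, cfg))  # (320, 220)
```

The point of the sketch is only that the displayed anchor combines two pieces of preset information (a skeleton point identity and an offset), which is the structure the rejection reads onto Cheng's boundary-center alignment.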
Regarding claim 8, the combination of Chen, SEYREK PIERRE, Mourkogiannis, Kezele and Cheng teaches wherein the method further comprises pre-generating the virtual prop information of the virtual prop corresponding to the preset pose information (Cheng, paragraph 74 teaches "The cloth physical properties of to-be-tried-on apparel and the original apparel worn by the fitted human body are known pre-prepared data."); this shows that the apparel/virtual prop to be tried on is pre-generated, and it corresponds to the preset pose since it is fit onto the human body basic posture in later steps as described below;

and wherein the pre-generating the virtual prop information of the virtual prop corresponding to the preset pose information comprises: obtaining virtual prop model information of the virtual prop and a virtual prop anchor of the virtual prop (Cheng, paragraph 74 teaches "Assigning the cloth physical property of apparel may be performed, for example, by receiving an instruction of a user. For example, the user inputs an instruction by clicking. For another example, the user encodes the cloth physical property of apparel in advance, so that the instruction can be input by entering the number of the cloth physical property of apparel)." and paragraph 76 teaches "finally the cloth physical property of to-be-tried-on apparel is assigned to the portion of each preset body part of the two-dimensional human body posture covered by original apparel"); the user encoding the physical property as a received instruction shows obtaining the virtual prop model information of the virtual prop, and assigning a portion of the apparel to the body posture shows the anchor of the virtual prop/apparel;

configuring virtual prop anchor information of the virtual prop anchor relative to skeleton points based on the preset pose information (Cheng, paragraph 167 teaches "center of the to-be-tried-on apparel data of the trunk part of the two-dimensional human body basic posture coincide respectively with the fourth preset boundary and the fourth preset boundary center of the original apparel data of the trunk part of the two-dimensional human body posture"); this shows the center point of the clothing/virtual prop in the pose serving as the virtual prop anchor information, and this human body posture would have skeleton points that are based on the preset pose information from Chen when viewed in combination;

and generating the virtual prop information corresponding to the virtual prop based on the virtual prop model information and the virtual prop anchor information (Cheng, paragraph 76 teaches "It is only necessary to identify the two-dimensional human body posture of the fitted human body, and the portion of each preset body part of the two-dimensional human body posture covered by original apparel is directly assigned with the cloth physical property of to-be-tried-on apparel"); covering the original apparel with the to-be-tried-on apparel means the to-be-tried-on apparel is generated, and this is based on the previous steps mentioned, thus based on the virtual prop model information and the virtual prop anchor information. The same motivations used in claim 5 apply here in claim 8.
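The "pre-generating" step mapped in claim 8 amounts to bundling a prop's model data with anchor information configured against a preset (template) pose before any video stream is processed. A minimal hypothetical sketch of that data flow follows; all names, fields, and the example joint set are the editor's assumptions for illustration, not the application's or Cheng's actual structures:

```python
# Hypothetical sketch of pre-generating virtual prop information:
# bundle prop model data with anchor info configured relative to a
# skeleton point of a preset pose, ahead of video processing.

from dataclasses import dataclass

@dataclass
class PropAnchor:
    joint: str     # skeleton point of the preset pose the anchor binds to
    offset: tuple  # offset of the prop's center from that skeleton point

@dataclass
class VirtualProp:
    model: dict        # mesh / material data (placeholder)
    anchor: PropAnchor

def pregenerate_prop(model, preset_pose):
    # Configure the anchor relative to a skeleton point of the preset
    # pose; here the prop center is simply bound to the trunk joint.
    anchor = PropAnchor(joint="trunk_center", offset=(0, 0))
    assert anchor.joint in preset_pose  # anchor must name a real joint
    return VirtualProp(model=model, anchor=anchor)

preset_pose = {"trunk_center": (0, 0), "left_wrist": (-1, 2)}
prop = pregenerate_prop({"mesh": "shirt.obj"}, preset_pose)
print(prop.anchor.joint)  # trunk_center
```

Because the anchor is expressed relative to the preset pose's skeleton points, the pre-generated bundle can later be resolved against the skeleton points detected in any incoming frame.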
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. McNamara et al. (U.S. Patent Application Publication No. 2010/0271368), paragraphs 14 and 17-18, teaches "reference array…scan anchor location for the 3D scan; [0018] (f) on the basis of the reference data, determining an anchoring transformation for applying the 3D scan to a virtual object in the virtual environment such that the scan anchor location is fixed with respect to a corresponding object anchor"; this shows an array/matrix for anchoring a 3D virtual object at an anchor location.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAUMAN U AHMAD, whose telephone number is (703) 756-5306. The examiner can normally be reached Monday - Friday, 9:00am - 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611
/N.U.A./
Examiner, Art Unit 2611

Prosecution Timeline

Mar 08, 2024
Application Filed
Oct 23, 2025
Non-Final Rejection — §103
Jan 23, 2026
Response Filed
Mar 04, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592036
BLENDING ELEVATION DATA INTO A SEAMLESS HEIGHTFIELD
2y 5m to grant Granted Mar 31, 2026
Patent 12530807
METHODS AND SYSTEMS FOR COMPRESSING DIGITAL ELEVATION MODEL DATA
2y 5m to grant Granted Jan 20, 2026
Patent 12518472
DEFORMABLE NEURAL RADIANCE FIELDS
2y 5m to grant Granted Jan 06, 2026
Patent 12518482
VIRTUAL REPRESENTATIVE CONDITIONING SYSTEM
2y 5m to grant Granted Jan 06, 2026
Patent 12505601
CONTENT DISPLAY CONTROL DEVICE, CONTENT DISPLAY CONTROL METHOD, AND STORAGE MEDIUM STORING CONTENT DISPLAY CONTROL PROGRAM
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
78%
Grant Probability
98%
With Interview (+19.8%)
2y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
