Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 27, 2026 has been entered.
Response to Arguments
Applicant’s arguments and amendment have persuasively overcome almost all of the 112(b) rejections. The remaining issues are addressed below.
112(a)
Applicant argues:
For example, "the interactive visual effects system 102 can receive additional images 122 that include in a first candidate pose 302 and a second candidate pose 304." Specification, [0031].
Examiner responds:
There is support for some of the images having a candidate pose, but the examiner has not identified support for the claimed “each,” which requires that all of the received images include a pose.
Applicant argues:
Decreasing the size of a visual effect rendered into an image is clearly "modifying an appearance of the visual effect."
Examiner responds:
“Modifying” is a genus of the various ways that the visual effect could be modified. 112(a) support for a genus requires supporting the whole genus, not just one species. MPEP 2163(II)(A)(3)(a)(ii) explains that claiming a genus requires support for a representative number of species. Here, changing the size is not representative of the full scope of ways that these effects may be modified.
112(b)
Applicant argues:
As such, the second object being defined as the same type as the object logically follows that it would have the same set of points, and thus comparing "a key joint" to "a corresponding key joint" would be comparing the same point on the two poses.
Examiner responds:
Reciting an objective standard, such as “is” or “the same,” would overcome this rejection. Here, however, different people may have different opinions as to whether, e.g., a left elbow joint “corresponds” to a right elbow joint.
101
Applicant argues:
The claims go beyond "mental processes," because they cannot practically be performed in the mind or "with pencil and paper."
Examiner responds:
The examiner believes that one could imagine pictures of posed objects, analyze the pictures, and then imagine visual effects on the pictures. For example, one could sketch a person as in Fig. 5 and then imagine the visual effects on the pictured person.
Applicant argues:
Generating a node architecture with a trigger that activates when the pose in one or more images matches the pose in an initial image is not something that a person can do in their mind or with a pencil and paper.
Examiner responds:
The examiner’s understanding is that the “nodes” are software modules. See, e.g., specification [0023] and [0024]. The nodes perform functions that one could do mentally.
Applicant argues:
Further, "rendering an output image" is not something that a person can do in their mind or with a pencil and paper.
Examiner responds:
Performing steps on a computer does not prevent them from being mental steps. The claims in TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) interacted with digital images in a more complex manner, but were still ineligible.
Applicant’s arguments regarding Challinor are addressed in the updated claim mapping below.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1, 8 and 15 recite “receiving one or more additional images that each include a candidate pose.” See the response to arguments.
Claims 1, 8 and 15 recite rendering an output image with a visual effect followed by modifying an appearance of the visual effect. See the response to arguments.
Dependent claims are likewise rejected.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1, 8, and 15 recite “points of interest,” but this is a subjective term. MPEP 2173.05(b)(IV). One way to overcome this rejection is to recite an objective standard, such as specifying which points qualify as points of interest. Another way to overcome this rejection is to show that “points of interest” is a term of art that has an accepted meaning. The examiner’s search suggests that this is not the situation.
Claims 1, 8, and 15 recite “node,” “node architecture” and “recognizer node,” but “node” is not being used in a manner consistent with its plain meaning. https://en.wikipedia.org/wiki/Node_(computer_science) states “A node is a basic unit of a data structure, such as a linked list or tree data structure. Nodes contain data and also may link to other nodes. Links between nodes are often implemented by pointers.” “Node” can also refer to a vertex in a graph (such as the circles shown in Figs. 1B and 1C of Challinor). But the claim does not identify what the node is a node of (the recited “node architecture” does not add clarity). Therefore, “node,” “node architecture” and “recognizer node” are new terminology. MPEP 2173.05(a).
Claims 1, 8, and 15 recite “a pose-based trigger that activates when poses matching the pose are detected.” However, “the pose” that is detected has antecedent basis in the pose of the object in the received image, not a pose that the trigger is configured to detect. This is indefinite because the specification describes looking for a pre-configured pose, as opposed to the literal meaning of the claim. Specification, [0024] and MPEP 2173.05(e).
Claims 3, 10 and 17 recite “corresponding,” but this is a subjective term. MPEP 2173.05(b)(IV).
Claims 3, 10 and 17 recite “a same type,” but this is indefinite because the claims do not define a list of types; one cannot determine whether two types are the same without knowing what the types are to begin with.
Dependent claims are likewise rejected.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 (all claims) are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.
Step 1: Claim 1 (and its dependents) recite a method, and processes are eligible subject matter. Claim 8 (and its dependents) recite a system, and machines are eligible subject matter. Claim 15 (and its dependents) recite a non-transitory computer-readable medium, and articles of manufacture are eligible subject matter.
Step 2A, prong one: All of the elements of claims 1-20 are a mental process. For example, one can look at a pose and decide if it matches another pose. Using a trained machine learning model, such as in claim 2, is a mental process as per example 47 (see claim 2, element d) from the July 2024 eligibility examples. (https://www.uspto.gov/sites/default/files/documents/2024-AI-SMEUpdateExamples47-49.pdf). MPEP 2106.04(a)(2)(III)(C) explains that use of a generic computer or in a computer environment is still a mental process. For example, claim 8’s memory component and processing device are generic computer components. Note that, as per the below examiner’s note, the visual effects are not entitled to weight, and thus fail to distinguish from a mental process.
Because there are no additional elements, no further analysis is required for Step 2A, prong two or Step 2B.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 (all claims) are rejected under 35 U.S.C. 102(a)(1) and/or (a)(2) as being anticipated by U.S. Pat. 9358456 (“Challinor”).
Examiner Note
MPEP 2111.05(III) directs “However, where the claim as a whole is directed to conveying a message or meaning to a human reader independent of the intended computer system, and/or the computer-readable medium merely serves as a support for information or data, no functional relationship exists.” Here, all of the claim elements related to “visual effects” are printed matter and thus not entitled to patentable weight. In the interest of compact prosecution, the Office Action has mapped them anyway.
The broadest reasonable interpretation of “location” includes interpreting it to be the entire screen, e.g., “the effect is located on the screen.”
1. A method comprising:
receiving an image including at least one object having a pose, the pose defined by an orientation and a position of one or more points of interest of the object within the image; (Challinor, claim 1 “receive information from a camera system reflecting a position of the first player in response to the first prompt”)
generating a set of key joint data for the object, wherein the key joint data includes the orientation and the position of the one or more points of interest of the object within a coordinate space; (Challinor, Figs. 1B and 1C)
creating a vector representation of the set of key joint data; (Challinor, Figs. 1B and 1C)
generating a node architecture based on a received selection of nodes, the node architecture including a recognizer node with a pose-based trigger that activates when poses matching the pose are detected; (Challinor, Figs. 1B and 1C. Figs. 4 and 5 show how the skeletons are tracked into dance moves.)
receiving one or more additional images that each include a candidate pose; (Challinor, claim 1 “receive information from the camera system reflecting a position of the second player in response to the second prompt”)
detecting, by the recognizer node, a match between one of the candidate poses in the one or more additional images and the pose by computing a similarity between the vector representation of the set of key joint data and respective vector representations for each of the candidate poses; and (Challinor, claim 1 “compare, by the machine, the input frame to the target frame to determine a comparison value”)
rendering an output image with a visual effect applied to a location of at least one key joint within coordinate space based on the match. (Challinor, Fig. 14. See also, 14:65-15:3, “To assist the user in completing moves correctly, per-limb feedback can be given to the user when performing a move. In some embodiments, if the user is not satisfying a filter for a limb, the game can render a red outline around the on-screen dancer's corresponding limb to demonstrate to the user where they need to make an adjustment.”)
2. The method of claim 1, wherein generating a set of key joint data for the object comprises:
applying a trained machine learning model to the image including the object, (Challinor, 4:5 “FIG. 1B depicts an example of a skeleton provided by a MICROSOFT KINECT” Microsoft Kinect uses machine learning, see, e.g., https://people.math.harvard.edu/archive/21a_fall_13/exhibits/mackormick/kinect.pdf, slide 4 [also attached as an appendix to this Office Action])
wherein applying the trained machine learning model comprises:
detecting a type of the object, the type indicating a set of points that are defined for the object; (Challinor, 4:5 “FIG. 1B depicts an example of a skeleton provided by a MICROSOFT KINECT” Challinor’s skeleton teaches the claimed type of the object.)
detecting a position and orientation for each point in the set of points; and (Challinor, Figs. 1B and 1C)
inserting the position and orientation of each point into the set of key joint data. (Challinor, Figs. 1B and 1C)
3. The method of claim 2, wherein detecting a match between one of the candidate poses in one or more additional images and the pose comprises:
for each candidate pose of the candidate poses (This claim reads on a single candidate pose)
comparing a key joint of the pose of the object within the image to a corresponding key joint of a candidate pose of a candidate object that is of a same type as the type of the object; and (Challinor, Fig. 12)
The examiner notes that while the above claim element recites “a candidate pose,” “the” may have been intended.
determining, based on the comparing, that the key joint of the pose matches the corresponding key joint of the candidate pose. (Challinor, Fig. 12)
4. The method of claim 3, wherein rendering the output image with the visual effect applied to the location of the at least one key joint within the coordinate space based on the match comprises:
selecting the visual effect for insertion into the image; and (Challinor, Fig. 14. Fig. 14 shows that the score is a visual effect, claim 2 states that the score is a result of the comparison value (as above, the comparison value teaches the claimed detecting a match))
in response to determining, based on the comparing, that key joint of the pose matches the corresponding key joint of the candidate pose, inserting the selected visual effect into the image. (Challinor, Fig. 14. Challinor’s increased score teaches the claimed selected visual effect.)
5. The method of claim 4, inserting the selected visual effect into the image comprises:
identifying at least one key joint of the pose where the visual effect is to be added; and (Challinor, Fig. 14. Challinor’s animated dancer teaches the claimed visual effect, the effects correspond to where the player is)
applying the visual effect at a position of the at least one key joint in the image. (Challinor, Fig. 14. Challinor’s animated dancer teaches the claimed visual effect)
6. The method of claim 1 further comprising:
receiving a second image including the object having an additional pose, the additional pose defined by an additional orientation and an additional position of the object within the image; (Challinor, Fig. 9, “Has the player performed the dance move four times?” 915)
generating an additional set of key joint data that represents the additional orientation and additional position of the one or more points of interest for the object; (Challinor, Fig. 9, “Capture the dance move with the camera and provide corresponding skeleton to game platform” 910. Challinor’s skeleton teaches the claimed key joint data, orientation, position and points of interest.)
creating a vector representation of the additional set of key joint data; (Challinor, Fig. 9, “Capture the dance move with the camera and provide corresponding skeleton to game platform” 910. Challinor’s skeleton teaches the claimed vector representation.)
in response to receiving the one or more additional images, detecting an occurrence of the pose at a first time; (Challinor, Fig. 9, second performance 925)
in response to receiving the one or more additional images, detecting an occurrence of the additional pose at a second time; and (Challinor, Fig. 9, third performance 925)
rendering an output image with a second visual effect based on the occurrence of the pose and the additional pose. (Challinor, 34:55-34:63 “In some embodiments, the game platform is configured (e.g., vis-à-vis computer source code) to create icons such as flashcards that can be used to instruct the second player how to perform the first player's dance move. As described more fully above (e.g., with respect to FIGS. 5-7), one or more icons can be displayed at one or more fixed or predetermined locations on the display. For example, the icons can be displayed as previous, current, and next dance move as described above.”)
7. The method of claim 6, wherein detecting an occurrence of the pose at a first time comprises comparing the vector representation of the additional set of key joint data to the set of key joint data that represents the orientation and position of one or more points of interest associated with the object. (Challinor, Fig. 9, “Are the three performances similar?”)
Claims 8-20 are rejected for the same rationale as their counterpart claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Pat. Pub. 20230010480 (“Li”) – Fig. 11 anticipates claim 1, see, e.g., [0086]-[0089]
U.S. Pat. 8241118 – titled “System for promoting physical activity employing virtual interactive arena”
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID ORANGE whose telephone number is (571)270-1799. The examiner can normally be reached Mon-Fri, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID ORANGE/Primary Examiner, Art Unit 2663