Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“data set generation unit” in claim 10;
“data set determination unit” in claim 10;
“relevance determination unit” in claim 10; and
“motion analysis unit” in claim 16.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 4-10, and 13-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication No. 2023/0274660 to Torok et al. (hereinafter Torok).
For claim 1, Torok teaches a method for analyzing a user's motion (see, e.g., FIG. 3), the method comprising the steps of:
generating a motion data set for a motion of a user on the basis of at least one image of the motion of the user (see, e.g., pars. 95-100 and 102 and FIGS. 1a-b and 2-3, which teach providing data on a move of a trainee from a trainee video as a sequence of frames including poses defined by a set of joints);
determining a partial motion data set for a partial motion of the user by separating the motion data set by partial motions of the user (see, e.g., pars. 99-100 and 105-112 and FIGS. 1b and 2-4, which teach determining a set of matching/candidate frames from the trainee video, wherein the matching/candidate frames separate the sequence of the frames into distinguishable subsets of frames); and
determining a first relevance between the partial motion of the user and a reference partial motion by comparing the partial motion data set with a reference partial motion data set determined by separating a reference motion data set by reference partial motions (see, e.g., pars. 99-100, 102-103, 105, 107-113, 116-126, 129, and 131 and FIGS. 1b and 2-7, which teach determining a similarity between the matching/candidate trainee frames and the trainer keyframes, wherein the keyframes separate the sequence of the frames into distinguishable subsets of frames).
For claim 10, Torok as applied teaches a system for analyzing a user's motion (see, e.g., par. 95 and FIG. 1a), the system comprising:
a data set generation unit configured to generate a motion data set for a motion of a user on the basis of at least one image of the motion of the user (see, e.g., pars. 95-100 and 102 and FIGS. 1a-b and 2, which teach providing, via the processor and memory circuitry, data on a move of a trainee from a trainee video as a sequence of frames including poses defined by a set of joints);
a data set determination unit configured to determine a partial motion data set for a partial motion of the user by separating the motion data set by partial motions of the user (see, e.g., pars. 99-100 and 105-112 and FIGS. 1b and 2-4, which teach determining, via the processor and memory circuitry, a set of matching/candidate frames from the trainee video, wherein the matching/candidate frames separate the sequence of the frames into distinguishable subsets of frames); and
a relevance determination unit configured to determine a first relevance between the partial motion of the user and a reference partial motion by comparing the partial motion data set with a reference partial motion data set determined by separating a reference motion data set by reference partial motions (see, e.g., pars. 99-100, 102-103, 105, 107-113, 116-126, 129, and 131 and FIGS. 1b and 2-7, which teach determining, via the processor and memory circuitry, a similarity between the matching/candidate trainee frames and the trainer keyframes, wherein the keyframes separate the sequence of the frames into distinguishable subsets of frames).
For claims 4 and 13, Torok as applied discloses that in the step of determining the first relevance, the first relevance is determined by comparing body part orientation data extracted from the partial motion data set with reference body part orientation data extracted from the reference partial motion data set (see, e.g., pars. 116-123 and FIGS. 3, 5 and 6, which teach determining the similarity score by comparing body parts in a pose from the candidate/matching trainee frames with corresponding body parts in a pose from the trainer keyframes).
For claims 5 and 14, Torok as applied discloses that the reference partial motion associated with the partial motion data set is determined with reference to the first relevance (see, e.g., pars. 116-123 and FIGS. 3, 5 and 6, which teach that the angular differences between the trainer keyframes and the matching/candidate trainee frames are determined for the similarity score).
For claims 6 and 15, Torok as applied discloses that in the step of determining the first relevance, a second relevance between an overall motion of the user and a reference overall motion is further determined by comparing the motion data set and the reference motion data set (see, e.g., pars. 93, 99-103, 108, 113-114, 117-126, and 128 and FIG. 6, which teach aggregating similarity scores of the trainee video frames into an overall similarity score and a move performance score).
For claims 7 and 16, Torok as applied discloses:
generating motion analysis information with reference to at least one of the first relevance and the second relevance (see, e.g., pars. 108 and 129-131 and FIG. 7, which teach performing a similarity analysis based on the similarity scores).
For claims 8 and 17, Torok as applied discloses that the motion analysis information includes at least one of an evaluation score for each of the user's partial motions, an evaluation score for the user's overall motion, and an evaluation score for a sequence in which the partial motions are performed by the user (see, e.g., pars. 108 and 129-131 and FIG. 7, which teach that the similarity analysis includes analyzing the similarity scores).
For claim 9, Torok as applied discloses a non-transitory computer-readable recording medium having stored thereon a computer program for executing the method of Claim 1 (see, e.g., pars. 66, 92, 93, and 101 and the rejection of claim 1).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-3 and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Torok in view of U.S. Patent Application Publication No. 2022/0172478 to Sun et al. (hereinafter Sun).
For claims 2 and 11, while Torok as applied teaches that the at least one image includes two or more images (see, e.g., pars. 94-102, which teach that the trainee video includes two or more images/frames), it does not explicitly teach that the two or more images are acquired from two or more image sensors.
Sun in the analogous art teaches acquiring the video images from multiple imaging devices and synchronizing the video images using the joint coordinates in each of the video images (see, e.g., pars. 28-29 and FIG. 1 of Sun).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Torok to obtain and synchronize the trainee images from multiple cameras because doing so would facilitate the synchronization based on the movements in the image (see pars. 9-11 of Sun).
For claims 3 and 12, while Torok as applied teaches that the at least one image includes two or more images (see, e.g., pars. 94-102, which teach that the trainee video includes two or more images/frames), it does not explicitly teach that, in response to the two or more images being acquired from different sensors, reference coordinates respectively applied to the two or more images are synchronized.
Sun in the analogous art teaches acquiring the video images from multiple imaging devices and synchronizing the video images using the joint coordinates in each of the video images (see, e.g., pars. 28-29 and FIG. 1 of Sun).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Torok to obtain and synchronize the trainee images from multiple cameras because doing so would facilitate the synchronization based on the movements in the image (see pars. 9-11 of Sun).
Additional Citations
The following table lists several references that are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.
Citation: Rosson et al. (U.S. Pat. Pub. No. 2020/0167937)
Relevance: Describes a method of processing a sequence of video frames showing motion of a subject to compare the motion of the subject with a reference motion. In one embodiment, the method comprises storing at least one reference motion data frame defining a reference motion, each reference motion data frame corresponding to respective first and second reference video frames in a sequence of video frames showing the reference motion and comprising a plurality of optical flow vectors, each optical flow vector corresponding to a respective area segment defined in the first reference video frame and a corresponding area segment defined in the second reference video frame and defining optical flow between the two area segments. The method further comprises receiving a sequence of video frames to be processed and processing at least one pair of the received video frames to generate a motion data frame defining motion of a subject between the pair of received video frames. Each pair of received video frames is processed by, for each area segment of the reference video frames, determining a corresponding area segment in a first video frame of the pair and a corresponding area segment in a second video frame of the pair; for each determined pair of corresponding area segments, comparing the area segments and generating an optical flow vector defining optical flow between them; and generating a motion data frame for the pair of received video frames, the motion data frame comprising the optical flow vectors generated for the determined pairs of corresponding area segments.
The method further comprises comparing the at least one reference motion data frame defining the reference motion to the at least one generated motion data frame defining the motion of the subject and generating a similarity metric for the motion of the subject and the reference motion.

Citation: Kawai et al. (U.S. Pat. Pub. No. 2025/0029423)
Relevance: Describes an action evaluation system. In one embodiment, the system includes at least one memory storing instructions and at least one processor configured to execute the instructions to detect a predetermined unit action associated with a posture of a worker from skeleton data about a structure of a body of the worker extracted from image data obtained by capturing a series of work actions performed by the worker on a work object, determine whether or not the unit action is similar to a predetermined registration action, and output a result of the determination.

Citation: Matsunaga (U.S. Pat. Pub. No. 2022/0062702)
Relevance: Describes an information processing apparatus. In one embodiment, the apparatus includes a motion estimation section configured to analyze data recorded of motions of multiple users so as to estimate the motions, a tag addition section configured to add tag data regarding the motions to at least part of the recorded data, and a motion evaluation section configured to evaluate the motions by comparing the motions with reference motions on the basis of the tag data.

Table 1
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See Table 1 and Form 892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WOO RHIM, whose telephone number is (571) 272-6560. The examiner can normally be reached Monday through Friday, 9:30 am to 6:00 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WOO C RHIM/Examiner, Art Unit 2676