DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are present for examination.
Claim Objections
Claims 1, 8, 19, and 20 are objected to because of the following informalities:
Claims 1, 19, and 20: “the consistency” should be “a consistency”.
Claim 8: “either of both of” should be “either or both of”. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-14, 19, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
MPEP 2106 III provides a flowchart for the subject matter eligibility test for products and processes. The claim analysis following the flowchart is as follows:
Regarding claim 1, it recites:
A computer implemented method comprising:
receiving motion capture data comprising a plurality of data points representing the positions of one or more objects over a period of time;
cleaning the motion capture data so as to ensure the consistency of the cleaned motion capture data with a physical constraint;
converting the cleaned motion capture data into a relational format; and
outputting the converted motion capture data.
Step 1: Is the claim to a process, machine, manufacture or composition of matter?
Yes. It recites a method, which is a process.
Step 2A, Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes.
The step of cleaning the motion capture data so as to ensure the consistency of the cleaned motion capture data with a physical constraint is a mathematical concept because it involves a mathematical relationship between the motion capture data and a physical constraint to ensure the consistency between the two.
The step of converting the cleaned motion capture data into a relational format is a mathematical concept because it involves mathematical operations of changing data format.
Step 2A, Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application?
No.
The step of receiving motion capture data comprising a plurality of data points representing the positions of one or more objects over a period of time is an additional element but merely insignificant extra-solution activity of data gathering.
The step of outputting the converted motion capture data is an additional element but merely insignificant extra-solution activity of outputting data.
Therefore, this judicial exception is not integrated into a practical application because the additional elements in the claim are merely insignificant extra-solution activity.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No.
As discussed above, the additional elements recited in the claim amount to merely insignificant extra-solution activity.
Therefore, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Therefore, claim 1 is not directed to patent-eligible subject matter under 35 U.S.C. 101.
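For illustration only, the steps recited in claim 1 can be sketched in code; the particular physical constraint (a maximum speed), the function names, and the values below are hypothetical and are not taken from the application or the prior art:

```python
MAX_SPEED = 12.0  # hypothetical physical constraint: maximum speed

def clean(points, dt=1.0):
    """Ensure consistency with the constraint: a point implying motion
    faster than MAX_SPEED breaches it and is replaced by the prior value."""
    cleaned = [points[0]]
    for p in points[1:]:
        prev = cleaned[-1]
        if abs(p - prev) / dt <= MAX_SPEED:  # consistent with the constraint
            cleaned.append(p)
        else:                                # breach: replace the data point
            cleaned.append(prev)
    return cleaned

def to_relational(points):
    """Convert the cleaned data into relational (frame, position) rows."""
    return [(frame, pos) for frame, pos in enumerate(points)]

def process(points):
    # receive -> clean -> convert -> output (here, return) the data
    return to_relational(clean(points))
```

As the analysis above notes, each of these operations reduces to a mathematical relationship or data-format conversion on the data points themselves.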
Regarding claim 2, it depends from claim 1 and further recites wherein cleaning the motion capture data comprises: identifying one or more objects represented by one or more data points of the motion capture data; identifying a physical constraint associated with the identified objects; determining whether any of the data points representing the identified objects breach the identified physical constraint; and performing one or more cleaning operations on any data points that are determined to breach the identified physical constraint.
The steps of identifying and determining are mental processes because a person can perform them mentally. The step of performing one or more cleaning operations on any data points that are determined to breach the identified physical constraint can be considered a mathematical concept of removing data points from a data collection under a mathematical condition (such as breaching the identified physical constraint).
Therefore, it does not recite additional elements that can integrate the abstract ideas into a practical application or amount to significantly more.
Regarding claim 3, it depends from claim 2 and further recites wherein the one or more cleaning operations comprises, for each data point that breaches the identified physical constraint, deleting the data point, replacing the data point, or adjusting the data point such that the adjusted data point no longer breaches the identified physical constraint.
The steps of deleting, replacing, or adjusting data points can be considered a mathematical concept because they involve mathematical operations of deleting or replacing a data point from a mathematical data set/collection or changing the data point value to be consistent with the physical constraint.
Therefore, it does not recite additional elements that can integrate the abstract ideas into a practical application or amount to significantly more.
Regarding claim 4, it depends from claim 2 and further recites wherein the identified physical constraint is one of: a length of an object; a size of an object; a shape of an object; a relative position or orientation of two or more objects; and a speed or acceleration of an object.
The physical constraints recited in claim 4 are mathematical characterizations of the physical constraint and do not introduce additional elements into the claim.
Therefore, it does not recite additional elements that can integrate the abstract ideas into a practical application or amount to significantly more.
Regarding claim 5, it depends from claim 2 and further recites wherein determining whether any of the data points representing the identified object breach the identified physical constraint comprises: determining a threshold value associated with the identified physical constraints; and determining whether any of the data points representing the identified object exceed the threshold value.
The steps of determining can be considered either mathematical concepts or mental processes because they can be performed as mathematical calculations/relationships or mentally by a person.
Therefore, it does not recite additional elements that can integrate the abstract ideas into a practical application or amount to significantly more.
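For illustration only, the determining steps of claim 5 can be sketched as a table lookup followed by a comparison; the constraint names and threshold values below are hypothetical:

```python
# Hypothetical table associating each identified physical constraint
# with a threshold value.
THRESHOLDS = {"bone_length_m": 0.5, "joint_angle_deg": 160.0}

def breaching_points(points, constraint):
    """Determine the threshold for the identified constraint, then
    determine which data points exceed that threshold."""
    threshold = THRESHOLDS[constraint]      # determining a threshold value
    return [i for i, v in enumerate(points) # determining which points
            if v > threshold]               # exceed the threshold value
```

Each step is a simple comparison that, as noted above, could equally be performed mentally by a person.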
Regarding claim 6, it depends from claim 5 and further recites wherein the motion capture data comprises a number of frames, and wherein the threshold value is one of: a distance between two data points or an angle formed between three data points within a single frame; and a distance moved by at least one data point equivalent to a threshold translation distance of the identified objects or a threshold rotation of the identified objects between two frames, preferably two consecutive frames.
These limitations merely limit the motion capture data recited in the data-gathering step and the threshold value recited in the abstract idea of mathematical concepts/mental processes; they are not additional elements that can integrate the abstract ideas into a practical application or amount to significantly more.
Regarding claim 7, it depends from claim 5 and further recites wherein the threshold value is based on at least one of the identified object and the identified physical constraint associated with that object.
These limitations merely limit the threshold value recited in the abstract idea of mathematical concepts/mental processes, which is not an additional element that can integrate the abstract ideas into a practical application or amount to significantly more.
Regarding claim 8, it depends from claim 3 and further recites wherein either of both of: adjusting the data point such that the adjusted data point no longer breaches the identified physical constraints is performed by an artificial intelligence, such as a neural network or machine learning model; and wherein identifying the one or more objects represented by the one or more data points of the motion capture data is performed by an artificial intelligence, such as a neural network or machine learning model.
The claim recites an artificial intelligence, such as a neural network or machine learning model, to perform the adjusting of the data point or the identifying of the one or more objects. The artificial intelligence can be considered an additional element but is recited at such a high level of generality that it can only be considered a generic computer component. Therefore, it does not integrate the abstract ideas into a practical application or amount to significantly more.
Regarding claim 9, it depends from claim 1 and further recites wherein converting the cleaned motion capture data into a relational format comprises extracting metadata into a metadata file, and wherein the method further comprises outputting the metadata file with the converted motion capture data.
These limitations can be considered additional elements but are merely insignificant extra-solution activities of collecting and/or outputting data. Therefore, they do not integrate the abstract ideas into a practical application or amount to significantly more.
Regarding claim 10, it depends from claim 9 and further recites wherein the motion capture data is divided into a plurality of frames, and wherein extracting metadata into a metadata file comprises extracting metadata consistent across at least some of the plurality of frames into the metadata file; and wherein optionally extracting metadata into a metadata file comprises extracting metadata consistent across each of the plurality of frames into the metadata file.
These limitations can be considered additional elements but are merely insignificant extra-solution activities of collecting and/or outputting data. Therefore, they do not integrate the abstract ideas into a practical application or amount to significantly more.
Regarding claim 11, it depends from claim 1 and further recites wherein the motion capture data comprises optical tracking data.
This limitation merely limits the data gathered in the insignificant extra-solution activity of data gathering. Therefore, it does not integrate the abstract ideas into a practical application or amount to significantly more.
Regarding claim 12, it depends from claim 1 and further recites wherein the motion capture data comprises a point cloud.
This limitation merely limits the data gathered in the insignificant extra-solution activity of data gathering. Therefore, it does not integrate the abstract ideas into a practical application or amount to significantly more.
Regarding claim 13, it depends from claim 1 and further recites wherein the converted motion capture data is output in a database.
This limitation merely limits the destination of the data output, which is still considered insignificant extra-solution activity. Therefore, it does not integrate the abstract ideas into a practical application or amount to significantly more.
Regarding claim 14, it depends from claim 1 and further recites wherein converting the motion capture data into a relational format includes inserting a file handle at predetermined intervals throughout the motion capture data.
This limitation merely limits the relational format into which the motion capture data is converted, and therefore is still part of the mathematical concept of data format conversion. Therefore, it does not recite additional elements that can integrate the abstract ideas into a practical application or amount to significantly more.
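For illustration only, the conversion recited in claim 14 can be sketched as follows; the marker format and the interval are hypothetical and not taken from the application:

```python
HANDLE_INTERVAL = 3  # hypothetical: one file handle every 3 frames

def convert_with_handles(points):
    """Convert data to relational (frame, position) rows, inserting a
    file-handle marker at a predetermined interval throughout the data."""
    rows = []
    for frame, pos in enumerate(points):
        if frame % HANDLE_INTERVAL == 0:
            rows.append(("HANDLE", frame))  # file handle at the interval
        rows.append((frame, pos))           # relational data row
    return rows
```

The interleaving of markers is itself part of the format-conversion operation discussed above.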
Claim 19 recites similar limitations to those discussed above with respect to claim 1 but also recites a non-transitory computer-readable medium and a computer system comprising one or more processors. These limitations can be considered additional elements but are generic computer elements. Therefore, they do not integrate the abstract ideas into a practical application or amount to significantly more.
Claim 20 recites similar limitations to those discussed above with respect to claim 1 but also recites a computer system comprising one or more processors and a non-transitory computer-readable medium. These limitations can be considered additional elements but are generic computer elements. Therefore, they do not integrate the abstract ideas into a practical application or amount to significantly more.
Therefore, claims 1-14, 19, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 15 recites generating a visual representation of the event from the received converted motion capture data, which can be considered an additional element that improves the visual representation of the event to be consistent with the physical constraint. Therefore, it integrates the abstract ideas into a practical application, and claim 15 is eligible under 35 U.S.C. 101.
Claims 16-18, which depend from claim 15, are thus eligible as well.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6, 8, and 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding claim 6, the phrase "preferably" renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
Regarding claim 8, the phrase "such as" renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
Regarding claim 16, the phrase “optionally” renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
For examination purposes, the claims have been interpreted such that the limitations following these terms are not included in the respective claims.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 8, 11, 12, 15, 16, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US Patent Publication No. 20100285877 A1 to Corazza.
Regarding claim 1, Corazza discloses A computer implemented method (Corazza, Abstract) comprising:
receiving motion capture data comprising a plurality of data points representing the positions of one or more objects over a period of time (Corazza, para. [0028], disclosing performing markerless motion capture by the optical device and data acquisition device, and streaming to a server, para. [0030], disclosing using a time of flight camera to perform markerless motion capture, para. [0034], disclosing raw motion data characterized by joint center points specified in terms of x, y, z coordinates and/or joint rotation parameters, para. [0039], disclosing receiving raw motion capture data from a data acquisition device, indicating the raw motion capture data can be obtained by a time of flight camera over a period of time and can correspond to the received motion capture data comprising joint center points as a plurality of data points representing the positions of one or more joints corresponding to objects);
cleaning the motion capture data so as to ensure the consistency of the cleaned motion capture data with a physical constraint (Corazza, para. [0039], disclosing the received motion capture data is pre-processed to enforce anatomical and physical constraints, if the anatomical and physical constraints are not satisfied, then the raw motion data can be corrected using techniques including but not limited to joint limits, automatic Inverse Kinematics editing (e.g. to avoid ground floor penetration), and collision detection (e.g. legs crossing), indicating the pre-processing can correspond to cleaning the motion capture data to ensure the consistency of the cleaned motion capture data with a physical constraint);
converting the cleaned motion capture data into a relational format (Corazza, para. [0039], disclosing the motion data is then converted into a hierarchical motion of a 3D character model using a quaternion formulation, para. [0040], disclosing following the pre-processing, performing high level mapping of the received motion data to a high-level descriptor of the motion, para. [0041], disclosing generating a low level descriptor of the animation by mapping the input motion data structure to a 3D character model, the high level and low level interaction are combined to provide the final motion data that is used to animate the 3D character model, para. [0042], disclosing the finalized motion data in the form of a quaternion based representation of the motion, indicating final motion data can correspond to the converted motion capture data being converted into a relational format (in the form of a quaternion based representation of the motion, with a hierarchical motion of a 3D character model, high-level and low-level mapping), using the pre-processed motion capture data as the cleaned motion capture data); and
outputting the converted motion capture data (Corazza, para. [0042], disclosing the finalized motion data can be streamed to data acquisition device so that its game engine client can render and display the animation).
Regarding claim 2, Corazza discloses the method of claim 1, wherein cleaning the motion capture data comprises: identifying one or more objects represented by one or more data points of the motion capture data (Corazza, para. [0034], disclosing raw motion data characterized by joint center points specified in terms of x, y, z coordinates and/or joint rotation parameters, para. [0039], disclosing the received motion capture data is pre-processed to enforce anatomical and physical constraints, and if the anatomical and physical constraints are not satisfied, the raw motion data can be corrected using techniques including but not limited to joint limits, automatic Inverse Kinematics editing (e.g., to avoid ground floor penetration), and collision detection (e.g., legs crossing), indicating the joints can correspond to one or more objects represented by the data points of the raw motion capture data being identified to check whether the anatomical and physical constraints are satisfied); identifying a physical constraint associated with the identified objects (Corazza, para. [0039], as cited above, indicating the joint limits can correspond to a physical constraint associated with the corresponding joints as the identified objects); determining whether any of the data points representing the identified objects breach the identified physical constraint (Corazza, para. [0039], as cited above, indicating checking whether the anatomical and physical constraints are satisfied can correspond to determining whether any of the data points representing the joints as the identified objects breach the joint limits as the identified physical constraint); and performing one or more cleaning operations on any data points that are determined to breach the identified physical constraint (Corazza, para. [0039], as cited above, indicating correcting the raw motion data can correspond to performing one or more cleaning operations on any data points that are determined to breach the identified physical constraint).
Regarding claim 3, Corazza discloses the method of claim 2, wherein the one or more cleaning operations comprises, for each data point that breaches the identified physical constraint, deleting the data point, replacing the data point, or adjusting the data point such that the adjusted data point no longer breaches the identified physical constraint (Corazza, para. [0039], disclosing the received motion capture data is pre-processed to enforce anatomical and physical constraints, and if the constraints are not satisfied, the raw motion data can be corrected using techniques including but not limited to joint limits, automatic Inverse Kinematics editing (e.g., to avoid ground floor penetration), and collision detection (e.g., legs crossing), indicating correcting the raw motion data to enforce the anatomical and physical constraints can correspond to adjusting the data point such that the adjusted data point no longer breaches the identified physical constraint).
Regarding claim 4, Corazza discloses the method of claim 2, wherein the identified physical constraint is one of: a length of an object; a size of an object; a shape of an object; a relative position or orientation of two or more objects; and a speed or acceleration of an object (Corazza, para. [0039], disclosing the received motion capture data is pre-processed to enforce anatomical and physical constraints, and if the constraints are not satisfied, the raw motion data can be corrected using techniques including but not limited to joint limits, automatic Inverse Kinematics editing (e.g., to avoid ground floor penetration), and collision detection (e.g., legs crossing), indicating the joint limits and/or legs crossing can correspond to a relative position or orientation of two or more objects (the relative positions and/or orientations of the joints and/or the legs indicate whether the joint limits are satisfied or whether the legs are crossed)).
Regarding claim 8, Corazza discloses the method of claim 3, wherein either of both of: adjusting the data point such that the adjusted data point no longer breaches the identified physical constraints is performed by an artificial intelligence, such as a neural network or machine learning model (Corazza, para. [0035], disclosing the server system processes the motion capture data from the data acquisition devices to generate motion data, para. [0036], disclosing the server system interprets the motion data in a manner similar to the interpretation of instructions from a game controller, para. [0039], disclosing the received motion capture data is pre-processed to enforce anatomical and physical constraints, and if the constraints are not satisfied, the raw motion data can be corrected using techniques including but not limited to joint limits, automatic Inverse Kinematics editing (e.g., to avoid ground floor penetration), and collision detection (e.g., legs crossing), indicating correcting the raw motion data to enforce the anatomical and physical constraints can correspond to adjusting the data point such that the adjusted data point no longer breaches the identified physical constraint, and the server system can correspond to an artificial intelligence because it is a system that can interpret and process the motion capture data. Examiner Note: the “such as” limitations are interpreted as not required in the claim); and wherein identifying the one or more objects represented by the one or more data points of the motion capture data is performed by an artificial intelligence, such as a neural network or machine learning model (Corazza, para. [0034], disclosing raw motion data characterized by joint center points specified in terms of x, y, z coordinates and/or joint rotation parameters, para. [0035]-[0036] and [0039], as cited above, indicating the joints can correspond to one or more objects represented by the data points of the raw motion capture data being identified to check whether the anatomical and physical constraints are satisfied, and the server system performing such actions can correspond to an artificial intelligence because it is a system that can interpret and process the motion capture data. Examiner Note: the “such as” limitations are interpreted as not required in the claim).
Regarding claim 11, Corazza discloses the method of claim 1, wherein the motion capture data comprises optical tracking data (Corazza, para. [0030], disclosing performing remote markerless motion capture using an optical device, which is a sensor or sensors used to capture motion of the performer, and the optical device can be a single 3D camera such as a time of flight camera, indicating the motion capture data captured by the optical device, i.e., the time of flight camera, can correspond to optical tracking data).
Regarding claim 12, Corazza discloses the method of claim 1, wherein the motion capture data comprises a point cloud (Corazza, para. [0034], disclosing raw motion data characterized by joint center points specified in terms of x, y, z coordinates and/or joint rotation parameters, indicating the joint center points can correspond to a point cloud).
Regarding claim 15, Corazza discloses the method of claim 1, wherein the steps of receiving motion capture data, cleaning the motion capture data, converting the cleaned motion capture data into a relational format, and outputting the converted motion capture data are performed at a first computer system (Corazza, para. [0039], disclosing receiving motion capture data from a data acquisition device, pre-processing the data to enforce anatomical and physical constraints, and converting the data into a hierarchical motion, para. [0040], disclosing mapping the motion data to a high-level descriptor, para. [0041], disclosing mapping the motion data to a low level descriptor, para. [0042], disclosing generating finalized motion data and then streaming it to the data acquisition device, indicating the receiving, cleaning, converting, and streaming (as outputting) are performed by the system performing these processes as the first computer system), the method further comprising, at a second computer system: receiving the output converted motion capture data (Corazza, para. [0042], disclosing generating finalized motion data and then streaming it to the data acquisition device, indicating the finalized motion data as the output converted motion capture data is received at the data acquisition device as the second computer system); and generating a visual representation of the event from the received converted motion capture data (Corazza, para. [0042], disclosing generating finalized motion data and then streaming it to the data acquisition device so that its game engine client can render and display the animation, indicating the rendering of the animation can correspond to generating a visual representation of the event from the finalized motion data as the received converted motion capture data).
Regarding claim 16, Corazza discloses the method of claim 15, wherein the output converted motion capture data is received over a network, wherein optionally the network is the internet (Corazza, para. [0042], disclosing generating finalized motion data and then streaming it to the data acquisition device, FIG. 2, showing the motion capture device connected to the back-end server through the Internet).
Regarding claim 18, Corazza discloses the method of claim 15, wherein the motion capture data comprises 3D positional data (Corazza, para. [0034], disclosing raw motion data characterized by joint center points specified in terms of x, y, z coordinates and/or joint rotation parameters); and wherein generating a visual representation of the event comprises generating a 3D visual representation of the event (Corazza, para. [0042], disclosing generating finalized motion data, then streaming to the data acquisition device, so that its game engine client can render and display the animation, para. [0043], disclosing the rendered 3D character animations are displayed to the performer, para. [0045], disclosing a data acquisition device receiving motion data for rendering of 3D character animation).
Regarding claim 19, it recites similar limitations of claim 1 but in a non-transitory computer-readable medium form. The rationale of claim 1 rejection is applied to reject claim 19. In addition, Corazza discloses a computer and a remote server, which include one or more processors and memory (see Corazza, para. [0032]).
Regarding claim 20, it recites similar limitations of claim 1 but in a computer system form. The rationale of the claim 1 rejection is applied to reject claim 20. In addition, Corazza discloses a computer and a remote server, which include one or more processors and memory (see Corazza, para. [0032]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 5-7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Corazza in view of Chinese Patent Publication No. CN 115937255 A to Li et al.
Regarding claim 5, Corazza discloses the method of claim 2. However, Corazza does not expressly disclose wherein determining whether any of the data points representing the identified object breach the identified physical constraint comprises: determining a threshold value associated with the identified physical constraints; and determining whether any of the data points representing the identified object exceed the threshold value.
On the other hand, Li discloses wherein determining whether any of the data points representing the identified object breach the identified physical constraint comprises: determining a threshold value associated with the identified physical constraints (Li, Translation, para. [n0114], disclosing that different joints have different degrees of rotational freedom and different rotational angle ranges, so there are joint rotational degree-of-freedom constraints and rotation angle range constraints; joint constraints can be used to determine whether inertial motion capture data exceeds the normal range of motion of human joints, thereby eliminating abnormal data, indicating the normal range of motion of human joints can correspond to a threshold value associated with the joint constraints as the identified physical constraints, which can be determined so that it can be decided whether any of the data points representing the human joints as the identified object breach the joint constraints as the identified physical constraint); and determining whether any of the data points representing the identified object exceed the threshold value (Li, Translation, para. [n0114], disclosing that joint constraints can be used to determine whether inertial motion capture data exceeds the normal range of motion of human joints, thereby eliminating abnormal data, indicating that determining whether inertial motion capture data exceeds the normal range of motion of human joints can correspond to determining whether any of the data points representing the human joints as the identified object exceed the normal range as the threshold value).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine Corazza and Li. The suggestion/motivation would have been for constructing a human kinematic model that conforms to the basic movement characteristics of the human body, as suggested by Li (see Li, Translation, para. [n0112]).
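For illustration only, the threshold check mapped to claim 5 above can be sketched as follows. This is not code from Corazza or Li; the joint names and angle ranges are hypothetical assumptions used solely to illustrate comparing data points against a threshold associated with a physical constraint:

```python
# Illustrative sketch of the claim 5 limitation: each identified physical
# constraint (here, a joint rotation range, cf. Li's joint constraints) has
# an associated threshold, and data points outside it are flagged as breaches.
# Joint names and angle ranges below are hypothetical.

JOINT_ANGLE_RANGES = {
    "elbow": (0.0, 150.0),  # assumed flexion range in degrees
    "knee": (0.0, 140.0),
}

def find_breaches(data_points):
    """Return the data points whose joint angle falls outside the
    normal range of motion (the threshold value for that constraint)."""
    breaches = []
    for point in data_points:
        lo, hi = JOINT_ANGLE_RANGES[point["joint"]]
        if not (lo <= point["angle"] <= hi):
            breaches.append(point)  # abnormal data to be eliminated
    return breaches
```

For example, a data point recording an elbow angle of 170 degrees would be flagged as breaching the assumed 0-150 degree range, corresponding to Li's elimination of data exceeding the normal range of motion.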
Regarding claim 6, Corazza in view of Li discloses the method of claim 5, wherein the motion capture data comprises a number of frames (Corazza, para. [0007], disclosing the motion capture data has a frame rate, indicating the motion capture data comprises a number of frames. Also, Li, Translation, para. [n0114], disclosing that different joints have different degrees of rotational freedom and different rotational angle ranges, so there are joint rotational degree-of-freedom constraints and rotation angle range constraints; joint constraints can be used to determine whether inertial motion capture data exceeds the normal range of motion of human joints, thereby eliminating abnormal data, indicating the motion capture data comprises a number of frames so that the joints have a rotational angle between two frames), and wherein the threshold value is one of: a distance between two data points or an angle formed between three data points within a single frame; and a distance moved by at least one data point equivalent to a threshold translation distance of the identified objects or a threshold rotation of the identified objects between two frames, preferably two consecutive frames (Li, Translation, para. [n0114], disclosing that joint constraints can be used to determine whether inertial motion capture data exceeds the normal range of motion of human joints, thereby eliminating abnormal data, indicating the normal rotational angle ranges can correspond to the threshold rotation of the joints as the identified objects between two frames. Examiner Note: "preferably two consecutive frames" is interpreted as not a required limitation in the claim). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine Corazza and Li.
The suggestion/motivation would have been for constructing a human kinematic model that conforms to the basic movement characteristics of the human body, as suggested by Li (see Li, Translation, para. [n0112]).
Regarding claim 7, Corazza in view of Li discloses the method of claim 5, wherein the threshold value is based on at least one of the identified object and the identified physical constraint associated with that object (Li, Translation, para. [n0114], disclosing that different joints have different degrees of rotational freedom and different rotational angle ranges, so there are joint rotational degree-of-freedom constraints and rotation angle range constraints; joint constraints can be used to determine whether inertial motion capture data exceeds the normal range of motion of human joints, thereby eliminating abnormal data, indicating the normal rotational angle ranges can correspond to the threshold value based on the joints as the identified object and the rotational angle range constraint corresponding to the identified physical constraint associated with the joint as that object). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine Corazza and Li. The suggestion/motivation would have been for constructing a human kinematic model that conforms to the basic movement characteristics of the human body, as suggested by Li (see Li, Translation, para. [n0112]).
Claim(s) 9, 10, 13, 14, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Corazza in view of US Patent Publication No. 20150099252 A1 to Anderson et al.
Regarding claim 9, Corazza discloses the method of claim 1, wherein converting the cleaned motion capture data into a relational format comprises extracting metadata (Corazza, para. [0040], disclosing following the pre-processing, a high level mapping of the received motion data to a high-level descriptor of the motion is performed, meta-data information is extracted from the motion, the meta-data can include the results of a classifier that identifies similar motion in a pre-existing library of animations).
However, Corazza does not expressly disclose extracting metadata into a metadata file, and wherein the method further comprises outputting the metadata file with the converted motion capture data.
On the other hand, Anderson discloses extracting metadata into a metadata file, and wherein the method further comprises outputting the metadata file with the converted motion capture data (Anderson, FIG. 4A, showing a recording and editing movement interface, para. [0058], disclosing the author can create and modify the movement, create key frames, and identify parameters for the movement, para. [0059], disclosing the author can save the current movement as one or more media files and plain-text files containing time-stamped motion capture and keyframe metadata, indicating the time-stamped motion capture and keyframe metadata can be extracted into plain-text files, the edited movement can correspond to the converted motion capture data, and the metadata file is output with the saved movement data as the converted motion capture data).
Before the invention was effectively filed, it would have been obvious for a person skilled in the art to combine Corazza with Anderson. The suggestion/motivation would have been to allow a user to create and share training content, as suggested by Anderson (see Anderson, para. [0024]).
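For illustration only, the claim 9 mapping of extracting metadata into a separate file that is output alongside the converted motion data can be sketched as follows. This is not code from Anderson; the file layout and metadata fields are hypothetical assumptions:

```python
# Illustrative sketch of claim 9: the converted motion capture data is
# written out together with a companion plain-text metadata file
# (cf. Anderson's plain-text files of time-stamped keyframe metadata
# saved alongside the movement media files). Field names are hypothetical.
import json

def output_with_metadata(converted_frames, metadata, data_path, meta_path):
    """Write the converted motion data and its extracted metadata
    to two separate files, outputting the metadata file with the data."""
    with open(data_path, "w") as f:
        json.dump(converted_frames, f)
    with open(meta_path, "w") as f:
        json.dump(metadata, f)
```

Under this sketch, the metadata (e.g., time-stamped keyframe annotations) travels with, but separately from, the converted motion capture data, matching the claimed "outputting the metadata file with the converted motion capture data."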
Regarding claim 10, Corazza in view of Anderson discloses the method of claim 9, wherein the motion capture data is divided into a plurality of frames (Corazza, para. [0007], disclosing the motion capture data has a frame rate, indicating the motion capture data is divided into a plurality of frames. Also Anderson, para. [0056], disclosing the keyframe markers illustrate one or more keyframes for the recorded movement, para. [0058], disclosing the author can create and modify the movement, create key frames, and identify parameters for the movement), and wherein extracting metadata into a metadata file comprises extracting metadata consistent across at least some of the plurality of frames into the metadata file; and wherein optionally extracting metadata into a metadata file comprises extracting metadata consistent across each of the plurality of frames into the metadata file (Anderson, para. [0059], disclosing the author can save the current movement as one or more media files and plain-text files containing time-stamped motion capture and keyframe metadata, indicating the keyframe metadata is consistent across the keyframes corresponding to at least some of the plurality of frames extracted into the metadata file. Examiner Note: the optional limitation is interpreted as not a required limitation for this claim). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine Corazza with Anderson. The suggestion/motivation would have been to allow a user to create and share training content, as suggested by Anderson (see Anderson, para. [0024]).
Regarding claim 13, Corazza discloses the method of claim 1. However, Corazza does not expressly disclose wherein the converted motion capture data is output in a database.
On the other hand, Anderson discloses the converted motion capture data is output in a database (Anderson, para. [0058], disclosing the author can create and modify the movement, create key frames, identify parameters for the movement, para. [0100], disclosing capturing motion and motion tracking data associated with a movement performed by a user, and storing the movement in the movement database). Before the invention was effectively filed, it would have been obvious for a person skilled in the art to combine Corazza with Anderson. The suggestion/motivation would have been to allow a user to create and share training content, as suggested by Anderson (see Anderson, para. [0024]).
Regarding claim 14, Corazza discloses the method of claim 1. However, Corazza does not expressly disclose wherein converting the motion capture data into a relational format includes inserting a file handle at predetermined intervals throughout the motion capture data.
On the other hand, Anderson discloses inserting a file handle at predetermined intervals throughout the motion capture data (Anderson, para. [0056], disclosing specifying keyframes and corresponding important joints within the movement, indicating specifying the keyframes and important joints can correspond to inserting file handles indicating the keyframes and/or important joints at user-determined points on the timeline corresponding to predetermined intervals (predetermined by the user/author) throughout the recorded movement data as the motion capture data). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine Corazza with Anderson. The suggestion/motivation would have been to allow a user to create and share training content, as suggested by Anderson (see Anderson, para. [0024]).
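For illustration only, the claim 14 notion of inserting a file handle at predetermined intervals throughout the motion capture data can be sketched as follows. This is not code from Anderson; the marker format and interval are hypothetical assumptions:

```python
# Illustrative sketch of claim 14: a handle marker is inserted before
# every `interval`-th frame of the motion capture data, analogous to
# keyframe markers placed at chosen points on the timeline.
# The marker format is hypothetical.

def insert_handles(frames, interval):
    """Return the frame stream with a handle marker inserted at every
    `interval`-th frame, enabling later lookup of those positions."""
    out = []
    for i, frame in enumerate(frames):
        if i % interval == 0:
            out.append({"handle": i})  # hypothetical file-handle marker
        out.append(frame)
    return out
```

Under this sketch, the handles recur at a fixed frame interval, corresponding to the claimed "predetermined intervals throughout the motion capture data."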
Regarding claim 17, Corazza discloses the method of claim 1. However, Corazza does not expressly disclose wherein the output converted motion capture data is received over a cellular connection.
On the other hand, Anderson discloses the output converted motion capture data is received over a cellular connection (Anderson, para. [0025], disclosing a computer system 100 may be a mobile phone, para. [0030], disclosing system 100 can communicate with other systems via an electronic communication network and may include wired or wireless communication, para. [0033], disclosing scene rendering may be provided from a server computer similar to system 100, para. [0037], disclosing movement training system 200 may implement computer system 100, para. [0039]-[0040], disclosing capturing video of movement information, indicating the movement data corresponding to the output converted motion capture data can be sent from/to other systems 100, and it can be received over a cellular connection when the systems are implemented by mobile phones). Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine Corazza with Anderson. The suggestion/motivation would have been to allow a user to create and share training content, as suggested by Anderson (see Anderson, para. [0024]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAIXIA DU whose telephone number is (571)270-5646. The examiner can normally be reached Monday - Friday 8:00 am-4:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAIXIA DU/Primary Examiner, Art Unit 2611