DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant’s response to the Non-final Office Action dated 07/31/2025, filed with the Office on 12/23/2025, has been entered and made of record.
Response to Amendment
In light of Applicant’s amendment of the claims, the objections of record with respect to claims 2, 15 and 25 are withdrawn.
In light of Applicant’s amendment of the claims, the rejections under 35 U.S.C. 112(b) with respect to claims 12 and 13 are withdrawn.
Status of Claims
Claims 1-5 and 7-26 are pending. Claims 1, 2, 7, 10, 12, 13, 15, 18, 25 and 26 are amended. Claim 6 is cancelled.
Response to Arguments
Applicant's arguments filed on December 23, 2025 with respect to the rejection of claims under 35 U.S.C. 101 have been fully considered, but they are not found persuasive. Specifically, on page 7 of its reply, Applicant argues in the sixth paragraph that the claimed estimation of a location of an obscured reference point by applying a statistical method cannot be performed mentally. Examiner respectfully disagrees. The broadest reasonable interpretation of the claim includes observing a person to estimate the location of a non-visible reference point with respect to known visible reference points, which can be done mentally. Therefore, Applicant’s arguments are not found persuasive.
Applicant further argues on page 8, first paragraph, that the steps of identifying, converting, and translating reference points, followed by constructing a coordinate-based model, represent an unconventional technical solution, and that the claims therefore provide significantly more than an abstract idea. Examiner respectfully disagrees. The recited operations can be interpreted as mere data collection (i.e., x, y, z coordinates) and data plotting to construct a stick figure (coordinate-based model). Therefore, Applicant’s arguments are not found persuasive.
Applicant further argues on page 8, second paragraph, that the claimed process addresses a specific problem of tracking obscured reference points through known anatomical relationship models and therefore cannot be addressed through mental observation. Examiner respectfully disagrees. The recited tracking of an obscured reference point using known anatomical relationships can be done mentally because the anatomy of a person is defined. For example, if both shoulders are visible and the spine is obscured, the location of the spine can be estimated mentally to be in the middle, equally distant from the two known shoulder joints, and the coordinates can also be computed based on the known coordinates of the shoulder joints. Therefore, Applicant’s arguments are not found persuasive.
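For illustration only, the midpoint estimate described in the example above amounts to a trivial computation. The following sketch uses hypothetical function and variable names that are not drawn from the claims or the cited references:

```python
def midpoint(left_shoulder, right_shoulder):
    # Estimate an obscured joint (e.g., the spine) as the point equally
    # distant from two known, visible joints: the coordinate-wise midpoint.
    return tuple((l + r) / 2.0 for l, r in zip(left_shoulder, right_shoulder))

# Known shoulder coordinates (x, y, z); the spine estimate lies halfway between.
spine = midpoint((0.0, 1.6, 0.0), (0.4, 1.6, 0.0))
print(spine)  # (0.2, 1.6, 0.0)
```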
Applicant further argues on page 9, first and second paragraphs, that the claims improve computer functionality and address privacy concerns by storing anonymous data of the change in coordinate values between subsequent frames, and that the technique therefore provides reduced memory requirements and preserves privacy, which represents significantly more than merely applying abstract ideas on a generic computer. Examiner respectfully disagrees. Merely storing non-identifiable coordinate change data in a computer by computing the delta value of each joint coordinate between frames is regarded as adding insignificant extra-solution activity to the judicial exception, and does not apply, rely on, or use the judicial exception in a manner that indicates integration of the judicial exception into a practical application. Therefore, Applicant’s arguments are not found persuasive.
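As a sketch of the delta-storage scheme that Applicant's argument describes, the per-frame change data reduces to a coordinate-wise subtraction. The names below are hypothetical and do not come from the application:

```python
def joint_deltas(prev_frame, curr_frame):
    # Change in each joint's (x, y, z) coordinates between consecutive frames;
    # only these non-identifiable offsets, not raw positions, would be stored.
    return {joint: tuple(c - p for p, c in zip(prev_frame[joint], curr))
            for joint, curr in curr_frame.items()}

prev = {"left_shoulder": (0.0, 1.6, 0.0)}
curr = {"left_shoulder": (0.5, 1.6, 0.0)}
print(joint_deltas(prev, curr))  # {'left_shoulder': (0.5, 0.0, 0.0)}
```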
Applicant’s amendment of independent claims 1 and 13 has altered the scope of the claims of the instant application and has necessitated the new ground(s) of rejection presented in this Office action. Accordingly, in response to Applicant’s arguments that are directed to the amended portions of the claims, new analyses have been presented below, which render Applicant’s arguments moot.
Consequently, THIS ACTION IS MADE FINAL.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation is: “a model analysis unit” in claim 13.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, these are being interpreted to cover the corresponding structures described in the applicant’s drawings: algorithms (flow charts) depicted in Fig. 8, and applicant’s specification: ¶0087: “processing hardware corresponding to the operations of the model generator 30, model analysis unit 50, and/or other functional units of the system 10” as performing the claimed functions, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 3, 10, 13, 16 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
With respect to claim 1, the claim recites “the one or more known reference points”. There is insufficient antecedent basis for “one or more known reference points” in the claim, as the claim previously recites only “known reference points”.
In addition, claim 1 recites “one or more previously established reference points”. Although the claim previously recites a single established reference point, there is insufficient antecedent basis for “one or more previously established reference points” in the claim.
Claim 1 further recites “generate time sequenced coordinates for the reference points”. There is insufficient antecedent basis for “the reference points” in the claim. Examiner believes the limitation should recite “generate time sequenced coordinates for the known and established reference points”.
With respect to claims 3 and 16, the claims recite “coordinates of the refence point”. There is insufficient antecedent basis for “the refence point” in the claims. Examiner believes the claims should be amended to recite “coordinates of the obscured reference point” or “coordinates of the established reference point”.
With respect to claim 10, the claim recites “the obscured reference points”. Although the independent claim 1 provides antecedent basis for a single obscured reference point, there is insufficient antecedent basis for multiple “obscured reference points” in the claims.
With respect to claim 13, the claim recites “an estimated established location of one or more other established reference points”. There is insufficient antecedent basis for “one or more other established reference points” in the claims.
With respect to claim 18, the claim recites “established reference points”. Although the independent claim 13 provides antecedent basis for a single established reference point converted from an obscured reference point, there is insufficient antecedent basis for multiple “established reference points” in the claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-5 and 7-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claims 1 and 13 respectively recite a method and a system for tracking obscured reference point(s) of a human subject. With respect to the analysis of independent claims 1 and 13:
Step 1:
With regard to Step 1, the instant claims are directed to a method and a system. Therefore, the claim is directed to one of the statutory categories of invention.
Step 2A, Prong One:
With regard to Step 2A, Prong One, the limitations of “identifying locations of known reference points”, “estimating a location of an obscured reference point of a human subject”, “improving the accuracy of the estimated location”, “convert the obscured reference point to an established reference point comprising tracking changes”, “translating the locations of the known and established reference points into a coordinate space”, “generate time sequenced coordinates for the reference points”, and “constructing a coordinate-based subject model from the time sequenced change coordinates”, as drafted, recite an abstract idea, i.e., a process that, under its broadest reasonable interpretation, covers performance of the limitations in the human mind (including an observation, evaluation, judgment, or opinion). That is, an analyst reviewing image frames of a human subject can identify certain points of interest, estimate obscured points of interest based on the known anatomical relationships of the points, track the known and estimated points across the image frames, translate the locations of the tracked points into coordinates, generate time sequenced coordinates of those points, and construct a subject model by plotting the coordinate data of the tracked points. This concept falls under the “mental processes” grouping of abstract ideas, i.e., a concept performed in the human mind as an observation, evaluation, judgment, and/or opinion of an analyst.
Step 2A, Prong Two:
Under Step 2A, Prong Two, the 2019 PEG requires an evaluation of whether the claim recites additional elements that integrate the exception into a practical application of the exception. Accordingly, additional elements, or a combination of additional elements in the claim, are required to apply, rely on, or use the judicial exception. In the instant case, the additional elements/limitations in the claims, i.e., a processor and a memory in claim 13, are merely regarded as adding insignificant extra-solution activity to the judicial exception, and do not apply, rely on, or use the judicial exception in a manner that indicates integration of the judicial exception into a practical application. Accordingly, the above-mentioned additional elements/limitations do not integrate the abstract idea into a practical application; therefore, the claims recite an abstract idea.
Step 2B:
Because the claims fail under Step 2A, the claims are further evaluated under Step 2B. The claims herein do not include additional elements that are sufficient to amount to significantly more than the judicial exception, because as discussed above with respect to integration of the abstract idea into practical application, the additional elements/limitations to perform the steps, amount to no more than insignificant extra-solution activity. Mere instructions to apply an exception using generic components cannot provide an inventive concept. Therefore, claims 1 and 13 are not patent eligible.
Further, with regard to dependent claims 2-5, 7-12 and 14-26 viewed individually, these additional steps, under their broadest reasonable interpretation, cover performance of the limitations as an abstract idea, and do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5, 7-9, 11-22 and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Vu et al. (US 2018/0049669 A1) in view of Barnes et al. (US 2017/0323472 A1).
Regarding claim 1, Vu teaches, A method comprising: identifying locations of known reference points (Vu, ¶0232: “markers emulate the methodology of tracking known joint positions. This provides a highly-accurate method for providing a ground-truth of the patient's posture”) and estimating a location of an obscured reference point of a human subject (Vu, ¶0223: “identify and refine potential joint locations by analyzing thermally intense regions of the body and limiting ambiguities within the depth image to provide better joint estimates within the occluded region”) in a sequence of video images captured of the subject over time, (Vu, ¶0187: “detecting the chest surface of the patient is derived from the acquisition of the sampled depth-image D.sub.s(t) (depth samples per-timestep”) wherein the known reference points are visible and the obscured reference point is obscured in the sequence of video images; (Vu, ¶0226: “if the known skeletal joint positions are provided for the observed thermal distribution, the patient's skeletal posture can be estimated even when the subject is highly occluded, has several ambiguous joint positions”) improving the accuracy of the estimated location to convert the obscured reference point to an established reference point (Vu, ¶0223: “To provide a reliable means of estimating occluded skeletal postures… performing accurate joint estimations”) comprising tracking changes in the sequence of video images using object detection (Vu, ¶0014: “method further comprises monitoring any changes in the subject's posture or position”) and known anatomical relationships with respect to the obscured reference point and one or more of the one or more known reference points or one or more previously established reference points; (Vu, ¶0226: “if the known skeletal joint positions are provided for the observed thermal distribution, the patient's skeletal posture can be estimated even when the subject is highly occluded, has several ambiguous joint 
positions”) translating the locations of the known and established reference points into a coordinate space of a coordinate system (Vu, ¶0187: “The samples collected from the depth-image, converted into three dimensional coordinates”). However, Vu does not explicitly teach, generate time sequenced coordinates for the reference points, wherein the time sequence coordinates comprise time sequenced change coordinates represented as translocation offsets from prior coordinate locations with the coordinate space; and constructing a coordinate-based subject model from the time sequenced change coordinates, wherein the subject model comprises an anonymized representation of the subject.
In an analogous field of endeavor, Barnes teaches, generate time sequenced coordinates for the reference points, (Barnes, ¶0007: “Time-stamped coordinates of the feature points in the workflow are acquired at each of the first plurality of time points”) wherein the time sequence coordinates comprise time sequenced change coordinates (Barnes, ¶0007: “thereby obtaining real time translational movement of the coordinates of the feature points”) represented as translocation offsets from prior coordinate locations with the coordinate space; (Barnes, ¶0084: “the time-stamped coordinates of features identified across the time-stamped images, and the translational movement of those coordinates across the time-stamped images”) and constructing a coordinate-based subject model from the time sequenced change coordinates, (Barnes, ¶0030: “constructs two or three-dimensional maps… where the constructed maps are used to create dense point clouds and/or generate textured meshes representing a subject”) wherein the subject model comprises an anonymized representation of the subject. (Barnes, ¶0156: “certain data may be anonymized in one or more ways before it is stored or used, so that personally identifiable information is removed”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Vu using the teachings of Barnes to introduce constructing a subject model using time-stamped coordinates data. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of accurately tracking the posture of a subject. Therefore, it would have been obvious to combine the analogous arts Vu and Barnes to obtain the invention in claim 1.
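For illustration of the mapped limitation of change coordinates expressed as translocation offsets from prior coordinate locations, the following sketch (hypothetical names; not code from either cited reference) reconstructs a point's trajectory from a starting coordinate and a time sequence of offsets:

```python
def reconstruct(start, offsets):
    # Rebuild absolute coordinates from an initial position plus a time
    # sequence of per-frame translocation offsets (deltas).
    trajectory = [start]
    for off in offsets:
        trajectory.append(tuple(p + o for p, o in zip(trajectory[-1], off)))
    return trajectory

# A point starting at the origin, moved by two successive offsets.
print(reconstruct((0.0, 0.0), [(1.0, 0.0), (0.0, 2.0)]))
# [(0.0, 0.0), (1.0, 0.0), (1.0, 2.0)]
```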
Regarding claim 2, Vu in view of Barnes teaches, The method of claim 1, further comprising transmitting the time sequence change coordinates over a network for analysis and/or storage. (Barnes, ¶0097: “client device 104 transmits the first dataset to the data repository 108 through the network 106”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Vu in view of Barnes using the additional teachings of Barnes to introduce data transmission. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of further analyzing or storing the transmitted data in a remote device. Therefore, it would have been obvious to combine the analogous arts Vu and Barnes to obtain the invention in claim 2.
Regarding claim 3, Vu in view of Barnes teaches, The method of claim 2, wherein the time sequenced change coordinates comprise coordinates of the refence point within the coordinate space at specified times within the time sequence. (Vu, ¶0039: “Each of the identified steps must be recalculated for each frame during the monitoring process. This provides an active representation of the patient as they are monitored and the resulting surface deformations closely illustrate the patient's breathing state”).
Regarding claim 4, Vu in view of Barnes teaches, The method of claim 2, wherein the time sequenced change coordinates comprise a translocation instructions within the coordinate system space from a prior coordinate location with the coordinate space. (Barnes, ¶0084: “the time-stamped coordinates of features identified across the time-stamped images, and the translational movement of those coordinates across the time-stamped images”). The proposed combination as well as the motivation for combining Vu and Barnes references presented in the rejection of claim 1, apply to claim 4 and are incorporated herein by reference. Thus, the method recited in claim 4 is met by Vu and Barnes.
Regarding claim 5, Vu in view of Barnes teaches, The method of claim 1, wherein the coordinate system comprises a grid system, a vector based system, or a grid system is based on pixels of an image sensor, digital camera, or digital image. (Vu, ¶0139: “a posture detection algorithm is used to detect the cross section vector of human chest movement”).
Regarding claim 7, Vu in view of Barnes teaches, The method of claim 1, wherein the subject model comprises a stick figure abstraction. (Vu, ¶0049: “skeletal posture estimations”; also see Fig. 26A-26D).
Regarding claim 8, Vu in view of Barnes teaches, The method of claim 7, further comprising analyzing the subject model to track movements, (Vu, ¶0067: “the system tracks the large-scale movements and posture changes of the person”) growth, behavior, or combination thereof. (Vu, ¶0178: “extracting a complete volumetric iso-surface that includes the deformation behavior of the patient's left thorax, right thorax, and abdominal region”).
Regarding claim 9, Vu in view of Barnes teaches, The method of claim 7, further comprising applying AI or machine learning (Vu, ¶0163: “A machine learning technique was used to realize area recognition”) to subject models of a population of subjects for early detection of conditions. (Vu, ¶0219: “the breathing volume waveforms was found to represent unique patterns of a participant, which can contribute to clinical analysis of the patient's condition”).
Regarding claim 11, Vu in view of Barnes teaches, A machine-readable medium carrying machine readable instructions, which when executed by a processor of a machine, causes the machine to carry out the method of claim 1. (Vu, ¶0018: “a kit comprising the apparatus of the invention and instructions for the operation of the apparatus. In certain embodiments, the kit further comprises a computer for processing the data collected by the apparatus”).
Regarding claim 12, Vu in view of Barnes teaches, A system comprising a processor and memory comprising instructions that when executed by the processor causes the system to perform the operations of claim 1. (Vu, ¶0018: “a kit comprising the apparatus of the invention and instructions for the operation of the apparatus. In certain embodiments, the kit comprises a computer for processing the data collected by the apparatus”).
Regarding claim 13, Vu teaches, A system configured to perform data abstraction (Vu, ¶0067: “the system tracks the large-scale movements and posture changes of the person”) and storing instructions that when executed by the processor cause the processor to perform operations comprising: (Vu, ¶0018: “a kit comprising the apparatus of the invention and instructions for the operation of the apparatus. In certain embodiments, the kit comprises a computer for processing the data collected by the apparatus”) identifying, with a model generator, a location of one or more known reference points of a human subject (Vu, ¶0232: “markers emulate the methodology of tracking known joint positions. This provides a highly-accurate method for providing a ground-truth of the patient's posture”) visible in one or more video image frames captured of the subject; (Vu, ¶0252: “The image sequences in FIGS. 38A-38F illustrate six common postures”) estimating a location of an obscured reference point of interest with respect to the subject obscured in one or more video image frames captured of the subject (Vu, ¶0223: “identify and refine potential joint locations by analyzing thermally intense regions of the body and limiting ambiguities within the depth image to provide better joint estimates within the occluded region”) based on one or more of object detection or a known relationship with one or more of the one or more known reference points in which the location is identified; (Vu, ¶0226: “if the known skeletal joint positions are provided for the observed thermal distribution, the patient's skeletal posture can be estimated even when the subject is highly occluded, has several ambiguous joint positions”) converting the obscured reference point of interest to an established reference point by improving the accuracy of the estimated location (Vu, ¶0223: “To provide a reliable means of estimating occluded skeletal postures… performing accurate joint estimations”) by tracking changes in 
subsequently captured video image frames of the subject (Vu, ¶0014: “method further comprises monitoring any changes in the subject's posture or position”) and applying one or more statistical methods to establish, as an estimated established location, the estimated location relative to an estimated established location of one or more other established reference points, the location of one or more of the one or more known reference points, or combination thereof; (Vu, ¶0192: “The radius of this cylinder is defined by the average distance of both the left 1 and right r shoulder joints”; interpreting the spine joint is the obscured point, it will be equally distant from the two shoulder joints) translating the identified and estimated established locations of the respective known and established reference points into a coordinate space of a coordinate system to (Vu, ¶0187: “The samples collected from the depth-image, converted into three dimensional coordinates”). However, Vu does not explicitly teach, anonymization of video data and generate time sequenced coordinates of the reference points; and constructing, with a model analysis unit, a subject model from the time sequenced coordinates of the reference points.
In an analogous field of endeavor, Barnes teaches, anonymization of video data (Barnes, ¶0156: “certain data may be anonymized in one or more ways before it is stored or used, so that personally identifiable information is removed”) and generate time sequenced coordinates of the reference points; (Barnes, ¶0007: “Time-stamped coordinates of the feature points in the workflow are acquired at each of the first plurality of time points”) and constructing, with a model analysis unit, a subject model from the time sequenced coordinates of the reference points. (Barnes, ¶0030: “constructs two or three-dimensional maps… where the constructed maps are used to create dense point clouds and/or generate textured meshes representing a subject”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Vu using the teachings of Barnes to introduce constructing an anonymized subject model using time-stamped coordinates data. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of accurately tracking the posture of a subject. Therefore, it would have been obvious to combine the analogous arts Vu and Barnes to obtain the invention in claim 13.
Regarding claim 14, Vu in view of Barnes teaches, The system of claim 13, wherein the time sequenced coordinates comprise time sequenced change coordinates. (Barnes, ¶0007: “thereby obtaining real time translational movement of the coordinates of the feature points”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Vu using the teachings of Barnes to introduce tracking changes in coordinates. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of accurately tracking the changes in coordinates of target points over time. Therefore, it would have been obvious to combine the analogous arts Vu and Barnes to obtain the invention in claim 14.
Regarding claim 15, it recites a system with elements corresponding to the steps of the method recited in claim 2. Therefore, the recited elements of system claim 15 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 2. Additionally, the rationale and motivation to combine Vu and Barnes presented in rejection of claim 1, apply to this claim.
Regarding claim 16, it recites a system with elements corresponding to the steps of the method recited in claim 3. Therefore, the recited elements of system claim 16 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 3. Additionally, the rationale and motivation to combine Vu and Barnes presented in rejection of claim 1, apply to this claim.
Regarding claim 17, it recites a system with elements corresponding to the steps of the method recited in claim 4. Therefore, the recited elements of system claim 17 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 4. Additionally, the rationale and motivation to combine Vu and Barnes presented in rejection of claim 1, apply to this claim.
Regarding claim 18, Vu in view of Barnes teaches, The system of claim 17, wherein the operations further comprise: estimating locations of a plurality of obscured reference points of interest based on known relationships with known reference points, established reference points, or both; (Vu, ¶0226: “if the known skeletal joint positions are provided for the observed thermal distribution, the patient's skeletal posture can be estimated even when the subject is highly occluded, has several ambiguous joint positions”) and translating the estimated locations of the estimated obscured reference points of interest into time sequenced coordinates (Barnes, ¶0007: “Time-stamped coordinates of the feature points in the workflow are acquired at each of the first plurality of time points”) with the coordinate space comprising time sequenced change coordinates. (Barnes, ¶0007: “thereby obtaining real time translational movement of the coordinates of the feature points”). The proposed combination as well as the motivation for combining Vu and Barnes references presented in the rejection of claim 14, apply to claim 18 and are incorporated herein by reference. Thus, the system recited in claim 18 is met by Vu and Barnes.
Regarding claim 19, Vu in view of Barnes teaches, The system of claim 13, wherein the coordinate system comprises a grid system or a vector based system. (Vu, ¶0205: “a voxel grid size that provides an accurate chest surface representation was selected”).
Regarding claim 20, Vu in view of Barnes teaches, The system of claim 13, wherein the coordinate system comprises a grid system (Vu, ¶0205: “a voxel grid size that provides an accurate chest surface representation was selected”) that is based on pixels of an image sensor, digital camera, or digital image. (Vu, ¶0193: “stability scheme based on pixel tracking history is provided. A visualization of this pixel-history is provided in FIG. 17B”).
Regarding claim 21, it recites a system with elements corresponding to the steps of the method recited in claim 8. Therefore, the recited elements of system claim 21 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 8. Additionally, the rationale and motivation to combine Vu and Barnes presented in rejection of claim 1, apply to this claim.
Regarding claim 22, it recites a system with elements corresponding to the steps of the method recited in claim 9. Therefore, the recited elements of system claim 22 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 9. Additionally, the rationale and motivation to combine Vu and Barnes presented in rejection of claim 1, apply to this claim.
Regarding claim 24, it recites a system with elements corresponding to the steps of the method recited in claim 7. Therefore, the recited elements of system claim 24 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 7. Additionally, the rationale and motivation to combine Vu and Barnes presented in rejection of claim 1, apply to this claim.
Regarding claim 25, Vu in view of Barnes teaches, The system of claim 13, wherein the one or more statistical methods includes averaging the estimated location as correlated to the known relationships (Vu, ¶0192: “The radius of this cylinder is defined by the average distance of both the left 1 and right r shoulder joints”; interpreting the spine joint as the obscured point, it will be equally distant from the two shoulder joints).
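For purposes of illustration only (not part of the record), the midpoint interpretation applied above, under which an obscured spine point is equally distant from the two known shoulder joints, reduces to averaging the shoulder coordinates. The sketch below is hypothetical:

```python
# Illustrative sketch: estimating an obscured point (e.g., a spine
# joint) as the midpoint of two known reference points (the shoulder
# joints), i.e., the coordinate-wise average. Names are hypothetical.

def estimate_midpoint(left_shoulder, right_shoulder):
    """Each argument is an (x, y, z) tuple of known coordinates.

    Returns the point equally distant from both inputs.
    """
    return tuple((a + b) / 2.0 for a, b in zip(left_shoulder, right_shoulder))
```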
Claims 10 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Vu et al. (US 2018/0049669 A1) in view of Barnes et al. (US 2017/0323472 A1) and in further view of Wang et al. (US 2023/0298204 A1).
Regarding claim 10, Vu in view of Barnes teaches, The method of claim 1, further comprising. However, the combination of Vu and Barnes does not explicitly teach, applying a confidence score to the estimated location of the obscured reference points.
In an analogous field of endeavor, Wang teaches, applying a confidence score to the estimated location of the obscured reference points. (Wang, ¶0061: “pose detector 216 may assign a lower confidence score C.sub.2d.sup.k to keypoints in the image that are occluded and a higher confidence score C.sub.2d.sup.k to keypoints that are not occluded”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Vu in view of Barnes using the teachings of Wang to introduce applying confidence scores to key point estimation. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of computing the accuracy of an estimated obscured point. Therefore, it would have been obvious to combine the analogous arts Vu, Barnes and Wang to obtain the invention in claim 10.
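For purposes of illustration only (not part of the record), Wang’s teaching of assigning a lower confidence score to occluded keypoints and a higher score to non-occluded keypoints (¶0061) can be sketched as follows; the threshold values and names are hypothetical:

```python
# Illustrative sketch: assigning confidence scores to estimated
# keypoints, lower when the keypoint is occluded and higher when it
# is visible. Score values and names are hypothetical.

HIGH_CONF = 0.9  # hypothetical score for visible keypoints
LOW_CONF = 0.3   # hypothetical score for occluded keypoints

def score_keypoints(keypoints):
    """keypoints: list of (name, coords, occluded) triples.

    Returns a {name: confidence} mapping reflecting occlusion status.
    """
    return {name: (LOW_CONF if occluded else HIGH_CONF)
            for name, _coords, occluded in keypoints}
```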
Regarding claim 26, Vu in view of Barnes teaches, The system of claim 13, wherein the operations further comprise. However, the combination of Vu and Barnes does not explicitly teach, applying a confidence score to the estimated established location of the established reference point, the estimated location of the obscured reference point of interest, or both.
In an analogous field of endeavor, Wang teaches, applying a confidence score to the estimated established location of the established reference point, the estimated location of the obscured reference point of interest, or both. (Wang, ¶0061: “pose detector 216 may assign a lower confidence score C.sub.2d.sup.k to keypoints in the image that are occluded and a higher confidence score C.sub.2d.sup.k to keypoints that are not occluded”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Vu in view of Barnes using the teachings of Wang to introduce applying confidence scores to key point estimation. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of computing the accuracy of an estimated obscured point. Therefore, it would have been obvious to combine the analogous arts Vu, Barnes and Wang to obtain the invention in claim 26.
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Vu et al. (US 2018/0049669 A1) in view of Barnes et al. (US 2017/0323472 A1) and in further view of Biswas et al. (US 2024/0311983 A1).
Regarding claim 23, Vu in view of Barnes teaches, The system of claim 13, wherein the operations further comprise. However, the combination of Vu and Barnes does not explicitly teach, identifying a non-subject object having a known dimension in the video image frames to scale the image frames.
In an analogous field of endeavor, Biswas teaches, identifying a non-subject object having a known dimension in the video image frames to scale the image frames. (Biswas, ¶0019: “using a known object's dimensions as a reference point (e.g., scaling sections of a video frame using a scaling factor calculated based upon a known length of an object”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Vu in view of Barnes using the teachings of Biswas to introduce detecting an object of known size. A person skilled in the art would be motivated to combine the known elements as described above and achieve the predictable result of automatically scaling the image frames with respect to the size of the known object. Therefore, it would have been obvious to combine the analogous arts Vu, Barnes and Biswas to obtain the invention in claim 23.
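For purposes of illustration only (not part of the record), Biswas’s teaching of computing a scaling factor from an object of known length appearing in the frame (¶0019) can be sketched as a simple ratio; the names below are hypothetical:

```python
# Illustrative sketch: deriving a scaling factor from a non-subject
# object of known real-world length measured in pixels, then applying
# it to other pixel measurements in the frame. Names are hypothetical.

def scale_factor(known_length_units, measured_length_pixels):
    """Real-world units per pixel, from the known reference object."""
    return known_length_units / measured_length_pixels

def to_real_units(pixels, factor):
    """Convert a pixel measurement to real-world units."""
    return pixels * factor
```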
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAZUL ISLAM whose telephone number is (571)270-0489. The examiner can normally be reached Monday-Friday: 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Saini Amandeep can be reached on (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MEHRAZUL ISLAM/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662