DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement(s) (IDS) submitted on May 16, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Specification
The disclosure is objected to because of the following informalities:
The specification is objected to under 37 CFR 1.52(b)(3) and 1.75(h) because the specification improperly contains the claims on page (p.) 8, line 20, through p. 12, line 19. The claims must be presented in a separate section, not within the specification. Applicant is required to amend the specification to remove the claims. The claims presented in the claims section are accepted.
Reference number 125 is used inconsistently throughout the detailed description and drawing. Specifically, reference number 125 is described as an “embedded processing unit” (p. 15, lines 7 and 28-29), a “processing unit” (p. 15, line 31, and p. 16, line 3), and an “integrated processing unit” (p. 16, line 4), while the corresponding drawing (Fig. 1D) labels reference number 125 as a “Mobile Processing Flow”. It is unclear whether reference number 125 is referring to a structural component or to a processing flow. Applicant is required to amend the specification to provide consistent terminology for reference number 125. Corresponding amendments to the drawing may be required to ensure compliance with 37 CFR 1.84.
Reference number 150 is used inconsistently throughout the detailed description. Specifically, reference number 150 is described as a “device” (p. 15, line 29), a “smart phone” (p. 16, line 3), a “portable edge computing tool” (p.16, line 20), a “smart phone device” (p. 16, line 21), and a “portable device” (p. 18, line 22). Applicant is required to amend the specification to provide consistent terminology for reference number 150.
Reference number 127 is used inconsistently throughout the detailed description and drawing. Specifically, reference number 127 is described as a “pose estimation” (p. 16, lines 17-18) and a “pose estimation module” (p. 17, line 8), while the corresponding drawing (Fig. 1D) labels reference number 127 as a “Posture estimation model”. It is unclear whether reference number 127 is referring to a pose-related or posture-related component. Applicant is required to amend the specification to provide consistent terminology for reference number 127. Corresponding amendments to the drawing may be required to ensure compliance with 37 CFR 1.84.
The specification uses both “posture classification model” and “posture classification module”, and both “pose estimation model” and “pose estimation module” to refer to components. It is unclear whether these terms refer to the same structure. Applicant is required to amend the specification to clarify the terminology for consistency. Corresponding amendments to the drawing may be required to ensure compliance with 37 CFR 1.84.
Appropriate correction is required in response to this action.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 24, 29, and 31 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 24, the term “keypoints”, found on line 3, is an ambiguous term which renders the claim indefinite. The claim recites “classifying, by a single linear layer of the posture estimation model, positions of the subject lying in the bed based on pose estimation model keypoints generated by the pose estimation model”; however, the specification recites “a single linear layer that takes the keypoint coordinates from the pose estimation model directly for posture classification.” It is unclear whether the “keypoints” are keypoint coordinates, values or a representation computed from the keypoint coordinates, or some other structure. Accordingly, the scope of the claim is indefinite.
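For context, the arrangement the specification describes (a single linear layer that takes keypoint coordinates from a pose estimation model directly for posture classification) can be sketched as below. This is an illustrative sketch only: the keypoint count, class count, weight values, and function names are all assumptions, not drawn from the application.

```python
import numpy as np

# Assumed sizes: K (x, y) keypoints in, a small set of posture classes out.
K = 17                      # assumed number of keypoints
NUM_CLASSES = 3             # e.g. supine, left side, right side (assumed)

rng = np.random.default_rng(0)
W = rng.normal(size=(NUM_CLASSES, 2 * K))   # weights of the single linear layer
b = np.zeros(NUM_CLASSES)                   # bias

def classify_posture(keypoints: np.ndarray) -> int:
    """Classify posture from (K, 2) keypoint coordinates via one linear layer."""
    # The flattened coordinates feed the linear layer directly; there are
    # no hidden layers, matching the "single linear layer" description.
    scores = W @ keypoints.reshape(-1) + b
    return int(np.argmax(scores))

keypoints = rng.uniform(0.0, 1.0, size=(K, 2))  # dummy normalized coordinates
posture = classify_posture(keypoints)
assert 0 <= posture < NUM_CLASSES
```

In such a sketch the classifier input is literally the coordinate vector; the claim's ambiguity is whether "keypoints" means these raw coordinates or some derived representation.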
Regarding claims 29 and 31, the term “monitoring/recovery”, found on line 3 of claim 29 and line 4 of claim 31, is ambiguous and does not have a recognized meaning in the art. It is unclear whether the term refers to monitoring, recovery, either monitoring or recovery, both monitoring and recovery simultaneously, or some other meaning. Because the scope cannot be reasonably determined, the scope of the claims is indefinite. For examination purposes, the limitations have been interpreted to read “monitoring and recovery”.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1-4, 6-7, 9-10, 12, 14, 16-19, and 21-31 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover a mental process (a concept performed in the human mind, including observation, evaluation, judgment, and opinion) and mathematical concepts and calculations. The claim(s) recite(s) determining and tracking the pose and posture of a human subject lying in bed based on images. This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations that would confine them to a particular technological problem to be solved. The claim(s) do(es) not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be performed mentally, and no additional features in the claims would preclude them from being performed as such, except for the generic computer elements recited at a high level of generality (e.g., processor, memory, operating system, etc.).
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that independent claims 1, 16, and 30 are directed to an abstract idea as shown below:
► STEP 1: Do the claims fall within one of the statutory categories?
YES. Claims 1, 16, and 30 are directed to a system and a computer-implemented method for in-bed pose and posture determination and tracking, and a computer-implemented method to aid in diagnosing, treating, or preventing a sleep-related medical condition.
► STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?
Yes, the claims are directed toward a mental process and/or mathematical concepts (i.e., an abstract idea).
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
Mathematical concepts - mathematical relationships, mathematical formulas or equations, mathematical calculations;
Certain methods of organizing human activity - fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
Mental processes - concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).
Independent claim(s) 1, 16, and 30 comprise a mental process that can be practicably performed in the human mind (or generic computers or components configured to perform the method) and, therefore, an abstract idea.
Regarding claim 1, the system recites the mental steps of:
a pose estimation model trained with a dataset of lying poses and operative to estimate poses of the human subject lying in the bed based on one or more of the image frames (Mental process including observation, data collecting, and evaluation that can be done in the human mind. A person could create a pose model in their head based on images they’ve seen of humans lying in beds and estimate the pose of a human lying in a bed based on one or more pictures.),
and a posture classification model trained with the dataset of lying poses and operative to classify positions of the human subject lying in the bed based on one or more of the image frames (Mental process including observation, data collecting, and evaluation that can be done in the human mind. A person could create a posture model in their head based on images of human poses they’ve seen of humans lying in beds and classify the position of a human lying in a bed based on one or more pictures and their own perceived classifications.);
wherein the processing unit is operative to determine a pose and posture of the human subject lying in the bed (Mental process including observation, data collecting, and evaluation that can be done mentally in the human mind. A person could observe a person lying in the bed, evaluate and determine the human subject’s pose and posture.).
Regarding claim 16, the system recites the steps (functions) of:
estimating, by a pose estimation model of the processing unit, poses of the human subject lying in the bed based on one or more of the captured images (Mental process including observation, data collecting, and evaluation that can be done mentally in the human mind. A person could create a pose model in their head based on images they’ve seen of humans lying in beds and estimate the pose of a human lying in a bed based on one or more pictures.)
classifying, by a posture classification model of the processing unit, positions of the human subject lying in the bed based on one or more of the captured images (Mental process including observation, data collecting, and evaluation that can be done mentally in the human mind. A person could create a posture model in their head based on images of human poses they’ve seen of humans lying in beds and classify the position of a human lying in a bed based on one or more pictures and their own perceived classifications.);
determining a pose and posture of the human subject lying in the bed (Mental process including observation, data collecting, and evaluation that can be done mentally in the human mind. A person could observe a person lying in the bed, evaluate and determine the human subject’s pose and posture.).
Regarding claim 30, the system recites the steps (functions) of:
acquiring images of a subject while the subject is sleeping or attempting to sleep in a bed for a period of time (Mental process including observation, data collecting, and evaluation that can be done mentally in the human mind. A person could observe images of a subject sleeping or attempting to sleep in a bed for a period of time.);
estimating, by a pose estimation model, poses of the subject lying in the bed based on one or more of the images (Mental process including observation, data collecting, and evaluation that can be done mentally in the human mind. A person could create a pose model in their head based on images they’ve seen of humans lying in beds and estimate the pose of a human lying in a bed based on one or more pictures.);
classifying, by a posture classification model, positions of the subject lying in the bed based on one or more of the images (Mental process including observation, data collecting, and evaluation that can be done mentally in the human mind. A person could create a posture model in their head based on images of human poses they’ve seen of humans lying in beds and classify the position of a human lying in a bed based on one or more pictures and their own perceived classifications.)
determining pose and posture of the subject during the period of time or a portion thereof (Mental process including observation, data collecting, and evaluation that can be done mentally in the human mind. A person could observe a subject or images of a subject during a time period and determine their pose and posture.)
analyzing the pose and/or posture to aid in diagnosing, treating, or preventing the sleep-related medical condition (Mental process including observation, data collecting, and evaluation that can be done mentally in the human mind. A person could evaluate the results of the pose and posture models mentally created in their head to decide a treatment plan for people sleeping irregularly.).
These limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind or by a human. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 ("'[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
As such, a person could examine a plurality of images of a human subject lying in a bed, determine and track the person’s pose, posture, and position, then analyze the observed data and provide information to aid in diagnosing, treating, or preventing a sleep-related medical condition based on the observations, either mentally or using a pen and paper. The mere nominal recitation that the various steps are being executed by a device/in a device (e.g. processing unit) does not take the limitations out of the mental process grouping. Thus, the claims recite a mental process.
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
NO, the claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to affect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception;
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Independent claim(s) 1, 16, and 30 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application.
Regarding claim 1, the claim recites additional limitations: “an imaging device comprising one or more of a depth sensor or a long wavelength infrared camera, the imaging device positioned proximate to a bed and oriented to capture images of the human subject lying in the bed” which amounts to data collecting and fails to integrate the claim into a practical application. Imaging device, depth sensor, and long wavelength infrared camera are generic computers or components.
The claim further recites “a processing unit in communication with the imaging device and operative to receive captured images of the human subject lying in the bed, the captured images including a plurality of image frames, the processing unit comprising one or more processors and memory”, which amounts to data collecting and linking devices, and fails to integrate the claim into a practical application. Imaging device, processing unit, and memory are generic computers or components.
Regarding claim 16, the claim recites additional limitations: “receiving, by a processing unit, captured images of a human subject lying in a bed from an imaging device” which amounts to data collecting and linking devices, and fails to integrate the claim into a practical application. Imaging device and processing unit are generic computers or components.
Regarding claim 30, the claim does not recite additional elements.
The additional limitations found in claims 1 and 16 are generic computer components and/or insignificant pre/post-solution extra activity that do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea in the apparatus claim. See MPEP 2106.05(g).
These limitations are recited at a high level of generality (i.e., as a general action or change being taken based on the results of the acquiring step) and amount to mere post-solution actions, which is a form of insignificant extra-solution activity. Further, the additional elements are recited generically and operate in their ordinary capacity, such that they do not use the judicial exception in a manner that imposes a meaningful limit on it. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
NO, the claims do not recite additional elements that amount to significantly more than the judicial exception.
With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, that examiners should continue to consider whether an additional element or combination of elements:
adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.
Independent claim(s) 1 and 16 recite additional “processing” components, but do not recite any additional elements that are not well-understood, routine, or conventional. The use of generic processor elements is a routine, well-understood, and conventional process performed by computers. Claim 30 does not recite additional components.
Claims 2 and 17 add the limitation of observing a human pose and posture through bedding. This is a mental process and fails to remedy the abstract ideas of claims 1 and 16.
Claims 3, 4, and 14 follow the same logic as claim 1.
Claims 6 and 18 add the limitations of a stacked hourglass model trained with a dataset of lying poses. This is a mental process (based on evaluation and mathematics) and fails to remedy the abstract ideas of claims 1 and 16.
Claims 7 and 19 add the limitation of an autoencoder. This is a mental process (evaluation based on mathematics) and fails to remedy the abstract ideas of claims 1 and 16.
Claims 9 and 21 add the limitations of a preprocessor configured to compute histogram of oriented gradients (HoG) features of the one or more captured images received by the processing unit to form a HoG feature vector corresponding to each respective captured image(s). This is a mental process based on mathematics, logic, and evaluation, performed by a generic computing device (e.g. processor) that could be performed either mentally or using a pen and paper. This claim fails to remedy the abstract ideas of claims 1 and 16.
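For context, a histogram-of-oriented-gradients (HoG) feature vector of the kind recited in claims 9 and 21 can be sketched in a few lines of numpy. This is a simplified illustration only: the cell size, bin count, and the omission of block normalization are assumptions of this sketch, not features recited in the claims.

```python
import numpy as np

def hog_features(img: np.ndarray, cell: int = 8, bins: int = 9) -> np.ndarray:
    """Simplified HoG: per-cell histograms of gradient orientation,
    weighted by gradient magnitude, concatenated into one feature vector.
    (Block normalization, used in full HoG, is omitted for brevity.)"""
    gy, gx = np.gradient(img.astype(float))        # image gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    feats = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            m = mag[r:r + cell, c:c + cell].ravel()
            a = ang[r:r + cell, c:c + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0), weights=m)
            feats.append(hist)
    return np.concatenate(feats)                   # the HoG feature vector

img = np.zeros((32, 32)); img[:, 16:] = 1.0        # dummy image with one vertical edge
vec = hog_features(img)
assert vec.shape == (16 * 9,)                      # (32/8)^2 cells x 9 bins
assert vec.sum() > 0                               # the edge produces nonzero bins
```

Each captured image thus yields one fixed-length vector, which is the "HoG feature vector corresponding to each respective captured image" the claims recite.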
Claims 10, 22, and 23 add the limitations of receiving at least one of the HoG feature vectors formed by the preprocessor; converting each respective HoG feature vector to a latent vector comprising a low-dimensional representation of the corresponding respective HoG feature vector; remapping, using a decoder of an output layer of the HoG-autoencoder, the latent vector to a HoG feature vector; and determining, using a linear classification layer of the HoG-autoencoder, a posture class probability for a captured image corresponding to the HoG feature vector. This is a mental process based on mathematics, logic, and evaluation, performed by a generic computing device (e.g. processor), that could be performed either mentally or using a pen and paper. These claims fail to remedy the abstract ideas of claims 1 and 16.
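The HoG-autoencoder pipeline recited in these claims (encode a HoG feature vector to a low-dimensional latent vector, decode the latent vector back to HoG space, and produce posture class probabilities via a linear classification layer) can be illustrated with the following untrained forward pass. All sizes, the tanh activation, and the weight values are assumptions for illustration; a real HoG-autoencoder would be trained on labeled data.

```python
import numpy as np

rng = np.random.default_rng(1)
HOG_DIM, LATENT_DIM, NUM_CLASSES = 144, 16, 3     # assumed dimensions

# Random, untrained weights standing in for a trained HoG-autoencoder.
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, HOG_DIM))   # encoder
W_dec = rng.normal(scale=0.1, size=(HOG_DIM, LATENT_DIM))   # decoder
W_cls = rng.normal(scale=0.1, size=(NUM_CLASSES, LATENT_DIM))  # classifier

def forward(hog_vec: np.ndarray):
    """One forward pass: HoG vector -> latent -> (reconstruction, class probs)."""
    latent = np.tanh(W_enc @ hog_vec)    # low-dimensional latent representation
    recon = W_dec @ latent               # decoder remaps latent back to HoG space
    scores = W_cls @ latent              # linear classification layer
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                 # softmax -> posture class probabilities
    return recon, probs

recon, probs = forward(rng.normal(size=HOG_DIM))
assert recon.shape == (HOG_DIM,)                 # reconstruction lives in HoG space
assert abs(probs.sum() - 1.0) < 1e-9             # probabilities sum to one
```

The sketch shows the claimed data flow only; it takes no position on whether such a pipeline is patent-eligible.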
Claims 12 and 26 add the limitation of an “edge device” (generic computer component) receiving images from the “processing unit” (generic computer component) and requesting notifications of a detection of a type or duration of pose and posture. This is a mental process based on observation, evaluation, and linking, performed by generic computer components and fails to remedy the abstract idea of claims 1 and 16.
Claims 24 and 25 follow the same logic as claim 16.
Claim 27 adds the limitation of an alarm response at the edge device based on alarm limits. This is a mental process based on observation, evaluation, and linking, performed by generic computer components and fails to remedy the abstract idea of claim 16.
Claim 28 adds the limitation of configurable alarm limits based on the needs, conditions, or goals of a subject. This is a mental process based on logic and evaluation, can be done in the human mind, and fails to remedy the abstract idea of claim 16.
Claim 29 adds the limitation of “wherein the needs, conditions, or goals of the subject include at least one of prevention or treatment of pressure ulcers, avoiding supine posture, 3rd trimester pregnancy, sleep apnea, chronic respiratory problems, post- surgical monitoring/recovery, neck or back injury, carpel tunnel syndrome, sleep disorders, fibromyalgia syndrome.” This is a mental process including a judgement, observation, and/or evaluation, and can be done mentally in the human mind. This limitation fails to remedy the abstract idea of claim 16.
Claim 31 adds the limitation of “wherein the sleep-related medical condition is at least one selected from the group consisting of pressure ulcers, avoiding supine posture, 3rd trimester pregnancy, sleep apnea, chronic respiratory problems, post-surgical monitoring/recovery, neck or back injury, carpel tunnel syndrome, sleep disorders, and fibromyalgia syndrome.” This is a mental process including a judgement, observation, and/or evaluation, and can be done mentally in the human mind. This limitation fails to remedy the abstract idea of claim 30.
Thus, since claim(s) 1-4, 6-7, 9-10, 12, 14, 16-19, and 21-31: (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, claim(s) 1-4, 6-7, 9-10, 12, 14, 16-19, and 21-31 are not eligible subject matter under 35 U.S.C. 101.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 6, and 16-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ostadabbas & Liu (US 20200265602 A1; 2020).
Regarding Claim 1,
Ostadabbas & Liu (2020) teach: A system for in-bed pose and posture determination and tracking for a human subject (Abstract, a system for determining in-bed human pose estimation; also used for postures, ¶ [0066] “…we fine-tuned a state-of-the-art pose estimation model… to transfer the learning to estimation of the poses in sleeping postures”; tracking the human subject over time, ¶ [0017] “…steps of the method are repeated a plurality of times to estimate a series of poses and determine movement of the human subject over a period of time.”), comprising:
an imaging device comprising one or more of a depth sensor or a long wavelength infrared camera, the imaging device positioned proximate to a bed and oriented to capture images of the human subject lying in the bed (Ostadabbas & Liu (2020) teach an imaging device comprising a long wavelength infrared camera positioned above the human subject lying in the bed; see Fig. 2A and Fig. 2B, claim 1, and ¶ [0008] “…a system is disclosed for estimating a pose of a human subject lying on a bed. The system includes a long wavelength infrared camera positioned above the human subject for capturing thermal imaging data of the human subject lying on the bed.”);
and a processing unit in communication with the imaging device (¶ [0008] “It also includes a computer system coupled to the long wavelength infrared camera….computer system comprises at least one processor…containing a plurality of instructions, which when executed by the at least one processor, cause the at least one processor to receive the thermal imaging data and process the thermal imaging data”) and operative to receive the captured images of the human subject lying in the bed (¶ [0008] “The computer system comprises…at least one processor to receive the thermal imaging”), the captured images including a plurality of image frames (see Figs. 3A-3L for samples of images captured using the cameras; these individually captured images are interpreted as image frames. A series of images is taken over a period of time and used for the dataset (Abstract; ¶ [0017]), resulting in a plurality of image frames.), the processing unit comprising one or more processors and memory (¶ [0008] “The computer system comprises at least one processor, memory associated with the at least one processor”), the processing unit including:
a pose estimation model trained with a dataset of lying poses and operative to estimate poses of the human subject lying in the bed based on one or more of the image frames (¶ [0007] “…computer system using a model to estimate the pose of the human subject, the model comprising a machine learning inference model trained on a training dataset of a plurality of in-bed human poses.”; The processor executes a program to, “receive the thermal imaging data and process the thermal imaging data using a model to estimate the pose of the human subject, the model comprising a machine learning inference model trained on a training dataset of a plurality of in-bed human poses.”(¶ [0008])),
and a posture classification model trained with the dataset of lying poses and operative to classify positions of the human subject lying in the bed based on one or more of the image frames (Ostadabbas & Liu (2020) teach that the in-bed estimation model performs classification tasks to form an accurately labelled in-bed pose dataset (see ¶ [0036]). The pose estimation model, trained with the dataset of lying-pose image frames as taught in the previous limitations of claim 1 and ¶¶ [0007]-[0008], can be used to identify postures by fine-tuning; see ¶ [0066]: “we fine-tuned a state-of-the-art pose estimation model (i.e., a stack hourglass network trained on RGB pose datasets) to transfer the learning to estimation of the poses in sleeping postures”. Ostadabbas & Liu (2020) further teach that the resulting model can be used to collect data relating to human subjects’ positions (e.g., supine, left side, and right side) and their respective categories; see ¶ [0067]: “we collected pose data from 7 volunteers in hospital room and from another volunteer in the living room, while lying in the bed and randomly changing pose under three main categories of supine, left side, and right side.”. The Examiner notes that the instant application’s specification uses the term “supine position” (see p. 13, line 27, and p. 19, line 24, of the instant application); thus, supine is interpreted as a classification of a position. Ostadabbas & Liu (2020) go on to teach that prior methods (PM) were limited to categorizing rough postures (e.g., supine, left, and right sides), while their method shows higher granularity, resulting in higher accuracy (see ¶ [0074]). Because the pose estimation model’s granularity was higher and resulted in better accuracy than the PM results, based on categorizing (i.e., classifying) postures, it is inherent that the postures were categorized in Ostadabbas & Liu’s (2020) teachings for comparison purposes.
Categorizing and classifying is considered equivalent to one of ordinary skill in the art. One of ordinary skill in the art can interpret the “fine-tuned” pose estimation model as a posture classification model because it is trained with the dataset of lying poses from the pose estimation model and the categorization of postures is inherently part of the classification task described in ¶ [0036]);
wherein the processing unit is operative to determine a pose and posture of the human subject lying in the bed (Refer back to Abstract, ¶ [0008], ¶ [0066], and ¶ [0074]).
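For illustration only (this sketch is not part of the record of either the instant application or Ostadabbas & Liu (2020); the keypoint layout, threshold value, and decision rule are invented), the rationale of reusing a pose estimation model's keypoint output to drive a posture classifier can be pictured as a small rule standing in for a trained classification head:

```python
def classify_posture(left_shoulder, right_shoulder, spread_thresh=0.3):
    """Toy posture rule standing in for a fine-tuned classification head.

    A pose estimation model would supply the shoulder keypoints as
    normalized (x, y) pairs. Shoulders spread apart horizontally suggest
    a supine pose; stacked shoulders suggest lying on a side, with the
    sign of the small residual offset picking left vs. right side.
    """
    dx = left_shoulder[0] - right_shoulder[0]
    if abs(dx) > spread_thresh:   # shoulders spread -> lying on the back
        return "supine"
    return "left_side" if dx > 0 else "right_side"

print(classify_posture((0.2, 0.5), (0.8, 0.5)))    # shoulders spread
print(classify_posture((0.52, 0.5), (0.48, 0.5)))  # shoulders stacked
```

A genuinely fine-tuned model would learn such a decision boundary from the labelled lying-pose dataset rather than use a hand-set threshold; the point of the sketch is only that pose keypoints can feed a posture (position) classification.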
Regarding Claim 16,
Claim 16 mirrors the limitations of claim 1. Ostadabbas & Liu (2020) further teach a computer-implemented method for in-bed pose and posture determination and tracking (see Ostadabbas & Liu (2020) Abstract “methods… are disclosed for estimating an in-bed human pose”; ¶ [0066] teaches posture; ¶ [0017] teaches tracking. The method is implemented using a computer system (refer back to ¶¶ [0007]-[0008]); therefore, the method is interpreted as a computer-implemented method.). Thus, claim 16 is rejected based on the prior art applied to claim 1 and herein to claim 16.
Regarding Claims 2 and 17,
Ostadabbas & Liu (2020) teach the limitations of claims 1 and 16.
Ostadabbas & Liu (2020) further teach: wherein the imaging device is capable of imaging body pose and posture of the human subject through bedding covering the human subject (the Abstract and ¶ [0066] teach imaging body pose and posture of the human subject; capturing image data through bedding covering the human subject is taught in claims 2 and 12 and in ¶ [0009] “In one or more embodiments, the human subject lying on the bed is at least partially under a cover when the thermal imaging data is captured using the long wavelength infrared camera.”; ¶ [0032] “The human subject 10 may or may not be covered by a sheet, blanket, or other cover 16”).
Regarding Claim 3,
Ostadabbas & Liu (2020) teach the limitations of claims 1 and 16.
Ostadabbas & Liu (2020) further teach: wherein the processing unit is integrated with the imaging device (see Ostadabbas & Liu (2020) ¶ [0079] “Other computer systems are also possible. For example, computer system may comprise one or more physical machines, or virtual machines running on one or more physical machines” and ¶ [0081] “elements and components described herein may be further divided into additional components or joined together to form fewer components for performing the same functions.”. It is understood by one of ordinary skill in the art that a physical machine includes multiple components (e.g., a processor, components that make up an imaging device, etc.). In conclusion, Ostadabbas & Liu’s (2020) teachings detail that a system may comprise one or more physical machines, that elements and components may be divided or joined, and that the system includes a processing unit and an imaging device (see ¶ [0008]); therefore, components of a processing unit could be integrated with the components that make up an imaging device.).
Regarding Claim 4,
Ostadabbas & Liu (2020) teach the limitations of claim 1.
Ostadabbas & Liu (2020) further teach: wherein the processing unit is located remotely from the imaging device and in electronic communication with the imaging device (the Abstract and Fig. 2A teach that image data is captured using a camera positioned above the human subject and that the data is transmitted to a computer system for processing. The computer system includes a processing unit and a set of instructions in the memory that causes the processor to receive imaging data from the camera (¶ [0008]). Additionally, ¶¶ [0079]-[0081] teach that components may or may not be combined to perform the functions of the system and that computers are connected via a network (i.e., electronic communication). Given the teachings in the Abstract and ¶ [0008], the ability to operate components without combining them (¶¶ [0079]-[0081]), the use of a network, and the electronic-based functions of a computer system known to one of ordinary skill in the art, examiner interprets the camera being above the human subject and transmitting the data to a processor to be equivalent to a processing unit being located remotely from the imaging device and in electronic communication with the imaging device.).
Regarding Claims 6 and 18,
Ostadabbas & Liu (2020) teach the limitations of claims 1 and 16.
Ostadabbas & Liu (2020) further teach: wherein the pose estimation model includes a stacked hourglass model trained with the dataset of lying poses (see Ostadabbas & Liu (2020) ¶ [0016] “In one or more embodiments, the machine learning inference model comprises a stacked hourglass network”; ¶ [0027] “FIG. 9 shows PCK evaluation of in-bed human pose estimation models tested on data from a “Hosp” setting with different cover conditions. hg(UCITD) stands for the fined tuned hourglass model on the UCITD dataset followed by cover conditions. hg-LWIR stands for applying a pre-trained stacked hourglass (hg) model directly on LWIR dataset. hg-RGB stands for applying a pre-trained hg model directly on our in-bed RGB dataset.”; ¶¶ [0071]-[0077]; the dataset the stacked hourglass model is trained with includes a plurality of in-bed human poses (Abstract)).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7, 9, 19, 21, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Ostadabbas & Liu (US 20200265602 A1; 2020) in view of Morzhakov (US 20200349347 A1), and in further view of knowledge of one of ordinary skill in the art as evidenced by the background discussion of Ostadabbas & Liu (US 20200265602 A1; 2020).
Regarding claims 7 and 19,
Ostadabbas & Liu (2020) teach the limitations of claims 1 and 16, including a posture classification model.
Ostadabbas & Liu (2020) fails to teach: wherein the posture classification model includes a histogram of oriented gradients (HoG)-autoencoder.
Ostadabbas & Liu (2020) teaches, in its background information, that a histogram of oriented gradients (HoG) is a known feature-extraction technique used to obtain edge- and texture-based features during preprocessing of image data (refer to the background information found in ¶ [0042], not the “Background” section).
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have incorporated Ostadabbas & Liu’s (2020) teachings of a posture classification model with Ostadabbas & Liu’s (2020) background teachings of HoG feature extraction techniques during preprocessing. Doing so would provide an increased number of parameters for classifying posture. The inventions lie in the same field of endeavor of determining characteristics of features.
Ostadabbas & Liu’s (2020) teachings and background information fail to teach a histogram of oriented gradients (HoG)-autoencoder.
In a related art, Morzhakov teaches: an autoencoder that uses position of a figure representing a person as input to yield the posture of the person associated with the input, see Abstract and ¶ [0081] “The context for all of the autoencoders in Set #1 302 is the position of the input stick figure on the floor, and the treatment yields the pose, e.g., the posture of the person associated with the input stick figure and/or the orientation of the person.”.
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Ostadabbas & Liu’s (2020) teachings of a posture classification model and Ostadabbas & Liu’s (2020) background teachings of a well-known HoG feature extraction technique to incorporate the autoencoder taught by Morzhakov, resulting in a model that feeds HoG image features to an autoencoder to learn a representation of the image structure for classification purposes. The inventions lie in the same field of endeavor of tracking and determining a human subject’s pose, posture, and position found in a set of images to identify potentially dangerous or risky activity. The motivation to combine the references is to more effectively analyze motion or activity in order to prevent injury or harm (see Morzhakov ¶ [0004]).
Regarding Claims 9 and 21,
Ostadabbas & Liu (2020), Ostadabbas & Liu’s (2020) background information, and Morzhakov teach the limitations of claims 7 and 19.
Ostadabbas & Liu’s (2020) background information further teaches: a histogram of oriented gradients (HoG) is computed for feature extraction and takes place during preprocessing of image data (see ¶¶ [0040]-[0044]).
Ostadabbas & Liu’s (2020) background information fails to teach a preprocessor configured to compute a HoG feature vector corresponding to each respective one of the one or more captured images.
Ostadabbas & Liu’s (2020) disclosure further teaches: at least one processor unit is used to receive a plurality of images for processing, using a plurality of instructions (¶¶ [0007]-[0008]). Based on the teachings found herein, one of ordinary skill in the art could interpret one processor as acting as a “preprocessor,” preprocessing the captured images for a histogram of oriented gradients based on features from the captured images.
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate the background teachings of computing HoG features with Ostadabbas & Liu’s (2020) disclosure and have a preprocessing unit configured to compute the HoG gradients of each of the one or more captured images received by the processing unit to form a HoG feature corresponding to each respective one of the one or more captured images.
Ostadabbas & Liu (2020) and Ostadabbas & Liu’s (2020) background teachings fail to teach a feature vector.
In a related art, Morzhakov teaches: transforming input image data to create a vector or code that indicates information about properties or characteristics (i.e. features) of the input data ([0063] “The encoder encodes, i.e., transforms input data into a latent-space representation (also called a latent vector or code) typically having a lower dimension than that of the input data. The code can indicate certain latent information about, e.g., certain properties or characteristics of, the input data. The decoder receives the latent-space representation and reconstructs the input data, which is provided as the output of the autoencoder.”).
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Ostadabbas & Liu’s (2020) teachings of a preprocessor configured to compute a well-known histogram of oriented gradients (HoG) feature (Ostadabbas & Liu’s (2020) background information) of each of the one or more captured images received by the processing unit to form a HoG feature vector, as taught by Morzhakov, corresponding to each respective one of the one or more captured images. The inventions lie in the same field of endeavor of tracking and determining a human subject’s pose, posture, and position found in a set of images to identify potentially dangerous or risky activity. The motivation to combine the references is to improve the detection techniques and improve the system speed by limiting the size of data required for processing and training of models (see Morzhakov ¶ [0004]).
Regarding Claim 22,
Ostadabbas & Liu (2020), Ostadabbas & Liu’s (2020) background information, and Morzhakov teach the limitations of claim 21, including a HoG feature vector.
Morzhakov further teaches receiving, by an encoder of the autoencoder, at least one of the feature vectors formed by the preprocessor (Morzhakov teaches providing the autoencoder a feature-based representation derived from preprocessing of captured data, see Morzhakov ¶ [0008] “the method includes providing the stick figure as an input to an autoencoder system”; ¶ [0063] “The code can indicate certain latent information about, e.g., certain properties or characteristics of, the input data.”. It would have been obvious to a person of ordinary skill in the art to represent such processed features in the form of feature vectors when providing the input to the encoder, as feature-vector representations were well-known and conventional forms of neural network input at the time of the invention.); and converting, by the encoder of the autoencoder, each respective feature vector to a latent vector (¶ [0063] “The encoder encodes, i.e., transforms input data into a latent-space representation (also called a latent vector or code) typically having a lower dimension than that of the input data. The code can indicate certain latent information about, e.g., certain properties or characteristics of, the input data.”. Examiner interprets the “certain properties or characteristics of, the input data” represented by the code to be equivalent to feature code or feature vectors.).
Morzhakov fails to teach the autoencoder in relation to HoG and using HoG feature vectors in this context.
In a related art, Ostadabbas & Liu’s (2020) background information teaches: Histogram of oriented gradients (HoG) computed during preprocessing (refer back to ¶ [0042]).
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Morzhakov’s teachings of receiving, by an encoder of the autoencoder, at least one of the feature vectors formed by the preprocessor, and converting, by the encoder of the autoencoder, each respective feature vector to a latent vector comprising a low-dimensional representation of the corresponding feature vector, to be implemented using HoG data, resulting in a HoG-based autoencoder (HoG-autoencoder) and HoG-based feature vectors (HoG feature vectors). The inventions lie in the same field of endeavor of tracking and determining a human subject’s pose, posture, and position found in a set of images to identify potentially dangerous or risky activity. The motivation to combine the references is to improve the detection techniques and improve the system speed by limiting the size of data required for processing and training of models (see Morzhakov ¶ [0004]).
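Purely as an illustration of the HoG-feature-vector-to-latent-vector pipeline discussed in this rejection (a toy sketch, not the method of either reference: the single global histogram, the fixed random linear map standing in for a trained encoder, and all dimensions are assumptions), the data flow could look like:

```python
import numpy as np

def hog_feature_vector(image, n_bins=9):
    """Minimal HoG-style feature: gradient orientations over the whole
    image, binned into n_bins, magnitude-weighted, L2-normalized.
    (Production HoG uses per-cell histograms with block normalization.)"""
    gy, gx = np.gradient(np.asarray(image, dtype=float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)

class LinearEncoder:
    """Stand-in for a trained autoencoder's encoder: a fixed linear map
    from the HoG feature vector to a lower-dimensional latent vector."""
    def __init__(self, in_dim, latent_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((latent_dim, in_dim))

    def encode(self, feature_vec):
        return self.W @ feature_vec  # low-dimensional latent representation

image = np.outer(np.arange(8.0), np.ones(8))   # simple vertical ramp image
v = hog_feature_vector(image)                  # 9-dim HoG feature vector
z = LinearEncoder(in_dim=9, latent_dim=3).encode(v)
print(v.shape, z.shape)                        # (9,) (3,)
```

In a trained HoG-autoencoder the encoder weights would be learned by reconstruction, so the latent vector compresses the posture-relevant structure of the HoG features rather than being a random projection.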
Claims 12, 14, and 24-31 are rejected under 35 U.S.C. 103 as being unpatentable over Ostadabbas & Liu (US 20200265602 A1; 2020) in view of Ghose et al. (US 20210293634 A1).
Regarding Claim 12,
Ostadabbas & Liu (2020) teach the limitations of claim 1.
Ostadabbas & Liu (2020) further teach: a device operative to receive images from the processing unit; and
to request pose or posture information (the distributed device network is configured to communicate with and receive images from the processing unit and to request pose or posture information (see ¶¶ [0007]-[0008]; ¶¶ [0077]-[0081]); types of poses are monitored by the processing unit (Abstract; ¶ [0008])).
Ostadabbas & Liu (2020) fail to teach an edge device and requesting notification of a detection of a type or duration of pose or posture by the processing unit.
In a related art, Ghose et al. teaches: an edge device (see Ghose et al. ¶ [0031] “…the centralized unit 106 can be implemented in a variety of computing systems, such as …, edge devices…”) and requesting notification of a detection of a type or duration of pose or posture (an alert, interpreted as equivalent to a notification, may go off when movement of body parts, interpreted as equivalent to a moving posture, is detected, see ¶ [0034] “In an embodiment, the display unit 108 may be used to analyze or monitor condition of a patient and provide an alert to the patient by displaying one or more parameters related to movement of one or more body parts of a subject.”).
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified Ostadabbas & Liu’s (2020) teachings of a distributed device system, in communication with a processor, that receives images and detects poses and postures, to add the edge device and notification system taught by Ghose et al. The inventions lie in the same field of endeavor of tracking movement of objects, including humans. The motivation to combine the references is to more accurately track movement without the need for extra tools (e.g., body suits with markers), see Ghose et al. ¶ [0004].
Regarding Claim 24,
Ostadabbas & Liu (2020) teach the limitations of claim 16.
Ostadabbas & Liu (2020) further teaches: classifying (See classification tasks associated with the deep learning in-bed pose estimation model layer ¶ [0036]), positions of the subject lying in the bed based on pose estimation model keypoints generated by the pose estimation model (Positions of the subject based on probability of correct keypoints are generated by the pose estimation model, see ¶¶ [0067]-[0068] and ¶¶ [0071]-[0072]).
Ostadabbas & Liu (2020) fail to teach using a single linear layer.
In a related art, Ghose et al. teaches using a linear layer for classification and identifying errors (¶¶ [0044]- [0045]).
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the teachings of classifying positions of the subject lying in the bed based on pose estimation model keypoints generated by the pose estimation model, using a layer of the pose estimation model as taught by Ostadabbas & Liu (2020), to incorporate the single-linear-layer technique taught by Ghose et al. within the classifying layer of the pose estimation model. The inventions lie in the same field of endeavor of tracking movement of objects, including humans. The motivation to combine the references is to increase the accuracy of tracking movement (Ghose et al. ¶ [0004]).
Regarding Claims 14 and 25,
Ostadabbas & Liu (2020) teach the limitations of claims 1 and 16.
Ostadabbas & Liu (2020) further teach: further comprising a motion detection component operative to determine (Ostadabbas & Liu (2020) teach a movement determining model (equivalent to a motion detection component) that detects human posture in images or frames over a period of time (equivalent to consecutive image frames) for estimating a series of poses (see ¶ [0017] “the steps of the method are repeated a plurality of times to estimate a series of poses and determine movement of the human subject over a period of time.”; refer back to the claim 1 rejection found above, the Abstract, and ¶ [0066] for the posture classification model discussion; refer to ¶¶ [0077]-[0082] for teachings relating to components and the ability to modify them in order to achieve intended tasks.)).
Ostadabbas & Liu (2020) fails to teach to account for tracking a human when the human remains stationary.
In a related art, Ghose et al. teaches: temporal pattern analysis through a method of tracking deformation over a plurality of frames and identifying errors between signature movement patterns using a predetermined threshold (see Ghose et al. ¶ [0005]). Thus, the disclosure teaches tracking patterns across multiple frames and using a predetermined threshold to determine a difference between temporally separated patterns. Examiner notes one could set a threshold whose result reflects a determination of no difference in pattern (e.g., the same posture).
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the human detection and posture-based feature extraction model taught by Ostadabbas & Liu (2020) with the temporal tracking and threshold-based pattern comparison taught by Ghose et al. to track humans across multiple frames, including when motion is not present (the same posture is returned) after a predetermined number of frames, because applying known temporal pattern matching and threshold techniques to known human posture techniques generates predictable results. The inventions lie in the same field of endeavor of tracking movement of objects, including humans. The motivation to combine the references is to increase the accuracy of movement tracking and to avoid the intensive setup and high costs caused by the excessive number of tools (e.g., body suits with multiple markers) previously required to perform the tracking task (see Ghose et al. ¶ [0004]).
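The combined temporal-threshold rationale, comparing keypoints across consecutive frames and carrying the same posture forward when no motion is detected for a predetermined number of frames, could be sketched as follows (hypothetical function, parameter names, and thresholds; neither reference discloses this exact routine):

```python
import numpy as np

def track_motion(frames, motion_thresh=0.05, hold_frames=3):
    """Label each frame by comparing its keypoints with the previous
    frame's. Mean keypoint displacement above motion_thresh -> 'moving';
    displacement below the threshold for hold_frames consecutive frames
    -> 'stationary' (i.e., the same posture is carried forward)."""
    states, still_count, prev = [], 0, None
    for kp in frames:
        kp = np.asarray(kp, dtype=float)
        if prev is None:
            states.append("init")          # nothing to compare yet
        else:
            disp = np.linalg.norm(kp - prev, axis=1).mean()
            if disp > motion_thresh:
                still_count = 0
                states.append("moving")
            else:
                still_count += 1
                states.append("stationary" if still_count >= hold_frames
                              else "settling")
        prev = kp
    return states

base = np.zeros((4, 2))                    # 4 keypoints, (x, y) each
seq = [base, base + 1.0, base, base, base, base]
print(track_motion(seq))
# ['init', 'moving', 'moving', 'settling', 'settling', 'stationary']
```

The threshold plays the role of Ghose et al.'s clinician-decided threshold: set low enough, sub-threshold displacement across the hold window is reported as an unchanged posture, which is how a stationary subject remains tracked.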
Regarding Claim 26,
Ostadabbas & Liu (2020) and Ghose et al. teach the limitations of claim 25. Claim 26 equally mirrors the limitations of claim 12, found above. Thus, claim 26 is rejected based on the prior art taught in claims 12 and 25.
Regarding Claim 27,
Ostadabbas & Liu (2020) and Ghose et al. teach the limitations of claim 26.
Ghose et al. further teaches: further comprising initiating, at the edge device, an alarm responsive to the notification of the detection or duration of detection of the at least one of the type of pose or the type of posture classification corresponding to one or more of a proscribed pose, a proscribed posture classification, or an exceeded duration of a proscribed pose or proscribed posture corresponding to one or more alarm limits (a centralized unit, which may include an edge device (see Ghose et al. ¶ [0031]), sets conditions and initiates an alert of the detected type of posture or non-limiting conditions (¶¶ [0034]-[0035]). The centralized unit and display unit may be coupled together (¶ [0025]) and the centralized unit is configurable (¶ [0035]). Further, classification and predefined thresholds are set so that when a value exceeds the threshold, motion is detected and an alert notifies the user of the body parts’ range of motion (i.e., posture) (¶¶ [0044]-[0045]).).
Regarding Claim 28,
Ostadabbas & Liu (2020) and Ghose et al. teach the limitations of claim 27.
Ghose et al. further teaches: wherein the alarm limits are configurable according to one or more needs, conditions, or goals of the subject (the alert is configurable based on non-limiting conditions (e.g., fitness parameters), see ¶ [0034]. Predetermined thresholds (¶¶ [0044]-[0045]) are equivalent to alarm limits.).
Regarding Claim 29,
Ostadabbas & Liu (2020) and Ghose et al. teach the limitations of claim 28.
Ghose et al. further teaches: wherein the needs, conditions, or goals of the subject include at least one of prevention or treatment of pressure ulcers, avoiding supine posture, 3rd trimester pregnancy, sleep apnea, chronic respiratory problems, post-surgical monitoring/recovery, neck or back injury, carpel tunnel syndrome, sleep disorders, fibromyalgia syndrome (different parameters (equivalent to conditions), ranges of motion, and/or thresholds can be configured to help monitor patients’ health and provide alerts (¶¶ [0034]-[0035]; ¶¶ [0045]-[0046]). Ghose et al. identifies numerous health-related needs, conditions, or goals of the subject, including post-surgical monitoring and recovery, arthritis, and disease (see ¶ [0038] & ¶¶ [0045]-[0046]).).
Regarding Claim 30,
Ostadabbas & Liu (2020) teaches: A computer-implemented method (see Ostadabbas & Liu (2020) Abstract “methods… are disclosed for estimating an in-bed human pose”; ¶ [0066] teaches posture; ¶ [0017] teaches tracking. The method is implemented using a computer system (refer back to ¶¶ [0007]-[0008]); therefore, the method is interpreted as a computer-implemented method.) to aid in diagnosing, treating, or preventing a sleep-related medical condition (Abstract; ¶ [0003] establishes that pose monitoring and classification during sleep carries important information for diagnosing sleep apnea, ulcers, and other medical issues, and ¶ [0006] states the methods and systems in the application aim to improve on previous techniques through a novel in-bed pose estimation technique. Examiner interprets improving sleep monitoring techniques as an aid in diagnosing sleep-related medical conditions.), the computer-implemented method comprising:
acquiring images of a subject while the subject is sleeping or attempting to sleep in a bed for a period of time (Image data of human subjects lying on a bed is captured (equivalent to acquiring images of a subject) (see ¶¶ [0007]-[0008]). The background disclosure establishes an interest in researching and monitoring people’s sleep behavior in bed (¶ [0003]), and the goal of the taught methods and systems is to enhance these techniques (¶ [0006]); therefore, one of ordinary skill in the art may interpret the subjects lying in beds as sleeping or attempting to sleep. Human subjects are tracked over time, ¶ [0017] “…steps of the method are repeated a plurality of times to estimate a series of poses and determine movement of the human subject over a period of time.”. See Figs. 3A-3L for samples of images captured using the cameras; each individually captured image is interpreted as an image frame. A series of images is taken over a period of time and used for the dataset (Abstract; ¶ [0017]), resulting in a plurality of image frames.);
estimating, by a pose estimation model, poses of the subject lying in the bed based on one or more of the images (Abstract “… imaging data of a human subject lying on a bed… estimate the pose of the human subject, the model comprising a machine learning inference model trained on a training dataset of a plurality of in-bed human poses.”);
classifying, by a posture classification model, positions of the subject lying in the bed based on one or more of the images (¶ [0036] teaches classification tasks are performed by a deep learning model. The pose estimation model, trained on a dataset of lying-pose images (¶¶ [0007]-[0008]) and interpreted as a deep learning model by one of ordinary skill in the art, can be used to identify postures by fine-tuning the pose estimation model, see ¶ [0066] “we fine-tuned a state-of-the-art pose estimation model (i.e., a stack hourglass network trained on RGB pose datasets) to transfer the learning to estimation of the poses in sleeping postures”. Ostadabbas & Liu (2020) further teaches that the resulting model can be used to collect data relating to human subjects’ positions (e.g., supine, left side, and right side) and their respective categories, see ¶ [0067] “we collected pose data from 7 volunteers in hospital room and from another volunteer in the living room, while lying in the bed and randomly changing pose under three main categories of supine, left side, and right side.”. Examiner notes the instant application’s specification uses the term “supine position” (see p. 13, line 27 and p. 19, line 24 of the instant application); thus, supine is interpreted as a classification of a position. Ostadabbas & Liu (2020) goes on to teach that prior methods (PM) were limited to categorizing rough postures (e.g., supine, left, and right sides), while their method shows higher granularity, resulting in higher accuracy (see ¶ [0074]). By concluding that the pose estimation model’s granularity was higher and resulted in better accuracy than the PM results, based on categorizing (i.e., classifying) postures, it is inherent that the postures were categorized in Ostadabbas & Liu’s (2020) teachings for comparison purposes. Categorizing and classifying are considered equivalent by one of ordinary skill in the art.
One of ordinary skill in the art can interpret the “fine-tuned” pose estimation model as a posture classification model because it is trained with the dataset of lying poses from the pose estimation model, and the categorization of postures is inherently part of the classification task described in ¶ [0036]); and
determining pose and posture of the subject during the period of time or a portion thereof (refer back to the Abstract, ¶¶ [0007]-[0008], ¶ [0066], and ¶ [0074]); and
analyzing the pose and/or posture to aid in diagnosing, (Abstract, Figs. 3A-3L, ¶¶ [0007]-[0008], and ¶ [0066] teach estimating pose and postures based on captured data from a subject in bed. Based on observing and estimating the pose and posture of a subject lying in bed over a period of time, and the stated goal of the methods and systems of enhancing techniques to analyze sleep (¶ [0003] & ¶ [0006]), examiner interprets estimating pose and posture based on captured image data to be equivalent to “analyzing the pose and/or posture to aid in diagnosing sleep”.).
Ostadabbas & Liu (2020) fail to explicitly disclose analyzing the pose and/or posture to aid in diagnosing, treating, or preventing related medical conditions.
In a related art, Ghose et al. teaches: a system and method that aids in diagnosing, treating, or preventing the related medical condition. Ghose et al. teaches tracking human movement patterns (¶ [0005]) and health related applications for the method including tracking post-surgical recovery (see ¶ [0045]). Ghose et al. further teaches the results (tracked movement patterns) assist clinicians diagnose, treat, or prevent medical conditions, see ¶ [0046] “inference of possible defects in posture can be derived by observing signatures of movement patterns…generated signatures of movement patterns are compared with stored signatures of normal movement patterns to understand abnormal movement patterns based on clinician decided thresholds and may provide output to a treating physician to understand how the patient is affected by a problem or disease.”
Therefore, it would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the claimed invention to have incorporated Ostadabbas & Liu’s (2020) teachings of a method to aid in diagnosing, treating, or preventing a sleep-related medical condition, including acquiring images of sleeping subjects, estimating poses of subjects, classifying positions of subjects, determining pose and posture of the subject, and analyzing the pose and/or posture of subjects relating to sleep, for the purpose and outcome of diagnosing, treating, or preventing medical conditions related to movement patterns, as taught by Ghose et al. The inventions lie in the same field of endeavor of tracking human movement. The motivation to combine the references is to increase the accuracy of movement tracking and to avoid the intensive setup and high costs caused by the excessive number of tools (e.g., body suits with multiple markers) previously required to perform the tracking task (see Ghose et al. ¶ [0004]).
Regarding Claim 31,
Ostadabbas & Liu (2020) and Ghose et al. teach the limitations of claim 30. Claim 31 equally mirrors the limitations of claim 29, found above. Thus, claim 31 is rejected based on the prior art taught in claims 29 and 30.
Regarding Claims 10 and 23,
Claims 10 and 23 are not rejected under 35 U.S.C. 102 or 103 and would be allowable if incorporated into their respective independent claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL DAVID BAYNES whose telephone number is (571)272-0607. The examiner can normally be reached Monday - Friday 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol can be reached at (408)918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.D.B./
Samuel D. Baynes
Examiner, Art Unit 2665
/Stephen R Koziol/Supervisory Patent Examiner, Art Unit 2665