DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-20 were pending for examination in Application No. 18/184,890, filed March 16, 2023. In the remarks and amendments received on August 5, 2025, claims 1-5, 7-12, and 17-20 were amended and claims 6 and 16 were cancelled. Accordingly, claims 1-5, 7-15, and 17-20 are currently pending for examination in the application.
Response to Amendment
Applicant’s amendments filed August 5, 2025, to the Abstract, Specification, Drawings, and Claims have overcome each objection, 35 U.S.C. § 112(b) rejection, and 35 U.S.C. § 101 rejection regarding non-statutory subject matter previously set forth in the Non-Final Office Action mailed May 9, 2025. Accordingly, the objection(s), 35 U.S.C. § 112(b) rejection(s), and 35 U.S.C. § 101 rejection(s) regarding non-statutory subject matter are withdrawn in response to the remarks and amendments filed. The examiner thanks Applicant for considering the suggested amendments to the disclosure.
Response to Arguments
Applicant’s arguments filed August 5, 2025, regarding the rejections of the claims have been fully considered but are moot because the arguments do not apply to the new combination of references being used in the current rejection below.
Remarks Regarding 35 U.S.C. § 112(f) Interpretation(s)
The examiner appreciates Applicant’s remarks traversing the claim terms being interpreted under 35 U.S.C. § 112(f) as previously set forth in the Non-Final Office Action mailed May 9, 2025. However, the examiner respectfully disagrees that the claim terms interpreted under 35 U.S.C. § 112(f) are accompanied by sufficient structure, material, or acts to entirely perform the functions recited in the claims, as asserted by Applicant (pg. 9 of Applicant’s Remarks).
35 U.S.C. § 112(f) is invoked for the terms listed in the “Claim Interpretation” section of the current Office Action below because the terms (A) “processing module” and (B) “warning module” recite the non-structural generic placeholder “module” (MPEP § 2181). In each term, the generic placeholder precedes a functional limitation: (A) “arranged to process the depth image…”; and (B) “arranged to generate an alert…”. Because Applicant merely modifies the generic placeholder with the function being performed (i.e., “processing” and “warning”), the claimed terms are akin to generic “means” terms.
Additionally, Applicant has not provided evidence and/or examples on the record showing how such limitations are “accompanied by sufficient structure, material, or acts to perform entirely the function recited therein” (pg. 9 of Applicant’s Remarks). Therefore, the 112(f) interpretations are maintained and restated in the current Office Action below (see the “Claim Interpretation” section below).
Claim Objections
Claims 7, 15, and 17 are objected to because of the following informalities:
In claims 7, 15, and 17, the examiner respectfully suggests amending the phrase “artificial intelligence AI” to recite “artificial intelligence [[AI]](AI)” to distinguish between the term and its abbreviation.
Appropriate correction is required.
Claim Interpretation (Previously Presented)
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier, as explained in MPEP § 2181, subsection I (note that the list of generic placeholders below is not exhaustive, and other generic placeholders may invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph):
A. The Claim Limitation Uses the Term "Means" or "Step" or a Generic Placeholder (A Term That Is Simply A Substitute for "Means")
With respect to the first prong of this analysis, a claim element that does not include the term "means" or "step" triggers a rebuttable presumption that 35 U.S.C. 112(f) does not apply. When the claim limitation does not use the term "means," examiners should determine whether the presumption that 35 U.S.C. 112(f) does not apply is overcome. The presumption may be overcome if the claim limitation uses a generic placeholder (a term that is simply a substitute for the term "means"). The following is a list of non-structural generic placeholders that may invoke 35 U.S.C. 112(f): "mechanism for," "module for," "device for," "unit for," "component for," "element for," "member for," "apparatus for," "machine for," or "system for." Welker Bearing Co., v. PHD, Inc., 550 F.3d 1090, 1096, 89 USPQ2d 1289, 1293-94 (Fed. Cir. 2008); Mass. Inst. of Tech. v. Abacus Software, 462 F.3d 1344, 1354, 80 USPQ2d 1225, 1228 (Fed. Cir. 2006); Personalized Media, 161 F.3d at 704, 48 USPQ2d at 1886–87; Mas-Hamilton Group v. LaGard, Inc., 156 F.3d 1206, 1214-1215, 48 USPQ2d 1010, 1017 (Fed. Cir. 1998). Note that there is no fixed list of generic placeholders that always result in 35 U.S.C. 112(f) interpretation, and likewise there is no fixed list of words that always avoid 35 U.S.C. 112(f) interpretation. Every case will turn on its own unique set of facts.
Such claim limitation(s) is/are:
"processing module arranged to process…" in claim 11 implemented on hardware disclosed lines 15-21 of pg. 8 of the instant Specification (e.g., "computer"), and thus, claims 12-15 and 17-20 are similarly interpreted; and
"warning module arranged to generate…" in claim 11 implemented as a software application as disclosed in lines 31-34 of pg. 12 to lines 1-4 of pg. 13 of the instant Specification, and thus, claims 12-15 and 17-20 are similarly interpreted.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
This application includes one or more claim limitations that use a generic placeholder but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitation(s) is/are:
"3D spatial sensor arranged to provide…" in claim 11 implemented on hardware disclosed in claim 12 (e.g., "stereo camera, 3D solid-state LiDAR, or structure light camera").
Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof.
If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that performs the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 10-12, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Derenne et al. (Derenne; US 2015/0109442 A1, previously cited in the Non-Final Office Action mailed May 9, 2025, as pertinent art).
Regarding claim 1, Derenne discloses a method for monitoring activities of an object, comprising the steps of:
providing a depth image capturing at least a part of the object and an item of supporting furniture, wherein the item of supporting furniture is provided with a support surface arranged to physically support the object to be disposed thereon, and the object is movable relative to the item of supporting furniture (para(s). [0075], [0077], and [0270], recite(s)
[0075] “In one embodiment, any one or more of the video cameras 22 of system 20 are motion and image sensing devices sold under the brand name Kinect™, or variations thereof, by Microsoft Corporation of Redmond, Wash., USA. The Kinect™ motion sensing camera device includes an RGB (red, green, blue) camera, a depth sensor, and a multi-array microphone. …The depth sensor may include an infrared laser projector combined with a complementary metal oxide semiconductor (CMOS) sensor, which captures reflected signals from the laser projector and combines these signals with the RGB sensor signals.”
[0077] “In still other embodiments, other types of video cameras 22 are used, or a combination of one or more of the Kinect™ cameras 22 is used with one or more of the WAVI Xtion™ cameras 22 . Still other combinations of cameras 22 may be used. Modifications may also be made to the camera 22 , whether it includes a Kinect™ camera or a WAVI Xtion™ camera, or some other camera, in order to carry out the functions described herein, as would be known to one of ordinary skill in the art. …The terms “video camera” or “camera,” as used herein, will therefore encompass devices that only detect images, as well as devices that detect both images and depths. …”
[0270] “In one embodiment, system 20 monitors images and depth readings from cameras 22 to predict behavior that leads to someone getting out of bed. In one embodiment, this also includes recognizing when a patient is awake, as opposed to sleep; recognizing the removal of sheets, or the movement external objects out of the way, such as, but not limited to, an over bed table (OBT), a phone, a nurse call device, etc.); and recognizing when a patient swings his or her legs, grabs a side rail, inches toward a bed edge (such as shown in FIG. 13), lifts his or her torso, finds his or her slippers or other footwear, moves toward a gap between siderails of the patient's bed, or takes other actions.”
, where a camera that “detect[s] both images and depths” such as a “Kinect™ motion sensing camera” is a camera capturing depth images; where the “bed” is an item of supporting furniture and the “patient” is an object movable relative to the item of supporting furniture), and to identify a status of the item of supporting furniture and other furniture detectable by one or more sensor and/or computer vision (para(s). [0270]—see citation in the preceding limitation immediately above—, where determining “movement [of] external objects out of the way” such as an “over bed table (OBT)” is identifying a status of other furniture; and para(s). [0297] further recite(s):
[0297] “When configured with a patient support apparatus software module 34 , system 20 communicates all or a portion of the data it generates to the patient support apparatus 36 , such as via patient support apparatus computer device 70 . When the patient support apparatus 36 receives this data, it makes it available for display locally on one or more lights, indicators, screens, or other displays on patient support apparatus 36 . For example, when system 20 is currently executing an exit detection algorithm for a particular patient support apparatus 36 , computer device 24 forwards this information to the patient support apparatus 36 so that a caregiver receives a visual confirmation that exit detection alerting is active for that bed, chair, or other patient support apparatus 36 . As another example, computer device 24 sends the Fowler angle measured by system 20 to the patient support apparatus 36 so that it can be displayed and/or used by the patient support apparatus. System 20 further sends a signal to patient support apparatus 36 indicating when the patient has left the bed 36 so that the bed 36 can perform an auto-zeroing function of its built-in scale system. Any of the other parameters that are detected—e.g. an obstacle, the positions of the siderails, the height of the bed's deck, and a running total of the number of times various components on the bed have been moved, whether a cord is plugged in prior to the bed moving (and thus triggering a warning to the caregiver)—are sent by computer device 24 to the patient support apparatus 36 , in at least one embodiment of system 20 in which computer device 24 executes a patient support apparatus software module.”
where determining the “positions of the siderails, the height of the bed’s deck,” etc. is identifying the status of the item of supporting furniture);
processing the depth image to determine an activity of the object and the item of supporting furniture including the support surface being captured in the depth image (para(s). [0270]—see citation in the preceding limitation “providing a depth image…” above—, where processing the “images and depth readings from cameras 22 to predict behavior that leads to someone getting out of bed” is processing depth images to determine an activity of the object and the item of supporting furniture including the support surface (e.g., a patient “getting out of bed”)); and
generating an alert upon a determination of the activity of the object being identified as a risky activity (para(s). [0272] and [0275], recite(s)
[0272] “For some patient exit detection software modules, 34 , computer device 24 detects when a patient places his or her hands over a side rail. The coordinates of the patient's feet and other body extremities are compared to each other and it is determined whether any of these fall outside the bed outline coordinates. The center of gravity of the patient may also or alternatively be estimated and a higher likelihood of a patent exiting the bed is concluded when the vertical component of the patient's center of gravity increases, or when the vertical component of the position of the patient's head increases. The detection of a patient leaning over a side rail also increases the estimate of the likelihood of a patient leaving the bed. Movement of the patient toward a side rail, or toward the side of the bed closest to the bathroom, also increases the estimate of the likelihood of the patient leaving the bed. The removal of sheets and the sitting up of a patient in bed may also increase this estimate. System 20 calculates a likelihood of a patient making an imminent departure from the bed, based on any of the aforementioned factors. If this estimate exceeds a predefined threshold, then an alert is transmitted to appropriate caregivers. A numeric value corresponding to this estimation of the likelihood of a patient exiting the bed may also be displayed on one or more screens that are viewable by a caregiver, including the screens of mobile devices, such as smart phones, laptops, tablet computers, etc.”
[0275] “If an exit event or condition detected by one or more video cameras 22 gives rise to a low risk status, system 20 gives out a warning, according to at least one software module 34 . In the case of a high risk status, system 20 issues an alarm. Automatic voice signals may also be transmitted to speakers within the patient's room. …”
, where transmitting “an alert … to appropriate caregivers” when the calculated “likelihood of a patient making an imminent departure from the bed” exceeds “a predefined threshold” is generating an alert upon a determination of the activity of the object being identified as a risky activity (e.g., a “risk status” if an “exit event or condition [is] detected by one or more video cameras”)), wherein the alert is generated based on analyzing a danger level of the activity of the object associated with the status of the item of supporting furniture and a predicted interaction between the object and other furniture (para(s). [0270]—see citation in the preceding limitation “providing a depth image…” above—, and para(s). [0275]—see citation in the preceding limitation immediately above—, where generating the “alert” according to a level of risk (e.g., “low risk status” or “high risk status”) of the activity of the object associated with “an exit event or condition detected” is generating the alert based on analyzing a danger (i.e., “risk”) level of the activity of the object associated with the status of the item of supporting furniture (e.g., the patient “grab[bing] a side rail [of the bed]”) and a predicted interaction between the object and the other furniture (e.g., recognizing “movement [of] external objects out of the way, such as, but not limited to, an over bed table (OBT)”)).
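As a non-limiting illustration only (not relied upon in the rejection above), the threshold-based, two-tier alerting Derenne describes in paras. [0272] and [0275] may be sketched in Python as follows; the threshold values and the function name are hypothetical and do not appear in the cited reference:

    def risk_alert(exit_likelihood, warn_threshold=0.5, alarm_threshold=0.8):
        """Two-tier alerting: a warning for a low risk status and an alarm
        for a high risk status (cf. Derenne paras. [0272], [0275]).
        The numeric thresholds are hypothetical placeholders."""
        if exit_likelihood >= alarm_threshold:
            return "alarm"      # high risk status
        if exit_likelihood >= warn_threshold:
            return "warning"    # low risk status
        return None             # no alert

For example, risk_alert(0.9) returns "alarm", consistent with Derenne issuing an alarm for a high risk status.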
Regarding claim 2, Derenne discloses the method of claim 1, wherein the depth image is captured by a three-dimensional (3D) spatial sensor including a stereo camera, a 3D solid-state LiDAR, or a structured light camera (para(s). [0075] and [0077]—see citations in claim 1 limitation “providing a depth image…” above—, where a “Kinect™ motion sensing camera device” is at least a structured light camera).
Regarding claim 10, Derenne discloses the method of claim 1, wherein the object is a patient or an object requiring caregivers’ and/or other peoples’ attentions (para(s). [0270]—see citation in claim 1 limitation “providing a depth image…” above—, where the object is at least a patient).
Regarding claim 11, the claim is a system performing the method of claim 1. Therefore, claim 11 recites similar limitations to claim 1 and is rejected for similar rationale and reasoning (see the analysis for claim 1 above).
Regarding claim 12, the claim recites similar limitations to claim 2 and is rejected for similar rationale and reasoning (see the analysis for claim 2 above).
Regarding claim 20, the claim recites similar limitations to claim 10 and is rejected for similar rationale and reasoning (see the analysis for claim 10 above).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3-5, 7-9, 13-15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Derenne as applied to claims 2 and 12 above, and further in view of Rush et al. (Rush; US 10,489,661 B1, previously cited in the Non-Final Office Action mailed May 9, 2025).
Regarding claim 3, Derenne discloses the method of claim 2, wherein the step of processing the depth image comprises a step of (para(s). [0095], [0138], and [0178], recite(s)
[0095] “For example, in one embodiment, the software used by computer device 24 to analyze the image and depth data from cameras 22 is processed using the commercially available software… These algorithms are designed to be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce high resolution images of entire scenes, find similar images from an image database, follow eye movements, recognize scenery and establish markers to overlay scenery with augmented reality, and other tasks.”
[0138] “Computer device 24 utilizes environmental data 52 regarding the position of cameras 22 so that depth and image readings from the multiple cameras can be correlated to each other. Thus, for example, if a first camera detects a first side of an object at a first distance from the first camera, and a second camera detects another side of the object at a second distance from the second camera, the location information of each camera 22 within the room is utilized by computer device 24 to confirm that the first and second cameras 22 are looking at the same object, but from different vantage points. The color, shape, and size information that each camera 22 gathers about the object from its vantage point is then combined by computer device 24 , thereby providing computer device 24 with more information for identifying the object and/or for monitoring any activities that relate to the object.”
[0178] “As mentioned above, video monitoring system 20 is also configured to detect and identify objects that appear in the images and depth data gathered from cameras 22 . Computer device 24 processes the images and depth data received from cameras 22 to detect the object within one or more image frames. Computer device 24 then seeks to match the three-dimensional pattern of the detected object to the attribute data 44 of a known object that is stored in database 50. …”
, where identifying “objects that appear in the images and depth data gathered from cameras” for “identifying the object and/or for monitoring any activities that relate to the object” (e.g., if a patient is “getting out of bed” such as recited in para. [0270]—see citation in claim 1 limitation “providing a depth image…” above) is performing 3D analysis of the object (e.g., “patient”) and the item of supporting furniture (e.g., “bed”) so as to determine the activity of the object (e.g., the patient “getting out of bed”)).
However, Derenne does not specifically disclose
converting the depth image to point cloud data for further 3D analysis of the object and the item of supporting furniture so as to determine the activity of the object;
Rush teaches, in the same field of endeavor of processing depth images to determine an activity of an object (e.g., patient) relative to an item of supporting furniture (e.g., bed),
converting the depth image to point cloud data for further 3D analysis of the object and the item of supporting furniture so as to determine the activity of the object (lines 35-53 of col. 3, recite(s)
[lines 35-53 of col. 3] “…The processor 106 may execute one or more software programs (e.g., modules) that implement techniques described herein. For example, the processor 106 , in conjunction with one or more modules as described herein, is configured to generate a depth mask (image) of the environment based upon the depth estimate data (e.g., z-component data) captured by the cameras 102 . For example, one or more modules are configured to cause the processor 106 to continually monitor the depth value of at least substantially all of the pixels that represent the captured environment and stores the greatest (deepest) depth value associated with each pixel. For instance, the modules cause the processor 106 to continually monitor for a pre-determined amount of time (e.g., a plurality of frames) the depth value of the pixels and stores the deepest depth value measured during the time interval. Thus, the depth mask comprises an accumulation of depth values and each value represents the deepest depth value of a pixel measured over the time interval. The processor 106 can then be instructed to generate a point cloud based upon the depth mask that includes a set of point values that represent the captured environment.”
, where “generat[ing] a point cloud based upon the depth mask [image]” is converting the depth image to point cloud data).
Since Derenne further discloses that the software used to analyze the image and depth data from the cameras in the system includes machine learning algorithms such as “produc[ing] 3D point clouds” (para(s). [0095]—see citation of Derenne in the current claim above), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Derenne to incorporate converting the depth image to point cloud data for further 3D analysis of the object and the item of supporting furniture so as to determine the activity of the object, in order to better determine the activity of the object by improving the detection of the object (e.g., patient) on/above the item of supporting furniture (e.g., bed), as taught by Rush (para(s). [0030], recite(s)
[0030] “The module 116 is configured to cause the processor 106 to determine a depth value associated with each pixel (e.g., each pixel has a corresponding value that represents the approximate depth from the camera 102 to the detected object). In an implementation, the module 116 is configured to cause the processor 106 to determine a center of mass of a detected object positioned above the bed plane. For example, the module 116 may initially cause the processor 106 to determine a bed plane 202 representing a bed within the FOV of the camera 102 (e.g., determine the depth of the bed with no objects on or over the bed). Thus, the pixels associated with the bed are identified (i.e., define a bed plane) and an associated distance is determined for the identified bed. When an object is positioned within in the bed, the module 116 is configured to cause the processor 106 to continually monitor the depth values associated with at least substantially all of the pixels within the defined bed plane. Thus, the processor 106 is configured to process the depth image to determine one or more targets (e.g., users, patients, bed, etc.) are within the captured scene. For instance, the processor 106 may be instructed to group together the pixels of the depth image that share a similar distance.”
).
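As a non-limiting illustration only (not relied upon in the rejection above), the depth-image-to-point-cloud conversion of the kind taught by Rush (lines 35-53 of col. 3) may be sketched in Python as follows, assuming a pinhole camera model; the intrinsic parameter names and the function name are hypothetical:

    import numpy as np

    def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
        """Back-project a depth image (meters) into an N x 3 point cloud
        using pinhole intrinsics (fx, fy: focal lengths; cx, cy: principal point)."""
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        points = np.stack((x, y, depth_m), axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]  # keep only pixels with a valid depth reading

The resulting N x 3 array is the “point cloud data” form on which the further 3D analysis recited in claim 3 would operate.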
Regarding claim 4, Derenne in view of Rush discloses the method of claim 3, wherein Derenne further discloses the step of processing the depth image further comprises a step of identifying a location of the item of supporting furniture, including locating the support surface of the item of supporting furniture captured in the depth image (para(s). [0218-0219] and [0222], recite(s)
[0218] “In some systems 20 , one or more cameras 22 are positioned to measure a height H ( FIG. 6) of the patient's bed. System 20 identifies the particular type of bed the patient is resting on by detecting a number of attributes of the bed via cameras 22 and then comparing these attributes to attribute data 44 of specific types of beds stored in database 50 . The list of attributes include dimensions for the detected bed, markings on the bed, structural features of the beds, identifiers positioned on the bed, or other information about the bed that can be used to distinguish the bed from other types of beds that may be present in the health care facility. If only one type of bed is used within the facility, then such comparisons may be omitted.”
[0219] “After a bed is detected by system 20 , system 20 determines how high the bed is currently positioned (distance H in FIG. 6) above the ground. This number is then compared with the known minimum height for that particular bed. Such known heights are stored in database 50 . Indeed, database 50 contains values of the minimum heights for each type of bed that may be present in the health care facility. System 20 issues an alert if it detects that height H is greater than the known lowest height for that particular bed. In issuing this alert, a tolerance may be included to account for any measurement errors by system 20 so that bed height alerts are not issued in response to inaccurate height measurements by system 20 . Sending such alerts helps in preventing patient falls, and/or in minimizing any negative consequences from any falls that might occur.”
[0222] “…That is, many models of patient support apparatuses 36 , such as beds, include lights positioned at defined locations that are illuminated by the bed when the brake is on and when the exit detection system is armed. Attribute data 44 for these beds includes the location and color of these lights for each type of bed 36. …”
, where determining “attribute data” for a “bed”, including the “location” of the bed and “how high the bed is currently positioned… above the ground”, is identifying a location of the item of supporting furniture, including locating the support surface of the item (e.g., the “height” of the bed is a location of the support surface of the bed relative to the ground)).
Regarding claim 5, Derenne in view of Rush discloses the method of claim 4 wherein Derenne further discloses the step of identifying the location of the item of supporting furniture includes at least one of:
identifying one or more machine-detectable markers each indicating a predetermined position of a feature of the item of supporting furniture;
annotating the location of the item of supporting furniture by an operator; or
determining the location of the item of supporting furniture using AI image recognition
(para(s). [0218]—see citation in claim 4 above—, where identifying the bed by “markings”, “structural features”, and/or “identifiers positioned on the bed” is identifying the location of the item of supporting furniture based on at least identifying one or more machine-detectable markers (e.g., “markings”, “features”, and/or “identifiers”) each indicating a predetermined position of a feature of the bed (e.g., “identifiers positioned on the bed” to identify known “specific types of beds” stored in a database are machine-detectable markers indicating a predetermined position of a feature—e.g., “attribute[s]”—of the bed)).
Regarding claim 7, Derenne in view of Rush discloses the method of claim 4, wherein Derenne further discloses the step of processing the depth image further comprises a step of identifying a position and/or a posture of the object based on a trained artificial intelligence (AI) models and/or skeleton of the object (para(s). [0171-0172], recite(s):
[0171] “Video monitoring system 20 is configured to detect people who appear in the images detected by cameras 22 . In at least one embodiment, system 20 detects such people and generates a rudimentary skeleton 76 that corresponds to the current location of each individual detected by cameras 22 . FIG. 5 shows one example of such a skeleton 76 superimposed upon an image of an individual 78 detected by one or more cameras 22 . In those embodiments where cameras 22 include a Microsoft Kinect device, the detection and generation of skeleton 76 is carried out automatically by software included with the commercially available Microsoft Kinect device. Regardless of the manner in which skeleton 76 is generated, it includes a plurality of points 80 whose three dimensional positions are computed by computer device 24 , or any other suitable computational portion of system 20 . In those embodiments where cameras 22 include Kinect devices that internally generate skeleton 76 , computer device 24 is considered to include those portions of the internal circuitry of the Kinect device itself that perform this skeleton-generating computation. In the embodiment shown in FIG. 5, skeleton 76 includes points 80 that are intended to correspond to the individual's head, neck, shoulders, elbows, wrists, hands, trunk, hips, knees, ankles, and feet. In other embodiments, skeleton 76 includes greater or fewer points 80 corresponding to other portions of a patient's body.”
[0172] “For each point 80 of skeleton 76 , system 20 computes the three dimensional position of that point multiple times a second. The knowledge of the position of these points is used to determine various information about the patient, either alone or in combination with the knowledge of other points in the room, as will be discussed in greater detail below. For example, the angle of the patient's trunk (which may be defined as the angle of the line segment connecting a trunk point to a neck point, or in other manners) is usable in an algorithm to determine whether a patient in a chair is leaning toward a side of the chair, and therefore may be at greater risk of a fall. The position of the hands relative to each other and/or relative to the chair also provides an indication of an intent by the patient to get up out of the chair. For example, placing both hands on the armrests and leaning forward is interpreted, in at least one embodiment, by computer device 24 to indicate that the patient is about to stand up. Computer device 24 also interprets images of a patient who places both hands on the same armrest as an indication of an intent by the patient to get up out of the chair. Many other algorithms are described in greater detail below that use the position of body points 80 relative to objects in the room and relative to each other to determine conditions of interest.”
, where generating a “skeleton 76 superimposed upon an image of an individual 78 detected by one or more cameras 22” to determine the posture of a patient (e.g., “position of body points 80 relative to objects in the room and relative to each other to determine conditions of interest”) is identifying a posture of the object (e.g., “patient”) based on at least a skeleton of the object).
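As a non-limiting illustration only (not relied upon in the rejection above), the skeleton-based posture determination Derenne describes in para. [0172] (e.g., the trunk angle defined by the segment connecting a trunk point to a neck point) may be sketched in Python as follows, assuming a y-up coordinate frame; the function name is hypothetical:

    import numpy as np

    def trunk_angle_deg(neck_point, trunk_point):
        """Angle (degrees) between the trunk-to-neck segment of a detected
        skeleton and the vertical axis (cf. Derenne para. [0172]).
        Assumes 3D points (x, y, z) with y as the up axis."""
        seg = np.asarray(neck_point, dtype=float) - np.asarray(trunk_point, dtype=float)
        cos_angle = seg[1] / np.linalg.norm(seg)  # dot product with (0, 1, 0)
        return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

Under Derenne’s scheme, a large trunk angle sustained over successive frames would contribute to the estimated likelihood of the patient leaning toward a side of the chair or bed.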
Regarding claim 8, Derenne in view of Rush discloses the method of claim 7, wherein Derenne further discloses the step of processing the depth image further comprises a step of predicting the risky activity performed by the object with reference to (i) a tracked posture of the object captured in a single and/or a sequence of depth images and/or (ii) a status of furniture other than the supporting furniture (para(s). [0171-0172]—see citations in claim 7 above—, where processing the depth image to determine the “intent by the patient” such as a patient “getting out of bed” (as recited in para. [0270]—see citation in claim 1 limitation “providing a depth image…” above) is predicting the risky activity performed by the object with reference to (i) a tracked posture of the object captured in a single and/or a sequence of depth images (e.g., “position of body points” detected by cameras) and/or (ii) a status of furniture other than the supporting furniture (e.g., “movement external objects out of the way, such as, but not limited to, an over bed table (OBT)”)).
Regarding claim 9, Derenne in view of Rush discloses the method of claim 8, wherein Derenne further discloses the step of processing the depth image further comprises a step of identifying a portion of the object being outside of the support surface to determine if the activity of the object is risky based on (para(s). [0275]—see citation in claim 1 limitation “generating an alert…” above—, where the “exit event or condition detected” for determining if the activity of the object is risky includes at least a portion of the object (e.g., “patient’s feet and other body extremities”) outside of the support surface (e.g., “fall outside the bed outline coordinates”); and wherein para(s). [0272] further recite(s) that the determination that the portion of the object is outside of the support surface is based on the object staying on/above the support surface and outside of the support surface (e.g., “whether any of these [i.e., ‘patient’s feet and other body extremities are compared to each other’] fall outside the bed outline coordinates”):
[0272] “For some patient exit detection software modules, 34 , computer device 24 detects when a patient places his or her hands over a side rail. The coordinates of the patient's feet and other body extremities are compared to each other and it is determined whether any of these fall outside the bed outline coordinates. The center of gravity of the patient may also or alternatively be estimated and a higher likelihood of a patent exiting the bed is concluded when the vertical component of the patient's center of gravity increases, or when the vertical component of the position of the patient's head increases. The detection of a patient leaning over a side rail also increases the estimate of the likelihood of a patient leaving the bed. …”
).
However, Derenne does not specifically disclose
identifying a portion of the object being outside of the support surface to determine if the activity of the object is risky based on a ratio of points in a point cloud representing the object staying on/above the support surface of the item of supporting furniture and outside of the support surface;
Rush further teaches, in the same field of endeavor of determining if the activity is risky based on the object staying on/above the support surface of the item of supporting furniture and outside of the support surface,
identifying a portion of the object being outside of the support surface to determine if the activity of the object is risky based on a ratio of points in a point cloud representing the object staying on/above the support surface of the item of supporting furniture and outside of the support surface (lines 57-67 of col. 11 to lines 1-28 of col. 12, recite(s)
[lines 57-67 of col. 11 to lines 1-28 of col. 12] “…For example, the processor 106 is configured to determine pixels that represent the bed 226 with respect to other objects within the field of view of the camera 102. The surface area of the objects identified as outside of the bed 226 are estimated. The processor 106 determines the object 204 to be a human when the estimation is greater than a defined threshold of pixels (e.g., a subset of pixels is greater than a defined threshold of pixels). The module 116 is configured to instruct the processor 106 to differentiate between a standing person and a person lying down based upon the percentage of pixels representing the object 204 (based upon the surface area) classified as above the bed 226 as compared to the percentage of pixels of the object 204 classified as below the bed 226. For example, if the percentage of the pixels representing object 204 detected below the bed 226 is above a defined threshold (e.g., greater than forty percent, greater than fifty percent, etc.), the module 116 instructs the processor 106 to determine that the person is lying down within the FOV of the camera 102. Thus, the processor 106 is configured to identify a subset of pixels representing a mass proximal to the subset of pixels representing the floor that were not proximal to the floor pixels in previous frames. In this implementation, the module 116 determines that the patient is on the floor when the subset of pixels representing the mass proximal to the subset of pixels representing the floor that were not proximal to the floor pixels in previous frames.”
, where determining the risky activity of whether a “patient fell from his or her bed” includes determining “the percentage of pixels representing the object 204 (based upon the surface area) classified as above the bed 226”, which is determining a risky activity by identifying a portion of the object (e.g., “percentage of pixels”) being outside of the support surface based on a ratio (i.e., “percentage”) of points in a point cloud representing the object staying on/above the support surface of the item of supporting furniture and outside of the support surface (e.g., a “percentage of pixels representing the object 204 (based upon the surface area) classified as above the bed” is a ratio between pixels of the patient staying on/above the support surface of the bed and pixels not on/above the support surface of the bed (i.e., outside of the support surface))).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Derenne to incorporate identifying a portion of the object being outside of the support surface to determine if the activity of the object is risky based on a ratio of points in a point cloud representing the object staying on/above the support surface of the item of supporting furniture and outside of the support surface, in order to more accurately determine the risky activity of when the patient is at risk of falling off the support surface of the item of supporting furniture, as taught by Rush (para(s). [0047], recite(s)
[0047] “In another example, as shown in FIG. 2K, the module 116 is configured to determine a movement of the object 204 within the bed 226 . For example, the module 116 is configured to cause the processor 106 to approximate a total change in volume of the detected pixels representing the object 204 (e.g., patient) in the bed within one image frame to the next image frame (e.g., the change in volume of pixels from to to TN). If the total change in volume of the object is above a defined threshold, the module 116 is configured to cause the processor 106 to issue a notification directing a user to view a display monitor 124 and/or display portion 128 associated with the patient based on a determination that the patient may not be moving beyond the defined medical protocol. The system 100 is configured to track pixels associated with a mass over a number of depth frame images. If the processor 106 determines that the pixels representing the mass move closer to the floor (e.g., depth values of the pixels representing the mass approach the depth values of the pixels representing the floor), the processor 106 may determine that the object representing the mass is falling to the floor. In some implementations, the system 100 may utilize sound detection (e.g., analysis) in addition to tracking the pixels to determine that the patient has fallen. For example, the processor 106 may determine that a sudden noise in an otherwise quiet environment would indicate that a patient has fallen.”
).
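As a non-limiting illustration only (not relied upon in the rejection above), the ratio-based determination of the kind taught by Rush (lines 57-67 of col. 11 to lines 1-28 of col. 12) may be sketched in Python as follows; the rectangular bed outline is a hypothetical simplification of Rush’s pixel-percentage classification:

    import numpy as np

    def outside_surface_ratio(points, x_bounds, y_bounds):
        """Fraction of point-cloud points whose (x, y) footprint falls outside
        a rectangular bed outline; a high ratio suggests the monitored person
        is partially or wholly off the support surface (cf. Rush)."""
        if len(points) == 0:
            return 0.0
        (x_min, x_max), (y_min, y_max) = x_bounds, y_bounds
        inside = ((points[:, 0] >= x_min) & (points[:, 0] <= x_max) &
                  (points[:, 1] >= y_min) & (points[:, 1] <= y_max))
        return 1.0 - float(inside.mean())

Comparing this ratio against a defined threshold (e.g., the “greater than forty percent, greater than fifty percent” values Rush recites) would flag the activity as risky.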
Regarding claim 13, the claim recites similar limitations to claim 3 and is rejected for similar rationale and reasoning (see the analysis for claim 3 above).
Regarding claim 14, the claim recites similar limitations to claim 4 and is rejected for similar rationale and reasoning (see the analysis for claim 4 above).
Regarding claim 15, the claim recites similar limitations to claim 5 and is rejected for similar rationale and reasoning (see the analysis for claim 5 above).
Regarding claim 17, the claim recites similar limitations to claim 7 and is rejected for similar rationale and reasoning (see the analysis for claim 7 above).
Regarding claim 18, the claim recites similar limitations to claim 8 and is rejected for similar rationale and reasoning (see the analysis for claim 8 above).
Regarding claim 19, the claim recites similar limitations to claim 9 and is rejected for similar rationale and reasoning (see the analysis for claim 9 above).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIA Z YAO whose telephone number is (571)272-2870. The examiner can normally be reached Monday - Friday (8:30AM - 5PM).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.Z.Y./Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666