Prosecution Insights
Last updated: April 19, 2026
Application No. 17/158,285

SYSTEM AND METHOD FOR MONITORING AN INDIVIDUAL USING LIDAR

Final Rejection — §103, §112
Filed: Jan 26, 2021
Examiner: HAILE, BENYAM
Art Unit: 2688
Tech Center: 2600 — Communications
Assignee: Curbell Medical Products Inc.
OA Round: 4 (Final)
Grant Probability: 62% (Moderate)
OA Rounds: 5-6
To Grant: 2y 5m
With Interview: 87%

Examiner Intelligence

Grants 62% of resolved cases.

Career Allow Rate: 62% (428 granted / 691 resolved; at TC average)
Interview Lift: +25.1% (resolved cases with interview; a strong lift)
Avg Prosecution: 2y 5m (typical timeline); 55 applications currently pending
Total Applications: 746 across all art units (career history)

Statute-Specific Performance

§101: 2.5% (-37.5% vs TC avg)
§103: 54.7% (+14.7% vs TC avg)
§102: 16.0% (-24.0% vs TC avg)
§112: 20.9% (-19.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 691 resolved cases.

Office Action

§103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-26 are pending.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-26 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1 and 14 recite the limitation “the processor is configured to: receive a set of spatial data from the sensor, wherein the spatial data comprises direct spatial coordinates representing a point cloud”.
The originally filed disclosure only provides support for the processor receiving “a set of spatial data from the LIDAR sensor” and calculating the location/coordinate based on the spatial data (published application [0006, 0044, 0052]). There is no support in the originally filed disclosure for the claimed limitation of the LIDAR sensor providing spatial data comprising “direct spatial coordinates” to the processor.

Claims 2-13 and 15-26 are rejected for being dependent on a rejected claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8, 10, 13-21, 23, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Fuji et al. [US 20130100284] in view of Valdhorn [US 20170085839].

As to claim 1: Fuji discloses a system for monitoring an individual, comprising: a processor ([fig. 1, 2A, 0098]: processing unit 20); a sensor ([fig. 1, 2A, 0098]: range imaging sensor 10) in electronic communication with the processor [fig. 1, 2A, 0098], wherein the sensor is configured to emit laser pulses for direct distance measurement ([0065]: the range imaging sensor projects an intensity-modulated light and calculates the distance of the target object based on the time-of-flight of the received reflected light) and generate a set of spatial data as a point cloud ([0063]: the range image is an image having pixel values as distance values; [0086]: each pixel represents distance to an existing object; the range image with a plurality of pixels each representing distance values reads on the claimed set of spatial data as a point cloud); and wherein the processor is configured to: receive a set of spatial data from the sensor ([0105]: range image received from range image sensor 10), wherein the spatial data comprises direct spatial coordinates representing a point cloud ([0063]: the range image is an image having pixel values as distance values; [0086]: each pixel represents distance to an existing object); calculate a first location of the individual relative to a support object based on the set of spatial data ([0105, 0106, 0170]: processing unit 20 calculates the location of the person with respect to the bed from a received range image; [0086]: range image represents location of objects in a polar coordinate system); and determine if the first location is at an alert location relative to the support object [0106, 0167, 0174].

Fuji fails to disclose that the sensor is a LIDAR sensor.
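The processing pipeline at the heart of the mapped claim (a range image of per-pixel distances in a polar coordinate system, converted to spatial coordinates, centroid compared against a bed edge) can be made concrete. The following is an editorial sketch, not part of the Office Action; the function names, field-of-view geometry, and alert test are illustrative assumptions rather than anything disclosed in Fuji or the application:

```python
import math

def range_image_to_point_cloud(range_image, fov_h_rad, fov_v_rad):
    """Map each pixel's distance value to a Cartesian (x, y, z) point
    via its viewing angles -- one way a pixel-based range image yields
    spatial coordinates representing a point cloud."""
    rows, cols = len(range_image), len(range_image[0])
    points = []
    for r in range(rows):
        for c in range(cols):
            d = range_image[r][c]
            if d is None:  # no return for this pixel
                continue
            # per-pixel azimuth/elevation spread across the field of view
            az = (c / (cols - 1) - 0.5) * fov_h_rad
            el = (r / (rows - 1) - 0.5) * fov_v_rad
            points.append((d * math.cos(el) * math.sin(az),
                           d * math.cos(el) * math.cos(az),
                           d * math.sin(el)))
    return points

def centroid(points):
    """Geometric center of the person's points; this coincides with the
    center of mass only under a homogeneous-density assumption, the point
    the examiner's claim 2 note turns on."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def at_alert_location(center, bed_edge_x, threshold=0.0):
    """Alert when the centroid is beyond, or within `threshold` of,
    a (hypothetical) bed edge along the x axis."""
    return center[0] > bed_edge_x - threshold
```

Under this sketch, the §112 dispute is whether the disclosure supports the sensor itself delivering the `points` list ("direct spatial coordinates") versus the processor deriving it from a received range image, as in the first function.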
Valdhorn teaches a system and method that implements a LIDAR sensor to generate a range image [0081] for monitoring the location of a person [0124] in bed [0218]. It would have been obvious for one of ordinary skill in the art at the time of the filing of the claimed invention to combine the teachings of Fuji with those of Valdhorn, using the LIDAR sensor of Valdhorn as the range sensor for the system of Fuji, as doing so is nothing but choosing one of a finite number of available options for generating a range image, as suggested in Valdhorn [0081], and as LIDAR provides high accuracy.

As to claim 2: Fuji discloses the system of claim 1, wherein the processor is configured to calculate the first location of the individual relative to the support object by: distinguishing spatial data of the individual from spatial data of the support object ([0105, 0106, 0170]: processing unit 20 calculates the location of the person with respect to the bed); and calculating a center of mass of the individual based on the spatial data of the individual ([0168]: center region of the person determined). (Note: center of mass is different from geometric center. However, the current application determines center of mass based on the assumption that the patient's density is homogeneous ([0054] of the published application), which makes the center of mass the same as the geometric center.)

As to claim 3: Fuji discloses the system of claim 1, wherein the processor is configured to determine an alert position of the individual relative to the support object by calculating a location of at least one edge of the support object ([0167]: fall determined based on the location of the person relative to the range of the bed).

As to claim 4: Fuji discloses the system of claim 3, wherein the alert location is determined if the center of mass of the individual is beyond the edge of the support object [0015, 0168, 0169].

As to claim 5:
Fuji discloses the system of claim 3, wherein the alert location is determined if the center of mass of the individual is within a predetermined distance of the edge of the support object ([0169]: a threshold distance from the bed indicates a fall).

As to claim 6: Fuji discloses the system of claim 1, wherein the processor is configured to send an alert signal when the first location is determined to be at the alert location ([0169]: when the location is at a predetermined distance from the bed).

As to claim 7: Fuji discloses the system of claim 1, wherein the processor is configured to: receive a second set of spatial data from the LIDAR sensor; calculate a second location of the individual relative to the support object based on the second set of spatial data; and determine if the second location is at an alert location relative to the support object ([0167]: fall determined based on the location of the person relative to the range of the bed).

As to claim 8: Fuji discloses the system of claim 7, wherein the processor is configured to determine if the change from the first location to the second location is indicative of movement to an alert location [0167].

As to claim 10: Fuji discloses the system of claim 7, wherein the processor is configured to determine a direction of movement of the individual ([0171, 0172]: movement traced from inside to outside of the bed and vice versa).

As to claim 13: Fuji discloses the system of claim 7, wherein the processor is configured to determine if the individual has moved from a recumbent position to a sitting position based on the first location and the second location [0157, 0165, 0169].

Claims 14-21, 23, and 26 are rejected using the same prior art and reasoning as claims 1-8, 10, and 13, respectively.

Claims 9, 11, 22, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Fuji in view of Valdhorn as applied to claim 1 above, further in view of Weffers [US 20190221315].

As to claim 9:
The combination of Fuji and Valdhorn fails to disclose the system of claim 8, wherein the processor is configured to calculate a probability that the individual will move to an alert location based on the first location and the second location. Weffers teaches a patient monitoring system and method using a camera [0071] to track movement of a tracked object [0075] as a fall prediction system [0071], wherein the alert indication is changed from green to orange based on the tracked first and second location of the person with respect to the bed edge. It would have been obvious for one of ordinary skill in the art at the time of the filing of the claimed invention to combine the teachings of the combination of Fuji and Valdhorn with those of Weffers so that the system can provide an alert before the person falls.

As to claim 11: The combination of Fuji and Valdhorn fails to disclose the system of claim 7, wherein the processor is configured to determine a velocity of the individual. Weffers teaches a patient monitoring system and method using a camera [0071] to track direction and speed of movement of a tracked object [0075] as a fall prediction system [0071]. It would have been obvious for one of ordinary skill in the art at the time of the filing of the claimed invention to combine the teachings of the combination of Fuji and Valdhorn with those of Weffers so that the system can provide an alert before the person falls.

Claims 22 and 24 are rejected using the same prior art and reasoning as claims 9 and 11, respectively.

Claims 12 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Fuji in view of Valdhorn as applied to claim 1 above, further in view of Wiggermann et al. [US 20160314672].

As to claim 12:
The combination of Fuji and Valdhorn fails to disclose the system of claim 7, wherein the processor is configured to: receive one or more additional sets of spatial data from the LIDAR sensor; and determine an acceleration of the individual based on the first location, the second location, and additional locations based on the one or more additional sets of spatial data. Wiggermann teaches estimation and monitoring of a patient in bed [fig. 2], wherein the system determines the acceleration of the patient based on a body position over time and a signal from an angle sensor [0056]. It would have been obvious for one of ordinary skill in the art at the time of the filing of the claimed invention to combine the teachings of the combination of Fuji and Valdhorn with those of Wiggermann so that the system can calculate more parameters without having sensors for every parameter needed.

Claim 25 is rejected using the same prior art and reasoning as claim 12.

Response to Arguments

Applicant's arguments filed 02/27/2024 have been fully considered but they are not persuasive.

Argument 1: Claim 1 requires “a LIDAR sensor generating a point cloud”.

Response 1: Fuji teaches a range sensor that emits light and uses the reflected light to determine the range of a target object. The teaching further uses the sensor for detecting regions occupied by the person. LIDAR stands for Light Detection And Ranging. The sensor of Fuji implements the functionality of a LiDAR sensor without expressly stating that it is a LiDAR sensor. One of ordinary skill in the art can easily understand that the sensor of Fuji can be implemented using a LiDAR sensor as the range sensor without modifying any of the functionality and implementation of the system.

Argument 2: The “spatial data comprises direct spatial coordinates representing a point cloud”, which is fundamentally different from the pixel-based range image data disclosed in Fuji.
Response 2: Fuji [0086] specifically describes that the range images represent the location of the object in a polar coordinate system, further describing: “Such a range image is referred to as a range image of a polar coordinate system.”

Argument 3: Fuji fails to disclose determining an “Alert Location”.

Response 3: Fuji [0167] describes “when the object person 40 falls from the bed 30 …, it is necessary to immediately provide a notification. When an object … is detected only outside of the range of the bed 30,”; and [0170] describes “In a case where the regions of a person inside and outside of the range of the bed 30 are combined to determine the location of the person”. Fuji clearly determines an alert location, as described in the cited portions above, to be a location outside the range of the bed.

Argument 4: No motivation to combine with Valdhorn, as the two references disclose fundamentally different technologies.

Response 4: In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). Valdhorn [0081] specifically provides a description that one of ordinary skill in the art can use image sensors instead of LIDAR for performing the same method. In this case, both teachings use a range sensing technology for monitoring a person relative to a bed. The difference is in the type of sensor implemented.
There is a finite set of range sensors, and one of ordinary skill in the art can readily determine which one to use through the fundamental design step of weighing cost without compromising the functionality of the product. One of ordinary skill in the art understands that replacing one sensor with another requires some changes to be made to the system to integrate the second sensor.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENYAM HAILE whose telephone number is (571)272-2080. The examiner can normally be reached 7:00 AM - 5:30 PM, Mon. - Thurs. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Steven Lim, can be reached at (571)270-1210. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Benyam Haile/
Primary Examiner, Art Unit 2688

Prosecution Timeline

Jan 26, 2021
Application Filed
Feb 24, 2023
Non-Final Rejection — §103, §112
Oct 04, 2023
Response after Non-Final Action
Feb 27, 2024
Response Filed
May 31, 2024
Final Rejection — §103, §112
Jan 15, 2025
Response after Non-Final Action
Jul 15, 2025
Request for Continued Examination
Jul 22, 2025
Response after Non-Final Action
Jul 27, 2025
Non-Final Rejection — §103, §112
Jan 30, 2026
Response Filed
Feb 13, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592108: System and Method for Authenticating User of Robotaxi (2y 5m to grant; granted Mar 31, 2026)
Patent 12592713: SYSTEM AND METHOD FOR CONVERTING DIRECT ANALOG SAMPLES TO COMPRESSED DIGITIZED SAMPLES (2y 5m to grant; granted Mar 31, 2026)
Patent 12582578: COMPLIANCE KIT AND SYSTEM (2y 5m to grant; granted Mar 24, 2026)
Patent 12580576: SUCCESSIVE APROXIMATION REGISTER ANALOG TO DIGITAL CONVERTERS INCLUDING BUILT-IN SELF-TEST (2y 5m to grant; granted Mar 17, 2026)
Patent 12567322: Discovery Of And Connection To Remote Devices (2y 5m to grant; granted Mar 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 62%
With Interview (+25.1%): 87%
Median Time to Grant: 2y 5m
PTA Risk: High
Based on 691 resolved cases by this examiner. Grant probability derived from career allow rate.
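The headline projections are consistent with the caption's stated derivation. A quick sketch (assuming, per the caption, that the grant probability is simply the career allow rate, and the with-interview figure is that rate plus the observed lift; the tool's actual model may differ):

```python
# Examiner's career record, as shown in the Examiner Intelligence panel.
granted, resolved = 428, 691

allow_rate = granted / resolved            # career allow rate
interview_lift = 0.251                     # +25.1% lift observed with interviews
with_interview = allow_rate + interview_lift

print(round(allow_rate * 100))             # 62, the displayed grant probability
print(round(with_interview * 100))         # 87, the displayed with-interview figure
```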
