DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03 November 2023 has been entered. Applicant amended claims 22, 24, 27, 34, 39-40. Applicant added new claims 42-46. Applicant previously cancelled claims 1-21. Accordingly, claims 22-46 remain pending.
Response to Arguments
Applicant’s arguments with respect to the amended limitations in the independent claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 22-26, 30, 32-33, and 38-41 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 21-22, 24-27, 29-30, 32, and 38-39 of copending Application No. 18639499 in view of Svenson et al US 20210307621 (hereinafter Svenson). Although the claims at issue are not identical, they are not patentably distinct from each other because the subject matter claimed in the claims of the instant application is fully disclosed and covered by the claims of copending Application No. 18639499 in view of Svenson.
This is a provisional nonstatutory double patenting rejection.
Claim chart (Instant Application 18946584 vs. Copending Application 18639499):

Instant:
22. (New) A system for automatically monitoring an individual in a designated environment to identify a risk of harm while maintaining a privacy of the individual, the system comprising:
an image-based sensor capturing image-based data of the designated environment to extract therefrom a digital representation of a posture of the individual within the designated environment such that an identity of the individual is unidentifiable from said digital representation;
a radar capturing radar data of the designated environment;
a network communication interface;
a non-transitory computer-readable medium having stored thereon characteristic features of multiple anticipated harm scenarios, digitally characterised by a respective combination of a respective posture and a corresponding vital sign change;
a digital data processor executing non-transitory digital instructions in real-time to digitally:
process said radar data to extract a current vital sign of the individual being monitored;
compare said digital representation of the posture with multiple predefined postures to categorize a current predefined posture of the individual being monitored;
identify a given risk of harm upon said current vital sign and said current predefined posture in combination corresponding with a given one of said multiple anticipated harm scenarios, wherein either of said current vital sign and said current predefined posture are insufficient, alone, to identify said given risk of harm; and
communicate an alert corresponding to said identified risk of harm via said network communication interface.

Copending:
21. (New) A harm prevention monitoring system for automatically monitoring a risk of harm to an individual in a designated environment, the system comprising:
a sensor array configured to acquire data of a plurality of data types representative of a current state of the individual;
wherein said sensor array comprises a radar sensor configured to capture radar data of the designated environment;
a control interface configured to communicate with said sensor array and a remote device;
a digital data processor in communication with said sensor array and said control interface and configured to execute digital instructions to automatically: via said data of said plurality of data types acquired via said sensor array, extract in real-time a characteristic feature of said current state of the individual; … wherein said characteristic feature comprises a vital sign of the individual;
digitally compute using said characteristic feature the risk of harm to the individual with respect to an anticipated harm scenario; and
upon the risk of harm corresponding with said anticipated harm scenario, communicate via said control interface to said remote device an alert corresponding to said anticipated harm scenario; and wherein the risk of harm is at least partly computed by implementing a non-contact vital sign monitoring process.

Instant:
23. The system of claim 22, wherein said image-based sensor comprises a depth-enabled image sensor.

Copending:
22. (New) The system of claim 21, wherein said sensor array further comprises one or more of a colour (RGB) camera, a colour-depth (RGB-D) camera, a depth camera, a thermal sensor, an audio sensor, or a dynamic vision sensor (DVS).

Instant:
24. The system of claim 22, wherein said image-based sensor captures sequential image-based data over time to extract therefrom a posture sequence representative of a gesture, wherein at least some of said anticipated harm scenarios are digitally characterised by a respective gesture and corresponding vital sign change, and wherein said digital data processor executes digital instructions in real-time to digitally compare said digital representation of said gesture with multiple predefined gestures to categorize a current predefined gesture of the individual being monitored to identify said given risk of harm upon said current vital sign and said current predefined gesture in combination corresponding with said given one of said anticipated harm scenarios.

Copending:
31. (New) The system of claim 29, wherein said human action recognition process distinguishes between two or more postural projections in the designated environment and is operable to detect recognised motions of distinguished postural projections so as to at least partly compute the risk of harm.
29. (New) The system of claim 28, wherein said digital data processor comprises digital instructions configured to implement a human action recognition process on said postural projection so as to at least partly compute the risk of harm.
28. (New) The system of claim 27, wherein said sensor array further comprises at least one additional depth-enabled image-based camera which is arranged to complement coverage of the designated environment, and wherein said digital data processor comprises digital instructions configured to merge image-depth data from respective depth-enabled image-based cameras to extract said skeletal projection.

Instant:
25. The system of claim 22, wherein said digital representation of said posture comprises a digital representation of a physical disposition of physical body parts of the individual being monitored relative to said designated environment.

Copending:
25. (New) The system of claim 21, wherein said characteristic feature further comprises any one or more of a body motion of the individual, or a body posture of the individual, an activity level of the individual, a predefined action of the individual, a predefined behaviour of the individual, a presence of a designated object in the vicinity of the individual, an anomalous presence in the designated environment.
27. (New) The system of claim 21, wherein said sensor array further comprises a depth-enabled image-based camera configured to capture image-depth data of the designated environment, and wherein said characteristic feature is based at least in part on a postural projection of the individual.

Instant:
26. The system of claim 25, wherein at least one of said anticipated harm scenarios is characterised by a predefined physical disposition defined by a suspended vertical body orientation.

Copending:
26. (New) The system of claim 21, wherein said anticipated harm scenario corresponds to one or more of a self-harm event, a choking event, a bleeding event of the individual, or a seizure of the individual.

Instant:
30. The system of claim 29, wherein said predefined data anonymization process comprises automatically processing said image-based data at said image-based sensor to extract a skeletal projection or anonymized three-dimensional body representation from said image-based data.

Copending:
28. (New) The system of claim 27, wherein said sensor array further comprises at least one additional depth-enabled image-based camera which is arranged to complement coverage of the designated environment, and wherein said digital data processor comprises digital instructions configured to merge image-depth data from respective depth-enabled image-based cameras to extract said skeletal projection.
38. (New) The system of claim 21, wherein said sensor array further comprises an image-based sensor configured to capture image data of the designated environment, and wherein said digital data processor extracts from said image data said characteristic feature comprising an anomalous human action.

Instant:
32. The system of claim 22, wherein said image-based sensor comprises one or more of a colour (RGB) camera, a colour-depth (RGB-D) camera, a depth camera, or a dynamic vision sensor (DVS).

Copending:
24. (New) The system of claim 21, wherein said sensor array further comprises at least two colour-depth cameras and at least two thermal or IR sensors, arranged to provide at least two complementary views of the designated environment.

Instant:
33. The system of claim 22, wherein said respective posture defines any one or more of a body motion, a body posture, an activity level, a predefined action, a predefined behaviour, or a predefined relative body disposition.

Copending:
25. (New) The system of claim 21, wherein said characteristic feature further comprises any one or more of a body motion of the individual, or a body posture of the individual, an activity level of the individual, a predefined action of the individual, a predefined behaviour of the individual, a presence of a designated object in the vicinity of the individual, an anomalous presence in the designated environment.

Instant:
38. The system of claim 22, wherein said image-based sensor comprises a depth-enabled sensor.

Copending:
22. (New) The system of claim 21, wherein said sensor array further comprises one or more of a colour (RGB) camera, a colour-depth (RGB-D) camera, a depth camera, a thermal sensor, an audio sensor, or a dynamic vision sensor (DVS).

Instant:
39. The system of claim 38, wherein said depth-enabled sensor comprises a time-of-flight infrared sensor or at least two stereoscopic cameras.

Copending:
24. (New) The system of claim 21, wherein said sensor array further comprises at least two colour-depth cameras and at least two thermal or IR sensors, arranged to provide at least two complementary views of the designated environment.

Instant:
40. The system of claim 22, wherein said anticipated harm scenario corresponds to one or more of a self-harm event, a hanging event, a choking event, a suicide attempt, or a fight.

Copending:
32. (New) The system of claim 31, wherein said anticipated harm scenario comprises fighting.
26. (New) The system of claim 21, wherein said anticipated harm scenario corresponds to one or more of a self-harm event, a choking event, a bleeding event of the individual, or a seizure of the individual.
30. (New) The system of claim 28, wherein said anticipated harm scenario comprises any one or combination of a self-harm event, a hanging, a choking or a seizure.

Instant:
41. The system of claim 22, wherein the designated environment comprises a prison cell.

Copending:
39. (New) The system of claim 21, wherein the designated environment comprises a prison cell.
Copending Application 18639499's independent claim does not disclose, but Svenson discloses, wherein at least some of said anticipated harm scenarios are digitally characterized by a respective combination of a respective posture and a corresponding vital sign change (paragraphs 70-79 disclose the abnormality detection server analyzes biometric characteristics comprising both physiological characteristics, such as vitals of body temperature, heart/pulse rate, breathing pattern, etc., and behavioral characteristics, such as body movement/gesture and facial expression. The combination of biometric characteristics is known as a biometric pattern per paragraph 155. According to paragraphs 70-79, both physiological characteristics/vitals and behavioral characteristics/body posture can be used to detect an abnormal event. Paragraphs 80 and 82 reveal the abnormality detection server retrieves the biometric profile/scenario of the person from the biometric profile management system server and identifies an abnormality of the individual based on the biometric detection data and the biometric profile of the person, which is generated based on previously detected biometric characteristics of that person; see paragraph 119. Paragraph 121 also discloses the biometric profile includes biometric deviation data representing a deviation of the biometric characteristic of the individual from the biometric characteristic of a group of individuals. The deviated biometric profile relative to the group biometric profiles can represent the harm scenarios. Paragraphs 293-294 also disclose critical characteristics may be physical or behavioral traits that warrant immediate action, such as to prevent a person exhibiting these traits from causing harm to themselves or others. The critical characteristics and behaviors can be defined globally (i.e., those characteristics and behaviors that are considered critical for each student monitored) and individually, where the critical characteristics and behaviors may differ for each student. For example, the presence of particular physical characteristics (e.g., increased heart rate) may trigger a critical event notification (or "alert") for a student with a medical condition, but not for other students. The system can also be configured to generate a critical event alert if a student exhibits a particular behavioral characteristic (e.g., striking or hitting another student)); and identify a given risk of harm upon said current vital sign and said current predefined posture in combination corresponding with a given one of said multiple anticipated harm scenarios, wherein either of said current vital sign and said current predefined posture are insufficient, alone, to identify said given risk of harm (paragraph 173 discloses that, based on the detection time data, the abnormality detection server selects one biometric pattern from the plurality of biometric patterns that has time data corresponding to the detection time data, and uses the selected biometric pattern in identifying the abnormality. The information in the selected biometric pattern can be used in the same manner as the detection based on the biometric profile without time data as described (see also claim 3). According to paragraphs 70-79, both physiological characteristics/vitals and behavioral characteristics/body posture can be used to detect an abnormal event.
Paragraphs 293-294 also disclose critical characteristics may be physical or behavioral traits that warrant immediate action, such as to prevent a student exhibiting these traits from causing harm to themselves or others. The system may trigger an alert for the presence of particular physical characteristics (e.g., increased heart rate) for a student with a medical condition and if a student exhibits a particular behavioral characteristic (e.g., striking or hitting another student)).
It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to modify the obtained posture data of Copending Application 18639499 with the analysis of combined vital signs and body movement as taught in Svenson to provide accurate and reliable monitoring of the behavior and activities of people and to minimize public harm risks (paragraphs 2-3 and 6-7 of Svenson).
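For illustration of the deviation-based detection attributed to Svenson (paragraph 121) above, the following minimal Python sketch flags an abnormality when current vitals depart from a stored personal profile or a group baseline; all names, values, and tolerances below are hypothetical and are not drawn from Svenson or from either application.

```python
# Hypothetical sketch of deviation-based abnormality detection in the style of
# Svenson para. 121: compare current vitals against a per-person profile and a
# group baseline. All names, values, and tolerances are illustrative only.
from dataclasses import dataclass

@dataclass
class BiometricProfile:
    mean_heart_rate: float   # bpm, learned from the person's own history
    mean_breath_rate: float  # breaths/min

GROUP_BASELINE = BiometricProfile(mean_heart_rate=72.0, mean_breath_rate=14.0)

def is_abnormal(person: BiometricProfile, heart_rate: float, breath_rate: float,
                personal_tol: float = 15.0, group_tol: float = 25.0) -> bool:
    """Flag an abnormality when a vital deviates from the personal profile
    beyond personal_tol, or from the group baseline beyond group_tol."""
    return (abs(heart_rate - person.mean_heart_rate) > personal_tol
            or abs(breath_rate - person.mean_breath_rate) > personal_tol
            or abs(heart_rate - GROUP_BASELINE.mean_heart_rate) > group_tol)

# A heart rate of 110 bpm deviates from this person's 68 bpm history.
print(is_abnormal(BiometricProfile(68.0, 13.0), heart_rate=110.0, breath_rate=14.0))  # True
```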
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 22-27, 32-35, and 38-42 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ng et al US 20200211154 (hereinafter Ng), in view of Ganesh et al US 20120245479 (hereinafter Ganesh), and in further view of Svenson et al US 20210307621 (hereinafter Svenson).
As to claim 22, Ng teaches a system for automatically monitoring an individual in a designated environment to identify a risk of harm while maintaining a privacy of the individual (abstract and Figures 1 and 2 disclose a privacy-preserving fall [risk] detection system for one or more persons), the system comprising:
an image-based sensor (Figure 2, reference number 202-1, "Embedded Fall Detection Vision Sensor", and Figure 1, reference number 102, "Camera") capturing image-based data of the designated environment (paragraphs 55-56 disclose the embedded fall detection vision sensor uses one or more cameras, depicted as 102 in Figure 1, to monitor a designated environment such as a room, a house, a lobby, or a hallway and capture video images. See also paragraph 117 and step 602 of Figure 2, which disclose the embedded fall detection vision sensor receives a sequence of video images capturing one or more persons being monitored for potential falls) to extract therefrom a digital representation of a posture of the individual within the designated environment such that an identity of the individual is unidentifiable from said digital representation (paragraph 117 discloses the system estimates a pose for each detected person and generates a cropped image, such as a skeleton diagram/stick figure, for each detected person, as shown in Figure 3. This cropped image of a skeleton diagram/stick figure is the digital representation of a posture of the detected person);
a network communication interface (paragraph 55 discloses the embedded fall detection vision sensor also includes a network interface);
a non-transitory computer-readable medium having stored thereon characteristic features of multiple anticipated harm scenarios (paragraphs 55 and 174 reveal a processor/engine that implements software functions stored as instructions on a non-transitory computer-readable storage medium. As shown in Figure 1 and paragraphs 81-82, the fall detection engine 101 includes an action classifier. The classifier includes/stores pre-defined actions such as struggling and lying down, and these pre-defined actions are considered dangerous activities indicative of fall/harm scenarios), digitally characterized by a … respective posture (paragraphs 81-82 disclose the fall detection engine 101 includes an action classifier. The classifier includes/stores pre-defined postures such as standing, sitting, bending, struggling, and lying down, wherein the last two can be anticipated dangerous activities);
a digital data processor executing non-transitory digital instructions in real-time to digitally (paragraphs 55 and 174 reveal a processor/engine that implements software functions stored as instructions on a non-transitory computer-readable storage medium, and the abstract/paragraph 52 reveals the fall detection system performs the instructions in real-time):
compare said digital representation of the posture with multiple predefined postures to categorize a current predefined posture of the individual being monitored (paragraph 118 discloses the system classifies the cropped image [digital representation] of the detected person as a particular action/posture within a set of pre-defined actions. The system classifies the posture/action in the cropped image by: (1) classifying the action as either a general "fall" action or a general "non-fall/normal" action; and (2) further classifying the classified general action into a specific action within a category of actions associated with the classified general action. Thus, the classifier compares the cropped image of the person by analyzing the features of the digital representation/cropped image against pre-defined/learned patterns/characteristics of a particular category/class);
identify a risk of harm upon said current predefined posture corresponding with one of said anticipated harm scenarios (paragraphs 119-121 disclose analyzing the multiple cropped images for the detected persons that were classified as “fall action” to generate a fall/non-fall decision); and
communicate an alert corresponding to said identified risk of harm via said network communication interface (paragraphs 61, 120, and 148-149 disclose the processor within the sensor system generates a fall/alarm notification corresponding to the detected/determined fall posture from the cropped images of the detected person and sends the fall alarm/notification to server 204 or mobile app 212).
Ng does not teach a radar capturing radar data of the designated environment; wherein at least some of said anticipated harm scenarios are digitally characterized by a respective combination of a respective posture and a corresponding vital sign change; process said radar data to extract a current vital sign of the individual being monitored; identify a given risk of harm upon said current vital sign and said current predefined posture in combination corresponding with a given one of said multiple anticipated harm scenarios, wherein either of said current vital sign and said current predefined posture are insufficient, alone, to identify said given risk of harm.
Ganesh teaches a radar capturing radar data of the designated environment (Figure 1A and paragraph 25 disclose a radar transmitter transmitting an outbound radar signal to a subject in a designated environment (a bedroom), where the outbound radar signal is reflected off the subject as a reflected signal. The reflected radar signal is received by a radar receiver. Paragraph 34 further discloses motion feature states based on the received signal); wherein at least some of said anticipated harm scenarios are digitally characterized by a vital sign change (paragraphs 25-26 disclose an anticipated harm scenario in which the physiological parameters of a person sitting or sleeping, such as pulse rate, breathing rate, and respiration rate, are monitored for a high pulse rate, sleep apnea, or sudden infant death syndrome); process said radar data to extract a current vital sign of the individual being monitored (paragraph 26 discloses processing the return radar signal to obtain physiological parameters such as heartbeat or respiration rate of the person being monitored); and identify a risk of harm upon said current vital sign corresponding with one of said anticipated harm scenarios (paragraph 36 discloses a rate estimation module that compares the heart rate and the respiration rate to threshold/acceptable rates and determines whether the estimation is classified as "motion", "still" (still but with heart and respiration rates within the acceptable range), or "concern" (still with either one or both of the heart or respiration rates outside the acceptable range). The range of acceptable heart rate and respiration rate can be predetermined, such as based on historical data, can be adjusted by an operator, or can be set by an algorithm that learns the subject being monitored. When the estimation value exceeds a threshold, this may indicate a lack of motion, heartbeat, or respiration and pertain to a harmful/at-risk scenario).
It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to modify Ng’s system for automatically monitoring an individual in a designated environment to further include Ganesh’s radar components to accurately provide an unobtrusive monitoring of a person’s physiology that is inexpensive and manageable (paragraph 3 of Ganesh).
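The non-contact vital-sign extraction attributed to Ganesh (paragraph 26) can be pictured with the following sketch, which recovers a rate from a simulated radar displacement signal by locating the dominant spectral peak inside a physiologically plausible frequency band; the sampling rate, band limits, and simulated signal are assumptions for illustration only, not Ganesh's disclosed implementation.

```python
# Hypothetical sketch of non-contact vital-sign extraction of the kind Ganesh
# describes: pick the dominant spectral peak of a radar displacement signal
# within a plausible band. Sampling rate and band limits are assumptions.
import numpy as np

def rate_from_radar(x: np.ndarray, fs: float, band_hz: tuple[float, float]) -> float:
    """Return the dominant rate (per minute) inside band_hz for samples x."""
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    mask = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    peak_hz = freqs[mask][np.argmax(spectrum[mask])]
    return 60.0 * peak_hz

fs = 20.0                           # 20 Hz radar sampling rate (assumed)
t = np.arange(0, 30, 1 / fs)        # 30 s observation window
# Simulated chest displacement: 0.25 Hz breathing plus 1.2 Hz heartbeat ripple.
sim = 0.5 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)
print(rate_from_radar(sim, fs, (0.1, 0.5)))   # ~15 breaths per minute
print(rate_from_radar(sim, fs, (0.8, 3.0)))   # ~72 beats per minute
```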
The combination of Ng in view of Ganesh does not teach, but Svenson teaches, wherein at least some of said anticipated harm scenarios are digitally characterized by a respective combination of a respective posture and a corresponding vital sign change (paragraphs 70-79 disclose the abnormality detection server analyzes biometric characteristics comprising both physiological characteristics, such as vitals of body temperature, heart/pulse rate, breathing pattern, etc., and behavioral characteristics, such as body movement/gesture and facial expression. The combination of biometric characteristics is known as a biometric pattern per paragraph 155. According to paragraphs 70-79, both physiological characteristics/vitals and behavioral characteristics/body posture can be used to detect an abnormal event. Paragraphs 80 and 82 reveal the abnormality detection server retrieves the biometric profile/scenario of the person from the biometric profile management system server and identifies an abnormality of the individual based on the biometric detection data and the biometric profile of the person, which is generated based on previously detected biometric characteristics of that person; see paragraph 119. Paragraph 121 also discloses the biometric profile includes biometric deviation data representing a deviation of the biometric characteristic of the individual from the biometric characteristic of a group of individuals. The deviated biometric profile relative to the group biometric profiles can represent the harm scenarios. Paragraphs 293-294 also disclose critical characteristics may be physical or behavioral traits that warrant immediate action, such as to prevent a person exhibiting these traits from causing harm to themselves or others. The critical characteristics and behaviors can be defined globally (i.e., those characteristics and behaviors that are considered critical for each student monitored) and individually, where the critical characteristics and behaviors may differ for each student. For example, the presence of particular physical characteristics (e.g., increased heart rate) may trigger a critical event notification (or "alert") for a student with a medical condition, but not for other students. The system can also be configured to generate a critical event alert if a student exhibits a particular behavioral characteristic (e.g., striking or hitting another student)); and identify a given risk of harm upon said current vital sign and said current predefined posture in combination corresponding with a given one of said multiple anticipated harm scenarios, wherein either of said current vital sign and said current predefined posture are insufficient, alone, to identify said given risk of harm (paragraph 173 discloses that, based on the detection time data, the abnormality detection server selects one biometric pattern from the plurality of biometric patterns that has time data corresponding to the detection time data, and uses the selected biometric pattern in identifying the abnormality. The information in the selected biometric pattern can be used in the same manner as the detection based on the biometric profile without time data as described (see also claim 3). According to paragraphs 70-79, both physiological characteristics/vitals and behavioral characteristics/body posture are used to detect an abnormal event.
Paragraphs 293-294 also disclose critical characteristics may be physical or behavioral traits that warrant immediate action, such as to prevent a student exhibiting these traits from causing harm to themselves or others. The system may trigger an alert for the presence of particular physical characteristics (e.g., increased heart rate) for a student with a medical condition and if a student exhibits a particular behavioral characteristic (e.g., striking or hitting another student)).
It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to modify the obtained posture data of Ng, in view of Ganesh's detected vital signs, with the analysis of combined vital signs and body movement as taught in Svenson to provide accurate and reliable monitoring of the behavior and activities of people and to minimize public harm risks (paragraphs 2-3 and 6-7 of Svenson).
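By way of illustration of the claimed combination as mapped above, the following minimal Python sketch reports a harm scenario only when a categorized posture and a qualifying vital sign correspond in combination, so that neither alone suffices; the scenario table, posture labels, and thresholds are hypothetical and appear in none of the cited references.

```python
# Minimal sketch (assumptions only, not the claimed system's actual code) of
# the combination the rejection maps: a harm scenario is identified only when
# a categorized posture AND a qualifying vital sign co-occur.
HARM_SCENARIOS = {
    # scenario name: (required posture, acceptable heart-rate window in bpm)
    "fall_with_distress": ("lying_down", (0, 45)),      # lying + bradycardia
    "struggle":           ("struggling", (120, 250)),   # struggle + tachycardia
}

def identify_risk(posture: str, heart_rate: float) -> str | None:
    for scenario, (required_posture, (lo, hi)) in HARM_SCENARIOS.items():
        # Posture alone or vitals alone never trigger; both must correspond.
        if posture == required_posture and lo <= heart_rate <= hi:
            return scenario
    return None

print(identify_risk("lying_down", 70))   # None: posture alone is insufficient
print(identify_risk("lying_down", 38))   # 'fall_with_distress'
```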
As to claim 23, the combination of Ng in view of Ganesh and Svenson teaches wherein said image-based sensor comprises a depth-enabled image sensor (Ng: paragraph 46 discloses the sensors are vision sensors, and paragraph 159 reveals the camera system captures videos at different resolutions. Cameras at various resolutions are depth-enabled).
As to claim 24, the combination of Ng in view of Ganesh and Svenson teaches wherein said image-based sensor captures sequential image-based data over time to extract therefrom a posture sequence representative of a gesture (Ng: paragraphs 12 and 117 disclose the camera/sensor captures a sequence of video images, and paragraphs 117-118 and Figure 6, step 604, disclose the system generates/extracts a cropped image for each video image in the sequence. For a given video image in the sequence of video images, the system estimates a gesture/pose for the detected person and generates a cropped image which may be classified. Paragraph 119 further discloses classification is further made for multiple consecutive image frames within the sequence of video images to obtain a fall decision), wherein at least some of said anticipated harm scenarios are digitally characterized by a respective gesture and corresponding vital sign change (Ng: paragraphs 117-118 disclose analyzing the multiple cropped images for the detected persons that were classified as "fall action" to generate a fall/non-fall decision. Paragraph 118 discloses the system classifies the cropped image [digital representation] of the detected person as a particular action/posture within a set of pre-defined actions. Ganesh: paragraphs 25-26 disclose an anticipated harm scenario in which the physiological parameters of a person sitting or sleeping are monitored for a high pulse rate, breathing rate, and/or respiration rate that exceeds or falls under an expected normal threshold. Svenson: paragraphs 70-79 disclose both physiological characteristics/vitals and behavioral characteristics/body posture can be used to detect an abnormal event. Paragraph 173 discloses that, based on the detection time data, the abnormality detection server selects one biometric pattern from the plurality of biometric patterns that has time data corresponding to the detection time data, and uses the selected biometric pattern in identifying the abnormality. The information in the selected biometric pattern can be used in the same manner as the detection based on the biometric profile without time data as described (see also claim 3). Paragraphs 293-294 also disclose critical characteristics may be physical or behavioral traits that warrant immediate action, such as to prevent a student exhibiting these traits from causing harm to themselves or others. The system may trigger an alert for the presence of particular physical characteristics (e.g., increased heart rate) for a student with a medical condition and if a student exhibits a particular behavioral characteristic (e.g., striking or hitting another student)), and wherein said digital data processor executes digital instructions in real-time (Ng: paragraphs 55 and 174 reveal a processor/engine that implements software functions stored as instructions on a non-transitory computer-readable storage medium, and the abstract/paragraph 52 reveals the fall detection system performs the instructions in real-time.
Ganesh: paragraph 62 further discloses a data processing system for storing and executing program code via a processor and memory) to digitally compare said digital representation of said gesture with multiple predefined gestures to categorize a current predefined gesture of the individual being monitored to identify said given risk of harm upon said current vital sign and said current predefined gesture in combination corresponding with said given one of said anticipated harm scenarios (Ng: paragraph 118 discloses the system classifies the cropped image [digital representation] of the detected person as a particular action/posture within a set of pre-defined actions. The system classifies the posture/action in the cropped image by: (1) classifying the action as either a general "fall" action or a general "non-fall/normal" action; and (2) further classifying the classified general action into a specific action within a category of actions associated with the classified general action. Thus, the classifier compares the cropped image of the person by analyzing the features of the digital representation/cropped image against pre-defined/learned patterns/characteristics of a particular category/class. Ganesh: paragraph 36 discloses a rate estimation module that compares the heart rate and the respiration rate to threshold/acceptable rates and determines whether the estimation is classified as "motion", "still" (still but with heart and respiration rates within the acceptable range), or "concern" (still with either one or both of the heart or respiration rates outside the acceptable range). The range of acceptable heart rate and respiration rate can be predetermined, such as based on historical data, can be adjusted by an operator, or can be set by an algorithm that learns the subject being monitored. When the estimation value exceeds a threshold, this may indicate a lack of motion, heartbeat, or respiration and pertain to a harmful/at-risk scenario. Svenson: paragraph 173 discloses that, based on the detection time data, the abnormality detection server selects one biometric pattern from the plurality of biometric patterns that has time data corresponding to the detection time data, and uses the selected biometric pattern in identifying the abnormality. The information in the selected biometric pattern can be used in the same manner as the detection based on the biometric profile without time data as described (see also claim 3); see also paragraphs 293-294). The motivation is similar to that presented for claim 22.
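The posture-sequence-to-gesture categorization recited in claim 24 can be sketched as an in-order subsequence match of categorized postures against predefined gesture templates; the templates and labels below are hypothetical and are not taken from Ng.

```python
# Hypothetical sketch of claim 24's sequence step: a time-ordered run of
# categorized postures is matched against predefined gesture templates.
PREDEFINED_GESTURES = {
    "collapse": ["standing", "bending", "lying_down"],
    "climb":    ["standing", "reaching", "suspended"],
}

def categorize_gesture(posture_sequence: list[str]) -> str | None:
    for name, template in PREDEFINED_GESTURES.items():
        it = iter(posture_sequence)
        # Template must appear as an in-order subsequence of the observations;
        # 'p in it' consumes the iterator up to (and including) each match.
        if all(p in it for p in template):
            return name
    return None

print(categorize_gesture(["standing", "standing", "bending", "lying_down"]))  # 'collapse'
print(categorize_gesture(["standing", "lying_down"]))                         # None
```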
As to claim 25, the combination of Ng in view of Ganesh and Svenson teaches wherein said digital representation of said posture comprises a digital representation of a physical disposition of physical body parts of the individual being monitored relative to said designated environment (Ng: paragraph 117 discloses the system estimates a pose for each detected person and generates a cropped image, such as a skeleton diagram/stick figure, for each detected person, as shown in Figure 3. This cropped image is based on identifying a set of human key-points (physical disposition) and then generating a skeleton diagram/stick figure, which is the digital representation of a posture of the detected person).
As to claim 26, the combination of Ng in view of Ganesh and Svenson teaches wherein at least one of said anticipated harm scenarios is characterized by a predefined physical disposition defined by a suspended vertical body orientation (Ng: paragraphs 111 and 118 disclose the system classifies the cropped image [digital representation] of the detected person as a particular action/posture within a set of pre-defined actions such as standing, bending, struggling (suspended vertical body orientation), and/or lying down (suspended vertical body orientation). The system classifies the posture/action in the cropped image by: (1) classifying the action as either a general "fall" action or a general "non-fall/normal" action; and (2) further classifying the classified general action into a specific action within a category of actions associated with the classified general action. Paragraph 121 also discloses the system can detect a fall-off-bed event/action (suspended vertical body orientation) when a monitored person lying on the bed is experiencing a serious medical condition that would result in a fall from the bed to the floor).
As to claim 27, the combination of Ng in view of Ganesh and Svenson teaches wherein said current vital sign comprises at least one of a heart rate and a respiration rate (Ganesh: paragraph 26 discloses processing the return radar signal to obtain physiological parameters such as heartbeat or respiration rate of the person being monitored). The motivation is similar to that presented for claim 22.
As to claim 32, the combination of Ng in view of Ganesh and Svenson teaches wherein said image-based sensor comprises one or more of a colour (RGB) camera, a colour-depth (RGB-D) camera, a depth camera, or a dynamic vision sensor (DVS) (Ng: paragraph 46 discloses the sensors are vision sensors. Paragraph 122 discloses processing RGB input images from the camera. Paragraph 159 reveals the camera system captures videos at different resolutions. Cameras at various resolutions are depth-enabled).
As to claim 33, the combination of Ng in view of Ganesh and Svenson teaches wherein said respective posture defines any one or more of a body motion, a body posture, an activity level, a predefined action, a predefined behaviour, or a predefined relative body disposition (Ng: paragraphs 111 and 118 disclose the system classifies the cropped image [digital representation] of the detected person as a particular action/posture within a set of pre-defined actions such as standing, bending, struggling (suspended vertical body orientation), and/or lying down (suspended vertical body orientation). The system classifies the posture/action in the cropped image by: (1) classifying the action as either a general "fall" action or a general "non-fall/normal" action; and (2) further classifying the classified general action into a specific action within a category of actions associated with the classified general action. Paragraph 121 also discloses the system can detect a fall-off-bed event/action (suspended vertical body orientation) when a monitored person lying on the bed is experiencing a serious medical condition that would result in a fall from the bed to the floor).
As to claim 34, the combination of Ng in view of Ganesh and Svenson teaches wherein at least one of said anticipated harm scenarios is digitally characterized by a combination of said vital sign change and said respective posture (Ganesh: paragraph 36 discloses a rate estimation module that compares the heart rate and the respiration rate to threshold/acceptable rates and determines whether the estimation is classified as "motion", "still" (still but with heart and respiration rates within the acceptable range), or "concern" (still with either one or both of the heart or respiration rates outside the acceptable range). When the estimation value exceeds a threshold, this may indicate a lack of motion, heartbeat, or respiration and pertain to a harmful/at-risk scenario. Ng: paragraphs 111 and 118 disclose the system classifies the cropped image [digital representation] of the detected person as a particular action/posture within a set of pre-defined actions such as standing, bending, struggling (suspended vertical body orientation), and/or lying down (suspended vertical body orientation). The system classifies the posture/action in the cropped image by: (1) classifying the action as either a general "fall" action or a general "non-fall/normal" action; and (2) further classifying the classified general action into a specific action within a category of actions associated with the classified general action. Paragraph 121 also discloses the system can detect a fall-off-bed event/action (suspended vertical body orientation) when a monitored person lying on the bed is experiencing a serious medical condition that would result in a fall from the bed to the floor), and at least one of a digitally detected presence of a designated object in the vicinity of the individual being monitored, an anomalous presence in the designated environment, or a temperature associated with the designated environment (Svenson: paragraphs 70-79 disclose both physiological characteristics/vitals and behavioral characteristics/body posture can be used to detect an abnormal event. Paragraph 173 discloses that, based on the detection time data, the abnormality detection server selects one biometric pattern from the plurality of biometric patterns that has time data corresponding to the detection time data, and uses the selected biometric pattern in identifying the abnormality. The information in the selected biometric pattern can be used in the same manner as the detection based on the biometric profile without time data as described (see also claim 3). Paragraphs 293-294 also disclose critical characteristics may be physical or behavioral traits that warrant immediate action, such as to prevent a student exhibiting these traits from causing harm to themselves or others. The system may trigger an alert for the presence of particular physical characteristics (e.g., increased heart rate) for a student with a medical condition and if a student exhibits a particular behavioral characteristic (e.g., striking or hitting another student)). The motivation is similar to that presented for claim 22.
As to claim 35, the combination of Ng in view of Ganesh and Svenson teaches wherein said image-based data is further processed to extract respective digital representations of respective postures of respective individuals within the designated environment (Ng: paragraph 117 discloses the system estimates a pose for each detected person and generates a cropped image, such as a skeleton diagram/stick figure, for each detected person, as shown in Figure 3. This cropped image of a skeleton diagram/stick figure is the digital representation of a posture of the detected person), and wherein at least one of said anticipated harm scenarios is digitally characterized by a relative disposition of said respective postures and respective vital signs (Ganesh: paragraph 36 discloses a rate estimation module that compares the heart rate and the respiration rate to threshold/acceptable rates and determines whether the estimation is classified as "motion", "still" (still but with heart and respiration rates within the acceptable range), or "concern" (still with either one or both of the heart or respiration rates outside the acceptable range). When the estimation value exceeds a threshold, this may indicate a lack of motion, heartbeat, or respiration and pertain to a harmful/at-risk scenario. Ng: paragraphs 111 and 118 disclose the system classifies the cropped image [digital representation] of the detected person as a particular action/posture within a set of pre-defined actions such as standing, bending, struggling (suspended vertical body orientation), and/or lying down (suspended vertical body orientation). The system classifies the posture/action in the cropped image by: (1) classifying the action as either a general "fall" action or a general "non-fall/normal" action; and (2) further classifying the classified general action into a specific action within a category of actions associated with the classified general action. Paragraph 121 also discloses the system can detect a fall-off-bed event/action (suspended vertical body orientation) when a monitored person lying on the bed is experiencing a serious medical condition that would result in a fall from the bed to the floor), and at least one of a digitally detected presence of a designated object in the vicinity of the individual being monitored, an anomalous presence in the designated environment, or a temperature associated with the designated environment (Ng: paragraph 85 discloses detecting an action of an individual sitting in a chair). The motivation is similar to that presented for claim 22.
As to claim 38, the combination of Ng in view of Ganesh and Svenson teaches wherein said image-based sensor comprises a depth-enabled sensor (Ng: paragraph 46 discloses the sensors are vision sensors, and paragraph 159 reveals the camera system captures videos at different resolutions. Cameras at various resolutions are depth-enabled).
As to claim 39, the combination of Ng in view of Ganesh and Svenson teaches wherein said depth-enabled sensor comprises a time-of-flight infrared sensor or at least two stereoscopic cameras (Ng: paragraph 159 reveals the camera system captures videos at different resolutions. Paragraphs 58-59 disclose a stereoscopic camera system by describing that the system hosts a multi-camera management application which is configured to divide each monitored area into a set of zones and assign one or more embedded vision sensors. For each zone in the set of zones, the system can be configured to "fuse" or otherwise combine fall-detection outputs from two or more embedded vision sensors covering the given zone. For example, if a monitored person's identity cannot be identified or determined based on the fall-detection output from a first embedded vision sensor positioned at a bad angle, that person's identity may be identified or determined based on the fall-detection output from a second embedded fall-detection vision sensor positioned at a good angle. Generally speaking, the system can combine two or more sources of fall-detection outputs from two or more embedded vision sensors and make a collective fall-detection decision on a given person based on the two or more sources of fall-detection outputs).
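The multi-sensor fusion described in Ng's paragraphs 58-59 can be pictured with a toy confidence-weighted vote over per-sensor fall decisions; the weighting scheme below is an assumption for illustration, not Ng's disclosed implementation.

```python
# Toy illustration (hypothetical, not Ng's code) of fusing fall-detection
# outputs from two or more vision sensors covering the same zone.
def fuse_fall_decisions(outputs: list[tuple[bool, float]], threshold: float = 0.5) -> bool:
    """Each output is (fall_detected, confidence); return a collective
    decision by confidence-weighted voting across the sensors."""
    weight_fall = sum(conf for fall, conf in outputs if fall)
    weight_total = sum(conf for _, conf in outputs)
    return weight_total > 0 and weight_fall / weight_total >= threshold

# A sensor at a bad angle (low confidence) is outvoted by a well-placed one.
print(fuse_fall_decisions([(False, 0.2), (True, 0.9)]))  # True
```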
As to claim 40, the combination of Ng in view of Ganesh and Svenson teaches wherein said anticipated harm scenario corresponds to one or more of a self-harm event, a hanging event, a choking event, a suicide attempt, or a fight (Ganesh: paragraphs 19 and 55 disclose the physiological data can be indicative of a suicide attempt). The motivation is similar to that presented for claim 22.
As to claim 41, the combination of Ng in view of Ganesh and Svenson teaches wherein the designated environment comprises a prison cell (Ganesh: paragraphs 19 and 61 disclose the environment can comprise multiple cells in a prison). The motivation is similar to that presented for claim 22.
As to claim 42, the combination of Ng in view of Ganesh and Svenson teaches wherein said current vital sign comprises at least one of a corresponding predefined change in heart rate or respiration rate, wherein either of said current predefined posture or said predefined change in heart rate or respiration rate, alone, is insufficient to identify said risk of harm (Svenson: paragraphs 70-79 disclose the abnormality detection server analyzes biometric characteristics comprising both physiological characteristics, such as vitals of body temperature, heart/pulse rate, breathing pattern, etc., and behavioral characteristics, such as body movement/gesture and facial expression. The combination of biometric characteristics is known as a biometric pattern per paragraph 155 and is used to determine an abnormality. Paragraphs 80 and 82 reveal the abnormality detection server retrieves the biometric profile/scenario of the person from the biometric profile management system server and identifies an abnormality (such as a change in heart rate) of the individual based on the biometric detection data and the biometric profile of the person, which is generated based on previously detected biometric characteristics of that person; see paragraph 119. Paragraph 121 also discloses the biometric profile includes biometric deviation data representing a deviation of the biometric characteristic of the individual from the biometric characteristic of a group of individuals). The motivation is similar to that presented for claim 22.
Claim(s) 28-31 and 36-37 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ng et al US 20200211154 (hereinafter Ng), in view of Ganesh et al US 20120245479 (hereinafter Ganesh), in further view of Svenson et al US 20210307621 (hereinafter Svenson), and in further view of Albornoz GB 2553123 (hereinafter Albornoz).
As to claim 28, the combination of Ng in view of Ganesh and Svenson teaches all the limitations recited in claim 22 above, and Ng teaches sanitizing the image data (paragraphs 20-21). The combination of Ng in view of Ganesh and Svenson does not teach wherein said image-based data is digitally anonymized by digitally removing predefined personally identifiable information (PII).
Albornoz teaches wherein said image-based data is digitally anonymized by digitally removing predefined personally identifiable information (PII) (paragraphs 54-55 reveal capturing an image of a person and applying a mask to the image; the head (the face is considered PII) or entire body of an individual subject may be obscured using pixilation, blurring or masking, or the image may be modified to display an outline silhouette of an individual subject for a still greater degree of anonymity).
It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to modify the sanitation method in Ng’s system for automatically monitoring an individual in a designated environment and preserving the individual privacy in view of Ganesh’s radar components and Svenson’s teachings of detecting abnormal events based on vital changes and body movements with Albornoz’s teachings of anonymization to provide a versatile system that ensures the privacy of identified subjects while also allowing the privacy to be overridden in exceptional situations (thefts or attacks) or where there are pressing needs to do so (paragraphs 16-17 of Albornoz).
As to claim 29, the combination of Ng in view of Ganesh and Svenson teaches all the limitations recited in claim 22 above, and Ng teaches sanitizing the image data (paragraphs 20-21). The combination of Ng in view of Ganesh and Svenson does not teach wherein said image-based data is anonymized by executing a predefined data anonymization process.
Albornoz teaches wherein said image-based data is anonymized by executing a predefined data anonymization process (paragraphs 54-55 reveal capturing an image of a person and applying a masking/anonymity process to the image; the head or entire body of an individual subject may be obscured using pixilation, blurring or masking, or the image may be modified to display an outline silhouette of an individual subject for a still greater degree of anonymity).
It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to modify the sanitation method in Ng’s system for automatically monitoring an individual in a designated environment and preserving the individual privacy in view of Ganesh’s radar components and Svenson’s teachings of detecting abnormal events based on vital changes and body movements with Albornoz’s teachings of anonymization to provide a versatile system that ensures the privacy of identified subjects while also allowing the privacy to be overridden in exceptional situations (thefts or attacks) or where there are pressing needs to do so (paragraphs 16-17 of Albornoz).
As to claim 30, the combination of Ng in view of Ganesh, Svenson, and Albornoz teaches wherein said predefined data anonymization process comprises automatically processing said image-based data at said image-based sensor to extract a skeletal projection or anonymized three-dimensional body representation from said image-based data (Albornoz: paragraphs 54-55 reveal capturing an image of a person and applying a masking/anonymity process to the image; the head or entire body of an individual subject may be obscured using pixilation, blurring or masking, or the image may be modified to display an outline silhouette of an individual subject for a still greater degree of anonymity. Ng: paragraph 117 discloses the system estimates a pose for each detected person and generates a cropped image such as a skeleton diagram/stick figure for each detected person). The motivation is similar to that presented for claim 29.
As to claim 31, the combination of Ng in view of Ganesh, Svenson, and Albornoz teaches wherein said predefined data anonymization process comprises automatically processing said image-based data at said image-based sensor to identify a facial border from said image-based data and blur pixelated content within said facial border (Albornoz: paragraphs 54-55 reveal capturing an image of a person. The facial features of an individual subject may be obscured by pixelating or blurring the area of each image that shows the face of the individual subject. Alternatively or additionally, a mask may be applied to each image. When a masking/anonymity process is applied to the image, the head or entire body of an individual subject may be obscured using pixilation, blurring or masking, or the image may be modified to display an outline silhouette of an individual subject for a still greater degree of anonymity). The motivation is similar to that presented for claim 29.
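The facial-border blur recited in claim 31 corresponds to a standard detect-and-blur operation; a sketch using OpenCV follows, where the Haar cascade detector and the blur kernel size are illustrative choices rather than anything disclosed by Albornoz or Ng.

```python
# Sketch of a claim-31 style anonymization step: find a facial border and
# blur the pixel content inside it. The OpenCV Haar cascade and the kernel
# size are illustrative assumptions, not taken from Albornoz or Ng.
import cv2

def blur_faces(frame):
    """Return the frame with every detected facial region Gaussian-blurred."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        # Replace the region inside the detected facial border with its blur.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0)
    return frame
```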
As to claim 36, the combination of Ng in view of Ganesh and Svenson teaches all the limitations recited in claim 22 above and Ng teaches wherein said network communication interface is communicatively linked to a graphical user interface (GUI) (paragraphs 155, 164, and 165 reveal the network interface is communicatively coupled to input/output devices which include display interface). The combination of Ng in view of Ganesh and Svenson does not teach wherein said digital data processor is operable to output anonymized data representative of the individual in real-time for display via said GUI during monitoring.
Albornoz teaches wherein said digital data processor is operable to output anonymized data representative of the individual in real-time for display via said GUI during monitoring (paragraphs 55, 88-89, and 92 disclose the anonymized image is displayed via a computing device; the computing device includes one or more input mechanisms, such as a keyboard and mouse or a touchscreen interface, and a display unit, such as one or more monitors, all of which are examples of user interfaces. Paragraphs 5-10 disclose the invention solves the problem by extracting data in real time. Paragraph 42 also discloses that, for a given moment in time, video feeds from the cameras are displayed and analyzed simultaneously, and thus in real time).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the sanitation method in Ng’s system for automatically monitoring an individual in a designated environment and preserving the individual’s privacy, in view of Ganesh’s radar components and Svenson’s teachings of detecting abnormal events based on vital changes and body movements, with Albornoz’s teachings of anonymization, to provide a versatile system that ensures the privacy of identified subjects while also allowing the privacy to be overridden in exceptional situations (thefts or attacks) or where there are pressing needs to do so (paragraphs 16-17 of Albornoz).
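By way of non-limiting illustration only, outputting anonymized data in real time for display via a GUI might be sketched as follows (reusing the illustrative blur_faces helper above; the capture source and window handling are assumptions, not details of the references):

    import cv2

    # Capture, anonymize, and display each frame as it arrives, so that only
    # anonymized data is shown via the GUI during monitoring.
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("monitor", blur_faces(frame))
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()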
As to claim 37, the combination of Ng in view of Ganesh, Svenson, and Albornoz teaches wherein said digital data processor is configured to execute digital instructions to digitally merge said anonymized data to generate a three-dimensional textured scene of the designated environment (Albornoz: paragraph 81 discloses creating a merged dataset. Ng: paragraph 132 discloses a generated 3-D feature vector for a detected face). The motivation is similar to the motivation presented for claim 36.
Claim(s) 43-44 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ng et al US 20200211154 (hereinafter Ng), in view of Ganesh et al US 20120245479 (hereinafter Ganesh), in further view of Svenson et al US 20210307621 (hereinafter Svenson), in further view of Bailey et al US 20140221797 (hereinafter Bailey).
As to claim 43, the combination of Ng in view of Ganesh and Svenson teaches all the limitations presented in claim 42 above. The combination of Ng in view of Ganesh and Svenson does not teach, but Bailey teaches, wherein said given risk of harm comprises a strangulation risk (paragraphs 2-3 and 5 disclose a method and system of monitoring at-risk individuals in confinement situations, such as prisons and psychiatric hospitals, for self-inflicted injury such as strangulation) and wherein said combination of said current predefined posture and said current vital sign comprises a predefined strangulation posture and at least one of a corresponding predefined change in heart rate or respiration rate, whereas either of said predefined strangulation posture or said predefined change in heart rate or respiration rate, alone, is insufficient to identify said strangulation risk (paragraphs 103 and 104 disclose sensors that measure vitals and the acceleration level (body position) of the individual to detect self-harm. Paragraph 5 reveals the oximeter, RF transmitter, and acceleration sensors are used to determine both the changes in vitals and the changes in body posture/acceleration to determine self-harm).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the sanitation method in Ng’s system for automatically monitoring an individual in a designated environment and preserving the individual’s privacy, in view of Ganesh’s radar components and Svenson’s teachings of detecting abnormal events based on vital changes and body movements, with Bailey’s teachings of further detecting self-harm events using vitals and body movements, to provide an improved method of monitoring a confined individual that eliminates the need for continually or periodically observing the individual (paragraph 7 of Bailey).
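By way of non-limiting illustration only, the claimed two-signal logic, in which neither the posture match nor the vital-sign change alone suffices to identify the risk, might be sketched as follows (the posture labels, thresholds, and example values are purely illustrative assumptions, not values from Bailey):

    def risk_detected(posture, hr_delta, rr_delta,
                      risk_posture, hr_threshold, rr_threshold):
        # The alert fires only on the combination of a predefined posture AND
        # at least one corresponding vital-sign change; either signal alone is
        # deliberately insufficient, which reduces false alarms.
        posture_match = (posture == risk_posture)
        vitals_match = (abs(hr_delta) >= hr_threshold
                        or abs(rr_delta) >= rr_threshold)
        return posture_match and vitals_match

    # Example: a strangulation-type check (illustrative values only).
    alert = risk_detected(posture="suspended", hr_delta=-35, rr_delta=-10,
                          risk_posture="suspended",
                          hr_threshold=25, rr_threshold=6)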
As to claim 44, the combination of Ng in view of Ganesh, Svenson, and Bailey teaches wherein the strangulation risk comprises a hanging event (Bailey: paragraph 3 discloses the attempted self-harm/risk is strangulation by hanging). The motivation is similar to the motivation presented for claim 43.
Claim(s) 45-46 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ng et al US 20200211154 (hereinafter Ng), in view of Ganesh et al US 20120245479 (hereinafter Ganesh), in further view of Svenson et al US 20210307621 (hereinafter Svenson), and in further view of Joseph et al US 20200054278 (hereinafter Joseph).
As to claim 45, the combination of Ng in view of Ganesh and Svenson teaches all the limitations presented in claim 42 above. The combination of Ng in view of Ganesh and Svenson does not teach, but Joseph teaches, wherein said given one of said anticipated harm scenarios comprises an overdose risk and wherein said combination of said current predefined posture and said current vital sign comprises a predefined overdose posture and at least one of a corresponding predefined change in heart rate or respiration rate, whereas either of said predefined overdose posture or said predefined change in heart rate or respiration rate, alone, is insufficient to identify said overdose risk (paragraph 123 discloses the sensor monitors a combination of body position, degree of sedation, and vitals, such as inhalation/exhalation rate changes, heart rate changes, and body temperature changes, to determine overdose. All of these characteristics are used to determine overdose).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the sanitation method in Ng’s system for automatically monitoring an individual in a designated environment and preserving the individual’s privacy, in view of Ganesh’s radar components and Svenson’s teachings of detecting abnormal events based on vital changes and body movements, with Joseph’s teachings of determining overdose risk, to provide a non-invasive real-time monitoring system with diagnostic algorithms that continuously quantify and analyze the pattern of an ambulatory person's respiratory rate (RR), degree of upper airway obstruction (talking/snoring), body activity level, body coordination, body position, heart rate, and/or temperature, among other physiological conditions (paragraph 2 of Joseph).
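The same illustrative fusion sketched above for claim 43 can be parameterized for the overdose scenario (again, the posture label and threshold values are assumptions, not values from Joseph):

    # Overdose-type check: a predefined overdose posture (e.g., lying down)
    # plus a corresponding respiration-rate change (illustrative values only).
    alert = risk_detected(posture="lying_down", hr_delta=-10, rr_delta=-8,
                          risk_posture="lying_down",
                          hr_threshold=20, rr_threshold=5)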
As to claim 46, the combination of Ng in view of Ganesh, Svenson, and Joseph teaches wherein said predefined overdose posture comprises at least one of convulsions or lying down (Joseph: paragraph 123 discloses the 3-axis accelerometer sensor may be configured to measure a relative x-y-z position and a movement of the patient, such as the amount and pattern of head bobbing (which may indicate convulsions), body movement, body coordination, and body position in real-time, to further estimate, in the case of a drug overdose, the degree of sedation and the trends of sedation over time). The motivation is similar to the motivation presented for claim 45.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FELICIA FARROW whose telephone number is (571)272-1856. The examiner can normally be reached M - F 7:30am-4:00pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Lagor can be reached at (571)270-5143. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/F.F/Examiner, Art Unit 2437
/BENJAMIN E LANIER/Primary Examiner, Art Unit 2437