DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 2 recites the limitation "the first set of cameras" in the third wherein clause. There is insufficient antecedent basis for this limitation in the claim.
Double Patenting
A rejection based on double patenting of the “same invention” type finds its support in the language of 35 U.S.C. 101 which states that “whoever invents or discovers any new and useful process... may obtain a patent therefor...” (Emphasis added). Thus, the term “same invention,” in this context, means an invention drawn to identical subject matter. See Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Ockert, 245 F.2d 467, 114 USPQ 330 (CCPA 1957).
A statutory type (35 U.S.C. 101) double patenting rejection can be overcome by canceling or amending the claims that are directed to the same invention so they are no longer coextensive in scope. The filing of a terminal disclaimer cannot overcome a double patenting rejection based upon 35 U.S.C. 101.
Claims 14 and 16-20 are rejected under 35 U.S.C. 101 as claiming the same invention as that of claims 12-17 of prior U.S. Patent No. 12,243,317. This is a statutory double patenting rejection. See the table below.
Additionally, the nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-2, 4-6, 8-9, and 11-13 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-11 of U.S. Patent No. 12,243,317, as shown in the table below. Although the claims at issue are not identical, they are not patentably distinct from each other because the narrowing wherein clause in claim 1 of the present application is taken verbatim from claim 2 of issued U.S. Patent No. 12,243,317.
Instant Application No. 19/017,894
U.S. Patent No. 12,243,317
1. A system for real-time personal protective equipment (PPE) compliance monitoring at a worksite, the worksite including (1) a preparatory area where workers put on PPE and (2) an operations area where the workers are required to perform tasks while wearing the PPE,
wherein the operations area is proximal to the preparatory area,
the system comprising: a processor configured to execute a computer instruction; a host memory connected to the processor; a graphical processing unit (GPU) connected to the processor; a GPU memory connected to the GPU; a storage device connected to the processor; an input device positioned in the preparatory area, and configured to feed first video frames to the storage device; a plurality of cameras positioned in the operations area, and configured to feed second video frames to the storage device; and a display device connected to the processor; wherein: the processor loads a trained model on the GPU to configure the GPU as a PPE compliance inference engine, by concurrently operating a plurality of input threads, a plurality of inference threads, and a plurality of output threads, the GPU applies the trained model to determine PPE compliance statuses of the workers in the operations area, the plurality of input threads include: performing preprocessing on the first and second video frames, the preprocessing including at least one operation selected from video frame resizing, video frame normalizing, and video frame augmenting, and enqueuing the pre-processed first and second video frames to form an input queue, the plurality of inference threads include: dequeuing the pre-processed first and second video frames from the input queue, and using the pre-processed first video frames as reference video frames, applying the trained model to identify PPE in the pre-processed second video frames, based on an identifying result, generating PPE annotations for the pre-processed second video frames, and enqueuing the pre-processed second video frames with the PPE annotations overlaid thereon to form an output queue, the plurality of output threads include: dequeuing the pre-processed second video frames with the PPE annotations overlaid thereon from the output queue, and generating an output file including (1) the pre-processed second video frames with the PPE annotations 
overlaid thereon, and (2) PPE compliance statistics, the display device is configured to, based on the output file, present via a web-based dashboard interface to a user of the system, the PPE compliance statuses of the workers in the operations area.
1. A system for real-time personal protective equipment (PPE) compliance monitoring at a worksite, the worksite including (1) a preparatory area where workers put on PPE and (2) an operations area where the workers are required to perform tasks while wearing the PPE,
[recited in claim 2, below]
the system comprising: a processor configured to execute a computer instruction; a host memory connected to the processor; a graphical processing unit (GPU) connected to the processor; a GPU memory connected to the GPU; a storage device connected to the processor; an input device positioned in the preparatory area, and configured to feed first video frames to the storage device; a plurality of cameras positioned in the operations area, and configured to feed second video frames to the storage device; and a display device connected to the processor; wherein: the processor loads a trained model on the GPU to configure the GPU as a PPE compliance inference engine, by concurrently operating a plurality of input threads, a plurality of inference threads, and a plurality of output threads, the GPU applies the trained model to determine PPE compliance statuses of the workers in the operations area, the plurality of input threads include: performing preprocessing on the first and second video frames, the preprocessing including at least one operation selected from video frame resizing, video frame normalizing, and video frame augmenting, and enqueuing the pre-processed first and second video frames to form an input queue, the plurality of inference threads include: dequeuing the pre-processed first and second video frames from the input queue, and using the pre-processed first video frames as reference video frames, applying the trained model to identify PPE in the pre-processed second video frames, based on an identifying result, generating PPE annotations for the pre-processed second video frames, and enqueuing the pre-processed second video frames with the PPE annotations overlaid thereon to form an output queue, the plurality of output threads include: dequeuing the pre-processed second video frames with the PPE annotations overlaid thereon from the output queue, and generating an output file including (1) the pre-processed second video frames with the PPE annotations 
overlaid thereon, and (2) PPE compliance statistics, the display device is configured to, based on the output file, present via a web-based dashboard interface to a user of the system, the PPE compliance statuses of the workers in the operations area.
2. The system of claim 1, wherein the preparatory area is enclosed by at least three vertical walls and a roof, wherein the input device comprises a first set
configured to capture the first video frames and transmit the first video frames to the storage device, wherein the first set of cameras includes a face level camera and a complete body camera, wherein both the face level camera and the complete body camera are mounted on one wall of the least three vertical walls of the preparatory area; wherein
[recited in claim 1, above]
the plurality of cameras are configured to capture working video frames of workers in the operations area and transmit the working video frames as the second video frames to the storage device, wherein the plurality of cameras include a field view camera and at least one area camera.
2. The system of claim 1, wherein the preparatory area is enclosed by at least three
vertical walls and a roof, wherein the input device comprises a first set of cameras configured to capture the first video frames and transmit the first video frames to the storage device, wherein the first set of cameras includes a face level camera and a complete body camera, wherein both the face level camera and the complete body camera are mounted on one wall of the least three vertical walls of the preparatory area; wherein the operations area is proximal to, adjacent or open to the preparatory area, wherein
the plurality of cameras are configured to capture working video frames of workers in the operations area and transmit the working video frames as the second video frames to the storage device, wherein the plurality of cameras include a field view camera and at least one area camera.
4. The system of claim 1, wherein each video frame of the first and second video frames is asynchronously transferred from the host memory to the GPU memory.
3. The system of claim 1, wherein each video frame of the first and second video frames is asynchronously transferred from the host memory to the GPU memory.
5. The system of claim 1, wherein the plurality of inference threads further include: running the trained model with each video frame of the pre-processed second video frames on the GPU to generate the pre-processed second video frames with a plurality of bounding boxes, and enqueuing the pre-processed second video frames with the plurality of bounding boxes to form the output queue.
4. The system of claim 1, wherein the plurality of inference threads further include: running the trained model with each video frame of the pre-processed second video frames on the GPU to generate the pre-processed second video frames with a plurality of bounding boxes, and enqueuing the pre-processed second video frames with the plurality of bounding boxes to form the output queue.
6. The system of claim 1, wherein the PPE compliance statistics include one or more of PPE inference confidence scores, a number of PPE items identified, and PPE compliance rates among the workers.
5. The system of claim 1, wherein the PPE compliance statistics include one or more of PPE inference confidence scores, a number of PPE items identified, and PPE compliance rates among the workers.
8. The system of claim 1, wherein the first and second video frames are selected from the group consisting of a pre-recorded video, a real-time video from the plurality cameras, and a combination thereof.
6. The system of claim 1, wherein the first and second video frames are selected from the group consisting of a pre-recorded video, a real-time video from the plurality cameras, and a combination thereof.
9. The system of claim 1, wherein the trained model is selected from the group consisting of a PPE-CenterNet, a PPE-DAB-Deformable-DETR, and a PPE-YOLO.
7. The system of claim 1, wherein the trained model is selected from the group consisting of a PPE-CenterNet, a PPE-DAB-Deformable-DETR, and a PPE-YOLO.
10. The system of claim 9, wherein the trained model is the PPE-YOLO.
8. The system of claim 7, wherein the trained model is the PPE-YOLO.
11. The system of claim 1, wherein a PPE class is defined as including Helmet, NoHelmet, Vest, and NoVest and wherein the trained model is trained by a database with a first ratio of the Helmet to the NoHelmet and a second ratio of the NoVest to the Vest less than three (3) to alleviate a high-imbalanced class issue.
9. The system of claim 1, wherein a PPE class is defined as including Helmet, NoHelmet, Vest, and NoVest and wherein the trained model is trained by a database with a first ratio of the Helmet to the NoHelmet and a second ratio of the NoVest to the Vest less than three (3) to alleviate a high-imbalanced class issue.
12. The system of claim 1, wherein the system processes the first and second video frames with a rate of at least 15 frames per second (FPS).
10. The system of claim 1, wherein the system processes the first and second video frames with a rate of at least 15 frames per second (FPS).
13. The system of claim 12, wherein the system processes the first and second video frames with a rate of at least 28 frames per second (FPS).
11. The system of claim 10, wherein the system processes the first and second video frames with a rate of at least 28 frames per second (FPS).
14. A method for real-time personal protective equipment (PPE) compliance monitoring at a worksite, the worksite including (1) a preparatory area where workers put on PPE and (2) an operations area where the workers are required to perform tasks while wearing the PPE, the method comprising: obtaining first video frames via an input device positioned in the preparatory area; obtaining second video frames via a plurality of cameras positioned in the operations area; loading a trained model on a graphical processing unit (GPU), such that the GPU applies the trained model to determine PPE compliance statuses of the workers in the operations area by concurrently operating a plurality of input threads, a plurality of inference threads, and a plurality of output threads, wherein the plurality of input threads include: performing preprocessing on the first and second video frames, the preprocessing including at least one operation selected from video frame resizing, video frame normalizing, and video frame augmenting, and enqueuing the pre-processed first and second video frames to form an input queue, the plurality of inference threads include: dequeuing the pre-processed first and second video frames from the input queue, and using the pre-processed first video frames as reference video frames, applying the trained model to identify PPE in the pre-processed second video frames, based on an identifying result, generating PPE annotations for the pre-processed second video frames, and enqueuing the pre-processed second video frames with the PPE annotations overlaid thereon to form an output queue, and the plurality of output threads include: dequeuing the pre-processed second video frames with the PPE annotations overlaid thereon from the output queue, and generating an output file including (1) the pre-processed second video frames with the PPE annotations overlaid thereon, and (2) PPE compliance statistics, and based on the output file, presenting the PPE compliance 
statuses of the workers in the operations area, via a web-based dashboard interface on a display device.
12. A method for real-time personal protective equipment (PPE) compliance monitoring at a worksite, the worksite including (1) a preparatory area where workers put on PPE and (2) an operations area where the workers are required to perform tasks while wearing the PPE, the method comprising: obtaining first video frames via an input device positioned in the preparatory area; obtaining second video frames via a plurality of cameras positioned in the operations area; loading a trained model on a graphical processing unit (GPU), such that the GPU applies the trained model to determine PPE compliance statuses of the workers in the operations area by concurrently operating a plurality of input threads, a plurality of inference threads, and a plurality of output threads, wherein the plurality of input threads include: performing preprocessing on the first and second video frames, the preprocessing including at least one operation selected from video frame resizing, video frame normalizing, and video frame augmenting, and enqueuing the pre-processed first and second video frames to form an input queue, the plurality of inference threads include: dequeuing the pre-processed first and second video frames from the input queue, and using the pre-processed first video frames as reference video frames, applying the trained model to identify PPE in the pre-processed second video frames, based on an identifying result, generating PPE annotations for the pre-processed second video frames, and enqueuing the pre-processed second video frames with the PPE annotations overlaid thereon to form an output queue, and the plurality of output threads include: dequeuing the pre-processed second video frames with the PPE annotations overlaid thereon from the output queue, and generating an output file including (1) the pre-processed second video frames with the PPE annotations overlaid thereon, and (2) PPE compliance statistics, and based on the output file, presenting the PPE compliance 
statuses of the workers in the operations area, via a web-based dashboard interface on a display device.
16. The method of claim 14, wherein the plurality of inference threads further include: running the trained model with each video frame of the pre-processed second video frames on the GPU to generate the pre-processed second video frames with a plurality of bounding boxes, and enqueuing the pre-processed second video frames with the plurality of bounding boxes to form the output queue.
13. The method of claim 12, wherein the plurality of inference threads further include: running the trained model with each video frame of the pre-processed second video frames on the GPU to generate the pre-processed second video frames with a plurality of bounding boxes, and enqueuing the pre-processed second video frames with the plurality of bounding boxes to form the output queue.
17. The method of claim 14, wherein the PPE compliance statistics include one or more of PPE inference confidence scores, a number of PPE items identified, and PPE compliance rates among the workers.
14. The method of claim 12, wherein the PPE compliance statistics include one or more of PPE inference confidence scores, a number of PPE items identified, and PPE compliance rates among the workers.
18. The method of claim 14, wherein the trained model is the PPE-YOLO.
15. The method of claim 12, wherein the trained model is the PPE-YOLO.
19. The method of claim 14, wherein a PPE class is defined as including Helmet, NoHelmet, Vest, and NoVest and wherein the trained model is trained by a database with a first ratio of the Helmet to the NoHelmet and a second ratio of the NoVest to the Vest less than three (3) to alleviate a high-imbalanced class issue.
16. The method of claim 12, wherein a PPE class is defined as including Helmet, NoHelmet, Vest, and NoVest and wherein the trained model is trained by a database with a first ratio of the Helmet to the NoHelmet and a second ratio of the NoVest to the Vest less than three (3) to alleviate a high-imbalanced class issue.
20. The method of claim 14, wherein the first and second video frames are processed with a rate of at least 30 frames per second (FPS).
17. The method of claim 12, wherein the first and second video frames are processed with a rate of at least 30 frames per second (FPS).
Allowable Subject Matter
Claims 1-2, 4-6, 8-14 and 16-20 contain allowable subject matter.
The following is an examiner’s statement of reasons for allowance. The closest prior art is KINI et al. (An Approach to Infraction Detection for Workers’ Safety Using Video Analytics), which teaches an automated system that continuously monitors whether workers are wearing their personal protective equipment (PPE) by employing machine learning and deep learning techniques. This safety equipment detection framework is based on computer vision, image processing, and machine learning to automate an existing safety system. The system utilizes an object detection algorithm such as YOLO to analyze CCTV footage frame by frame; upon detecting any infraction, an alarm triggers and a notification is sent to a supervisor. Also of record is IMAM et al. (Ensuring Miners’ Safety in Underground Mines Through Edge Computing: Real-Time PPE Compliance Analysis Based on Pose Estimation), which teaches integrating AI and computer vision into underground mining operations to verify the correct usage of PPE on mining sites using key point detection and bounding box analysis. These references, either singularly or in combination, fail to anticipate or render obvious the limitations of claim 1 (and the similar method limitations of claim 14), in particular: [a] system for real-time personal protective equipment (PPE) compliance monitoring at a worksite, the worksite including (1) a preparatory area where workers put on PPE and (2) an operations area where the workers are required to perform tasks while wearing the PPE, wherein the operations area is proximal to the preparatory area, the system comprising:
a processor configured to execute a computer instruction;
a host memory connected to the processor;
a graphical processing unit (GPU) connected to the processor;
a GPU memory connected to the GPU;
a storage device connected to the processor;
an input device positioned in the preparatory area, and configured to feed first video frames to the storage device;
a plurality of cameras positioned in the operations area, and configured to feed second video frames to the storage device; and
a display device connected to the processor;
wherein the processor loads a trained model on the GPU to configure the GPU as a PPE compliance inference engine,
by concurrently operating a plurality of input threads, a plurality of inference threads, and a plurality of output threads, the GPU applies the trained model to determine PPE compliance statuses of the workers in the operations area,
the plurality of input threads include:
performing preprocessing on the first and second video frames, the preprocessing including at least one operation selected from video frame resizing, video frame normalizing, and video frame augmenting, and
enqueuing the pre-processed first and second video frames to form an input queue,
the plurality of inference threads include:
dequeuing the pre-processed first and second video frames from the input queue, and
using the pre-processed first video frames as reference video frames, applying the trained model to identify PPE in the pre-processed second video frames,
based on an identifying result, generating PPE annotations for the pre-processed second video frames, and
enqueuing the pre-processed second video frames with the PPE
annotations overlaid thereon to form an output queue,
the plurality of output threads include:
dequeuing the pre-processed second video frames with the PPE annotations overlaid thereon from the output queue, and
generating an output file including (1) the pre-processed second video frames with the PPE annotations overlaid thereon, and (2) PPE compliance statistics,
the display device is configured to, based on the output file, present via a web-based dashboard interface to a user of the system, the PPE compliance statuses of the workers in the operations area.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Marnie Matt whose telephone number is (303)297-4255. The examiner can normally be reached Monday - Friday, 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel can be reached on 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARNIE A MATT/Primary Examiner, Art Unit 2485