DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by KELLY (US 20210350689 A1).
Regarding claim 1, KELLY discloses A method for monitoring cleaning of a medical room comprising:
See Fig. 1 and [0052]: the clean sensor/indicators and their associated systems disclosed by Kelly can be installed in or on many different types of locations or surfaces, including but not limited to medical clinics.
receiving imaging of the medical room;
See Fig. 1 for receiving images or video from the camera; see also [0107] and FIG. 17 for an image/video of a cleaning crew cleaning a room, captured by the machine vision camera, showing that cleaning is occurring on specific surfaces.
analyzing the imaging to:
identify one or more surfaces in the medical room that should be cleaned, and detect performance of one or more cleaning motions corresponding to the one or more surfaces in the medical room;
[0107]: The movement of the person's hands cleaning the surface is monitored (“cleaning motions”). Cleaning personnel's clothing or equipment can also be detected by the machine vision, and this can be used to log cleaning events per surface cleaned (“performance of one or more cleaning motions…”).
determining which surfaces have been cleaned based on correlating the detected one or more cleaning motions with the one or more surfaces; and
[0107]: The central database keeps a log of the face ID or other unique identifier of the cleaning crew. In fact, the clean process includes the key steps of detection of an unclean surface, cleaning event occurrence, and notification to customers and employees.
displaying at least one indication of at least one of: (1) the surfaces that have been cleaned, and (2) at least one of the one or more surfaces that have not been cleaned.
[0179]: Indication of surface as CLEAN or UNCLEAN; and [0180]: In some embodiments, a display or indicator can be positioned at or near the surface to be cleaned.
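For clarity of the mapping above, the claim 1 logic (identify surfaces, detect cleaning motions, correlate the two, and indicate cleaned versus not-cleaned surfaces) can be sketched as follows. This is an illustrative sketch only; the function and data names are hypothetical and are not drawn from Kelly or the claims.

```python
# Illustrative sketch of the claim 1 correlation step: match detected
# cleaning motions against identified surfaces and report which surfaces
# have and have not been cleaned. All names are hypothetical.

def determine_cleaned_surfaces(surfaces, cleaning_motions):
    """surfaces: list of surface IDs identified as needing cleaning.
    cleaning_motions: list of (surface_id, duration_seconds) detections.
    Returns (cleaned, not_cleaned) lists of surface IDs."""
    motion_by_surface = {}
    for surface_id, duration in cleaning_motions:
        motion_by_surface[surface_id] = motion_by_surface.get(surface_id, 0.0) + duration
    cleaned = [s for s in surfaces if motion_by_surface.get(s, 0.0) > 0.0]
    not_cleaned = [s for s in surfaces if s not in motion_by_surface]
    return cleaned, not_cleaned
```

For example, `determine_cleaned_surfaces(["bed_rail", "tray"], [("bed_rail", 12.0)])` reports the bed rail as cleaned and the tray as not cleaned, which is then available for the display step.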
Regarding claim 2, KELLY discloses The method of claim 1, wherein at least one machine learning model is used to detect performance of the one or more cleaning motions.
[0179]: Indication of surface as CLEAN or UNCLEAN; and [0180]: In some embodiments, a display or indicator can be positioned at or near the surface to be cleaned. See also [0201].
Regarding claim 3, KELLY discloses The method of claim 2, wherein the at least one machine learning model is configured to detect a hand grasping a cleaning implement.
[0134]: Using machine vision (proof of cleaning and completeness of cleaning). [0135]: Machine vision can detect movement of gloves, rags, wristbands, cleaning materials, or cleaning personnel.
Regarding claim 4, KELLY discloses The method of claim 1, wherein the one or more surfaces in the medical room are identified using a first machine learning model ([0129]: Area-based machine vision. [0130]: This can be done when a physical location is mapped out with cameras and a facility map.)
and performance of the one or more cleaning motions is detected using a second machine learning model that is different than the first machine learning model.
[0179]: Indication of surface as CLEAN or UNCLEAN; and [0180]: In some embodiments, a display or indicator can be positioned at or near the surface to be cleaned. See also [0201].
Regarding claim 5, KELLY discloses The method of claim 4, wherein the performance of the one or more cleaning motions is detected using at least one sensor sensing contact with the one or more surfaces in addition to using the second machine learning model.
See [0050]: At runtime, the AI clean camera system can observe patrons'/customers' behaviors using and touching the equipment or surfaces.
Regarding claim 6, KELLY discloses The method of claim 1, wherein determining which surfaces have been cleaned comprises determining that a cleaning motion has been performed for a threshold amount of time.
[0157]: IR, radar, or other motion sensing can be used to detect whether the time of cleaning motion is greater than a predetermined amount. See also [0062].
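The threshold-time determination mapped to claim 6 can be sketched as a minimal illustration. The function name and the 10-second threshold are hypothetical assumptions, not values from Kelly or the claims.

```python
# Illustrative sketch of claim 6: a surface counts as cleaned only if the
# detected cleaning motion lasted at least a predetermined threshold.
# The threshold value below is a hypothetical placeholder.

CLEANING_TIME_THRESHOLD_S = 10.0

def is_cleaned(motion_durations, threshold=CLEANING_TIME_THRESHOLD_S):
    """motion_durations: seconds of cleaning motion detected per pass.
    Returns True if the total motion time meets the threshold."""
    return sum(motion_durations) >= threshold
```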
Regarding claim 7, KELLY discloses The method of claim 1, wherein the imaging comprises imaging from multiple cameras.
[0062] In one embodiment, cameras mounted in the facility can view one or more surfaces that have SAFE & READY stickers.
Regarding claim 8, KELLY discloses The method of claim 1, wherein the at least one indication comprises a textual indication.
[0107] Cleaning crews' faces can be biometrically identified, and the cleaning event can be logged with their name and face ID and the time the cleaning was done. The face ID can link to a specific person with a registered name or can be a fully anonymous person doing the cleaning.
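The [0107] logging described above (a cleaning event recorded with a face ID, a registered or anonymous name, and the time of cleaning) can be sketched as a simple record structure. The field and function names are hypothetical.

```python
# Illustrative sketch of the [0107] cleaning-event log: each event records
# a face ID (possibly anonymous), a name, and a timestamp, which supports
# a textual indication of cleaning. All names are hypothetical.

import datetime

def log_cleaning_event(log, surface_id, face_id, name=None):
    """Append a cleaning-event record to the log and return it."""
    entry = {
        "surface": surface_id,
        "face_id": face_id,
        "name": name or "anonymous",  # face ID may be fully anonymous
        "time": datetime.datetime.now().isoformat(),
    }
    log.append(entry)
    return entry
```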
Regarding claim 9, KELLY discloses The method of claim 1, wherein the at least one indication is provided in an image of the medical room.
[0107] FIG. 17 is an image of a cleaning crew cleaning a room.
Regarding claim 10, KELLY discloses The method of claim 9, wherein the at least one indication comprises a visual indicator displayed in association with a surface in the image.
[0179]: Indication of surface as CLEAN or UNCLEAN; and [0180]: In some embodiments, a display or indicator can be positioned at or near the surface to be cleaned.
Regarding claim 11, KELLY discloses The method of claim 10, wherein the visual indicator comprises at least one of outlining of the surface and coloring of the surface.
[0179]: Indication of surface as CLEAN or UNCLEAN, which could be performed by a display or indicator positioned at or near the surface to be cleaned. LED lights can be used for status indicators, e.g., RED for unclean, GREEN for clean; other colors for different statuses, e.g., yellow if timed out.
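The RED/GREEN/yellow status scheme quoted from Kelly can be sketched as a simple status-to-color mapping. The dictionary keys and default behavior are hypothetical.

```python
# Illustrative mapping of surface status to indicator color, following the
# RED-for-unclean, GREEN-for-clean, yellow-if-timed-out scheme cited from
# [0179]-[0180]. The status keys and default are hypothetical.

STATUS_COLORS = {
    "unclean": "red",
    "clean": "green",
    "timed_out": "yellow",  # e.g., clean status expired
}

def indicator_color(status):
    # Default to red (unclean) for any unrecognized status.
    return STATUS_COLORS.get(status, "red")
```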
Regarding claim 12, KELLY discloses The method of claim 1, wherein the one or more surfaces in the medical room that should be cleaned are identified at least in part based on detecting touching of the one or more surfaces by people during a medical procedure.
See [0050]: At runtime, the AI clean camera system can observe patrons'/customers' behaviors using and touching the equipment or surfaces.
Regarding claim 13, KELLY discloses The method of claim 1, comprising, for a respective surface to be cleaned, providing a first visual indication in an image of the medical room that the surface should be cleaned, and replacing the first visual indication with a second visual indication upon detecting that the respective surface has been cleaned.
[0104]: FIG. 15 shows a depiction of cameras configured for detecting if humans cough, sneeze, or touch their mouth, nose, or face. It can also detect if a person touches a single surface/device or multiple surfaces/devices. Then the clean sensor/indicator light can be turned on so that staff can clean the surface proactively. Computer vision images can be sent to a neural network model that has been trained to recognize these human events, and then the clean event is triggered. Area-based cameras can watch many surfaces and can trigger clean alerts for the specific sensor/indicators that need to be cleaned without turning on others that do not need to be cleaned. One of the benefits can be labor optimization for cleaning staff or employees.
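The [0104] flow cited above (a trained model recognizes human events, and clean alerts are raised only for the specific surfaces involved) can be sketched as follows. The event names and function are hypothetical, and the neural-network classifier is assumed to run upstream and supply the detected events.

```python
# Illustrative sketch of the [0104] flow: given events flagged by an
# upstream vision model (cough, sneeze, surface touch), raise clean
# alerts only for the specific surfaces involved. All names are
# hypothetical.

TRIGGER_EVENTS = {"cough", "sneeze", "touch_face", "touch_surface"}

def clean_alerts(detected_events):
    """detected_events: list of (event_type, surface_id or None).
    Returns the set of surface IDs whose indicators should turn on."""
    alerts = set()
    for event_type, surface_id in detected_events:
        if event_type in TRIGGER_EVENTS and surface_id is not None:
            alerts.add(surface_id)
    return alerts
```

Only surfaces tied to a triggering event are alerted, which mirrors the labor-optimization point: indicators for surfaces that do not need cleaning stay off.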
Regarding claim 14, KELLY discloses The method of claim 1, wherein the one or more cleaning motions comprise wiping of the one or more surfaces in the medical room.
[0125] Looking for the amount of surface that a cleaning rag has wiped down a particular surface.
Regarding claim 15, KELLY discloses A system comprising one or more processors, memory, and one or more programs stored in the memory for execution by the one or more processors for:
receiving imaging of the medical room;
See Fig. 1 for receiving images or video from the camera; see also [0107] and FIG. 17 for an image/video of a cleaning crew cleaning a room, captured by the machine vision camera, showing that cleaning is occurring on specific surfaces.
analyzing the imaging to:
identify one or more surfaces in the medical room that should be cleaned, and detect performance of one or more cleaning motions corresponding to the one or more surfaces in the medical room;
[0107]: The movement of the person's hands cleaning the surface is monitored (“cleaning motions”). Cleaning personnel's clothing or equipment can also be detected by the machine vision, and this can be used to log cleaning events per surface cleaned (“performance of one or more cleaning motions…”).
determining which surfaces have been cleaned based on correlating the detected one or more cleaning motions with the one or more surfaces; and
[0107]: The central database keeps a log of the face ID or other unique identifier of the cleaning crew. In fact, the clean process includes the key steps of detection of an unclean surface, cleaning event occurrence, and notification to customers and employees.
transmitting data to at least one display for displaying at least one indication of at least one of: (1) the surfaces that have been cleaned, and (2) at least one of the one or more surfaces that have not been cleaned.
[0179]: Indication of surface as CLEAN or UNCLEAN; and [0180]: In some embodiments, a display or indicator can be positioned at or near the surface to be cleaned.
Regarding claim 16, KELLY discloses The system of claim 15, comprising at least one camera for generating the imaging.
See Fig. 1 for receiving images or video from the camera; see also [0107] and FIG. 17 for an image/video of a cleaning crew cleaning a room, captured by the machine vision camera, showing that cleaning is occurring on specific surfaces.
Regarding claim 17, KELLY discloses The system of claim 16, wherein the at least one camera is configured for visible light imaging, infrared imaging, ultraviolet imaging, or a combination thereof.
See Fig. 1 for receiving images or video from the camera; see also [0107] and FIG. 17 for an image/video of a cleaning crew cleaning a room, captured by the machine vision camera, showing that cleaning is occurring on specific surfaces.
Regarding claim 18, KELLY discloses The system of claim 15, comprising at least one illuminator.
[0385] In addition or in the alternative, the system includes a sanitization indicator 112, 114 including an illuminator including a processor configured to generate a first color indicating a clean state and a second color indicating a dirty state.
Regarding claim 19, KELLY discloses The system of claim 18, wherein the at least one illuminator is incorporated into at least one camera or at least one surgical light.
See Fig. 1 for receiving images or video from the camera; see also [0107] and FIG. 17 for an image/video of a cleaning crew cleaning a room, captured by the machine vision camera, showing that cleaning is occurring on specific surfaces.
Regarding claim 20, KELLY discloses The system of claim 15, comprising the at least one display.
See Fig. 1 for receiving images or video from the camera; see also [0107] and FIG. 17 for an image/video of a cleaning crew cleaning a room, captured by the machine vision camera, showing that cleaning is occurring on specific surfaces.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMIRA MONSHI whose telephone number is (571)272-0995. The examiner can normally be reached 8 AM-5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John W Miller, can be reached at 571-272-7353. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAMIRA MONSHI/Primary Examiner, Art Unit 2422