Prosecution Insights
Last updated: April 19, 2026
Application No. 18/814,398

AI-BASED VIDEO TAGGING FOR ALARM MANAGEMENT

Non-Final OA: §DP (nonstatutory double patenting)
Filed: Aug 23, 2024
Examiner: JEAN, FRANTZ B
Art Unit: 2454
Tech Center: 2400 — Computer Networks
Assignee: Covidien LP
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (above average; 753 granted / 837 resolved; +32.0% vs TC avg)
Interview Lift: +8.6% (moderate; based on resolved cases with an interview)
Typical Timeline: 2y 7m average prosecution; 17 applications currently pending
Career History: 854 total applications across all art units
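The headline figures above are simple arithmetic over the examiner's resolved cases. A minimal sketch, assuming the interview lift is additive in percentage points (which is consistent with the 90% → 99% jump shown):

```python
# Examiner career statistics taken from the dashboard above.
granted = 753
resolved = 837
interview_lift = 8.6  # percentage points, per the "Interview Lift" figure

allow_rate = granted / resolved          # career allow rate as a fraction
base_pct = round(allow_rate * 100)       # -> 90

# Capped at 99%, matching the dashboard's "With Interview" display.
with_interview = min(round(base_pct + interview_lift), 99)

print(f"Career allow rate: {base_pct}%")        # prints 90%
print(f"With interview:    {with_interview}%")  # prints 99%
```

The cap at 99% is an assumption; the dashboard never displays 100%, so the rounded 98.6% is likely clamped for presentation.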

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 25.5% (-14.5% vs TC avg)
§102: 33.2% (-6.8% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 837 resolved cases
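The per-statute deltas can be sanity-checked against the Tech Center average they were measured from. A small sketch, assuming each delta is the examiner's rate minus the TC average, in percentage points:

```python
# Statute-specific rates and deltas vs the Tech Center average,
# as listed in the table above (all values in percentage points).
stats = {
    "§101": (12.4, -27.6),
    "§103": (25.5, -14.5),
    "§102": (33.2, -6.8),
    "§112": (10.0, -30.0),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # back out the TC average the delta implies
    print(f"{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}%")
```

Every statute backs out to the same 40.0% average, consistent with the table footnote describing a single "black line" Tech Center average estimate.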

Office Action

§DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is a first office action in response to the instant application for letters patent filed on 23 August 2024. Claims 1-20 are presented for examination.

Specification

The disclosure is objected to because of the following informalities: the cross-reference to related applications must be updated to reflect the current status of the related applications. Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 of the instant application are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-9 and 11-20 of U.S. Patent No. 12106655 and claims 1-9 and 11-20 of U.S. Patent No. 11328572. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are arguably broader than the claims of patent numbers “655” and “572”, which encompass the same metes, bounds, and limitations. Furthermore, it must be noted that some limitations that are missing in the independent claims can be found in the dependent claims. In other words, the instant application contains almost all the limitations of the patents cited above. Therefore, it would have been obvious to a skilled artisan before the effective filing date of the invention as claimed to eliminate or slightly modify the limitations of the narrower claims, since it has been held that omission of an element and its function, in a combination where the remaining elements perform the same function as before, involves only routine skill in the art. See In re Karlson, 136 USPQ 184 (CCPA 1963).

Patent Number: 12106655 (see exemplary claims 1-9)

1. A method, comprising: receiving, using a processor, a video stream, the video stream comprising a sequence of images for at least a portion of a patient; detecting, using machine learning, presence of a noise object and setting an interaction-flag to a positive value in response to detecting the noise object; comparing a quality level of the sequence of images with a threshold quality level; and modifying an alarm level based on comparison of the quality level of the sequence of images with the threshold quality level.

2. The method of claim 1, wherein the video stream further comprises at least one of a sequence of depth images and a sequence of RGB images.
3. The method of claim 1, further comprising determining a physiological parameter for the patient based on the sequence of images.

4. The method of claim 1, wherein detecting presence of a noise object further comprises detecting presence of a clinician intervention and setting the interaction-flag to a positive value in response to detecting the presence of a clinician intervention for a predetermined cool-off period.

5. The method of claim 1, wherein detecting presence of a noise object further comprises detecting a velocity of the noise object relative to the patient.

6. The method of claim 5, further comprising: comparing the velocity of the noise object to a range of velocities that are consistent with physical movement of an arm to determine presence of a caregiver's hand.

7. The method of claim 1, wherein modifying the alarm level further comprises delaying the alarm level in response to determining that the noise object is a caregiver's hand.

8. The method of claim 1, wherein detecting presence of a noise object further comprises adding a bounding box around an object in one or more of the sequence of images.

9. The method of claim 8, further comprising identifying the object in the bounding box using a multi-object classifier.

10. The method of claim 3, further comprising reporting the physiological parameter to a clinician if the quality level of the sequence of images is above the threshold quality level.

11.
In a computing environment, a method performed at least in part on at least one processor, the method comprising: receiving, using the at least one processor, a video stream, the video stream comprising a sequence of images for at least a portion of a patient; detecting, using machine learning, presence of a noise object and setting an interaction-flag to a positive value in response to detecting the noise object; comparing a quality level of the sequence of images with a threshold quality level; reporting a physiological parameter for the patient to a clinician if the quality level of the sequence of images is above the threshold quality level, wherein the physiological parameter is based on the sequence of images; and modifying an alarm level based on comparison of the quality level of the sequence of images with the threshold quality level.

12. The method of claim 11, wherein the video stream further comprises a sequence of depth images.

13. The method of claim 11, wherein the video stream further comprises a sequence of RGB images.

14. The method of claim 11, wherein detecting presence of a noise object further comprises detecting presence of a clinician intervention and setting the interaction-flag to a positive value in response to detecting the presence of a clinician intervention for a predetermined cool-off period.

15. The method of claim 11, wherein detecting presence of a noise object further comprises detecting a velocity of the noise object relative to the patient.

16. The method of claim 15, further comprising comparing the velocity of the noise object to a range of velocities that are consistent with physical movement of an arm to determine presence of a caregiver's hand.

17.
The method of claim 11, wherein detecting presence of a noise object further comprises adding a bounding box around an object in one or more of the sequence of images and wherein the method further comprises identifying the object in the bounding box using a multi-object classifier.

18. A physical article of manufacture including one or more tangible computer-readable storage media, encoding computer-executable instructions for executing on a computer system a computer process to provide an automated connection to a collaboration event for a computing device, the computer process comprising: receiving a video stream, the video stream comprising a sequence of images for at least a portion of a patient; detecting, using machine learning, presence of a noise object and setting an interaction-flag to a positive value in response to detecting the noise object; comparing a quality level of the sequence of images with a threshold quality level; and modifying an alarm level based on comparison of the quality level of the sequence of images with the threshold quality level.

19. The physical article of manufacture of claim 18, wherein the video stream further comprises at least one of a sequence of depth images and a sequence of RGB images.

20. The physical article of manufacture of claim 18, wherein the computer process further comprises detecting a velocity of the noise object relative to the patient and comparing the velocity of the noise object to a range of velocities that are consistent with physical movement of an arm to determine presence of a caregiver's hand.

Patent Number: 11328572 (see exemplary claims 1-9)

1.
A method, comprising: receiving, using a processor, a video stream, the video stream comprising a sequence of images for at least a portion of a patient; determining a physiological parameter for the patient; detecting, using machine learning, presence of a noise object and setting a interaction-flag to a positive value in response to detecting the noise object; comparing a quality level of the sequence of images with a threshold quality level; and modifying an alarm level based on the value of the interaction-flag and comparison of the quality level of the sequence of depth images with the threshold quality level.

2. The method of claim 1, wherein the video stream further comprising at least one of a sequence of depth images and a sequence of RGB images.

3. The method of claim 1, determining the physiological parameter for the patient further comprising determining the physiological parameter for the patient based on the sequence of images.

4. The method of claim 1, wherein detecting presence of a noise object further comprising detecting presence of a clinician intervention and setting the interaction-flag to a positive value in response to detecting the presence of a clinician intervention for a predetermined cool-off period.

5. The method of claim 1, wherein detecting presence of a noise object further comprising detecting a velocity of the noise object relative to the patient.

6. The method of claim 5, further comprising: comparing the velocity of the noise object to a range of velocities that are consistent with physical movement of an arm to determine presence of a caregiver's hand.

7. The method of claim 1, wherein modifying the alarm level further comprising delaying the alarm level in response to determining that the noise object is a caregiver's hand.

8. The method of claim 1, wherein detecting presence of a noise object further comprising adding a bounding box around an object in one or more of the sequence of images.

9.
The method of claim 8, further comprising identifying the object in the bounding box using a multi-object classifier.

10. The method of claim 1, further comprising reporting the physiological parameter to a clinician if the quality level of the sequence of images is above the threshold quality level.

11. In a computing environment, a method performed at least in part on at least one processor, the method comprising: receiving, using the processor, a video stream, the video stream comprising a sequence of images for at least a portion of a patient; determining a physiological parameter for the patient based on the sequence of images; detecting, using machine learning, presence of a noise object and setting a interaction-flag to a positive value in response to detecting the noise object; comparing a quality level of the sequence of images with a threshold quality level; reporting the physiological parameter to a clinician if the quality level of the sequence of images is above the threshold quality level; and modifying an alarm level based on the value of the interaction-flag and comparison of the quality level of the sequence of depth images with the threshold quality level.

12. The method of claim 11, wherein the video stream further comprising a sequence of depth images.

13. The method of claim 11, wherein the video stream further comprising a sequence of RGB images.

14. The method of claim 11, wherein detecting presence of a noise object further comprising detecting presence of a clinician intervention and setting the interaction-flag to a positive value in response to detecting the presence of a clinician intervention for a predetermined cool-off period.

15. The method of claim 11, wherein detecting presence of a noise object further comprising detecting a velocity of the noise object relative to the patient.

16.
The method of claim 15, further comprising comparing the velocity of the noise object to a range of velocities that are consistent with physical movement of an arm to determine presence of a caregiver's hand.

17. The method of claim 11, wherein detecting presence of a noise object further comprising adding a bounding box around an object in one or more of the sequence of images and wherein the method further comprising identifying the object in the bounding box using a multi-object classifier.

18. A physical article of manufacture including one or more tangible computer-readable storage media, encoding computer-executable instructions for executing on a computer system a computer process to provide an automated connection to a collaboration event for a computing device, the computer process comprising: receiving a video stream, the video stream comprising a sequence of images for at least a portion of a patient; determining a physiological parameter for the patient based on the sequence of images; detecting, using machine learning, presence of a noise object and setting a interaction-flag to a positive value in response to detecting the noise object; comparing a quality level of the sequence of images with a threshold quality level; reporting the physiological parameter to a clinician if the quality level of the sequence of images is above the threshold quality level; and modifying an alarm level based on the value of the interaction-flag and comparison of the quality level of the sequence of depth images with the threshold quality level.

19. The physical article of manufacture of claim 18, wherein the video stream further comprising at least one of a sequence of depth images and a sequence of RGB images.

20.
The physical article of manufacture of claim 18, wherein the computer process further comprising detecting a velocity of the noise object relative to the patient and comparing the velocity of the noise object to a range of velocities that are consistent with physical movement of an arm to determine presence of a caregiver's hand.

The claims will be allowed upon the submission of a Terminal Disclaimer by applicant’s representative.

The prior art fails to teach, among other features of the claims, the combination of: in response to detecting the presence of the noise object, setting an interaction-flag to a positive value; analyzing, using machine learning, the noise object to detect a presence of a clinician intervention; and setting the interaction-flag to the positive value for a predetermined cool-off period. In another embodiment, the prior art fails to teach and suggest: in response to detecting the noise object, setting an interaction-flag to a positive value; analyzing, using machine learning, the noise object to detect a presence of a clinician intervention; and modifying an alarm condition to one of: a non-alarm condition, a delayed alarm condition, or a low-priority alarm condition.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANTZ B JEAN, whose telephone number is (571) 272-3937. The examiner can normally be reached 8-5 M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Glenton B. Burgess, can be reached at (571) 272-3949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FRANTZ B JEAN/
Primary Examiner, Art Unit 2454

Prosecution Timeline

Aug 23, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598235
METHOD FOR TRANSMITTING CROSS-RESOURCE EVENT NOTIFICATION, ELECTRONIC DEVICE, SYSTEM AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant; granted Apr 07, 2026
Patent 12579303
SYSTEM AND METHOD FOR MANAGEMENT OF CONFIDENTIAL INFORMATION
2y 5m to grant; granted Mar 17, 2026
Patent 12580912
SYSTEM AND METHOD FOR TRANSMITTING A MESSAGE IN A COMMUNICATION NETWORK
2y 5m to grant; granted Mar 17, 2026
Patent 12546820
TEST MODE CONTROL CIRCUIT, SEMICONDUCTOR APPARATUS AND SYSTEM, AND METHOD THEREOF
2y 5m to grant; granted Feb 10, 2026
Patent 12545193
VEHICLE-MOUNTED CAMERA
2y 5m to grant; granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 99% (+8.6%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 837 resolved cases by this examiner. Grant probability derived from career allow rate.
