Prosecution Insights
Last updated: April 19, 2026
Application No. 19/093,703

METHODS AND SYSTEMS FOR EXECUTION OF IMPROVED LEARNING SYSTEMS FOR IDENTIFICATION OF RULES COMPLIANCE BY COMPONENTS IN TIME-BASED DATA STREAMS

Final Rejection: §102, §103, §112
Filed: Mar 28, 2025
Examiner: RUSH, ERIC
Art Unit: 2677
Tech Center: 2600 (Communications)
Assignee: Leela AI Inc.
OA Round: 2 (Final)
Grant Probability: 61% (Moderate)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 5m
Grant Probability With Interview: 97%

Examiner Intelligence

Career Allow Rate: 61% (383 granted / 628 resolved; -1.0% vs TC avg)
Interview Lift: +36.2% among resolved cases with interview (strong)
Avg Prosecution: 3y 5m typical timeline; 32 applications currently pending
Total Applications: 660 across all art units (career history)

Statute-Specific Performance

§101: 10.8% (-29.2% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§112: 27.7% (-12.3% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 628 resolved cases

Office Action

§102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is responsive to the amendments and remarks received 15 December 2025. Claims 1 - 10 are currently pending.

Priority

Applicant’s claim for the benefit of a prior-filed application, Application No. 63/571,537, under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Claim Objections

The objections to claims 1, 5, 9 and 10, due to minor informalities, are hereby withdrawn in view of the amendments and remarks received 15 December 2025.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a machine vision component processing”, “a learning system…analyzing” and “a state machine…analyzing” in claim 9.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1 - 8 and 10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite because it is unclear as to which learning system “the learning system” recited on line 8, along with subsequent recitations of “the learning system”, are referencing. Are they referring to the “learning system” recited on line 1 of claim 1 or the “learning system” recited on lines 4 - 5 of claim 1? Additionally, it is unclear as to whether the “learning system” recited on line 1 of claim 1 and the “learning system” recited on lines 4 - 5 of claim 1 are the same learning system or are different learning systems. Clarification and appropriate correction are required. For purposes of examination the Examiner will treat the claims as requiring and referencing a single same learning system.

Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite because it is unclear as to which generated recommendation “the generated recommendation” recited on lines 2 - 3 is referencing. Is it referring to the generated “recommendation” recited on line 16 of claim 1 or the generated “recommendation” recited on lines 1 - 2 of claim 7? Additionally, it is unclear as to whether the generated “recommendation” recited on line 16 of claim 1 and the generated “recommendation” recited on lines 1 - 2 of claim 7 are the same recommendation or different recommendations. Clarification and appropriate correction are required. For purposes of examination the Examiner will treat the claims as requiring and referencing a single same generated recommendation.

Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite because it is unclear as to which learning system “the learning system” recited on line 9, along with subsequent recitations of “the learning system”, are referencing. Are they referring to the “learning system” recited on line 1 of claim 10, the “learning system” recited on lines 4 - 5 of claim 10 or the “learning system” recited on line 8 of claim 10? Additionally, it is unclear as to whether the “learning system” recited on line 1 of claim 10, the “learning system” recited on lines 4 - 5 of claim 10 and the “learning system” recited on line 8 of claim 10 are the same learning system or are different learning systems. Clarification and appropriate correction are required.
For purposes of examination the Examiner will treat the claim as requiring and referencing a single same learning system.

Claims 2 - 7 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, due to being dependent upon a rejected base claim(s), but would be withdrawn from the rejection if their base claim(s) overcome the rejection.

The rejection of claim 9 under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, is hereby withdrawn in view of the amendments and remarks received 15 December 2025.

Claim Rejections - 35 USC § 112(d)

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 7 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 7 is rejected as being of improper dependent form because it does not further limit the subject matter of the claim upon which it depends, claim 1. Claim 7 recites “The method of claim 1 further comprising generating, by the learning system, a recommendation for improving a level of compliance with the at least one rule.” However, claim 1 already includes a limitation of “generating, by the learning system, a recommendation for improving a level of compliance with the at least one rule;”, see lines 16 - 17 of claim 1. Therefore, claim 7 is found to not specify a further limitation of claim 1. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim 8 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 8 is rejected as being of improper dependent form because it does not further limit the subject matter of the claim upon which it depends, claim 1 via claim 7. Claim 8 recites “The method of claim 7, wherein modifying further comprises modifying, by the learning system, a user interface to display a description of the generated recommendation.” However, claim 1, upon which claim 7 depends, already includes a limitation of “modifying, by the learning system, a user interface… wherein modifying the user interface further comprises modifying the user interface to display a description of the generated recommendation”, see lines 18 - 21 of claim 1. Therefore, claim 8 is found to not specify a further limitation of claim 1.
Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Response to Arguments

Applicant's arguments filed 15 December 2025 have been fully considered but they are not persuasive.

On pages 8 - 9 of the remarks the Applicant’s Representative argues that Chaudhry et al. fail to teach or suggest “executing a method in which images need not be processed to have safety zones identified through the use of such calibration images.” Furthermore, the Applicant’s Representative argues that “the pending claims do not require the identification of a safety zone or that safety requirements be assigned only to characteristics of objects in such zones” and that Chaudhry et al. do “not provide a suggestion for expanding its system to assess objects appearing in any part of an image when the underlying image has not been assigned to any safety zone.” Moreover, the Applicant’s Representative argues that Chaudhry et al. “cannot teach or suggest each and every limitation of the pending claims, which recite determining that an object is prohibited from appearing with the attribute in the video file by at least one rule without requiring assessment of calibration images” at least because Chaudhry et al. fail to teach or suggest “how to assess safety compliance without the assignment of a safety zone to a calibration image.” The Examiner respectfully disagrees.
Initially, in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., “executing a method in which images need not be processed to have safety zones identified through the use of such calibration images”, “assess[ing] objects appearing in any part of an image when the underlying image has not been assigned to any safety zone”, “assess[ing] safety compliance without the assignment of a safety zone to a calibration image” and/or “determining that an object is prohibited from appearing with the attribute in the video file by at least one rule without requiring assessment of calibration images” (emphasis added)) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

The Examiner asserts that the corresponding claim limitation of instant claim 10 merely recites and requires “determining, by the learning system, that the at least one object is prohibited from appearing with the attribute in the video file by at least one rule” and that the broadest reasonable interpretation of instant claim 10 does not preclude including a calibration image(s) and/or a calibration process in the claimed method. The Examiner asserts that the instant claims, which recite the transitional phrase “comprising”, do not exclude additional, unrecited elements or method steps. See MPEP § 2111.03.

Furthermore, the Examiner asserts that Chaudhry et al. disclose “determining, by the learning system, that the at least one object is prohibited from appearing with the attribute in the video file by at least one rule”, see at least the abstract, figures 1 - 4, page 1 paragraphs 0012 - 0013, page 2 paragraphs 0017 - 0019 and 0021, page 3 paragraphs 0023 - 0024 and 0031 - 0032 and page 4 paragraph 0036 - page 5 paragraph 0041 of Chaudhry et al. wherein they disclose that their “system analyzes the images over a time period (e.g., series of two or more frames) to detect objects and determine various object characteristics (e.g., speed, direction, location, personal protective equipment configuration, cell phone or device usage within certain locations, risky user behavior, etc.). The object characteristics are compared to safety requirements and monitored environment information, such as location of safety zones, to assess compliance with safety requirements. In one example, the system may utilize a machine learned classifier that analyzes the images and monitored environment information to output predictions regarding safety compliance” [0012], that “computer vision algorithms or trained image classifiers may be used to automatically scan the images and identify the safety zones 112, 114 or other features via image recognition” [0023], that “a personal protective equipment (PPE) requirement may be defined as encompassing the entire outdoor area of the monitored environment 102, a behavior requirement may be defined as areas where certain behaviors or actions are considered risky (e.g., cellphone usage in driveways, unrestrained loading on upper shelves, or the like).
It should be noted that the safety requirements may be assigned or correlated to specific locations within the monitored environment 102, e.g., safety zones, or may be applicable globally or across the entire monitored environment 102” [0024], that “the object characteristics may be determined using a trained model, such as a PPE configuration model” [0036], that “object characteristics may be compared to one or more safety requirements. For example, the detected object characteristics, e.g., speed, PPE configuration, location, and the like, may be compared against safety requirements regarding the identified object, whether that is a vehicle or a person. For a vehicle, the determined speed and type of vehicle may be compared to a stored safety requirement regarding the speed limit for the type of vehicle and location” [0038], that for “a person, the detected object characteristics, such as the presence and configuration of PPE may be compared against the safety requirements for PPE configuration for the location of the person, e.g., within the warehouse a first type of PPE configuration may be required as compared to outside of the warehouse” [0039] and that the “safety requirements may be stored in a database and may include rules or other requirements as may be established by an organization. The safety requirements may be stored with tags indicating select object characteristics, such as location, speed, or the like. The system may then utilize an algorithm or machine learning model to analyze the safety requirements with the detected object characteristics” [0040].

The Examiner asserts that, as shown herein above and in the cited portions, Chaudhry et al. disclose, for example, utilizing a machine learning model to identify characteristics of a person in captured images and determine that the identified characteristics of the person do not comply with a safety requirement. Therefore, the Examiner asserts that, at least, Chaudhry et al. disclose “determining, by the learning system, that the at least one object is prohibited from appearing with the attribute in the video file by at least one rule”.

On pages 9 - 10 of the remarks the Applicant’s Representative argues that “the combination of Chaudhry and Lucic and Osman fail to teach or suggest the limitations of the pending claims as hereby amended.” In particular, the Applicant’s Representative argues that Osman et al. describe “issuing alerts when non-compliance is identified and issues instructions to come into compliance” but that Osman et al. do not teach or suggest “taking the additional steps of generating a recommendation for how to improve a level of compliance with the at least one rule and of modifying a user interface to display the generated recommendation.” Therefore, the Applicant’s Representative argues that “the combination of Chaudhry and Lucic and Osman fail to teach or suggest the limitations of the pending claims as hereby amended.” The Examiner respectfully disagrees.

Initially, in response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Additionally, the Examiner asserts that Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.

Furthermore, the Examiner asserts that, at least, Osman et al. disclose steps of “generating a recommendation for how to improve a level of compliance with the at least one rule and of modifying a user interface to display the generated recommendation”, see at least figures 1A - 3B, 7A and 7B, page 1 paragraphs 0004 - 0005, page 2 paragraphs 0011 - 0012, page 3 paragraph 0059 - page 4 paragraph 0066, page 10 paragraphs 0107 - 0109 and page 11 paragraphs 0114 - 0116 of Osman et al. wherein they disclose “executing artificial intelligence processes to perform mechanical lifting non-compliance detection, generic load lifting non-compliance detection, personnel proximity to moving vehicle detection by processing the data to produce processed image data; aggregating the processing image data at various levels in the workshop; determining an estimate of the safety non-compliance from the aggregated data;… generating a first instruction to implement behavior to correct the safety non-compliance; and generating a plan to address the performance” [0004], that their “method includes generating an instruction to implement behavior to correct the safety non-compliance” [0012], that in “the example from FIG. 2, as well as other examples, a digital box is added by the system after the computer image is analyzed. Moreover, in all examples, the system reports an instance of non-compliance in order to mitigate the situation and ensure compliance with the health and safety procedures. Reports are optionally electronically sent to people on the shop floor or located elsewhere” [0064], that in “response to determining that a non-compliance event has occurred, the system generates an instruction to implement behavior to correct the safety non-compliance. For example, the technician's phone could sound an audible alarm to alert the technician that, for example, gloves are not worn or a hard hat is not worn. The technician could quickly come into compliance” [0107], that in “one embodiment, throughout the functionality, the system learns and improves as the machine learning processes continue to receive feedback on accuracy and other parameters” [0114] and that “method 700 includes determining 712 an estimate of the safety non-compliance in the use cases from the corrected data, wherein the estimate is based on pre-selected thresholds for the use cases, providing 714 alerts associated with the estimate, and determining 716 the performance for workflows associated with the workshop based on the corrected data, the estimate, or both. The method 700 includes displaying 718 the performance and the personnel during the safety non-compliance… and a display of the personnel includes anonymizing the personnel in the display, wherein the anonymizing includes blurring the display. The method 700 includes generating 710 an instruction to implement behavior to correct the safety non-compliance” [0115].

The Examiner asserts that, as shown herein above and in the cited portions, Osman et al. disclose generating and displaying an instruction to implement behavior to correct the safety non-compliance, i.e., a recommendation for how to improve a level of compliance with at least one rule. In addition, the Examiner asserts that Chaudhry et al. disclose, at least, generating a reminder for improving a level of compliance with the at least one rule and modifying the user interface to display a description of the generated reminder, see at least figures 1, 2 and 4, page 2 paragraphs 0018 - 0019 and page 4 paragraph 0040 - page 5 paragraph 0041 of Chaudhry et al. wherein they disclose that “user device 122 may be a personal computer that may include a user interface 124 that outputs compliance information to the user.
In one example, the user interface 124 may include a representation (e.g., map, images, or video feeds) of the monitored environment 102 and reflect safety compliance information on the same, such as via indicators (e.g., icons representative of safety issues at the approximate location of the same) and/or a compliance record 126 of the safety issue. The compliance record 126 may be output to a user as a list or other graphical representation and include information, such as location, time, and type of compliance issues identified by the system 100. Additionally, alerts, such as notifications, emails, messages, or the like, may be transmitted to various distributed user devices 122, such as to people working or walking in the monitored environment 102 to help ensure mitigation of the safety issues identified” [0018] and that “method 250 may optionally output a user notification in operation 265. For example, the user devices 122 may receive an alert, message, email, or the like, which may help to identify in-progress or current safety violations to people in the monitored environment 102 to help mitigate or remedy the violation. For example, the system 100 may send a notification to user device to alert of a vehicle parked improperly within a safety zone. As another example, the system 100 may send a notification to users including a reminder on PPE requirements based on identified compliance issues with PPE requirements” [0041].

Therefore, the Examiner asserts that, as shown herein above and in the cited portions, the previously cited prior art, Chaudhry et al. in view of Lucic in view of Osman et al., discloses generating a recommendation for how to improve a level of compliance with the at least one rule and modifying a user interface to display the generated recommendation.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claim 10 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chaudhry et al., U.S. Publication No. 2023/0230379 A1.

With regards to claim 10, Chaudhry et al. disclose a method for executing a learning system, (Chaudhry et al., Abstract, Figs. 3 & 4, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0018, Pg. 3 ¶ 0023 - 0024, 0026 and 0031 - 0032, Pg. 4 ¶ 0036 - 0040, Pg. 5 ¶ 0045 - 0049, Pg. 6 ¶ 0052 - 0053) the method comprising:

processing, by a machine vision component in communication with a learning system, (Chaudhry et al., Figs. 1 - 6, Pg. 1 ¶ 0012, Pg. 3 ¶ 0023 and 0031, Pg. 4 ¶ 0036 - 0040, Pg. 5 ¶ 0049 - Pg. 6 ¶ 0053) a video file to detect at least one object in the video file; (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012, Pg. 2 ¶ 0016 - 0018 and 0021, Pg. 3 ¶ 0023 - 0024, 0026 and 0028 - 0032, Pg. 4 ¶ 0036 - 0037)

generating, by the machine vision component, (Chaudhry et al., Figs. 1 - 6, Pg. 1 ¶ 0012, Pg. 3 ¶ 0023 and 0031, Pg. 4 ¶ 0036 - 0040, Pg. 5 ¶ 0049 - Pg. 6 ¶ 0053) an output including data relating to the at least one object and the video file; (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012, Pg. 3 ¶ 0023 - 0024 and 0028 - 0032, Pg. 4 ¶ 0035 - 0040, Pg. 5 ¶ 0042 - 0044)

analyzing, by a learning system, the output; (Chaudhry et al., Fig. 4, Pg. 1 ¶ 0012, Pg. 3 ¶ 0023 - 0024 and 0031 - 0032, Pg. 4 ¶ 0034 - 0040, Pg. 5 ¶ 0042 - 0044)

identifying, by the learning system, an attribute of the video file, the attribute associated with the at least one object; (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012, Pg. 2 ¶ 0021, Pg. 3 ¶ 0032, Pg. 4 ¶ 0034 - 0040, Pg. 5 ¶ 0042 - 0044)

analyzing, by the learning system, the output and the attribute and the video file; (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0016 - 0018 and 0021, Pg. 3 ¶ 0023 - 0024 and 0031 - 0032, Pg. 4 ¶ 0036 - 0040)

determining, by the learning system, that the at least one object is prohibited from appearing with the attribute in the video file by at least one rule; (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012, Pg. 2 ¶ 0021, Pg. 4 ¶ 0036 - 0040)

and modifying, by the learning system, a user interface to display an indication of the determination by the learning system. (Chaudhry et al., Figs. 1, 2, 4 & 6, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0017 - 0019, Pg. 4 ¶ 0040 - Pg. 5 ¶ 0041)

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1 - 9 are rejected under 35 U.S.C. 103 as being unpatentable over Chaudhry et al., U.S. Publication No. 2023/0230379 A1, in view of Lucic, U.S. Publication No. 2022/0343649 A1, in view of Osman et al., U.S. Publication No. 2024/0281954 A1.

With regards to claim 1, Chaudhry et al. disclose a method for executing a learning system, (Chaudhry et al., Abstract, Figs. 3 & 4, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0018, Pg. 3 ¶ 0023 - 0024, 0026 and 0031 - 0032, Pg. 4 ¶ 0036 - 0040, Pg. 5 ¶ 0045 - 0049, Pg. 6 ¶ 0052 - 0053) the method comprising:

processing, by a machine vision component in communication with a learning system, (Chaudhry et al., Figs. 1 - 6, Pg. 1 ¶ 0012, Pg. 3 ¶ 0023 and 0031, Pg. 4 ¶ 0036 - 0040, Pg. 5 ¶ 0049 - Pg. 6 ¶ 0053) a video file to detect at least one object in the video file; (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012, Pg. 2 ¶ 0016 - 0018 and 0021, Pg. 3 ¶ 0023 - 0024, 0026 and 0028 - 0032, Pg. 4 ¶ 0036 - 0037)

generating, by the machine vision component, (Chaudhry et al., Figs. 1 - 6, Pg. 1 ¶ 0012, Pg. 3 ¶ 0023 and 0031, Pg. 4 ¶ 0036 - 0040, Pg. 5 ¶ 0049 - Pg. 6 ¶ 0053) an output including data relating to the at least one object and the video file; (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012, Pg. 3 ¶ 0023 - 0024 and 0028 - 0032, Pg. 4 ¶ 0035 - 0040, Pg. 5 ¶ 0042 - 0044)

analyzing, by the learning system, the output; (Chaudhry et al., Fig. 4, Pg. 1 ¶ 0012, Pg. 3 ¶ 0023 - 0024 and 0031 - 0032, Pg. 4 ¶ 0034 - 0040, Pg. 5 ¶ 0042 - 0044)

identifying, by the learning system, an attribute of the video file, the attribute associated with the at least one object; (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012, Pg. 2 ¶ 0021, Pg. 3 ¶ 0032, Pg. 4 ¶ 0034 - 0040, Pg. 5 ¶ 0042 - 0044)

analyzing, by the learning system, the output and the attribute and the video file; (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0016 - 0018 and 0021, Pg. 3 ¶ 0023 - 0024 and 0031 - 0032, Pg. 4 ¶ 0036 - 0040)

determining, by the learning system, that the at least one object is prohibited from appearing with the attribute in the video file by at least one rule; (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012, Pg. 2 ¶ 0021, Pg. 4 ¶ 0036 - 0040)

generating, by the learning system, a reminder for improving a level of compliance with the at least one rule; (Chaudhry et al., Fig. 4, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0017 - 0019, Pg. 4 ¶ 0040 - Pg. 5 ¶ 0041)

and modifying, by the learning system, a user interface to display an indication of the determination by the learning system, (Chaudhry et al., Figs. 1, 2, 4 & 6, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0017 - 0019, Pg. 4 ¶ 0040 - Pg. 5 ¶ 0041) wherein modifying the user interface further comprises modifying the user interface to display a description of the generated reminder. (Chaudhry et al., Fig. 4, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0017 - 0019, Pg. 4 ¶ 0040 - Pg. 5 ¶ 0041)

Chaudhry et al. fail to disclose explicitly analyzing and determining by a state machine in communication with the learning system, generating a recommendation for improving a level of compliance with the at least one rule and displaying a description of the generated recommendation.

Pertaining to analogous art, Lucic discloses a method for executing a learning system, (Lucic, Abstract, Figs. 1 - 6, Pg. 1 ¶ 0015 - 0019 and 0024, Pg. 2 ¶ 0028 - 0034, 0038 and 0043, Pg. 3 ¶ 0045 - 0048, Pg. 4 ¶ 0057 - 0058, Pg. 5 ¶ 0071 - 0072) the method comprising:

processing, by a machine vision component in communication with a learning system, (Lucic, Abstract, Figs. 1 - 5, Pg. 2 ¶ 0028 and 0033 - 0034, Pg. 3 ¶ 0045 - 0051 and 0054 - 0056) a video file to detect at least one object in the video file; (Lucic, Figs. 1 - 5, Pg. 1 ¶ 0024, Pg. 3 ¶ 0046 - 0051 and 0054, Pg. 4 ¶ 0067 - 0069)

generating, by the machine vision component, an output including data relating to the at least one object and the video file; (Lucic, Abstract, Fig. 5, Pg. 1 ¶ 0024, Pg. 3 ¶ 0046 - 0051 and 0054 - 0055, Pg. 4 ¶ 0067 - Pg. 5 ¶ 0070)

analyzing, by the learning system, the output; (Lucic, Abstract, Figs. 2 - 7D, Pg. 2 ¶ 0027 - 0028 and 0033 - 0034, Pg. 3 ¶ 0048 - 0054, Pg. 4 ¶ 0064 - Pg. 5 ¶ 0070)

identifying, by the learning system, an attribute of the video file, the attribute associated with the at least one object; (Lucic, Abstract, Figs. 2 - 7D, Pg. 3 ¶ 0048 - 0054, Pg. 4 ¶ 0064 - Pg. 5 ¶ 0070)

analyzing, by a state machine in communication with the learning system, (Lucic, Abstract, Figs. 2 - 5, Pg. 2 ¶ 0025 - 0029, Pg. 3 ¶ 0054 - Pg. 4 ¶ 0060, Pg. 4 ¶ 0067 - Pg. 5 ¶ 0070) the output and the attribute and the video file; (Lucic, Abstract, Fig. 5, Pg. 1 ¶ 0024 - Pg. 2 ¶ 0028, Pg. 3 ¶ 0044 - 0046, Pg. 3 ¶ 0054 - Pg. 4 ¶ 0060, Pg. 4 ¶ 0067 - Pg.
5 ¶ 0070) determining, by the state machine, that the at least one object is prohibited from appearing with the attribute in the video file by at least one rule; (Lucic, Abstract, Figs. 2 - 6, Pg. 3 ¶ 0054 - Pg. 4 ¶ 0060, Pg. 4 ¶ 0067 - Pg. 5 ¶ 0070) and modifying, by the learning system, a user interface to display an indication of the determination by the state machine, (Lucic, Figs. 1 - 5, Pg. 1 ¶ 0016, Pg. 2 ¶ 0034 - 0037, Pg. 3 ¶ 0044, Pg. 4 ¶ 0058) wherein modifying the user interface further comprises modifying the user interface to display a description of the identified violation. (Lucic, Figs. 1 - 6, Pg. 2 ¶ 0034 - 0036, Pg. 3 ¶ 0044, Pg. 4 ¶ 0058) Lucic fails to disclose explicitly generating a recommendation for improving a level of compliance with the at least one rule and displaying a description of the generated recommendation. Pertaining to analogous art, Osman et al. disclose generating, by the learning system, a recommendation for improving a level of compliance with the at least one rule; (Osman et al., Figs. 1A - 3B, 7A & 7B, Pg. 1 ¶ 0004 - 0005, Pg. 2 ¶ 0011 - 0012, Pg. 3 ¶ 0059 - Pg. 4 ¶ 0066, Pg. 10 ¶ 0107 - 0109, Pg. 11 ¶ 0114 - 0116) and modifying, by the learning system, a user interface to display an indication of the determination by the state machine, (Osman et al., Figs. 1A - 3B, 7A & 7B, Pg. 1 ¶ 0004 - 0005, Pg. 2 ¶ 0011 - 0012, Pg. 3 ¶ 0059 - Pg. 4 ¶ 0066, Pg. 10 ¶ 0107 - 0109, Pg. 11 ¶ 0114 - 0116) wherein modifying the user interface further comprises modifying the user interface to display a description of the generated recommendation. (Osman et al., Figs. 1A - 3B, 7A & 7B, Pg. 1 ¶ 0004 - 0005, Pg. 2 ¶ 0011 - 0012, Pg. 3 ¶ 0059 - Pg. 4 ¶ 0066, Pg. 10 ¶ 0107 - 0109, Pg. 11 ¶ 0114 - 0116)

Chaudhry et al. and Lucic are combinable because they are both directed towards analyzing video data of an environment with machine learning techniques to detect the occurrence of one or more rule violations in the environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Chaudhry et al. with the teachings of Lucic. This modification would have been prompted in order to substitute the algorithm or machine learning model utilized to analyze safety requirements with detected object characteristics of Chaudhry et al. for the state machine of Lucic. The state machine of Lucic could be substituted in place of the algorithm or machine learning model utilized to analyze safety requirements with detected object characteristics of Chaudhry et al. utilizing well-known techniques in the art and would likely yield predictable results, in that in the combination a state machine in communication with a learning system would be utilized to determine compliance or non-compliance with the at least one rule of the base device of Chaudhry et al. Furthermore, this modification would have been prompted by the teachings and suggestions of Chaudhry et al. that an algorithm or machine learning model may be utilized to analyze safety requirements with detected object characteristics, that implementation of their teachings is not limited to any specific combination of hardware circuitry and/or software and that the resulting implementation of their teachings and disclosed logical operations is a matter of choice, see at least page 4 paragraph 0040, page 5 paragraph 0049 and page 6 paragraph 0052 of Chaudhry et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a state machine in communication with a learning system would be utilized to determine compliance or non-compliance with the at least one rule of the base device of Chaudhry et al. In addition, Chaudhry et al. in view of Lucic and Osman et al. 
are combinable because they are all directed towards analyzing video data of an environment with machine learning techniques to detect the occurrence of one or more rule violations in the environment. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Chaudhry et al. in view of Lucic with the teachings of Osman et al. This modification would have been prompted in order to enhance the combined base device of Chaudhry et al. in view of Lucic with the well-known and applicable technique Osman et al. applied to a comparable device. Generating and displaying a recommendation for improving a level of compliance with the at least one rule, as taught by Osman et al., would enhance the combined base device by helping improve compliance with the at least one rule by monitored individuals so as to help ensure mitigation of identified safety issues and reduce a number of possible dangerous incidents resulting from non-compliance with safety requirements from occurring. Furthermore, this modification would have been prompted by the teachings and suggestions of Chaudhry et al. that compliance information, including a reminder on requirements for compliance with rules and/or requirements based on identified compliance issues, may be output to users to help mitigate or remedy identified violations, see at least page 2 paragraph 0018 and page 4 paragraph 0040 - page 5 paragraph 0041 of Chaudhry et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a recommendation for improving a level of compliance with the at least one rule would be generated and displayed in order to help improve compliance with at least one rule by monitored individuals thereby facilitating mitigation of identified safety issues and increasing an overall level of safety of the monitored individuals. 
Therefore, it would have been obvious to combine Chaudhry et al. with Lucic and Osman et al. to obtain the invention as specified in claim 1. - With regards to claim 2, Chaudhry et al. in view of Lucic in view of Osman et al. disclose the method of claim 1, wherein analyzing further comprises analyzing, by the learning system, a plurality of objects detected in the video file. (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012, Pg. 2 ¶ 0016 - 0018 and 0021, Pg. 3 ¶ 0023 - 0024, 0026 and 0028 - 0031, Pg. 4 ¶ 0036 - 0040) In addition, analogous art Lucic discloses analyzing, by the learning system, a plurality of objects detected in the video file. (Lucic, Fig. 5, Pg. 3 ¶ 0046 - 0055, Pg. 4 ¶ 0064 and 0067, Pg. 4 ¶ 0069 - Pg. 5 ¶ 0070) - With regards to claim 3, Chaudhry et al. in view of Lucic in view of Osman et al. disclose the method of claim 1, wherein identifying further comprises identifying an attribute identifying a physical location depicted in the video file. (Chaudhry et al., Figs. 3 - 5, Pg. 1 ¶ 0012, Pg. 2 ¶ 0016 - 0018 and 0021, Pg. 3 ¶ 0023 - 0024 and 0032, Pg. 4 ¶ 0035 - 0039, Pg. 5 ¶ 0042 - 0044) In addition, analogous art Lucic discloses identifying an attribute identifying a physical location depicted in the video file. (Lucic, Pg. 3 ¶ 0050 - 0054, Pg. 4 ¶ 0069 - Pg. 5 ¶ 0070) - With regards to claim 4, Chaudhry et al. in view of Lucic in view of Osman et al. disclose the method of claim 1, wherein identifying further comprises identifying an attribute identifying a time of day depicted in the video file. (Chaudhry et al., Pg. 2 ¶ 0017 - 0018 and 0021, Pg. 3 ¶ 0027 and 0032, Pg. 4 ¶ 0038 and 0040, Pg. 5 ¶ 0042 - 0044) - With regards to claim 5, Chaudhry et al. in view of Lucic in view of Osman et al. disclose the method of claim 1, wherein identifying further comprises: identifying, by the learning system, an attribute identifying at least a second object in the video file; (Chaudhry et al., Pg. 1 ¶ 0012, Pg. 2 ¶ 0016 - 0018 and 0021, Pg. 
3 ¶ 0023 - 0024 and 0031 - 0032, Pg. 4 ¶ 0036 - 0039) and determining that the at least one object is prohibited from appearing with the at least second object in the video file by the at least one rule. (Chaudhry et al., Pg. 1 ¶ 0012, Pg. 3 ¶ 0023 - 0024, Pg. 4 ¶ 0037 - Pg. 5 ¶ 0041) - With regards to claim 6, Chaudhry et al. in view of Lucic in view of Osman et al. disclose the method of claim 1 further comprising: generating, by the learning system, an alert regarding the determination; (Chaudhry et al., Fig. 4, Pg. 1 ¶ 0013, Pg. 2 ¶ 0018, Pg. 5 ¶ 0041) and transmitting, by the learning system, to at least one user of the learning system, the alert. (Chaudhry et al., Fig. 4, Pg. 1 ¶ 0013, Pg. 2 ¶ 0018, Pg. 5 ¶ 0041) In addition, analogous art Lucic discloses generating, by the learning system, an alert regarding the determination; (Lucic, Abstract, Figs. 2 - 6, Pg. 2 ¶ 0025 - 0027 and 0034 - 0036, Pg. 3 ¶ 0044, Pg. 4 ¶ 0058) and transmitting, by the learning system, to at least one user of the learning system, the alert. (Lucic, Abstract, Figs. 2 - 6, Pg. 2 ¶ 0025 - 0027 and 0034 - 0036, Pg. 3 ¶ 0044, Pg. 4 ¶ 0058) - With regards to claim 7, Chaudhry et al. in view of Lucic in view of Osman et al. disclose the method of claim 1, further comprising generating, by the learning system, a reminder for improving a level of compliance with the at least one rule. (Chaudhry et al., Fig. 4, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0017 - 0019, Pg. 4 ¶ 0040 - Pg. 5 ¶ 0041) Chaudhry et al. fail to disclose explicitly generating a recommendation for improving a level of compliance with the at least one rule. Pertaining to analogous art, Osman et al. disclose generating, by the learning system, a recommendation for improving a level of compliance with the at least one rule. (Osman et al., Figs. 1A - 3B, 7A & 7B, Pg. 1 ¶ 0004 - 0005, Pg. 2 ¶ 0011 - 0012, Pg. 3 ¶ 0059 - Pg. 4 ¶ 0066, Pg. 10 ¶ 0107 - 0109, Pg. 11 ¶ 0114 - 0116) - With regards to claim 8, Chaudhry et al. 
in view of Lucic in view of Osman et al. disclose the method of claim 7, wherein modifying further comprises modifying, by the learning system, a user interface to display a description of the generated reminder. (Chaudhry et al., Fig. 4, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0017 - 0019, Pg. 4 ¶ 0040 - Pg. 5 ¶ 0041) Chaudhry et al. fail to disclose explicitly displaying a description of the generated recommendation. Pertaining to analogous art, Lucic discloses wherein modifying further comprises modifying, by the learning system, a user interface to display a description of the identified violation. (Lucic, Figs. 1 - 6, Pg. 2 ¶ 0034 - 0036, Pg. 3 ¶ 0044, Pg. 4 ¶ 0058) Lucic fails to disclose explicitly displaying a description of the generated recommendation. Pertaining to analogous art, Osman et al. disclose wherein modifying further comprises modifying, by the learning system, a user interface to display a description of the generated recommendation. (Osman et al., Figs. 1A - 3B, 7A & 7B, Pg. 1 ¶ 0004 - 0005, Pg. 2 ¶ 0011 - 0012, Pg. 3 ¶ 0059 - Pg. 4 ¶ 0066, Pg. 10 ¶ 0107 - 0109, Pg. 11 ¶ 0114 - 0116) - With regards to claim 9, Chaudhry et al. disclose a system (Chaudhry et al., Abstract, Figs. 3 & 4, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0018, Pg. 3 ¶ 0023 - 0024, 0026 and 0031 - 0032, Pg. 4 ¶ 0036 - 0040, Pg. 5 ¶ 0045 - 0049, Pg. 6 ¶ 0052 - 0053) comprising: a machine vision component (Chaudhry et al., Figs. 1 - 6, Pg. 1 ¶ 0012, Pg. 3 ¶ 0023 and 0031, Pg. 4 ¶ 0036 - 0040, Pg. 5 ¶ 0049 - Pg. 6 ¶ 0053) processing a video file to detect at least one object in the video file (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012, Pg. 2 ¶ 0016 - 0018 and 0021, Pg. 3 ¶ 0023 - 0024, 0026 and 0028 - 0032, Pg. 4 ¶ 0036 - 0037) and generating an output including data relating to the at least one object and the video file; (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012, Pg. 3 ¶ 0023 - 0024 and 0028 - 0032, Pg. 4 ¶ 0035 - 0040, Pg. 
5 ¶ 0042 - 0044) a learning system, in communication with the machine vision component, (Chaudhry et al., Figs. 1 - 6, Pg. 1 ¶ 0012, Pg. 3 ¶ 0023 and 0031, Pg. 4 ¶ 0036 - 0040, Pg. 5 ¶ 0049 - Pg. 6 ¶ 0053) analyzing the output (Chaudhry et al., Fig. 4, Pg. 1 ¶ 0012, Pg. 3 ¶ 0023 - 0024 and 0031 - 0032, Pg. 4 ¶ 0034 - 0040, Pg. 5 ¶ 0042 - 0044) and identifying an attribute of the video file, the attribute associated with the at least one object (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012, Pg. 2 ¶ 0021, Pg. 3 ¶ 0032, Pg. 4 ¶ 0034 - 0040, Pg. 5 ¶ 0042 - 0044) and generating a user interface; (Chaudhry et al., Figs. 1, 2, 4 & 6, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0017 - 0019, Pg. 4 ¶ 0040 - Pg. 5 ¶ 0041) and the learning system analyzing the output and the attribute and the video file (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0016 - 0018 and 0021, Pg. 3 ¶ 0023 - 0024 and 0031 - 0032, Pg. 4 ¶ 0036 - 0040) and determining that the at least one object is prohibited from appearing with the attribute in the video file by at least one rule; (Chaudhry et al., Abstract, Fig. 4, Pg. 1 ¶ 0012, Pg. 2 ¶ 0021, Pg. 4 ¶ 0036 - 0040) wherein the learning system further comprises functionality for generating a reminder for improving a level of compliance with the at least one rule, (Chaudhry et al., Fig. 4, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0017 - 0019, Pg. 4 ¶ 0040 - Pg. 5 ¶ 0041) and wherein the learning system further comprises functionality for modifying the user interface to display an indication of the determination by the learning system (Chaudhry et al., Figs. 1, 2, 4 & 6, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0017 - 0019, Pg. 4 ¶ 0040 - Pg. 5 ¶ 0041) and to display a description of the generated reminder. (Chaudhry et al., Fig. 4, Pg. 1 ¶ 0012 - 0013, Pg. 2 ¶ 0017 - 0019, Pg. 4 ¶ 0040 - Pg. 5 ¶ 0041) Chaudhry et al.
fail to disclose explicitly analyzing and determining by a state machine in communication with the learning system, generating a recommendation for improving a level of compliance with the at least one rule and displaying a description of the generated recommendation. Pertaining to analogous art, Lucic discloses a system (Lucic, Abstract, Figs. 1 - 6, Pg. 1 ¶ 0015 - 0024, Pg. 2 ¶ 0028 - 0034, 0037 - 0038 and 0043, Pg. 3 ¶ 0045 - 0048, Pg. 4 ¶ 0057 - 0058, Pg. 5 ¶ 0071 - 0072) comprising: a machine vision component processing a video file to detect at least one object in the video file (Lucic, Abstract, Figs. 1 - 5, Pg. 1 ¶ 0024, Pg. 2 ¶ 0028 and 0033 - 0034, Pg. 3 ¶ 0045 - 0051 and 0054 - 0056, Pg. 4 ¶ 0067 - 0069) and generating an output including data relating to the at least one object and the video file; (Lucic, Abstract, Fig. 5, Pg. 1 ¶ 0024, Pg. 3 ¶ 0046 - 0051 and 0054 - 0055, Pg. 4 ¶ 0067 - Pg. 5 ¶ 0070) a learning system, in communication with the machine vision component, (Lucic, Abstract, Figs. 1 - 5, Pg. 2 ¶ 0028 and 0033 - 0034, Pg. 3 ¶ 0045 - 0051 and 0054 - 0056) analyzing the output (Lucic, Abstract, Figs. 2 - 7D, Pg. 2 ¶ 0027 - 0028 and 0033 - 0034, Pg. 3 ¶ 0048 - 0054, Pg. 4 ¶ 0064 - Pg. 5 ¶ 0070) and identifying an attribute of the video file, the attribute associated with the at least one object (Lucic, Abstract, Figs. 2 - 7D, Pg. 3 ¶ 0048 - 0054, Pg. 4 ¶ 0064 - Pg. 5 ¶ 0070) and generating a user interface; (Lucic, Figs. 1 - 5, Pg. 1 ¶ 0016, Pg. 2 ¶ 0034 - 0037, Pg. 3 ¶ 0044, Pg. 4 ¶ 0058) and a state machine, in communication with the learning system, (Lucic, Abstract, Figs. 2 - 5, Pg. 2 ¶ 0025 - 0029, Pg. 3 ¶ 0054 - Pg. 4 ¶ 0060, Pg. 4 ¶ 0067 - Pg. 5 ¶ 0070) analyzing the output and the attribute and the video file (Lucic, Abstract, Fig. 5, Pg. 1 ¶ 0024 - Pg. 2 ¶ 0028, Pg. 3 ¶ 0044 - 0046, Pg. 3 ¶ 0054 - Pg. 4 ¶ 0060, Pg. 4 ¶ 0067 - Pg. 
5 ¶ 0070) and determining that the at least one object is prohibited from appearing with the attribute in the video file by at least one rule; (Lucic, Abstract, Figs. 2 - 6, Pg. 3 ¶ 0054 - Pg. 4 ¶ 0060, Pg. 4 ¶ 0067 - Pg. 5 ¶ 0070) wherein the learning system further comprises functionality for modifying the user interface to display an indication of the determination by the state machine (Lucic, Figs. 1 - 5, Pg. 1 ¶ 0016, Pg. 2 ¶ 0034 - 0037, Pg. 3 ¶ 0044, Pg. 4 ¶ 0058) and to display a description of the identified violation. (Lucic, Figs. 1 - 6, Pg. 2 ¶ 0034 - 0036, Pg. 3 ¶ 0044, Pg. 4 ¶ 0058) Lucic fails to disclose explicitly generating a recommendation for improving a level of compliance with the at least one rule and displaying a description of the generated recommendation. Pertaining to analogous art, Osman et al. disclose wherein the learning system further comprises functionality for generating a recommendation for improving a level of compliance with the at least one rule, (Osman et al., Figs. 1A - 3B, 7A & 7B, Pg. 1 ¶ 0004 - 0005, Pg. 2 ¶ 0011 - 0012, Pg. 3 ¶ 0059 - Pg. 4 ¶ 0066, Pg. 10 ¶ 0107 - 0109, Pg. 11 ¶ 0114 - 0116) and wherein the learning system further comprises functionality for modifying the user interface to display a description of the generated recommendation. (Osman et al., Figs. 1A - 3B, 7A & 7B, Pg. 1 ¶ 0004 - 0005, Pg. 2 ¶ 0011 - 0012, Pg. 3 ¶ 0059 - Pg. 4 ¶ 0066, Pg. 10 ¶ 0107 - 0109, Pg. 11 ¶ 0114 - 0116)

Chaudhry et al. and Lucic are combinable because they are both directed towards analyzing video data of an environment with machine learning techniques to detect the occurrence of one or more rule violations in the environment. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Chaudhry et al. with the teachings of Lucic.
This modification would have been prompted in order to substitute the algorithm or machine learning model utilized to analyze safety requirements with detected object characteristics of Chaudhry et al. for the state machine of Lucic. The state machine of Lucic could be substituted in place of the algorithm or machine learning model utilized to analyze safety requirements with detected object characteristics of Chaudhry et al. utilizing well-known techniques in the art and would likely yield predictable results, in that in the combination a state machine in communication with a learning system would be utilized to determine compliance or non-compliance with the at least one rule of the base device of Chaudhry et al. Furthermore, this modification would have been prompted by the teachings and suggestions of Chaudhry et al. that an algorithm or machine learning model may be utilized to analyze safety requirements with detected object characteristics, that implementation of their teachings is not limited to any specific combination of hardware circuitry and/or software and that the resulting implementation of their teachings and disclosed logical operations is a matter of choice, see at least page 4 paragraph 0040, page 5 paragraph 0049 and page 6 paragraph 0052 of Chaudhry et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a state machine in communication with a learning system would be utilized to determine compliance or non-compliance with the at least one rule of the base device of Chaudhry et al. In addition, Chaudhry et al. in view of Lucic and Osman et al. are combinable because they are all directed towards analyzing video data of an environment with machine learning techniques to detect the occurrence of one or more rule violations in the environment. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Chaudhry et al. in view of Lucic with the teachings of Osman et al. This modification would have been prompted in order to enhance the combined base device of Chaudhry et al. in view of Lucic with the well-known and applicable technique Osman et al. applied to a comparable device. Generating and displaying a recommendation for improving a level of compliance with the at least one rule, as taught by Osman et al., would enhance the combined base device by helping improve compliance with the at least one rule by monitored individuals so as to help ensure mitigation of identified safety issues and reduce a number of possible dangerous incidents resulting from non-compliance with safety requirements from occurring. Furthermore, this modification would have been prompted by the teachings and suggestions of Chaudhry et al. that compliance information, including a reminder on requirements for compliance with rules and/or requirements based on identified compliance issues, may be output to users to help mitigate or remedy identified violations, see at least page 2 paragraph 0018 and page 4 paragraph 0040 - page 5 paragraph 0041 of Chaudhry et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a recommendation for improving a level of compliance with the at least one rule would be generated and displayed in order to help improve compliance with at least one rule by monitored individuals thereby facilitating mitigation of identified safety issues and increasing an overall level of safety of the monitored individuals. Therefore, it would have been obvious to combine Chaudhry et al. with Lucic and Osman et al. to obtain the invention as specified in claim 9. 
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC RUSH whose telephone number is (571) 270-3017. The examiner can normally be reached 9am - 5pm Monday - Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ERIC RUSH/
Primary Examiner, Art Unit 2677

Prosecution Timeline

Mar 28, 2025
Application Filed
Jun 12, 2025
Non-Final Rejection — §102, §103, §112
Dec 15, 2025
Response Filed
Jan 16, 2026
Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586229
COMPUTER IMPLEMENTED METHODS AND DEVICES FOR DETERMINING DIMENSIONS AND DISTANCES OF HEAD FEATURES
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12548292
METHOD AND SYSTEM FOR IDENTIFYING REFLECTIONS IN THERMAL IMAGES
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12548395
SYSTEMS, METHODS AND DEVICES FOR MONITORING BETTING ACTIVITIES
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12541856
MASKING OF OBJECTS IN AN IMAGE STREAM
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12518504
METHOD FOR CALIBRATING AN OBJECT RE-IDENTIFICATION SOLUTION IMPLEMENTING AN ARRAY OF A PLURALITY OF CAMERAS
Granted Jan 06, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
61%
Grant Probability
97%
With Interview (+36.2%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 628 resolved cases by this examiner. Grant probability derived from career allow rate.
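The headline figures above can be reproduced from the examiner statistics reported on this page. The sketch below is illustrative only: the examiner's 383/628 grant record and the +36.2-point interview lift come from this report, but treating the lift as simply additive in percentage points is an assumption about the tool's methodology, not a published formula.

```python
# Reproducing the reported figures from the examiner's career data
# (illustrative; additive interview lift is an assumption).

granted = 383    # cases granted by this examiner
resolved = 628   # total resolved cases

# Career allow rate: 383 / 628 -> ~61%, matching the headline grant probability
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")

# Reported interview lift: +36.2 percentage points, capped at 100%
interview_lift = 0.362
with_interview = min(allow_rate + interview_lift, 1.0)
print(f"With interview: {with_interview:.0%}")
```

Running this prints 61% and 97%, matching the "Grant Probability" and "With Interview" values shown above.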
