Prosecution Insights
Last updated: April 19, 2026
Application No. 18/174,859

AUTOMATIC HIGH BEAM CONTROL FOR AUTONOMOUS MACHINE APPLICATIONS

Non-Final OA: §101, §102, §103

Filed: Feb 27, 2023
Examiner: ROSARIO, DENNIS
Art Unit: 2676
Tech Center: 2600 (Communications)
Assignee: Nvidia Corporation
OA Round: 3 (Non-Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 8m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 69% (above average; 385 granted / 557 resolved; +7.1% vs TC avg)
Interview Lift: +28.6% (strong; based on resolved cases with interview)
Avg Prosecution: 3y 8m (typical timeline); 34 applications currently pending
Career History: 591 total applications across all art units
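The card figures above follow from simple arithmetic over the examiner's docket. A minimal consistency check (assuming, as the 98% with-interview figure suggests, that the interview lift is expressed in percentage points over the career baseline):

```python
# Examiner career figures reported in the cards above.
granted, resolved, total_apps = 385, 557, 591

# Career allow rate: share of resolved cases that were granted.
allow_rate_pct = 100 * granted / resolved  # ~69.1, reported as 69%
assert round(allow_rate_pct) == 69

# Currently pending: applications filed but not yet resolved.
pending = total_apps - resolved
assert pending == 34  # matches "34 currently pending"

# A +28.6 percentage-point interview lift over the 69% baseline
# reproduces the "98% With Interview" headline.
assert round(69 + 28.6) == 98
```

The assertions pass against the reported numbers, so the cards are internally consistent under the percentage-point reading of the lift.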

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§103: 40.3% (+0.3% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 557 resolved cases
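A quick check on the deltas: each statute's rate plus its "vs TC avg" offset lands on the same 40.0% figure, suggesting the Tech Center average shown is a single baseline estimate applied across all four statutes. A sketch of that arithmetic:

```python
# Per-statute rates and their offsets vs the Tech Center average,
# as reported in the section above.
rates = {
    "101": (16.5, -23.5),
    "102": (24.6, -15.4),
    "103": (40.3, +0.3),
    "112": (13.6, -26.4),
}

# Recover the implied Tech Center baseline for each statute.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}

# Every statute implies the same 40.0% baseline.
assert all(v == 40.0 for v in implied_tc_avg.values())
```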

Office Action

DETAILED ACTION

Claims 9-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

The §103 rejections (full citations at first mention; later mentions abbreviated) are:
- Claims 1, 3, 4, 6 and 15, 17, 18, 20: rejected under 35 U.S.C. 103 as being unpatentable over WANG (WO 2020/087352 A1; filed 31 Oct 2018), with annotated version thereof, in view of YI (CN 109948474 A), with SEARCH machine translation.
- Claim 2: rejected over WANG in view of YI, as applied to claims 1, 3, 4, 6 and 15, 17, 18, 20 above, further in view of Weinzaepfel et al. (US 2020/0364509 A1).
- Claims 5 and 19: rejected over WANG in view of YI, as applied above, further in view of YUREVICH (RU 2 676 028 C1), with SEARCH machine translation.
- Claim 7: rejected over WANG in view of YI, as applied above, further in view of ZHANG et al. ("Background Subtraction Using an Adaptive Local Median Texture Feature in Illumination Changes Urban Traffic Scenes," 15 July 2020); and, alternatively, further in view of Li et al. (US 2015/0278616 A1).
- Claims 8 and 16: rejected over WANG in view of YI, as applied above, further in view of KAINO (DE 10 2019 104 113 A1), with SEARCH machine translation.
- Claims 9, 12, 14: rejected over WANG in view of YI, further in view of Aoba et al. (US 2019/0012790 A1).
- Claim 10: rejected over WANG in view of YI, further in view of Aoba et al., as applied to claims 9, 12, 14 above, further in view of TAKAHITO (DE 112015000723 T), with SEARCH machine translation.
- Claim 11: rejected over WANG in view of YI, further in view of Aoba et al., as applied to claims 9, 12, 14 above, further in view of Alsallakh et al. (US 2019/0034557 A1).
- Claim 13: rejected over WANG in view of YI, further in view of Aoba et al., as applied to claims 9, 12, 14, further in view of JIANG et al. (CN 111402336 A; published July 10, 2020), with SEARCH machine translation.
- Claims 1, 2, 3, 4, 6 and 15, 17, 18, 20: rejected under 35 U.S.C. 103 over IDS-cited Stam et al. (US 2004/0143380 A1) in view of El-Khamy et al. (US 2019/0057507 A1).
- Claims 5 and 19: rejected over Stam in view of El-Khamy, as applied above, further in view of YUREVICH.
- Claim 7: rejected over Stam in view of El-Khamy, as applied above, further in view of Li et al.
- Claims 8 and 16: rejected over Stam in view of El-Khamy, as applied above, further in view of KAINO.
- Claims 9, 12, 14: rejected over Stein et al. (US 2007/0221822 A1) in view of Ge et al. (US 2021/0027098 A1).
- Claim 10: rejected over Stein in view of Ge, as applied to claims 9, 12, 14, further in view of Aoba et al., further in view of TAKAHITO.
- Claim 11: rejected over Stein in view of Ge, as applied to claims 9, 12, 14, further in view of Epstein (US 8,538,175 B1) in view of Alsallakh et al.
- Claim 13: rejected over Stein in view of Ge, as applied to claims 9, 12, 14, further in view of JIANG et al.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/5/2025 has been entered. Claims 1-20 are pending.

[Image: media_image1.png]

Priority

[Image: media_image2.png]

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c), as follows: The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).

The disclosure of the prior-filed application, Application No. 62/885,774 (filed 08/12/2019), fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. Claims 5, 7, 8, 10, 11, 13, 16 are not given the benefit of Application No. 62/885,774 and are thus given the 35 USC 102(a) date of 8/12/2020 of the continuation of 16/991,242 (filed 08/12/2020; now Patent 11,613,201):
- Re claims 5 & 10: the claimed "stationary" of claims 5 and 10 is not in Application No. 62/885,774.
- Re claim 7: the claimed "dim region masks" of claim 7 is not in Application No. 62/885,774, which instead discloses "a binary mask" (1st page, [0004], penultimate sentence).
- Re claims 8 & 16: the claimed "recursively weighting" of claims 8 and 16 is not in Application No. 62/885,774.
- Re claim 11: the claimed "select the one or more class labels" and "selected one or more class labels" of claim 11 are not in Application No. 62/885,774.
- Re claim 13: the claimed "consistent assignments" of claim 13 is not in Application No. 62/885,774.

Thus claims 5, 7, 8, 10, 11, 13, 16 are not given the benefit of Application No. 62/885,774 and are given the 35 USC 102(a) date of 8/12/2020.

[Image: media_image3.png]

Response to Arguments

Rejections based on 35 USC 101: Applicant's arguments filed 12/05/2025, page 1, have been fully considered but are not persuasive (in part). Amended (12/05/2025) claims 9-14 are rejected under 35 USC 101, as detailed below; applicant's remarks regarding claims 9-14 are therefore not persuasive regarding 35 USC 101. However, amended (12/05/2025) claims 1-8 and 15-20 are not rejected under 35 USC 101, reflecting the disclosure's ([0005], reproduced below) improvement to the functioning of a computer with automated lighting. Applicant's remarks regarding claims 1-8 and 15-20 are thus persuasive regarding 35 USC 101.

Rejections based on 35 USC 102: Applicant's arguments (see remarks, pages 1-2, filed 12/05/2025) with respect to the rejection of claims 1, 3, 4, 6, 8, 15, 17, 18 and 20 under 35 USC 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn.
However, upon further consideration, a new ground of rejection is made: Claims 1, 3, 4, 6 and 15, 17, 18, 20 are rejected under 35 U.S.C. 103 as being unpatentable over WANG (WO 2020/087352 A1; filed 31 Oct 2018), with annotated version thereof, in view of YI (CN 109948474 A), with SEARCH machine translation, wherein YI teaches the claimed segmentation mask confidence (or "posterior probability" "space confidence", pg. 7, 5th txt blk, wherein "the posterior probability…may be understood as…the foreground mask", pg. 7, 6th txt blk, "using image segmentation", pg. 7, 5th txt blk) and temporal filtering (or "prior probability" "filtering", pg. 7, 8th txt blk).

Applicant's arguments (see remarks, pages 2-3, filed 12/05/2025) with respect to the rejection of claims 1, 3, 4, 6, 8, 15, 17, 18 and 20 under 35 USC 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made: Claims 1, 2, 3, 4, 6 and 15, 17, 18, 20 are rejected under 35 U.S.C. 103 as being unpatentable over IDS-cited Stam et al. (US 2004/0143380 A1) in view of El-Khamy et al. (US 2019/0057507 A1), wherein Stam teaches the claimed "temporal filtering" (resulting in a "prior steps" "filtered image", [0081] 2nd S; fig. 10 prior steps map to fig. 5:502, "Extract Features of Light Sources From Image"); and wherein El-Khamy teaches the difference of claim 1 of ("confidence in the") segmentation mask ("502", [0070]; fig. 1A; fig. 1B:2400, "Calculate segmentation mask at each resolution") (confidences).

Applicant's arguments (see remarks, page 4, filed 12/05/2025) with respect to the rejection of claims 9, 13 and 14 under 35 USC 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made: Claims 9, 12, 14 are rejected under 35 U.S.C. 103 as being unpatentable over Stein et al. (US 2007/0221822 A1) in view of Ge et al. (US 2021/0027098 A1), wherein Ge teaches in the motivation statement the claimed "segmentation mask confidence": since Stein teaches training a classifier, one of skill in the art of training classifiers can make Stein's "confidence of the classification (step 313)" ([0043], 2nd-to-last S) be as Ge's "object score (e.g., a likelihood and/or confidence score) for each object" ([0031], last S) for a segmentation proposal mask (i.e., be as Ge's segmentation proposal mask confidence score), predictably recognizing that the change "improves the training" (Ge [0064], 3rd S) of classifiers.

Rejections based on 35 USC 103: Applicant's arguments (see remarks, page 5, filed 12/05/2025) with respect to the rejection of claims 2 and 16 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, new grounds of rejection are made: Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over WANG in view of YI, as applied to claims 1, 3, 4, 6 and 15, 17, 18, 20 above, further in view of Weinzaepfel et al. (US 2020/0364509 A1); and claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over WANG in view of YI, as applied above, further in view of KAINO (DE 10 2019 104 113 A1), with SEARCH machine translation.

Applicant's arguments (see remarks, page 5, filed 12/05/2025) with respect to the rejection of claims 5 and 19 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn.
However, upon further consideration, new grounds of rejection are made: Claims 5 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over WANG (WO 2020/087352 A1; filed 31 Oct 2018), with annotated version thereof, in view of YI (CN 109948474 A), with SEARCH machine translation, as applied to claims 1, 3, 4, 6 and 15, 17, 18, 20 above, further in view of YUREVICH (RU 2 676 028 C1), with SEARCH machine translation; and claims 5 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over IDS-cited Stam et al. (US 2004/0143380 A1) in view of El-Khamy et al. (US 2019/0057507 A1), as applied to claims 1, 2, 3, 4, 6 and 15, 17, 18, 20 above, further in view of YUREVICH.

Applicant's arguments (see remarks, pages 5-6, filed 12/05/2025) with respect to the rejection of claim 7 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, new grounds of rejection are made: Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over WANG in view of YI, as applied above, further in view of ZHANG et al. ("Background Subtraction Using an Adaptive Local Median Texture Feature in Illumination Changes Urban Traffic Scenes," 15 July 2020); claim 7 is rejected over WANG in view of YI, as applied above, further in view of Li et al. (US 2015/0278616 A1); and claim 7 is rejected over IDS-cited Stam in view of El-Khamy, as applied to claims 1, 2, 3, 4, 6 and 15, 17, 18, 20 above, further in view of Li et al.

Applicant's arguments (see remarks, page 6, filed 12/05/2025) with respect to the rejection of claims 9, 13, 14 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made: Claims 9, 12, 14 are rejected under 35 U.S.C. 103 as being unpatentable over WANG in view of YI, further in view of Aoba et al. (US 2019/0012790 A1).

Applicant's arguments (see remarks, pages 6-7, filed 12/05/2025) with respect to the rejection of claim 10 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made: Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over WANG in view of YI, further in view of Aoba et al., as applied to claims 9, 12, 14 above, further in view of TAKAHITO (DE 112015000723 T), with SEARCH machine translation.

Applicant's arguments (see remarks, page 7, filed 12/05/2025) with respect to the rejection of claim 11 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made: Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over WANG in view of YI, further in view of Aoba et al., as applied to claims 9, 12, 14 above, further in view of Alsallakh et al. (US 2019/0034557 A1).

Applicant's arguments (see remarks, page 7, filed 12/05/2025) with respect to the rejection of claim 12 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made: Claims 9, 12, 14 are rejected under 35 U.S.C. 103 as being unpatentable over WANG in view of YI, further in view of Aoba et al.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 9-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step zero: establish the broadest reasonable interpretation, as shown in the footnotes of this Office action.

Step 1: Claim 1 is a process; claim 9 is a machine; claim 15 is a machine.

Step 2A, prong 1: The claims recite an abstract idea and math: "9. A system comprising:… computing…, …based at least on image data…, one or more segmentation mask confidences… the one or more actors detected in one or more regions of the one or more images… determining a beam configuration based at least on the one or more segmentation mask confidences".
[Image: media_image4.png]

Step 2A, prong 2: This judicial exception is not integrated into a practical application because the additional elements (such as "circuits"; "neural networks"; "environment"; "class labels"; "pixels"; "actors"; "states"; "beam configuration"; "controlling a lighting system" in claim 9), combined with the abstract idea and math, are not: improving the neural-network computer (recognition) function ("An improvement in the functioning of a computer, or an improvement to other technology or technical field, as discussed in MPEP §§ 2106.04(d)(1) and 2106.05(a)"); or making an autonomous vehicle integral to the claim ("Implementing a judicial exception with, or using a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim, as discussed in MPEP § 2106.05(b)"), in view of applicant's disclosure, [0005]:

[Image: media_image5.png]

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because each additional element (such as "circuits"; "neural networks"; "environment"; "class labels"; "pixels"; "actors"; "states"; "beam configuration"; "controlling a lighting system" in claim 9), considered individually or with the mental process and math, adheres to conventional practices as indicated in applicant's specification's background, and, under 35 USC 112(a), needs no explanation to one of skill in the art of neural networks; MPEP 2106.05(d)I.2.

A factual determination is required to support a conclusion that an additional element (or combination of additional elements) ["neural network"] is well-understood, routine, conventional activity: "As such, an examiner should determine that an element (or combination of elements) is well-understood, routine, conventional activity only when the examiner can readily conclude, based on their expertise in the art, that the element is widely prevalent or in common use in the relevant industry. The analysis as to whether an element (or combination of elements) is widely prevalent or in common use is the same as the analysis under 35 U.S.C. 112(a) as to whether an element is so well-known that it need not be described in detail in the patent specification."

[Image: media_image6.png]

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 4, 6 and 15, 17, 18, 20 are rejected under 35 U.S.C. 103 as being unpatentable over WANG (WO 2020/087352 A1; filed 31 Oct 2018), with annotated version thereof, in view of YI (CN 109948474 A), with SEARCH machine translation.

MPEP 904.03, Conducting the Search [R-07.2022]: The best reference (WANG, WO 2020/087352 A1, filed 31 Oct 2018) should always be the one used in rejecting the claims (1, 3, 4, 5, 6, 7 and 15, 16, 17, 18, 19, 20).
Sometimes the best reference (WANG) will have a publication date (07 May 2020) less than a year prior to the application filing date (8/12/2019), hence it will be open to being overcome under 37 CFR 1.130 or 1.131. In such circumstances, if a second reference (IDS-cited Stam et al., US 2004/0143380 A1) exists which cannot be so overcome and which, though inferior, is an adequate basis for rejection, the claims (1, 2, 3, 4, 5, 6, 7 and 15, 16, 17, 18, 19, 20) should be additionally rejected thereon.

[Image: media_image7.png]

Re 1. (Currently Amended): WANG teaches A method comprising: determining (via "analyze", [0022] 4th S, resulting in a "recognized" "object", pg. 6, [0022]), using one or more neural networks (NNs) ("CNN", pg. 6 [0022]) and based at least on image data representative of one or more ("series", WANG [0036] last S) images of an environment, one or more indicators (i.e., "environmental information", [0022] 4th S, resulting in a "recognized" "object", pg. 6, [0022]) of segmentation mask ("filtering", pg. 13, [0035] last S: WANG teaches "mask filtering" but not modified by "segmentation" and thus does not teach "segmentation mask"; "high", pg. 24, [0063] last S) confidences (WANG only adjectivally teaches the singular "high confidence level", pg. 24 [0063], and thus teaches the apposition range of "confidences") of one or more ("moving", [0036] last S) states of one or more (detected) actors (via a "proximity sensor", [0041] 2nd S) detected ("with high confidence level", pg. 24 [0063], last S) in the one or more (bounding) images; determining, based at least on temporally ("mask", pg. 13 [0035]) filtering (WANG teaches "mask filtering" but does not teach the modifier "temporally") the one or more segmentation mask confidences over (or "filter…in", pg. 13 [0035]) the one or more images, one or more pixels (WANG does not teach "pixel") of the one or more images corresponding to the one or more ("moving", [0036] last S) states; determining a beam configuration based at least on the determined one or more pixels corresponding to the (environmental) one or more (moving) states (comprised by "coarse movement information", [0063] last S) of the one or more (car) actors; and controlling the lighting system based at least on the beam configuration (via a "controller 104…lighting adjustment…according to the lighting adjustment configuration", [0022] last S).

WANG does not teach the difference of claim 1 of: segmentation mask (confidences) … determining, based at least on temporally (filtering) … the one or more segmentation mask (confidences) …, one or more pixels … the determined one or more pixels corresponding to (the one or more states).

YI teaches the difference of claim 1: segmentation mask (or "segmentation" "masking" "solving space confidence map", pg. 7, 5th txt blk, shown below as "prob_o") (confidences):

[Image: media_image8.png]

determining, based at least on temporally (or blocking a prior foreground/background pixel via "prior probability" "filtering", pg. 7, 9th txt blk) (filtering) … the one or more segmentation mask (or "segmentation" "masking" "solving space confidence map", pg. 7, 5th txt blk, shown as "prob_o") (confidences) …, one or more pixels (via said prob_o: "solving…the pixel belonging to the foreground mask", pg. 7, 6th txt blk: said "forlikehood") … the determined one or more pixels (via said prob_o, id.) corresponding to (via "motion detection…comparing the new pixel and the background model", pg. 6, 6th txt blk) (the one or more states).

Since WANG teaches tracking ("image"-"tracked"-"target object", [0036] penult. S), one of skill in the art of tracking can make WANG's be as YI's, predictably recognizing the change as "improved" "target tracking" (YI, pg. 3, 12th txt blk):

[Image: media_image9.png]

Re 3. (Currently Amended): WANG of the combination of WANG, YI teaches The method of claim 1, wherein the determining the beam configuration includes one or more of: adjusting the beam configuration to reduce (via "decreasing", [0023] 2nd S) illumination for one or more first portions of the one or more images; adjusting the beam configuration to increase (via "increasing", [0023] 2nd S) illumination for one or more second portions of the one or more images; or adjusting the beam configuration to selectively pivot one or more beam lights using one or more motors based at least on the one or more pixels.

Re 4. (Previously Amended): WANG of the combination of WANG, YI teaches The method of claim 1, wherein the determining the beam configuration is based at least on generating one or more two-dimensional mappings (or a "stereo image" "map", [0039] 2nd S) from one or more ("sensing", [0070]) locations of one or more ("combination of", [0021] 1st S) sensors used to generate the (environmental) image data to (via said sensor-fusion "match", [0019] last S) one or more beam locations (via "laser" "distance", [0019] 3rd S) corresponding to the (adjusting) beam configuration.

Re 6.
(Currently Amended): WANG of the combination of WANG, YI teaches The method of claim 1, wherein the one or more ("moving", [0036] last S) states include an active state for at least one actor of the one or more actors and the beam configuration maintains or increases ("/decreasing", WANG [0023] 2nd S) illumination for the one or more pixels (via said prob_o: "solving…the pixel belonging to the foreground mask", YI pg. 7, 6th txt blk: said "forlikehood") based at least on the one or more pixels corresponding to (via "motion detection…comparing the new pixel and the background model", YI pg. 6, 6th txt blk) the active ("moving", [0036] last S) state for the at least one actor.

Claim 15 is rejected like claim 1. Re 15. (Currently Amended): WANG of the combination of WANG, YI teaches At least one processor (fig. 1:104: "Controller") comprising: one or more circuits (fig. 1:110: "Communication Circuit") to control a lighting system based at least on a beam configuration (via fig. 1:106: "Lighting system") of the lighting system, the beam configuration being determined based at least on: one or more (environmental-image) indicators (i.e., "environmental information", [0022] 4th S, resulting in a "recognized" "object", pg. 6, [0022]) of segmentation mask confidences of one or more (moving) states of one or more (car) actors detected ("with high confidence level", pg. 24 [0063], last S) in one or more images, the one or more (boxed-image) indicators determined based at least on one or more ("object recognition", [0022] 5th S) neural networks (NNs) ("CNN", [0022]) processing (indicative) image (information) data representative of the one or more (indicative) images of an (information) environment; and one or more pixels of the one or more ("series", WANG [0036] last S) images corresponding to the one or more states, the one or more pixels determined based at least on temporally filtering the one or more segmentation mask confidences over (or "filter…in", pg. 13 [0035]) the one or more ("series", WANG [0036] last S) images.

Claim 17 is rejected like claim 3. Re 17. (Currently Amended): WANG of the combination of WANG, YI teaches The at least one processor of claim 15, wherein the determining the beam configuration includes one or more of: adjusting the beam configuration to reduce illumination for one or more first portions of the one or more images; adjusting the beam configuration to increase illumination for one or more second portions of the one or more images; or adjusting the beam configuration to selectively pivot one or more beam lights using one or more motors based at least on the one or more pixels.

Claim 18 is rejected like claim 4. Re 18. (Currently Amended): WANG of the combination of WANG, YI teaches The at least one processor of claim 15, wherein the beam configuration is determined, at least, by generating one or more two-dimensional mappings from one or more locations of one or more sensors used to generate the image data to one or more beam locations corresponding to the beam configuration.

Claim 20 is rejected like claim 14. Re 20. (Previously Presented): WANG of the combination of WANG, YI teaches The at least one processor of claim 15, wherein the processor is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing light transport simulation; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources.

Claim 2 is rejected under 35 U.S.C.
103 as being unpatentable over WANG (WO 2020/087352 A1: filed 31 Oct 2018) with annotated version thereof in view of YI (CN 109948474 A) with SEARCH machine translation as applied in claims 1,3,4,6 and 15,17,18,20 above further in view of Weinzaepfel et al. (US 2020/0364509 A1): [media_image10.png] Re 2. (Currently Amended), WANG of the combination of WANG,YI teaches The method of claim 1, wherein the segmentation mask confidences (WANG of the combination of WANG,YI teaches singular “high confidence level”, pg. 24 [0063], and thus teaches the apposition range of “confidences”) are of one or more class labels (via “segmentation” “masking” “solving space confidence map”, YI: pg. 7, 5th txt blk, shown below as “prob_o”) representing that pixels (via said prob_o: “solving…the pixel belonging to the foreground mask”, YI: pg. 7, 6th txt blk: said “forlikehood”) of the (“series”, WANG [0036] last S) image depict the one or more actors (via a “proximity sensor” [0041] 2nd S) in one or more inactive or active (“moving”, WANG [0036] last S) states. WANG of the combination of WANG,YI does not teach the difference of claim 2 of “class”. Weinzaepfel teaches the difference of claim 2 of class (“label o, i.e., the identifier of the detected object-of-interest and a confidence score” [0044] 2nd S). Since WANG of the combination of WANG,YI teaches detection, one of skill in the art of detection can make WANG’s of the combination of WANG,YI be as Weinzaepfel’s predictably recognizing the change “enabling improved detection and matching of objects-of-interest at test time with novel viewpoints”, Weinzaepfel [0093]. Claim(s) 5 and 19 is/are rejected under 35 U.S.C.
103 as being unpatentable over WANG (WO 2020/087352 A1: 31 Oct 2018) with annotated version thereof in view of YI (CN 109948474 A) with SEARCH machine translation as applied in claims 1,3,4,6 and 15,17,18,20 above further in view of YUREVICH (RU 2 676 028 C1) with SEARCH machine translation: [media_image11.png] Re 5. (Currently Amended), WANG of the combination of WANG,YI teaches The method of claim 1, wherein the one or more (“moving” [0036] last S) states include an inactive stationary (“moving” [0036] last S) state for at least one (“moving” [0036] last S) actor (via a “proximity sensor” [0041] 2nd S) of the one or more (“moving” [0036] last S) actors and the beam configuration maintains or increases (via “increasing” [0023] 2nd S) illumination for the one or more pixels based at least on (“the command”, WANG [0023] 2nd S) the one or more pixels corresponding to (via a form-of-“be”-word) the inactive stationary (“moving” [0036] last S) state for the at least one (“moving” [0036] last S) actor (via a “proximity sensor” [0041] 2nd S). WANG of the combination of WANG,YI does not teach the difference of claim 5 of: “inactive stationary … inactive stationary”. YUREVICH teaches the difference of claim 5: inactive stationary (or “pixel”-“sense”-“inactive”-“stationary” via “The problem of detecting abandoned stationary objects is…inactive in the sense of changing the brightness of pixels over time”, pg. 2, 3rd txt blk, “subject to the following conditions: - the object is not detected”, pg. 4, 5th txt blk) (state) … inactive stationary (“fixed object…density…is higher than the specified”, pg. 4, 5th txt blk) (state).
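For illustration only (not part of any cited reference, the claims, or this rejection), the “inactive in the sense of changing the brightness of pixels over time” condition quoted from YUREVICH can be sketched as a per-pixel test: a pixel belongs to an inactive stationary object if it differs from a background model yet its brightness barely varies across a stack of frames. All names and thresholds below are hypothetical:

```python
def inactive_stationary_pixels(frames, background, diff_thresh=30, var_thresh=5):
    """Flag pixels that differ from a background model (an object is present)
    yet are inactive in the sense of changing brightness over time: the
    brightness range across the frame stack stays below var_thresh."""
    h, w = len(background), len(background[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            series = [f[y][x] for f in frames]           # brightness over time
            foreground = abs(frames[-1][y][x] - background[y][x]) > diff_thresh
            steady = (max(series) - min(series)) < var_thresh
            mask[y][x] = foreground and steady
    return mask

# Toy 2x3 scene: dark background, one parked bright object that never moves.
bg = [[0, 0, 0], [0, 0, 0]]
frame = [[0, 100, 100], [0, 100, 100]]
mask = inactive_stationary_pixels([frame, frame, frame], bg)
```

A moving object would fail the `steady` test, so only left or parked objects survive, which is the distinction the claim 5 difference turns on.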
Since WANG of the combination of WANG,YI teaches a moving object, one of ordinary skill in the art of moving objects can make WANG’s of the combination of WANG,YI be as YUREVICH’s predictably recognizing the change “to improve the quality of detection of left objects in the video stream by reducing the number of false positives and ensuring the reliability of the analysis results”, YUREVICH, pg. 2, 4th txt blk. Claim 19 is rejected like claim 5: Re 19. (Currently Amended), WANG of the combination of WANG,YI teaches The at least one processor of claim 15, wherein the beam configuration is determined, at least, by determining the one or more (“moving” [0036] last S) states as corresponding to one or more inactive (“adjustment”, pg. 20: [0050]) actors and the beam configuration maintains or increases illumination (or “light intensity”, pg. 20: [0050]) for the one or more pixels based at least on the one or more (“moving” [0036] last S) states corresponding to the one or more inactive (“adjustment”, pg. 20: [0050]) actors. WANG of the combination of WANG,YI does not teach the difference of claim 19 of: “inactive … inactive”. Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over WANG (WO 2020/087352 A1: 31 Oct 2018) with annotated version thereof in view of YI (CN 109948474 A) with SEARCH machine translation as applied in claims 1,3,4,6 and 15,17,18,20 above further in view of ZHANG et al. (Background Subtraction Using an Adaptive Local Median Texture Feature in Illumination Changes Urban Traffic Scenes: 15 July 2020): Re claim 7: MPEP 904.03 Conducting the Search [R-07.2022] The best reference should always be the one used in rejecting the claims (claim 7). Sometimes the best reference (ZHANG et al.
(Background Subtraction Using an Adaptive Local Median Texture Feature in Illumination Changes Urban Traffic Scenes)) will have a publication date (15 July 2020) less than a year prior to the application filing date (8/12/2020), hence it will be open to being overcome under 37 CFR 1.130 or 1.131. In such circumstances, if a second reference (Li et al. (US 2015/0278616 A1)) exists which cannot be so overcome and which, though inferior, is an adequate basis for rejection, the claims (claim 7) should be additionally rejected thereon. [media_image12.png] Re 7. (Currently Amended), WANG of the combination of WANG,YI teaches The method of claim 1, comprising: generating, using the one or more segmentation mask[[s]] confidences, one or more dim region masks (maps to “mask filtering” [0035] last S) indicating the one or more pixels (via said prob_o: “solving…the pixel belonging to the foreground mask”, YI: pg. 7, 6th txt blk: said “forlikehood”) for adjustment of illumination, wherein the (adjusting) beam configuration is determined using the one or more dim region masks (maps to “mask filtering” [0035] last S). Wang does not teach the difference of claim 7 of: --dim region (masks) … dim region (masks)--. Zhang teaches the difference of claim 7: 7. (Currently Amended) The method of claim 1, comprising: generating, using the one or more segmentation mask[[s]] confidences, one or more (“foreground”, pg. 130372, section B. FOREGROUND DETECTION, 1st S) dim region masks (fig. 7: description thereof: “night” “detection masks” and fig. 8: description thereof: “night video” “detection masks”, pg. 130374) indicating the one or more pixels for adjustment (“to adapt to scenes that change after segmenting the foreground pixels” as binary 1, pg. 130372, C. BACKGROUND UPDATES, 1st S) of illumination (given that video comprises “the turning on/off of high or low beam lights”, pg.
130375, lcol, penult para, 2nd S), wherein the (high-low) beam configuration is determined (or classified as 1 or 0) using the one or more dim (night) region (video) masks. Since Wang suggests using a mask via “mask filtering”, [0035] last S, one of skill in the art of mask filtering can make Wang’s be as Zhang’s predictably recognizing the change detecting vehicles “completely and precisely” while resisting more noise as compared to others, Zhang, pg. 130375, lcol, 1st full S: [media_image13.png] Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over WANG (WO 2020/087352 A1: 31 Oct 2018) with annotated version thereof in view of YI (CN 109948474 A) with SEARCH machine translation as applied in claims 1,3,4,6 and 15,17,18,20 above further in view of Li et al. (US 2015/0278616 A1): Re claim 7: MPEP 904.03 Conducting the Search [R-07.2022] The best reference should always be the one used in rejecting the claims (claim 7). Sometimes the best reference (ZHANG et al. (Background Subtraction Using an Adaptive Local Median Texture Feature in Illumination Changes Urban Traffic Scenes)) will have a publication date (15 July 2020) less than a year prior to the application filing date (8/12/2020), hence it will be open to being overcome under 37 CFR 1.130 or 1.131. In such circumstances, if a second reference (Li et al. (US 2015/0278616 A1)) exists which cannot be so overcome and which, though inferior, is an adequate basis for rejection, the claims (claim 7) should be additionally rejected thereon. [media_image14.png] Re 7.
(Currently Amended), WANG of the combination of WANG,YI teaches The method of claim 1, comprising: generating, using the one or more segmentation mask[[s]] confidences, one or more dim region masks (maps to “mask filtering” [0035] last S) indicating the one or more (bounding) pixels (via said prob_o: “solving…the pixel belonging to the foreground mask”, YI: pg. 7, 6th txt blk: said “forlikehood”) for adjustment of illumination, wherein the (adjusting) beam configuration is determined using the one or more dim region masks (maps to “mask filtering” [0035] last S). Wang does not teach the difference of claim 7 of: --dim region (masks) … dim region (masks)--. Li teaches the difference of claim 7: Re 7. (Currently Amended), The method of claim 1, comprising: generating, using the one or more segmentation mask[[s]] confidences, one or more (different-from-the-“background”-“scene” “headlights and/or shadow” [0035] 4th S) dim region masks (or a “shadow”-“foreground”-“mask” [0036] 2nd S: FIG. 5: S516: “GENERATE FIRST MASK”) indicating the one or more (background shadow) pixels for (an “updated” [0052] 2nd S) adjustment (figs. 3,5: S306,S506: “UPDATE CURRENT BACKGROUND MODEL USING INCOMING FRAME”) of illumination (as shown in fig. 1), wherein the beam configuration (fig. 1: cars with beam configurations) is determined (via said headlight & shadow mask) using the one or more dim region masks (fig. 10B: mask of a headlight). Since Wang suggests using a mask via “mask filtering”, [0035] last S, one of skill in the art of mask filtering can make Wang’s be as Li’s predictably recognizing the change detecting vehicles “with high accuracy”, Li [0051] 2nd S: [media_image15.png] Claim(s) 8 and 16 is/are rejected under 35 U.S.C.
103 as being unpatentable over WANG (WO 2020/087352 A1: filed 31 Oct 2018) with annotated version thereof in view of YI (CN 109948474 A) with SEARCH machine translation as applied in claims 1,3,4,6 and 15,17,18,20 above further in view of KAINO (DE 10 2019 104 113 A1) with SEARCH machine translation: [media_image16.png] Claim 8 is rejected like claim 1: Re 8. (Currently Amended), WANG of the combination of WANG,YI teaches The method of claim 1, wherein the filtering (WANG teaches “mask filtering” but does not teach the (adjective) modifier “temporally”; however, WANG of the combination of WANG,YI teaches “temporally”) includes recursively weighting the one or more segmentation mask (“filtering”, WANG: pg. 13: [0035] last S: WANG teaches “mask filtering” but not modified by “segmentation” and thus does not teach “segmentation mask”; however, WANG of the combination of WANG,YI teaches “segmentation mask”; “high”, pg. 24: [0063] last S) confidences (WANG only adjectivally teaches singular “high confidence level”, pg. 24 [0063], and thus teaches the apposition range of “confidences”) over (or “filter…in”, pg. 13 [0035]) the one or more images. WANG of the combination of WANG,YI does not teach the difference of claim 8 of “recursively weighting”. KAINO teaches the difference of claim 8: recursively weighting (or “recursively” “weighting”, pg. 2, last txt blk). Since WANG of the combination of WANG,YI teaches recognition of a traffic light, one of skill in the art of recognition can make WANG’s of the combination of WANG,YI be as KAINO’s predictably recognizing the change “to improve the recognition rate”, KAINO, pg. 17, 9th txt blk. Claim 16 is rejected like claim 8: Re 16. (Currently Amended), WANG of the combination of WANG,YI,KAINO teaches The at least one processor of claim 15, wherein the filtering includes recursively weighting the one or more segmentation mask confidences over the one or more images.
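For illustration only (not drawn from WANG, YI, or KAINO, and not any party's implementation), the “recursively weighting the one or more segmentation mask confidences over the one or more images” recited in claims 8 and 16 can be sketched as an exponential filter, in which each output depends on the newest confidence map and on the previous filtered map. The function name and the smoothing factor are hypothetical:

```python
def recursive_confidence(prev, current, alpha=0.3):
    """Recursively weight segmentation mask confidences over a sequence of
    images: blend the newest per-pixel confidence map into the previous
    filtered map, so contributions from past frames decay geometrically."""
    return [[alpha * c + (1.0 - alpha) * p for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, current)]

# Three per-frame confidence maps for a 1x2 image region (illustrative values).
maps = [[[0.9, 0.9]], [[0.8, 0.8]], [[1.0, 1.0]]]
filtered = maps[0]
for m in maps[1:]:
    filtered = recursive_confidence(filtered, m)
```

Because each filtered value is defined in terms of the previous filtered value, the weighting is recursive rather than a fixed-window average, which is the distinction the claim 8 difference turns on.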
Claim(s) 9,12,14 is/are rejected under 35 U.S.C. 103 as being unpatentable over WANG (WO 2020/087352 A1: filed 31 Oct 2018) with annotated version thereof in view of YI (CN 109948474 A) with SEARCH machine translation further in view of Aoba et al. (US 2019/0012790 A1): MPEP 904.03 Conducting the Search [R-07.2022] The best reference (WANG (WO 2020/087352 A1: filed 31 Oct 2018)) should always be the one used in rejecting the claims (1-20). Sometimes the best reference (WANG (WO 2020/087352 A1: filed 31 Oct 2018)) will have a publication date (07 May 2020) less than a year prior to the application filing date (8/12/2019), hence it will be open to being overcome under 37 CFR 1.130 or 1.131. In such circumstances, if a second reference (Stein et al. (US 2007/0221822 A1)) exists which cannot be so overcome and which, though inferior (or used as a secondary reference under 35 USC 103), is an adequate basis for rejection, the claims (9,10,11,12,13,14) should be additionally rejected (via the above 35 USC 102(a)(1) rejection of Stein) thereon. [media_image17.png] Claim 9 is rejected like claim 1: Re 9. (Currently Amended), WANG teaches A system comprising: one or more circuits (fig.
1:110: “Communication Circuit”) to perform operations including: computing, using one or more (“convolutional” [0022] 5th S) neural networks (NNs) (or “CNN”) and based at least on image data (“from the at least one sensor (e.g., the image)” [0034] penult S) representative of one or more images of an environment, one or more segmentation mask confidences (via “high confidence level” [0063] last S) of one or more class labels representing (via said “the image” [0034] penult S) that pixels of an image (via said “the image” [0034] penult S) of the one or more images depict one or more (“moving” [0036] last S) actors (tracked-target-object-actor) in inactive or (being) active states in one or more images; determining a beam (adjusting) configuration (via said “associated with the object according to the lighting adjustment configuration” [0022], last S) based at least on the one or more segmentation mask confidences (via “high confidence level” [0063] last S); and controlling a lighting system based at least on the beam configuration (via said “associated with the object according to the lighting adjustment configuration” [0022], last S). WANG does not teach the difference of claim 9 of: A) segmentation mask (confidences) … B) one or more class labels…(representing) that pixels … C) segmentation mask (confidences). YI teaches differences A) & C) of the difference of claim 9 in the rejection of claim 1. Thus WANG of the combination of WANG,YI does not teach the remaining difference of claim 9 of: B) one or more class labels…(representing) that pixels. Aoba teaches the remaining difference of claim 9: B) one or more class labels (or “The class label 520” [0042] 8th S: fig. 4B:520)…(representing) that (automobile) pixels (or “a pixel whose class is ‘sky’…and a pixel whose class is ‘non-sky’” [0042] 8th S: fig. 4B: a tree or “automobile” [0040] 3rd S).
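For illustration only (not taken from Aoba's disclosure; the class names below are hypothetical stand-ins for Aoba's “sky”/“non-sky” example), per-pixel class labels of the kind the claim 9 difference requires can be sketched as an argmax over per-class scores, yielding a label image plus the winning confidence at each pixel:

```python
# Illustrative class names only; Aoba's example instead uses "sky"/"non-sky".
CLASSES = ["background", "car", "pedestrian"]

def label_pixels(scores):
    """Reduce per-pixel, per-class scores to a class-label image plus the
    winning confidence, so each pixel carries a label such as 'car' together
    with a score that can serve as a per-pixel confidence."""
    labels, confs = [], []
    for row in scores:                      # rows of per-pixel score lists
        lrow, crow = [], []
        for pixel_scores in row:
            best = max(range(len(pixel_scores)), key=pixel_scores.__getitem__)
            lrow.append(CLASSES[best])
            crow.append(pixel_scores[best])
        labels.append(lrow)
        confs.append(crow)
    return labels, confs

# One 1x2 image: first pixel confidently a car, second pixel background.
scores = [[[0.1, 0.9, 0.0], [0.7, 0.2, 0.1]]]
labels, confs = label_pixels(scores)
```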
[media_image9.png] Since WANG of the combination (illustrated above: FIG. 4 & Figure 1) of WANG,YI teaches classification, one of skill in classification can make WANG’s of the combination of WANG,YI be as Aoba’s predictably recognizing the change “to improve the accuracy of processing using a classification result.”, Aoba [0023]: Fig. 4 & FIG. 2G: [media_image18.png] Claim 12 is rejected like claim 10, below: Re 12. (Currently Amended), WANG of the combination of WANG,YI,Aoba teaches The system of claim 9, wherein the one or more NNs output (via “a command from the controller 104”, pg. 7 [0023]) data (symbolically) representative of the one or more segmentation mask confidences (via “high confidence level”, WANG: [0063] last S, via said--YI teaches differences A) & C) of the difference of claim 9 in the rejection of claim 1--) and a class label (via said making WANG’s CNN classification of the combination of WANG,YI be as Aoba’s--) of the one or more class labels represents that the pixels (via the rejection of claim 9 of making WANG’s of the combination of WANG,YI be as Aoba’s predictably recognizing the change “to improve the accuracy of processing using a classification result.”, Aoba [0023]) of the image (“from the at least one sensor (e.g., the image)”, WANG: [0034] penult S) depict the one or more (“moving” [0036] last S) actors in an active state in which the one or more actors are in motion. Re 14.
(Original), WANG of the combination of WANG,YI,Aoba teaches The system of claim 9, wherein the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine (with “an automatic driving system”, Wang: [0022] 2nd to last S); a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing light transport simulation; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources. Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over WANG (WO 2020/087352 A1: filed 31 Oct 2018) with annotated version thereof in view of YI (CN 109948474 A) with SEARCH machine translation further in view of Aoba et al. (US 2019/0012790 A1) as applied to claims 9,12,14 above further in view of TAKAHITO (DE 112015000723 T) with SEARCH machine translation: [media_image19.png] Re 10. (Currently Amended), WANG of the combination of WANG,YI,Aoba teaches The system of claim 9, wherein the one or more (“convolutional” [0022] 5th S) NNs (or “the computing system (e.g., controller 104)”, pg. 13, 2nd S: fig. 1:104: “Controller”) output (via “a command from the controller 104”, pg. 7 [0023]) data (via “advanced driver-assistance” “information”, WANG: pg. 3 [0016]) representative (via “environmental information may include an image of an environment of the vehicle”, pg.
11 [0034] 2nd S) of the one or more segmentation mask confidences (via “high confidence level”, WANG: [0063] last S, via said--YI teaches differences A) & C), i.e., segmentation mask, of the difference of claim 9 in the rejection of claim 1--) and a class label (via said making WANG’s CNN classification of the combination of WANG,YI be as Aoba’s--) of the one or more class labels represents that the pixels (via “class label 520 represents a pixel whose class is “sky” by white and a pixel whose class is “non-sky” by black”, Aoba [0042] 8th S) of the image depict the one or more actors (via said making WANG’s CNN classification of the combination of WANG,YI be as Aoba’s) in an inactive state in which the one or more actors (tracked-target-object-actor) are stationary. WANG of the combination of WANG,YI,Aoba does not teach the difference of claim 10 of “in an inactive state in which … stationary” of: --(a class label)… (represents that) (the pixels of the image depict the one or more actors) … in an inactive state in which … stationary--. TAKAHITO teaches the difference of claim 10: --(output {“a signal”, pg. 5, 6th txt blk, via fig. 1:30: “VEHICLE SPEED SENSOR”} data)…in an inactive (“located” “pedestrian standing”, pg. 9, last text blk) state in which … stationary (or the “standing” via fig.
1:50: “VEHICLE FACING DETECTION PART”)--: [media_image20.png] [media_image21.png] Since WANG of the combination of WANG,YI,Aoba teaches detecting pedestrians, one of skill in the art of pedestrian detection can make WANG’s of the combination of WANG,YI,Aoba be as TAKAHITO’s predictably recognizing the change “addresses the problems…that restricts the difficulty in recognizing an obstacle such as a pedestrian in front of an own vehicle by a driver of an approaching vehicle, and confirms a situation around the own vehicle by one Driver of the own vehicle relieved, while the own vehicle stops”, TAKAHITO, pg. 2, 10th txt blk: [media_image22.png] Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over WANG (WO 2020/087352 A1: filed 31 Oct 2018) with annotated version thereof in view of YI (CN 109948474 A) with SEARCH machine translation further in view of Aoba et al. (US 2019/0012790 A1) as applied to claims 9,12,14 above further in view of Alsallakh et al. (US 2019/0034557 A1): [media_image23.png] Claim 11 is rejected like claims 1 and 9: Re 11. (Currently Amended), WANG of the combination of WANG,YI,Aoba teaches The system of claim 9, wherein the beam configuration (via said “associated with the object according to the lighting adjustment configuration”, WANG: [0022], last S) includes temporally filtering (via “filter certain features in an image”, WANG: [0035] last S, via the rejection of claim 1 of making WANG’s tracking be as YI’s predictably recognizing the change as “improved” “target tracking”, YI, pg. 3, 12th txt blk) the one or more segmentation mask (“filtering”, WANG: [0035] last S) confidences (“with high confidence”, WANG: pg.
24 [0063], via the rejection of claim 9 of YI teaching differences A) & C) of the difference of claim 9 in the rejection of claim 1) over (or “filter…in”, WANG: pg. 13 [0035]) the one or more images to select the one or more class labels (via the rejection of claim 9 of said making WANG’s classification of the combination of WANG,YI be as Aoba’s predictably recognizing the change “to improve the accuracy of processing using a classification result.”, Aoba [0023]), and the beam configuration (via said “associated with the object according to the lighting adjustment configuration”, WANG: [0022], last S) corresponds to the selected one or more class labels (via the rejection of claim 9 of said making WANG’s classification of the combination of WANG,YI be as Aoba’s predictably recognizing the change “to improve the accuracy of processing using a classification result.”, Aoba [0023]). WANG of the combination of WANG,YI,Aoba does not teach the difference of claim 11 of: --to select (the one or more class labels)…corresponds to the selected (one or more class labels)--. Alsallakh teaches the difference of claim 11: --to select (via “class hierarchy viewer 210”-“selection”-“sample images”-“250” [0036] 9th S: fig. 3:250: bird-class-images) (the one or more class labels)…corresponds to the selected (via “corresponding to a user selection in the class hierarchy viewer 210” [0036] 9th S: fig.
5:406: “bird”-label) (one or more class labels): [media_image24.png] Since Aoba of the combination of WANG,YI,Aoba teaches class labels, one of skill in the art of labels (“bird” “mammal” “dog” “conveyance” “vehicle”) can make Aoba’s of the combination of WANG,YI,Aoba be as Alsallakh’s predictably recognizing the change “improves upon conventional analytics methods for CNN classifiers because it enables the user to better understand the training process, diagnose the separation power of the different feature detectors, and improve the architecture of the image classification model 30 accordingly to yield significant gain in accuracy.”, Alsallakh [0091]: [media_image25.png] Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over WANG (WO 2020/087352 A1: filed 31 Oct 2018) with annotated version thereof in view of YI (CN 109948474 A) with SEARCH machine translation further in view of Aoba et al. (US 2019/0012790 A1) as applied in claims 9,12,14 further in view of JIANG et al. (CN 111402336 A: Date Published 2020-07-10: July 10, 2020) with SEARCH machine translation: [media_image26.png] Re 13. (Currently Amended), WANG of the combination of WANG,YI,Aoba teaches The system of claim 9, wherein the determining the beam configuration (via said “associated with the object according to the lighting adjustment configuration” [0022], last S) is determined based at least on the one or more class labels (via the rejection of claim 9 of said making WANG’s classification of the combination of WANG,YI be as Aoba’s predictably recognizing the change “to improve the accuracy of processing using a classification result.”, Aoba [0023]) based at least on determining that the one or more segmentation mask confidences (“with high confidence”, WANG: pg.
24 [0063], via the rejection of claim 9 of YI teaching differences A) & C) of the difference of claim 9 in the rejection of claim 1) correspond to consistent assignments of the one or more class labels (via the rejection of claim 9 of said making WANG’s classification of the combination of WANG,YI be as Aoba’s predictably recognizing the change “to improve the accuracy of processing using a classification result.”, Aoba [0023]) over a threshold quantity (or “value”, WANG: pg. 16 [0041] last S or “percentage”, WANG: pg. 17 [0043] or “distance”, WANG: pg. 24 [0063]) of frames. WANG of the combination of WANG,YI,Aoba does not teach the difference of claim 13 of: --correspond to consistent assignments of…over…of frames--. JIANG teaches the difference of claim 13: correspond to consistent assignments (via “corresponding to…the…same…type”, pg. 3, 6th txt blk: fig. 2: “Movement point judgement”) of…over…of (“current…and previous”, pg. 3, 2nd txt blk) frames (“for feature matching”: fig. 2: “Input the current frame and the previous frame image.”): [media_image27.png] Since WANG of the combination of WANG,YI,Aoba teaches tracking, one of skill in the art of tracking can make WANG’s of the combination of WANG,YI,Aoba be as JIANG’s predictably recognizing the change “provides…tracking…successfully”, JIANG, pg. 4, 7th txt blk. Claim(s) 1,2,3,4,6 and 15,17,18,20 is/are rejected under 35 U.S.C. 103 as being unpatentable over IDS cited Stam et al. (US 2004/0143380 A1) in view of El-Khamy et al. (US 2019/0057507 A1): MPEP 904.03 Conducting the Search [R-07.2022] The best reference (WANG (WO 2020/087352 A1: filed 31 Oct 2018)) should always be the one used in rejecting the claims (1,3,4,5,6,7 and 15,16,17,18,19,20).
Sometimes the best reference (WANG (WO 2020/087352 A1: filed 31 Oct 2018)) will have a publication date (07 May 2020) less than a year prior to the application filing date (8/12/2019), hence it will be open to being overcome under 37 CFR 1.130 or 1.131. In such circumstances, if a second reference (IDS cited Stam et al.: US 2004/0143380 A1) exists which cannot be so overcome and which, though inferior, is an adequate basis for rejection, the claims (1,2,3,4,5,6,7 and 15,16,17,18,19,20) should be additionally rejected thereon: [media_image28.png] Re 1. Stam teaches A method (fig. 5) comprising: determining (clearly by pointing out via fig. 12:1208: “Analyze through Headlamp Classification Network”), using one or more neural networks ([0137] 1st S) (NNs) and based at least on image data representative of one or more images (or “image kernel” [0137] 1st S) of an environment (comprising “a rain drop” [0145] 1st S), one or more (“output” [0098] 2nd S) indicators of segmentation mask confidences (“used to variably control the rate of change of the controlled vehicle's exterior lights, with a higher confidence causing a more rapid change”, [0095] penult S) of one or more (“oncoming” [0098]/ “is applied” [0045] last S) states of one or more (“detected”-car “107”, [0033] 1st S) actors detected (“to classify detected light sources” [0099] 1st S) in the one or more images; determining, based at least on temporally filtering (resulting in a “prior steps”-“filtered image”, [0081] 2nd S: fig. 10: prior steps: maps to fig. 5:502: “Extract Features of Light Sources From Image”) the one or more segmentation mask confidences (“used to variably control the rate of change of the controlled vehicle's exterior lights, with a higher confidence causing a more rapid change”, [0095] penult S: fig.
12:1208: “Analyze through Headlamp Classification Network”) over the one or more images (“such that the kernel may be temporarily centered on every pixel within the image” [0137] 3rd S), one or more pixels (“is greater than each of its neighbors” [0076] 4th S: fig. 10:1002: “Is Pixel > Neighbors?”: expressing the action of the verb (determine) or its result (or any result after fig. 10:1002), product, material, etc (or anything after fig. 10:1002)) of the one or more (“corresponding” [0040] 2nd S) images corresponding to the one or more (“oncoming” state [0098]/ “is applied” state [0045] last S/ “be synthetic high dynamic range image” state [0040] 2nd S) states; determining (via a “switch decision” [0125] 2nd S such that “high beams are activated” [0121] 5th S: fig. 5:505: “Select and Set Desired Forward Lighting State”) the determined one or more pixels (“is greater than each of its neighbors” [0076] 4th S: fig. 10:1002: “Is Pixel > Neighbors?”: expressing the action of the verb (determine) or its result (or any result after fig. 10:1002), product, material, etc (or anything after fig. 10:1002)) corresponding to the one or more states (“the same regions of space” [0034] 4th S) of the one or more (car) actors; and controlling the lighting system (to turn ON via fig. 14:1403: an “ON STATE” configuration) based at least on the beam configuration (or the OFF configuration comprised by a “configured” “headlamp control system”, [0120] 1st S, “in the OFF STATE 1401” [0120], 2nd S). Stam does not teach the difference of claim 1 of “segmentation mask” (confidences). El-Khamy teaches the difference of claim 1 of: (“confidence in the”) segmentation mask (“502”, [0070]: fig. 1A: fig. 1B:2400: “Calculate segmentation mask at each resolution”) (confidences).
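For illustration only (not El-Khamy's disclosed method; the 2x2 block-mean pyramid and all names below are assumptions), the quoted step “Calculate segmentation mask at each resolution” can be sketched as thresholding a confidence map, halving its resolution, and repeating:

```python
def masks_at_resolutions(confidence, levels=3, thresh=0.5):
    """Compute a binary segmentation mask at each of several resolutions:
    threshold the confidence map, then halve it with 2x2 block means, and
    repeat (a generic stand-in for per-resolution mask calculation)."""
    masks, c = [], [row[:] for row in confidence]
    for _ in range(levels):
        masks.append([[v > thresh for v in row] for row in c])
        if len(c) < 2 or len(c[0]) < 2:
            break                            # cannot halve a 1-pixel axis
        c = [[(c[2*y][2*x] + c[2*y][2*x+1] + c[2*y+1][2*x] + c[2*y+1][2*x+1]) / 4.0
              for x in range(len(c[0]) // 2)]
             for y in range(len(c) // 2)]
    return masks

# 4x4 confidence map with one confident quadrant (illustrative values).
conf = [[0.8, 0.8, 0.0, 0.0],
        [0.8, 0.8, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0]]
pyramid = masks_at_resolutions(conf)
```

Each element of `pyramid` is a mask at a coarser resolution than the last, so a confident region survives at fine scales but is diluted away as the map shrinks.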
Since Stam teaches confidence, one of skill in the art of confidences can make Stam’s be as El-Khamy’s predictably recognizing the change “to improve detections in crowded scenes and of small objects”, El-Khamy [0047]: [media_image29.png] Re 2. (Currently Amended), Stam of the combination of Stam,El-Khamy teaches The method of claim 1, further comprising: wherein the segmentation mask confidences (“used to variably control the rate of change of the controlled vehicle's exterior lights, with a higher confidence causing a more rapid change”, [0095] penult S: fig. 12:1208: “Analyze through Headlamp Classification Network”) are of one or more class labels (or “‘car’” “label” or human-people-class label, El-Khamy: [0044], wherein “‘car’” is equal to class “where the classes may include, for example, humans, dogs, cats, cars, debris, furniture, and the like)”, El-Khamy: [0056] 4th S) representing that pixels (“in the detection instance”, El-Khamy: [0069] sentence before equations) of the image depict the one or more (detected-car) actors (or human-“people walking” (pedestrians), El-Khamy [0044] 1st S) in one or more inactive or active (“walking”, El-Khamy [0044] 1st S) states. Re 3. (Currently Amended), Stam of the combination of Stam,El-Khamy teaches The method of claim 1, wherein the determining (or setting or deciding) the beam configuration includes one or more of: adjusting (via “vehicle headlamp control” [0034] last S) the (high) beam configuration to reduce (via fig. 15: “Level” going down) illumination for one or more first (kernel-pixel) portions of the one or more images; adjusting the beam configuration to increase illumination for one or more second portions of the one or more images; or adjusting the beam configuration to selectively pivot one or more beam lights using one or more motors based at least on the one or more pixels. Re 4.
(Previously Presented), Stam of the combination of Stam, El-Khamy teaches The method of claim 1, wherein the determining (or clearly pointing-out) the (ON/OFF) beam configurations is based at least on generating one or more two-dimensional mappings (or classification “probability functions” [0094]) from one or more (pixel) locations of one or more sensors used to generate the image data to one or more beam locations (via “headlamps” [0033] of fig. 1:101 being in the front location) corresponding to the (high) beam configuration.
Re 6. (Currently Amended), Stam of the combination of Stam, El-Khamy teaches The method of claim 1, wherein the one or more (“oncoming” [0098]) states include an active (“oncoming”) state for at least one actor of the one or more (car) actors and the beam configuration (or the OFF configuration comprised by a “configured” “headlamp control system”, [0120] 1st S, “in the OFF STATE 1401” [0120], 2nd S) maintains or increases (via a “lamp” “TRANSITION STATE 1402” [0121]: fig. 14: fig. 12:1211: “Determine Appropriate Lighting Setting Based Upon Most Significant Light”) illumination for the one or more pixels (“is greater than each of its neighbors” [0076] 4th S: fig. 10:1002: “Is Pixel > Neighbors?”) based at least on the one or more pixels (“is greater than each of its neighbors” [0076] 4th S: fig. 10:1002: “Is Pixel > Neighbors?”) corresponding to the active (oncoming) state for the at least one (car) actor.
Claim 15 is rejected like claim 1: Re 15.
(Currently Amended), Stam of the combination of Stam, El-Khamy teaches At least one processor comprising: one or more (“board” [0035] 3rd S) circuits to control[[ling]] a lighting system based at least on a (high) beam (by-levels) configuration of the lighting system, the (OFF/ON) beam configuration being determined based at least on: one or more (output-signal) indicators of segmentation mask confidences (“used to variably control the rate of change of the controlled vehicle's exterior lights, with a higher confidence causing a more rapid change”, [0095] penult S: fig. 12:1208: “Analyze through Headlamp Classification Network”) of one or more (oncoming) states of one or more (car) actors detected in one or more images, the one or more (signal) indicators (represented as signal-arrows in fig. 12) determined based at least on one or more neural networks (NNs) (comprising said kernel) processing (for classifying) image data representative of the one or more images of an environment (comprising said rain-drop or “foggy” [0149] 6th S); and one or more pixels (“for the purpose of classifying the type of light source associated with the peak”-“pixel”, [0137] last S) of the one or more images corresponding to the one or more states (or “the same regions of space” [0034] 4th S), the one or more pixels (“for the purpose of classifying the type of light source associated with the peak”-“pixel”, [0137] last S: fig. 10:1004: “Pixel is a Peak Store Peak Value to Memory”) determined, based at least on temporally filtering (resulting in a “prior steps”-“filtered image”, [0081] 2nd S: fig. 10: prior steps: maps to fig. 5:502: “Extract Features of Light Sources From Image”: expressing the action of the verb (filter) or its result, product, material, etc.) the one or more segmentation mask confidences (“used to variably control the rate of change of the controlled vehicle's exterior lights, with a higher confidence causing a more rapid change”, [0095] penult S: fig.
12:1208: “Analyze through Headlamp Classification Network”) over the one or more images (“such that the kernel may be temporarily centered on every pixel within the image” [0137] 3rd S).
Claim 17 is rejected like claim 3: Re 17. (Currently Amended), Stam of the combination of Stam, El-Khamy teaches The at least one processor of claim 15, wherein the determining the beam configuration includes one or more of: adjusting the beam configuration to reduce illumination for one or more first portions of the one or more images; adjusting the beam configuration to increase illumination for one or more second portions of the one or more images; or adjusting the beam configuration to selectively pivot one or more beam lights using one or more motors based at least on the one or more pixels.
Claim 18 is rejected like claim 4: Re 18. (Previously Presented), Stam of the combination of Stam, El-Khamy teaches The at least one processor of claim 15, wherein the beam configuration is determined, at least, by generating one or more two-dimensional mappings from one or more locations of one or more sensors used to generate the image data to one or more beam locations corresponding to the beam configuration.
Re 20.
(Previously Presented), Stam of the combination of Stam, El-Khamy teaches The at least one processor of claim 15, wherein the processor is comprised in at least one of: a control system (or “automatic headlamp control system” [0120]) for an autonomous or semi-autonomous machine (or a machine being independent regarding operating a vehicle exterior light via “automatic vehicle exterior light controller(s)” [0185] last S); a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing light transport simulation; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources.
Claim(s) 5 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over IDS cited Stam et al. (US 2004/0143380 A1) in view of El-Khamy et al. (US 2019/0057507 A1) as applied in claims 1,2,3,4,6 and 15,17,18,20 above further in view of YUREVICH (RU 2 676 028 C1) with SEARCH machine translation:
Re 5. (Currently Amended), Stam of the combination of Stam, El-Khamy teaches The method of claim 1, wherein the one or more (“oncoming” [0098]/ “is applied” [0045] last S) states include an inactive stationary (“oncoming” [0098]/ “is applied” [0045] last S) state for at least one (“detected”-car “107”, [0033] 1st S) actor of the one or more actors and the (high) beam configuration maintains or increases illumination (“due to the closing distance” [0134]) for the one or more pixels based at least on the one or more pixels corresponding to the inactive stationary (“oncoming” [0098]/ “is applied” [0045] last S) state for the at least one (“detected”-car “107”, [0033] 1st S) actor.
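Claims 3, 5, and 6 tie per-pixel actor states to beam adjustments: reduce illumination where pixels correspond to an active (e.g., oncoming) actor, and maintain or increase it for pixels of inactive or stationary actors. A minimal sketch of that mapping follows; the function name, the state vocabulary, and the two-command output are hypothetical, not the application's actual control logic.

```python
def decide_beam_configuration(pixel_states):
    """Per-pixel beam decision: dim regions whose pixels correspond to
    an 'active' (e.g. oncoming) actor, keep or raise illumination for
    'inactive' actors and empty road.

    pixel_states: 2D grid of strings 'active', 'inactive', or 'none'.
    Returns a grid of 'dim' / 'bright' commands for the lighting system.
    """
    return [
        ['dim' if state == 'active' else 'bright' for state in row]
        for row in pixel_states
    ]

config = decide_beam_configuration([['none', 'active'],
                                    ['inactive', 'none']])
```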
Stam of the combination of Stam, El-Khamy does not teach the difference of claim 5 of: inactive stationary (state) … inactive stationary (state). YUREVICH teaches (again) the difference of claim 5: inactive stationary (or “pixel”-“sense”-“inactive”-“stationary” via “The problem of detecting abandoned stationary objects is…inactive in the sense of changing the brightness of pixels over time”, pg. 2, 3rd txt blk, “subject to the following conditions: - the object is not detected”, pg. 4, 5th txt blk) (state) … inactive stationary (“fixed object…density…is higher than the specified”, pg. 4, 5th txt blk) (state). Since Stam of the combination of Stam, El-Khamy teaches a moving object (motion), one of ordinary skill in the art of moving objects (motion) can make Stam’s of the combination of Stam, El-Khamy be as YUREVICH’s predictably recognizing the change “to improve the quality of detection of left objects in the video stream by reducing the number of false positives and ensuring the reliability of the analysis results”, YUREVICH, pg. 2, 4th txt blk.
Claim 19 is rejected like claim 5: Re 19. (Currently Amended), Stam of the combination of Stam, El-Khamy teaches The at least one processor of claim 15, wherein the beam configuration is determined, at least, by determining the one or more states (“if they are vehicular head lamps”, Stam [0040] penult S) as corresponding to one or more inactive (“reflection”, Stam: [0090] 9th S) actors and the beam configuration maintains or increases illumination for the one or more pixels (“is greater than each of its neighbors”, Stam: [0076] 4th S: fig. 10:1002: “Is Pixel > Neighbors?”: expressing the action of the verb (determine) or its result (or any result after fig. 10:1002), product, material, etc. (or anything after fig.
10:1002)) based at least on the one or more states (“if they are vehicular head lamps”, Stam [0040] penult S) corresponding to the one or more inactive (“reflection”, Stam: [0090] 9th S) actors. Stam of the combination of Stam, El-Khamy does not teach the difference of claim 19 of “inactive”.
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over IDS cited Stam et al. (US 2004/0143380 A1) in view of El-Khamy et al. (US 2019/0057507 A1) as applied in claims 1,2,3,4,6 and 15,17,18,20 above further in view of Li et al. (US 2015/0278616 A1):
MPEP 904.03 Conducting the Search [R-07.2022] The best reference (WANG (WO 2020/087352 A1: filed 31 Oct 2018)) should always be the one used in rejecting the claims (1,3,4,5,6,7,8 and 15,16,17,18,19,20). Sometimes the best reference (WANG (WO 2020/087352 A1: filed 31 Oct 2018)) will have a publication date (07 May 2020) less than a year prior to the application filing date (8/12/2019), hence it will be open to being overcome under 37 CFR 1.130 or 1.131. In such circumstances, if a second reference (IDS cited Stam et al.: US 2004/0143380 A1) exists which cannot be so overcome and which, though inferior, is an adequate basis for rejection, the claims (1,2,3,4,5,6,7,8 and 15,16,17,18,19,20) should be additionally rejected thereon.
Re 7. (Currently Amended), Stam of the combination of Stam, El-Khamy teaches The method of claim 1, comprising: generating, using the one or more segmentation mask[[s]] confidences, one or more dim region masks (“identifying the pixels of the image that correspond to the separate instance of the object. For example, if the semantic segmentation system detects three cars and two pedestrians in the image, five separate instance masks are output: one for each of the cars and one for each of the pedestrians.”, El-Khamy [0044] last Ss) indicating the one or more pixels (“is greater than each of its neighbors”, Stam: [0076] 4th S: fig.
10:1002: “Is Pixel > Neighbors?”: expressing the action of the verb (determine) or its result (or any result after fig. 10:1002), product, material, etc. (or anything after fig. 10:1002)) for adjustment (i.e., control) of illumination, wherein the (high) beam configuration is determined (or set or decided) using the one or more dim region (car/pedestrian instance) masks. Stam of the combination of Stam, El-Khamy does not teach the difference of claim 7 -- dim region … dim region --. Li teaches the difference of claim 7: Re 7. (Currently Amended), The method of claim 1, comprising: generating, using the one or more segmentation mask[[s]] confidences, one or more (different-from-the-“background”-“scene” “headlights and/or shadow” [0035] 4th S) dim region (classification “to distinguish” “shadow”-“foreground” “objects” [0036] 2nd S: fig. 7A,8A: shadows) masks (FIG. 5: S516: “GENERATE FIRST MASK”: figs. 7C,8C: mask) indicating the one or more pixels for (an “updated” [0052] 2nd S) adjustment (figs. 3,5:S306,S506: “UPDATE CURRENT BACKGROUND MODEL USING INCOMING FRAME”) of illumination (as shown in fig. 1), wherein the beam configuration (fig. 1: cars with beam configurations) is determined (via said headlight & shadow mask) using the one or more dim region masks (fig. 10B: mask of a headlight). Since Stam of the combination of Stam, El-Khamy suggests a selection of segments [0137]: “variety of image segments”, one of skill in the art of segmentation can make Stam’s of the combination of Stam, El-Khamy be as Li’s predictably recognizing the change detecting vehicles “with high accuracy”, Li [0051] 2nd S.
Claim(s) 8 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over IDS cited Stam et al. (US 2004/0143380 A1) in view of El-Khamy et al. (US 2019/0057507 A1) as applied in claims 1,2,3,4,6 and 15,17,18,20 further in view of KAINO (DE 10 2019 104 113 A1) with SEARCH machine translation:
Re 8.
(Currently Amended), Stam of the combination of Stam, El-Khamy teaches The method of claim 1, wherein the filtering (resulting in a “prior steps”-“filtered image”, Stam: [0081] 2nd S: fig. 10: prior steps: maps to fig. 5:502: “Extract Features of Light Sources From Image”) includes recursively weighting (closest mapping to the adjective form of weight: “weighting factors”, Stam: [0138], last S; thus, Stam does not teach “recursively weighting”) the one or more segmentation mask confidences (“used to variably control the rate of change of the controlled vehicle's exterior lights, with a higher confidence causing a more rapid change”, Stam: [0095] penult S: fig. 12:1208: “Analyze through Headlamp Classification Network”) over the one or more images (“such that the kernel may be temporarily centered on every pixel within the image” [0137] 3rd S). Stam of the combination of Stam, El-Khamy does not teach the difference of claim 8 of “recursively weighting”. KAINO teaches the difference of claim 8: recursively weighting (or “recursively” “weighting”, pg. 2, last txt blk). Since Stam of the combination of Stam, El-Khamy teaches recognition, one of skill in the art of recognition can make Stam’s of the combination of Stam, El-Khamy be as KAINO’s predictably recognizing the change “to improve the recognition rate”, KAINO, pg. 17, 9th txt blk.
Claim 16 is rejected like claim 8: Re 16. (Currently Amended), Stam of the combination of Stam, El-Khamy teaches The at least one processor of claim 15, wherein the filtering includes recursively weighting the one or more segmentation mask confidences over the one or more images.
Claim(s) 9,12,14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Stein et al. (US 2007/0221822 A1) in view of Ge et al.
(US 2021/0027098 A1):
MPEP 904.03 Conducting the Search [R-07.2022] The best reference (WANG (WO 2020/087352 A1: filed 31 Oct 2018)) should always be the one (best reference) used in rejecting the claims (1-20 such as shown in the below 35 USC 103 rejection of claims 9,13,14). Sometimes the best reference (WANG (WO 2020/087352 A1: filed 31 Oct 2018)) will have a publication date (07 May 2020) less than a year prior to the application filing date (8/12/2019), hence it will be open to being overcome under 37 CFR 1.130 or 1.131. In such circumstances, if a second reference (Stein et al. (US 2007/0221822 A1)) exists which cannot be so overcome and which, though inferior (regarding all claims 1-20), is an adequate basis for rejection (via 35 USC 102(a)(1)), the claims (9,13,14) should be additionally rejected thereon (via an additional 35 USC 102(a)(1) rejection of claims 9,13,14):
Re 9., Stein teaches A system comprising: one or more circuits (“as a chip or circuit” [0037] 3rd S) to perform operations including: computing, using one or more neural networks (NNs) (“classifiers” [0046] 5th S) and based at least on image data representative of one or more images (“obtained from a camera mounted on a vehicle” [0002]: fig. 4: “Dark Road Scene”) of an environment, one or more segmentation mask confidences (“of the classification” [0043], 2nd to last S: fig. 3:313: “CLASSIFY SPOT”) of one or more class labels representing (via said “obtained from a camera” [0002]) that (“multiple” [0040] 7th S) pixels of an image of the one or more images (“obtained from a camera mounted on a vehicle” [0002]: fig. 4: “Dark Road Scene”) depict one or more actors in one or more inactive or active (“oncoming” [0038] 6th S) states ([0016], last S) of the one or more images (“obtained from a camera mounted on a vehicle” [0002]: fig.
4: “Dark Road Scene”) determining a (“low” [0041] 1st S) beam configuration (such that “shape of each cluster are computed” [0054] 9th S: fig. 3:309: FILTER/PROCESS DATA: fig. 3:313: “CLASSIFY SPOT”: fig. 3:23: “CONTROL HEADLIGHT ACTIVATE/DEACTIVATE HIGH BEAMS”) based at least on the one or more segmentation mask confidences (“of the classification” [0043], 2nd to last S); and controlling a lighting system (fig. 3:23: “CONTROL HEADLIGHT ACTIVATE/DEACTIVATE HIGH BEAMS”) based at least on the (filtered) beam (shape) configuration. Stein does not teach the difference of claim 9 of: “segmentation mask (confidences)…of one or more class labels…segmentation mask (confidences)”. Ge teaches the difference of claim 9 of: segmentation mask (or “instance segmentation mask” [0024] penult S: fig. 3A:232: “Instance Mask”) (confidences)…of one or more class labels (via “based on the predicted image-level class labels” [0056] 1st S)…segmentation mask (“which masks the pixels that are believed to contribute to the visualization of the object” [0024] penult S: fig. 3A:252: “Instance Mask”: fig. 8: bird-car-pixels) (confidences). Since Stein teaches training a classifier, one of skill in the art of training classifiers can make Stein’s “confidence of the classification (step 313)”, [0043] 2nd to last S, be as Ge’s “object score (e.g., a likelihood and/or confidence score) for each object”, [0031] last S, segmentation proposal mask--i.e., be as Ge’s segmentation proposal mask confidence score--predictably recognizing the change “improves the training” (Ge [0064] 3rd S) of classifiers.
Re 12.
(Currently Amended), Stein of the combination (illustrated above) of Stein, Ge teaches The system of claim 9, wherein the one or more NNs (via said making Stein’s classifier training be as Ge’s predictably recognizing the change “improves the training”, Ge [0064] 3rd S) output data (represented as arrows in Ge’s figs. 1 & 2B) representative (given images of dogs, birds, cars) of the one or more segmentation mask confidences and (via Ge’s fig. 2B:220: “Multi-Label Classification Module” being afterwards of fig. 2B:240: “Object Detection Module”) a class label (via “based on the predicted image-level class labels”, Ge: [0056] 1st S: fig. 2B:206:220: “Images Labels”-“Multi-Label Classification Module”: Labels-Multi-Label: Classes-Multi-Label: Class-Multi-Label: Class-Labels) of the one or more class labels represents that the (“multiple”, Stein [0040] 7th S) pixels of the image (“obtained from a camera mounted on a vehicle”, Stein: [0002]: fig. 4: “Dark Road Scene”) depict the one or more (“oncoming”, Stein: [0038] 6th S) actors in an active state in which the one or more actors are in motion.
Re 14. (Original), The system of claim 9, wherein the system is comprised in at least one of: a control system (via “vehicle control systems” [0036] 4th S) for an autonomous or semi-autonomous machine (or a machine being independent regarding operating a vehicle headlight via “automatic vehicle headlight control” [0003], last S); a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing light transport simulation; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources.
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Stein et al.
(US 2007/0221822 A1) in view of Ge et al. (US 2021/0027098 A1) as applied in claims 9,12,14 further in view of Aoba et al. (US 2019/0012790 A1) further in view of TAKAHITO (DE 112015000723 T) with SEARCH machine translation:
Re 10. (Currently Amended), Stein as combined (illustrated above) with Ge teaches The system of claim 9, wherein the one or more NNs (via said making Stein’s classifier training be as Ge’s predictably recognizing the change “improves the training”, Ge [0064] 3rd S) output data (represented as arrows in Ge’s figs. 1 & 2B) representative (given images of dogs, birds, cars) of the one or more segmentation mask confidences and (via Ge’s fig. 2B:220: “Multi-Label Classification Module” being afterwards of fig. 2B:240: “Object Detection Module”) a class label (via “based on the predicted image-level class labels”, Ge: [0056] 1st S: fig. 2B:206:220: “Images Labels”-“Multi-Label Classification Module”: Labels-Multi-Label: Classes-Multi-Label: Class-Multi-Label: Class-Labels) of the one or more class (multi-)labels represents that the (“multiple”, Stein [0040] 7th S) pixels of the image (“obtained from a camera mounted on a vehicle”, Stein: [0002]: fig. 4: “Dark Road Scene”) depict the one or more (“oncoming”, Stein: [0038] 6th S) actors in an inactive state in which the one or more (“oncoming”, Stein: [0038] 6th S) actors are stationary. Stein of the combination (illustrated above) of Stein, Ge does not teach the difference of claim 10 of: A) represents B) an inactive state in which … stationary. Aoba teaches difference A): represents (via “class label 520 represents a pixel whose class is “sky” by white and a pixel whose class is “non-sky” by black”, Aoba [0042] 8th S).
Since Ge of the combination (illustrated above) of Stein, Ge teaches a class label, one of skill in the art of class labels can make Ge’s of the combination (illustrated above) of Stein, Ge be as Aoba’s predictably recognizing the change “to improve the accuracy of processing using a classification result.”, Aoba [0023]. Stein of the combination (illustrated above) of Stein, Ge, Aoba does not teach the last difference of claim 10 of: B) an inactive state in which … stationary. TAKAHITO teaches the last difference of claim 10: --(output {“a signal”, pg. 5, 6th txt blk, via fig. 1:30: “VEHICLE SPEED SENSOR”} data)… in an inactive (“located” “pedestrian standing”, pg. 9, last txt blk) state in which … stationary (or the “standing” via fig. 1:50: “VEHICLE FACING DETECTION PART”)--. Since Stein of the combination (illustrated above) of Stein, Ge, Aoba teaches detecting pedestrians (Stein: “pedestrian detection”-“applications”-“sensor” [0011]), one of skill in the art of pedestrian detection can make Stein’s of the combination (illustrated above) of Stein, Ge, Aoba be as TAKAHITO’s predictably recognizing the change “addresses the problems…that restricts the difficulty in recognizing an obstacle such as a pedestrian in front of an own vehicle by a driver of an approaching vehicle, and confirms a situation around the own vehicle by one Driver of the own vehicle relieved, while the own vehicle stops”, TAKAHITO, pg. 2, 10th txt blk.
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Stein et al. (US 2007/0221822 A1) in view of Ge et al. (US 2021/0027098 A1) as applied in claims 9,12,14 further in view of Epstein (US 8,538,175 B1) in view of Alsallakh et al.
(US 2019/0034557 A1):
Re 11. (Currently Amended), Stein of the combination (illustrated above) of Stein, Ge teaches The system of claim 9, wherein the determining the beam (shape) configuration includes temporally (“red” [0040] 6th S & “Texture Based” [0053]) filtering (fig. 2a:25 & figs. 5a,5b,5c) the one or more segmentation mask confidences (“of the classification” [0043], 2nd to last S, via said make Stein’s be as Ge’s predictably recognizing the change “improves the training” (Ge [0064] 3rd S) of classifiers) over the one or more images (“obtained from a camera mounted on a vehicle” [0002]: fig. 4: “Dark Road Scene”) to select the one or more class labels (via said make Stein’s be as Ge’s predictably recognizing the change “improves the training” (Ge [0064] 3rd S) of classifiers), and the (filtered) beam (shape) configuration corresponds to the selected one or more class labels. Stein of the combination (illustrated above) of Stein, Ge does not teach the difference of claim 11 of: A) temporally (filtering)…over… B) to select (the one or more class labels)… C) corresponds to the selected one or more class labels. Epstein teaches difference A) of claim 11: A) temporally (via “becoming”-“texture-filtered image”, c.5, ll.45-50, “before or after the filtering”, c.6, ll.25-27: fig. 3:14: “Texture Filterer”) (filtering)…over (“the original image”, c.5, ll.45-50: fig. 3: “Input Image”). Since Stein of the combination (illustrated above) of Stein, Ge teaches a texture filter, one of skill in the art of texture filters can make Stein’s of the combination (illustrated above) of Stein, Ge be as Epstein’s predictably recognizing the change “blurring out texture from the input image while leaving more of the distinct edges of the figure better intact”, Epstein, c.6, ll.15-20.
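Claim 11's “temporally filtering … to select the one or more class labels” can be read as aggregating per-frame label confidences over a window of images and selecting the strongest label. The sketch below works under that assumption; simple averaging stands in for whatever filter the application actually uses, and the label names are invented for illustration.

```python
from collections import Counter

def select_label(per_frame_confidences):
    """Pick the class label whose averaged confidence over a window of
    frames is highest -- one plausible reading of 'temporally filtering
    ... to select the one or more class labels'.

    per_frame_confidences: list of {label: confidence} dicts, one per frame.
    """
    totals = Counter()
    for frame in per_frame_confidences:
        totals.update(frame)  # Counter.update adds values per key
    n = len(per_frame_confidences)
    averaged = {label: total / n for label, total in totals.items()}
    return max(averaged, key=averaged.get)

label = select_label([
    {'headlamp': 0.7, 'reflection': 0.3},
    {'headlamp': 0.4, 'reflection': 0.6},
    {'headlamp': 0.9, 'reflection': 0.1},
])
```

Averaging over the window suppresses the single frame in which “reflection” momentarily dominated, which is the point of filtering before selecting.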
Stein of the combination (illustrated above) of Stein, Ge, Epstein does not teach the remaining difference of claim 11 of: B) to select (via “class hierarchy viewer 210”-“selection”-“sample images”-“250” [0036] 9th S: fig. 3:250: bird-class-images) (the one or more class labels)… C) corresponds to the selected (via “corresponding to a user selection in the class hierarchy viewer 210” [0036] 9th S: fig. 5:406: “bird”-label) (one or more class labels): Alsallakh teaches the differences B) and C) of claim 11: B) to select (via “class hierarchy viewer 210”-“selection”-“sample images”-“250” [0036] 9th S: fig. 3:250: bird-class-images) (the one or more class labels)… C) corresponds to the selected (via “corresponding to a user selection in the class hierarchy viewer 210” [0036] 9th S: fig. 5:406: “bird”-label) (one or more class labels). Since Ge of the combination (illustrated above) of Stein, Ge, Epstein teaches class labels, one of skill in the art of labels (“bird” “mammal” “dog” “conveyance” “vehicle”) can make Ge’s of the combination (illustrated above) of Stein, Ge, Epstein be as Alsallakh’s predictably recognizing the change “improves upon conventional analytics methods for CNN classifiers because it enables the user to better understand the training process, diagnose the separation power of the different feature detectors, and improve the architecture of the image classification model 30 accordingly to yield significant gain in accuracy.”, Alsallakh [0091].
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Stein et al. (US 2007/0221822 A1) in view of Ge et al. (US 2021/0027098 A1) as applied in claims 9,12,14 further in view of JIANG et al. (CN 111402336 A: Date Published July 10, 2020) with SEARCH machine translation:
Re 13.
(Currently Amended), Stein of the combination of Stein, Ge teaches The system of claim 9, wherein the beam configuration is determined (resulting in a “computed” “cluster”, [0054] 8th S, “which is approximately radially symmetric with a bright point in the center” [0054] 2nd S) based at least on the one or more class labels (via said make Stein’s be as Ge’s predictably recognizing the change “improves the training” (Ge [0064] 3rd S) of classifiers) based at least on determining that the one or more segmentation mask confidences (via said make Stein’s “confidence of the classification (step 313)”, [0043] 2nd to last S, be as Ge’s “object score (e.g., a likelihood and/or confidence score) for each object”, [0031] last S, segmentation proposal mask--i.e., be as Ge’s segmentation proposal mask confidence score--predictably recognizing the change “improves the training” (Ge [0064] 3rd S) of classifiers) correspond to consistent assignments of the one or more class labels over a (“previously defined”, Stein: [0045], last S) threshold quantity of (“image”) frames (“15”, Stein: [0007] penult S: fig. 2:15). Stein of the combination of Stein, Ge does not teach the difference of claim 13 of: “correspond to consistent assignments of”. JIANG teaches the difference of claim 13: correspond to consistent assignments (via “corresponding to…the…same…type”, pg. 3, 6th txt blk: fig. 2: “Movement point judgement”) of…over…of (“current…and previous”, pg. 3, 2nd txt blk) frames (“for feature matching”: fig. 2: “Input the current frame and the previous frame image.”). Since Stein of the combination of Stein, Ge teaches tracking (“The motion of the spot is tracked (in image space)”, Stein: [0016] 13th S), one of skill in the art of tracking can make Stein’s of the combination of Stein, Ge be as JIANG’s predictably recognizing the change “provides…tracking…successfully”, JIANG, pg. 4, 7th txt blk.
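Claim 13's “consistent assignments of the one or more class labels over a … threshold quantity of frames” suggests a run-length check across a sequence of per-frame labels. A minimal sketch follows, assuming the consistency must be over consecutive frames (the claim language itself does not specify consecutiveness); the function name and label strings are hypothetical.

```python
def consistent_over_frames(labels_per_frame, threshold):
    """Return True when the same class label is assigned in at least
    `threshold` consecutive frames -- a sketch of 'consistent
    assignments ... over a threshold quantity of frames'.
    """
    run, prev = 0, None
    for label in labels_per_frame:
        run = run + 1 if label == prev else 1  # extend or restart the run
        prev = label
        if run >= threshold:
            return True
    return False

ok = consistent_over_frames(['car', 'car', 'car', 'pedestrian'], threshold=3)
```

Requiring a threshold number of consistent frames before acting suppresses one-frame classification flickers, which is the usual motivation for such a gate in exterior-light control.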
Conclusion
The prior art “nearest to the subject matter defined in the claims” (MPEP 707.05) made of record and not relied upon is considered pertinent to applicant's disclosure. The following list identifies several references that are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.
ZHUO (WO 2021/017811 A1) with SEARCH machine translation: ZHUO teaches temporal filtering & mask-confidence: “first ISP processor 130 may perform one or more image processing operations, such as temporal filtering” (pg. 4, last txt blk) and “Specifically, the electronic device may input the image to be processed into the subject detection model, identify the target subject in the image to be processed through the subject detection model, and segment the image to be processed into a foreground image and a background image of the target subject. Further, the segmented binarized mask map can be output through the subject detection model.” (pg. 6, 7th txt blk) and “In operation 902, the subject region confidence map is processed to obtain a subject mask map.” (pg. 12, 9th txt blk) as the closest to the claimed “segmentation mask confidences” and the claimed “temporally filtering” of claim 1.
Francois et al. (US 2012/0306904 A1): Francois teaches temporal-filter & mask segmentation confidence: “[0057] In that regard, the filter could be applied to the incoming color data of the scene in a temporal dimension, instead of or in conjunction with a spatial dimension.” and “[0080] In still another aspect, the variable update weight "a" above could represent a variable update weight in a low-pass filter, based on the confidence in the segmentation label (foreground or background), e.g., 1-alpha value of the final mask.” as the closest to the claimed “segmentation mask confidences” and the claimed “temporally filtering” of claim 1.
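The Francois passage quoted above ([0080]) describes a low-pass filter whose per-pixel update weight varies with segmentation confidence (e.g., 1 minus the alpha value of the final mask). A sketch of that idea only, with all names hypothetical and the weighting rule taken solely from the quoted sentence:

```python
def confidence_weighted_update(background, frame, confidences):
    """Low-pass update of a per-pixel background estimate where the
    update weight is driven by segmentation confidence: pixels the mask
    is confident are foreground update the background slowly.

    All three arguments are equal-length lists of floats in [0, 1];
    the (1 - confidence) weight follows the quoted Francois passage.
    """
    return [
        (1.0 - c) * x + c * b  # high foreground confidence -> keep background
        for b, x, c in zip(background, frame, confidences)
    ]

# Pixel 0: no foreground confidence, background adopts the frame value.
# Pixel 1: full foreground confidence, background is left unchanged.
updated = confidence_weighted_update([0.0, 0.0], [1.0, 1.0], [0.0, 1.0])
```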
Lanz (US 2008/0031492 A1), previously applied (Office action of 9/5/2025, pg. 116) to claim 11: Lanz teaches “Posterior”-“time”-“filter”: “[0040] FIG. 1 shows an example of Bayes filter iteration according to known criteria: the posterior distribution computed for time t-1 (Posterior t-1) is first projected to time t according to a Prediction model to get a prior distribution for time t (Prior t), and then updated with the likelihood evaluated on the new image, to get a Posterior distribution at time t (Posterior t).” as the closest to the claimed “temporally filtering” of claim 1.
ZHOU et al. (CN 106529526 B) with SEARCH machine translation: ZHOU teaches time t-posterior probability filtering: “Particle filtering is essentially realized by the non-parametric Monte Carlo simulation to Bayesian filter, namely, posterior probability density using a set of random samples with weight approximately describing the system state. is given until the observation set z1 of t-1 time: t-1 = (z1, z2, ..., zt-1), the target in the best state at time t can be approximately by a maximum posterior probability obtained zt * = argminp (xti | z1, t). wherein xti represents the time t of the ith sample particle system state, the posterior probability p (xti | z1, t) can be obtained by the recursive Bayesian theory.” as the closest to the claimed “temporally filtering the one or more segmentation mask confidences” of claim 1.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS ROSARIO whose telephone number is (571) 272-7397. The examiner can normally be reached Monday-Friday, 9 AM-5 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DENNIS ROSARIO/
Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676

1 prior: preceding in time or in order; earlier or former; previous. (Dictionary.com)
2 prior: preceding in time or in order; earlier or former; previous. (Dictionary.com)
3 neural network: Also called neural net. Computers. a hardware or software system in which weighted connections between data nodes are refined to produce increasingly accurate results in information processing, as in pattern recognition or problem solving, with the goal of algorithmic computing that requires minimal human intervention.
(Dictionary.com) 4 background: one's origin, education, experience, etc., in relation to one's present character, status, etc., wherein experience is defined: knowledge or practical wisdom gained from what one has observed, encountered, or undergone, wherein practical is defined: of or relating to practice or action, wherein practice is defined: custom, wherein custom is defined: convention, wherein convention is defined: conventionalism, wherein conventionalism is defined: adherence to or advocacy of conventional attitudes or practices (Dictionary.com) 5 What is “publication date”? See MPEP: 2154.01 Prior Art Under AIA 35 U.S.C. 102(a)(2) "U.S. Patent Documents" [R-11.2013] AIA 35 U.S.C. 102(a)(2) sets forth three types of patent documents that are available as prior art as of the date they were effectively filed with respect to the subject matter relied upon in the document if they name another inventor: (1) U.S. patents; (2) U.S. patent application publications; and (3) certain WIPO published applications. These documents are referred to collectively as "U.S. patent documents." These documents may have different prior art effects under pre-AIA 35 U.S.C. 102(e) than under AIA 35 U.S.C. 102(a)(2). Note that a U.S. patent document may also be prior art under AIA 35 U.S.C. 102(a)(1) if its issue or publication date is before the effective filing date of the claimed invention in question. If the issue date of the U.S. patent or publication date of the U.S. patent application publication or WIPO published application is not before the effective filing date of the claimed invention, it may be applicable as prior art under AIA 35 U.S.C. 102(a)(2) if it was "effectively filed" before the effective filing date of the claimed invention in question with respect to the subject matter relied upon to reject the claim. MPEP § 2152.01 discusses the "effective filing date" of a claimed invention. AIA 35 U.S.C. 
102(d) sets forth the criteria to determine when subject matter described in a U.S. patent document was "effectively filed" for purposes of AIA 35 U.S.C. 102(a)(2).
6 “effective filing date”?
7 object (gerund)
8 based VERB (USED WITHOUT OBJECT): to have a basis; be based (usually followed by on or upon). (Dictionary.com)
9 on: in connection, association, or cooperation with; as a part or element of. (Dictionary.com)
10 Markush element of alternatives follows: [(A) or (B)]
11 information: the act or fact of informing, wherein informing is defined: to give or impart knowledge of a fact or circumstance to, wherein give is defined: to set forth or show, wherein show is defined: to indicate (Dictionary.com)
12 of: (used to indicate apposition or identity), wherein apposition is defined: Grammar. a syntactic relation between expressions (“indicators of segmentation mask confidences”), usually consecutive, that have the same function and the same relation to other elements (“one or more” “confidences”) in the sentence (claim 1), the second expression (“of segmentation mask confidences”) identifying or supplementing the first (“indicators”). In Washington, our first president, the phrase our first president is in apposition with Washington.
(Dictionary.com) 13 “segmentation” is a modifier of “mask” 14 “mask” is a modifier of “confidences” 15 vehicle: any means in or by which someone travels or something is carried or conveyed, wherein travel is defined: to go from one place to another, as by car, train, plane, or ship, wherein go is defined: to act so as to come into a certain state or condition, wherein act is defined: performance, wherein performance is defined: a particular action, deed, or proceeding, wherein action is defined: the gestures or deportment of an actor or speaker, wherein deportment is defined: demeanor; conduct; behavior.(Dictionary.com) 16 -ing (of determining): a suffix of nouns formed from verbs (determine), expressing the action of the verb (determine) or its result, product (“the determined one or more pixels”), material, etc. (the art of building; a new building; cotton wadding ). It is also used to form nouns from words other than verbs (offing; shirting ). Verbal nouns ending in -ing are often used attributively (the printing trade ) and in forming compounds (drinking song ). In some compounds (sewing machine ), the first element might reasonably be regarded as the participial adjective, -ing2, the compound thus meaning “a machine that sews,” but it is commonly taken as a verbal noun, the compound being explained as “a machine for sewing.” (Dictionary.com) 17 past participle participating with the action of “determining…one or more pixels”, 18 based VERB (USED WITHOUT OBJECT): to have a basis; be based (usually followed by on or upon ). (Dictionary.com) 19 Re “temporally”: Applicant’s Disclosure:[00219]As used herein, a recitation of “and/or” with respect to two or more elements should be interpreted to mean only one element, or a combination of elements. For example, “element A, element B, and/or element C” may include only element A, only element B, only element C, element A and element B, element A and element C, element B and element C, or elements A, B, and C. 
In addition, “at least one of element A or element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. Further, “at least one of element A and element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. The subject matter of the present disclosure is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this disclosure. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. 20 DISCLOSURE/CLAIM SCOPE: “temporally” (ADJECTIVE: not adverb) is a modifier of “filtering”: temporal- (of “temporally”): of or relating to time: -ly (of “temporally”: an adjective suffix meaning “-like”: saintly; cowardly, wherein SCOPE is defined: Linguistics, Logic. the range of words or elements of an expression over which a modifier (e.g., one of ordinary skill in the art) or operator (e.g., patent examiner) has control. (Dictionary.com) 21 Markush element of alternatives follow: [(C) or (D)] an adjective suffix meaning “-like”: saintly; cowardly. 22 Markush element of alternatives follow: [(E) or (F)] 23 based VERB (USED WITHOUT OBJECT): to have a basis; be based (usually followed by on or upon ). (Dictionary.com) 24 based VERB (USED WITHOUT OBJECT): to have a basis; be based (usually followed by on or upon ). 
(Dictionary.com) 25 “segmentation” is a modifier of “mask” 26 “mask” is a modifier of “confidences” 27 “confidences” is interpreted as an apposition via the claimed “of”: (used to indicate apposition or identity), wherein apposition is defined: Grammar. a syntactic relation between expressions (“indicators of segmentation mask confidences”), usually consecutive, that have the same function and the same relation to other elements (“one or more”) in the sentence (claim 1), the second expression (“of segmentation mask confidences”) identifying or supplementing the first (“indicators”). In Washington, our first president, the phrase our first president is in apposition with Washington. (Dictionary.com) 28 (italics) represent claim limitations already taught 29 ellipses (…) represent claim limitations already taught 30 past participle participating with the action of “determining…one or more pixels”, 31 based VERB (USED WITHOUT OBJECT): to have a basis; be based (usually followed by on or upon ). (Dictionary.com) 32 Re “temporally”: Applicant’s Disclosure:[00219]As used herein, a recitation of “and/or” with respect to two or more elements should be interpreted to mean only one element, or a combination of elements. For example, “element A, element B, and/or element C” may include only element A, only element B, only element C, element A and element B, element A and element C, element B and element C, or elements A, B, and C. In addition, “at least one of element A or element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. Further, “at least one of element A and element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. The subject matter of the present disclosure is described with specificity herein to meet statutory requirements. 
However, the description itself is not intended to limit the scope of this disclosure. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. 33 DISCLOSURE/CLAIM SCOPE: “temporally” (ADJECTIVE: not adverb) is a modifier of “filtering”: temporal- (of “temporally”): of or relating to time: -ly (of “temporally”: an adjective suffix meaning “-like”: saintly; cowardly, wherein SCOPE is defined: Linguistics, Logic. the range of words or elements of an expression over which a modifier (e.g., one of ordinary skill in the art) or operator (e.g., patent examiner) has control. (Dictionary.com) 34 (italics) represent claim limitations already taught 35 (italics: i.e., apposition-range of confidences: one or more confidences) represent claim limitations already taught 36 “segmentation” is a modifier of “mask” 37 “mask” is a modifier of “confidences” 38 “confidences” is interpreted as an apposition via the claimed “of”: (used to indicate apposition or identity), wherein apposition is defined: Grammar. a syntactic relation between expressions (“indicators of segmentation mask confidences”), usually consecutive, that have the same function and the same relation to other elements (“one or more”) in the sentence (claim 1), the second expression (“of segmentation mask confidences”) identifying or supplementing the first (“indicators”). In Washington, our first president, the phrase our first president is in apposition with Washington. 
(Dictionary.com) 39 (italics) represent claim limitations already taught 40 ellipses (…) represent claim limitations already taught 41 past participle participating with the action of “determining…one or more pixels”, 42 based VERB (USED WITHOUT OBJECT): to have a basis; be based (usually followed by on or upon ). (Dictionary.com) 43 Re “temporally”: Applicant’s Disclosure:[00219]As used herein, a recitation of “and/or” with respect to two or more elements should be interpreted to mean only one element, or a combination of elements. For example, “element A, element B, and/or element C” may include only element A, only element B, only element C, element A and element B, element A and element C, element B and element C, or elements A, B, and C. In addition, “at least one of element A or element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. Further, “at least one of element A and element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. The subject matter of the present disclosure is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this disclosure. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. 
44 DISCLOSURE/CLAIM SCOPE: “temporally” (ADJECTIVE: not adverb) is a modifier of “filtering”: temporal- (of “temporally”): of or relating to time: -ly (of “temporally”): an adjective suffix meaning “-like”: saintly; cowardly, wherein SCOPE is defined: Linguistics, Logic. the range of words or elements of an expression over which a modifier (e.g., one of ordinary skill in the art) or operator (e.g., patent examiner) has control. (Dictionary.com): i.e., time-like filtering
45 prior: preceding in time or in order; earlier or former; previous, wherein earlier is defined: in or during the first part of a period of time, a course of action, a series of events, etc. (Dictionary.com)
46 probability: Statistics. the relative frequency with which an event occurs or is likely to occur, wherein frequency is defined: Statistics. the number of items occurring in a given category, wherein category is defined: any general or comprehensive division; a class. (Dictionary.com)
47 filter: Computers. an algorithm that categorizes, sorts, prioritizes, or blocks data through rule-based protocols. (Dictionary.com)
48 (italics) represent claim limitations already taught
49 (italics: i.e., apposition-range of confidences: one or more confidences) represent claim limitations already taught
50 comparing: to consider or describe as similar; liken, wherein liken is defined: to represent as similar or like; compare, wherein like is defined: corresponding or agreeing in general or in some noticeable respect; similar; analogous. (Dictionary.com)
51 and: (used to connect grammatically coordinate words, phrases, or clauses) along or together with; as well as; in addition to; besides; also; moreover, wherein with is defined: in correspondence, comparison, or proportion to.
(Dictionary.com) 52 comparing: to consider or describe as similar; liken, wherein liken is defined: to represent as similar or like; compare, wherein like is defined: corresponding or agreeing in general or in some noticeable respect; similar; analogous. (Dictionary.com) 53 and: (used to connect grammatically coordinate words, phrases, or clauses) along or together with; as well as; in addition to; besides; also; moreover, wherein with is defined: in correspondence, comparison, or proportion to. (Dictionary.com) 54 information: the act or fact of informing, wherein informing is defined: to give or impart knowledge of a fact or circumstance to, wherein give is defined: to set forth or show, wherein show is defined: to indicate (Dictionary.com) 55 confidence level: statistics a measure of the reliability of a result. A confidence level of 95 per cent or 0.95 means that there is a probability of at least 95 per cent that the result is reliable Compare significance, wherein reliable is defined: that may be relied on or trusted; dependable in achievement, accuracy, honesty, etc., wherein trusted is defined: to commit or consign with trust or confidence. (Dictionary.com) 56 “class” is a modifier of “labels” 57 mask: computing a bit pattern which, by convolution with a second pattern in a logical operation, can be used to isolate a specific subset of the second pattern for examination, wherein bit is defined: Also called binary digit. a single, basic unit of digital information that is represented by one of two values, such as 1 or 0, True or False, or Yes or No, wherein represent is defined: to present in words; set forth; describe; state, wherein describe is defined: to pronounce, as by a designating term, phrase, or the like; label. (Dictionary.com) 58 depict: to represent by or as if by painting or other visual image; portray; delineate. 
(Dictionary.com) 59 -ing (of “moving”): a suffix of nouns formed from verbs (move), expressing the action of the verb (move) or its result, product, material, etc. (the art of building; a new building; cotton wadding ), wherein action is defined: the process or state of acting or of being active (Dictionary.com). 60 “class” is a modifier of “labels” 61 “class” is a modifier of “labels” 62 state: the condition of a person or thing, as with respect to circumstances or attributes, wherein condition is defined: a particular mode of [being] of a person or thing; existing state; situation with respect to circumstances, wherein mode is defined: Grammar. mood, wherein mood is defined: Grammar. A) a set of categories for which the verb [being] is inflected in many languages, and that is typically used to indicate the syntactic relation of the clause in which the verb occurs to other clauses in the sentence, or the attitude of the speaker toward what they are saying, such as certainty or uncertainty, wish or command, emphasis or hesitancy. B) a set of syntactic devices in some languages that is similar to this set in function or meaning, involving the use of auxiliary words, such as can, may, might. C) any of the categories of these sets. 63 for: intended to belong to, or be used in connection with (Dictionary.com) 64 “based” is a past participle participating with the action of “maintains or increases” 65 on: in connection, association, or cooperation with; as a part or element of (Dictionary.com) 66 corresponding: associated in a working or other relationship (Dictionary.com) 67 state: Grammar. 
a set of categories for which the verb [being] is inflected (Dictionary.com) 68 for: intended to belong to, or be used in connection with (Dictionary.com) 69 state: the condition of a person or thing, as with respect to circumstances or attributes, wherein condition is defined: a particular mode of [being] of a person or thing; existing state; situation with respect to circumstances, wherein mode is defined: Grammar. mood, wherein mood is defined: Grammar. A) a set of categories for which the verb [being] is inflected in many languages, and that is typically used to indicate the syntactic relation of the clause in which the verb occurs to other clauses in the sentence, or the attitude of the speaker toward what they are saying, such as certainty or uncertainty, wish or command, emphasis or hesitancy. B) a set of syntactic devices in some languages that is similar to this set in function or meaning, involving the use of auxiliary words, such as can, may, might. C) any of the categories of these sets. 70 (Italics) represent limitations already taught above 71 (Italics) represent limitations (“state”) already taught above 72 For: intended to belong to, or be used in connection with. 
(Dictionary.com)
73 Ellipses (…) represent claim limitations already taught
74 comma: the punctuation mark (,) indicating a slight pause in the spoken sentence and used where there is a listing of items or to separate a nonrestrictive clause or phrase (“using the one or more segmentation mask[[s]] confidences”) from a main clause (claim 7) (Dictionary.com)
75 The phrase “using the one or more segmentation mask[[s]] confidences” does not limit/restrict claim 7
76 comma: the punctuation mark (,) indicating a slight pause in the spoken sentence and used where there is a listing of items or to separate a nonrestrictive clause or phrase (“using the one or more segmentation mask[[s]] confidences”) from a main clause (claim 7) (Dictionary.com)
77 (italics) represent claim limitations already taught
78 As discussed in another (above) rejection of claim 7, the surrounding-comma phrase “using the one or more segmentation mask[[s]] confidences” does not limit/restrict claim 7
79 level: an extent, measure, or degree of intensity, achievement, etc., wherein measure is defined: a quantity, degree, or proportion, wherein quantity is defined: an exact or specified amount or measure, wherein amount is defined: the full effect, value, or significance. (Dictionary.com)
80 Re “recursively”: Applicant’s Disclosure: [00219] As used herein, a recitation of “and/or” with respect to two or more elements should be interpreted to mean only one element, or a combination of elements. For example, “element A, element B, and/or element C” may include only element A, only element B, only element C, element A and element B, element A and element C, element B and element C, or elements A, B, and C. In addition, “at least one of element A or element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B.
Further, “at least one of element A and element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. The subject matter of the present disclosure is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this disclosure. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. 81 main verb 82 object 83 “representing” is a participle (i.e., an adjective) participating in the action of “including: computing” 84 that: (used to introduce a subordinate clause (“pixels of an image of the one or more images depict one or more actors in one or more inactive or active states”) as the subject or object of the principal verb (“including”) or as the necessary complement to a statement made, or a clause expressing cause or reason, purpose or aim, result or consequence, etc.). (Dictionary.com) 85 -ing (of “moving”): a suffix of nouns formed from verbs (move), expressing the action of the verb (move) or its result, product, material, etc. (the art of building; a new building; cotton wadding ), wherein action is defined: the process or state of acting or of being active, wherein acting is defined: to perform as an actor. 
(Dictionary.com)
86 “states” is/are: (1) a “be” word or (2) a condition
87 (italics) represent claim limitations already taught
88 ellipses (…) represent claim limitations already taught
89 that: (used to introduce a subordinate clause (“pixels of an image of the one or more images depict one or more actors”) as the subject or object of the principal verb or as the necessary complement to a statement made, or a clause expressing cause or reason, purpose or aim, result or consequence, etc.). (Dictionary.com)
90 that: (used to introduce a subordinate clause as the subject or object of the principal verb or as the necessary complement to a statement made, or a clause expressing cause or reason, purpose or aim, result or consequence, etc.). (Dictionary.com)
91 that: (used to introduce a subordinate clause as the subject or object of the principal verb or as the necessary complement to a statement made, or a clause expressing cause or reason, purpose or aim, result or consequence, etc.). (Dictionary.com)
92 command: Computers. A) an electric impulse, signal, or set of signals for initiating an operation in a computer. B) a character, symbol, or item of information for instructing a computer to perform a specific task, wherein symbol is defined: something used for or regarded as representing something else; a material object representing something, often something immaterial; emblem, token, or sign. (Dictionary.com)
93 from: (used to indicate source or origin), wherein source is defined: any thing or place from which something comes, arises, or is obtained; origin, wherein comes is defined: to be available, produced, offered, etc. (Dictionary.com)
94 representative: pertaining to or of the nature of a mental image or representation, wherein representation is defined: the act of representing. (Dictionary.com)
95 -ing (of “moving”): a suffix of nouns formed from verbs (move), expressing the action of the verb (move) or its result, product, material, etc.
(the art of building; a new building; cotton wadding ), wherein action is defined: the process or state of acting or of being active, wherein acting is defined: to perform as an actor. (Dictionary.com) 96 neural network: Also called neural net. Computers. a hardware or software system in which weighted connections between data nodes are refined to produce increasingly accurate results (Dictionary.com) 97 output: to produce (Dictionary.com) 98 neural network (NNs) and “output” are identities: different words with the same meaning: produce, where identities is defined: Logic. an assertion that two terms (neural network (NNs) & output) refer to the same thing (produce) (Dictionary.com) 99 from: (used to indicate source or origin), wherein source is defined: any thing or place from which something comes, arises, or is obtained; origin, wherein comes is defined: to be available, produced, offered, etc. (Dictionary.com) 100 Information: Computers. A) important or useful facts obtained as output from a computer by means of processing input data with a program. 
101 CLAIM SCOPE: and: then (Dictionary.com)
102 CLAIM SCOPE 1: that: (completive-intensive) additionally, all things considered, or nevertheless, wherein additional is defined: added or supplementary (Dictionary.com)
103 CLAIM SCOPE 2: that: used as a function word after a subordinating conjunction (“and”) without modifying its meaning: If that thy bent of love be honorable …—William Shakespeare (Merriam-Webster.com): the claimed “that” does not limit claim 10 under the broadest reasonable interpretation of claim 10
104 and: (used to connect grammatically coordinate words, phrases, or clauses) along or together with; as well as; in addition to; besides; also; moreover, wherein in is a preposition, wherein preposition is defined: any member of a class of words found in many languages that are used before nouns (“addition”), pronouns, or other substantives to form phrases (“in addition to”) functioning as modifiers of verbs (“represents”), nouns, or adjectives, and that typically express a spatial, temporal, or other relationship, as in, on, by, to, since, wherein addition is defined: something added. (Dictionary.com)
105 label: computing a group of characters, such as a number or a word, appended to a particular statement in a program to allow its unique identification (Dictionary.com)
106 (italics) represent claim limitations already taught above
107 ellipses (…) represent claim limitations already taught above
108 Regarding “represents that”: the phrase “represents that” is not in applicant’s disclosure and is interpreted in view of applicant’s disclosure: [00219] As used herein, a recitation of “and/or” with respect to two or more elements should be interpreted to mean only one element, or a combination of elements. For example, “element A, element B, and/or element C” may include only element A, only element B, only element C, element A and element B, element A and element C, element B and element C, or elements A, B, and C.
In addition, “at least one of element A or element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. Further, “at least one of element A and element B” may include at least one of element A, at least one of element B, or at least one of element A and at least one of element B. The subject matter of the present disclosure is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this disclosure. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways [such as 1) “and” (i.e., in addition to) being a prepositional modifier (i.e., an adverb) of the verb “represents” or 2) as a non-limiting functional word or 3) as a “chosen”-“stylistic”-“omission” (Dictionary.com: that: Grammar section) in the rejection of claim 10], to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. 109 CLAIM SCOPE 1: that: (used to introduce a subordinate clause (“the pixels of the image depict the one or more actors”) as the subject or object of the principal verb (“represents”) or as the necessary complement to a statement made, or a clause expressing cause or reason, purpose or aim, result or consequence, etc.). 
(Dictionary.com)
110 CLAIM SCOPE 2: that: (completive-intensive) additionally, all things considered, or nevertheless (Dictionary.com)
111 CLAIM SCOPE 3: that: of such a nature, character, etc., wherein such is defined: of the sort specified or understood (Dictionary.com): see label-footnote above about a group of characters, such as a number or a word
112 CLAIM SCOPE 4: that: used as a function word after a subordinating conjunction (“and”) without modifying its meaning: If that thy bent of love be honorable …—William Shakespeare (Merriam-Webster.com): the claimed “that” does not limit claim 10 under the broadest reasonable interpretation of claim 10
113 ellipses (…) represent claim limitations already taught above
114 ellipses (…) represent claim limitations already taught above
115 “in an inactive state in which” is not taught and thus is a difference of claim 10
116 “stationary” is not taught and thus is a difference of claim 10
117 “output” is a verb (not a noun)
118 (italics) represent claim limitations already taught
119 ellipses (…) represent claim limitations already taught
120 CLAIM DIFFERENCE INTERPRETATION: ~output then inactive state stationary~
121 selection: an act or instance of selecting or the state of being selected; choice. (Dictionary.com)
122 of: (used to indicate apposition or identity), wherein apposition is defined: Grammar. a syntactic relation between expressions (“consistent assignments of the one or more class labels”), usually consecutive, that have the same function and the same relation to other elements (“the one or more”) in the sentence, the second expression identifying or supplementing the first. In Washington, our first president, the phrase our first president is in apposition with Washington. (Dictionary.com)
123 correspond: to be similar or analogous; be equivalent in function, position, amount, etc. (usually followed by to).
(Dictionary.com) 124 of: (used to indicate apposition or identity), wherein apposition is defined: Grammar. a syntactic relation between expressions (“consistent assignments of the one or more class labels”), usually consecutive, that have the same function and the same relation to other elements (“the one or more”) in the sentence, the second expression identifying or supplementing the first. In Washington, our first president, the phrase our first president is in apposition with Washington. (Dictionary.com) 125 correspond: to be similar or analogous; be equivalent in function, position, amount, etc. (usually followed byto ). (Dictionary.com) 126 assignments: an act of assigning; appointment (Dictionary.com) 127 same: agreeing in kind, amount, etc.; corresponding, wherein agreeing is defined: to be consistent; harmonize (usually followed bywith ). (Dictionary.com) 128 type: a number of things or persons sharing a particular characteristic, or set of characteristics, that causes them to be regarded as a group, more or less precisely defined or designated; class; category, wherein designated is defined: to nominate or select for a duty, office, purpose, etc.; appoint; assign. (Dictionary.com) 129 of: (used to indicate apposition or identity), wherein apposition is defined: Grammar. a syntactic relation between expressions (“consistent assignments of the one or more class labels”), usually consecutive, that have the same function and the same relation to other elements (“the one or more”) in the sentence, the second expression identifying or supplementing the first. In Washington, our first president, the phrase our first president is in apposition with Washington. (Dictionary.com) 130 output: the current, voltage, power, or signal produced by an electrical or electronic circuit or device, wherein signal is defined: anything that serves to indicate, warn, direct, command, or the like, such as a light, a gesture, an act, etc. 
(Dictionary.com) 131 -ing of (“determining”): a suffix of nouns formed from verbs (determine), expressing the action of the verb (determine) or its result, product, material, etc. (the art of building; a new building; cotton wadding ).(Dictionary.com) 132 based: to have a basis; be based (usually followed by on or upon), wherein based (VERB (USED WITH OBJECT)) is defined:…(multiple dictionary senses: i.e., CLAIM SCOPE) (Dictionary.com) 133 on: in connection, association, or cooperation with; as a part or element of. (Dictionary.com) 134 -ing of (“filtering”): a suffix of nouns formed from verbs (filter), expressing the action of the verb (filter) or its result, product, material, etc. (the art of building; a new building; cotton wadding ).(Dictionary.com) 135 corresponding: associated in a working or other relationship (Dictionary.com) 136 e.g., a “be”-word. 137 same: agreeing in kind, amount, etc.; corresponding, wherein -ing (of “corresponding”) is defined: a suffix of nouns formed from verbs (correspond), expressing the action of the verb (correspond) or its result, product, material, etc. (the art of building; a new building; cotton wadding )., wherein action is defined: the process or state of acting or of being active. (Dictionary.com) 138 state: the condition of matter with respect to structure, form, constitution, phase, or the like, wherein form is defined: configuration (Dictionary.com) 139 (italics) represent claim limitations already taught 140 Include: to contain, as a whole does parts or any part or element, wherein contain is defined: to be equal to (Dictionary.com) 141 -ing: a suffix of nouns formed from verbs, expressing the action of the verb or its result, product, material, etc. (the art of building; a new building; cotton wadding ), wherein action is defined: the process or state of acting or of being active, wherein acting is defined: to perform as an actor. 
(Dictionary.com) 142 state: the condition of matter with respect to structure, form, constitution, phase, or the like, wherein form is defined: configuration (Dictionary.com) 143 for: intended to belong to, or be used in connection with. (Dictionary.com) 144 base: to have a basis; be based (usually followed by on orupon ) (Dictionary.com) 145 base: base: to have a basis; be based (usually followed by on orupon ) (Dictionary.com) 146 -ing (of “filtering”): a suffix of nouns formed from verbs (filter), expressing the action of the verb (filter) or its result, product, material, etc. (the art of building; a new building; cotton wadding ). (Dictionary.com) 147 etc.: and others; and so forth; and so on (used to indicate that more of the same sort or class might have been mentioned, but for brevity have been omitted). (Dictionary.com) 148 automatic: having the capability of starting, operating, moving, etc., independently 149 controller: Also called control unit, processor. Computers. the key component of a device, as a terminal, printer, or external storage unit, that contains the circuitry necessary to interpret and execute instructions fed into the device, wherein processor is defined: Computers. a computer, wherein computer is defined: a programmable electronic device designed to accept data, perform prescribed mathematical and logical operations at high speed, and display the results of these operations, wherein device is defined: an invention or contrivance, especially a mechanical or electrical one, wherein invention is defined: U.S. Patent Law. a new, useful process, machine, improvement, etc., that did not exist previously and that is recognized as the product of some unique intuition or genius, as distinguished from ordinary mechanical skill or craftsmanship. 
(Dictionary.com) 150 (italics) represent claim limitations already taught 151 ellipses (…) represent claim limitations already taught 152 (italics) represent claim limitations already taught 153 state: the condition of a person or thing, as with respect to circumstances or attributes, wherein condition is defined: a particular mode of [being] of a person or thing; existing state; situation with respect to circumstances, wherein mode is defined: Grammar. mood, wherein mood is defined: Grammar. A) a set of categories for which the verb [being] is inflected in many languages, and that is typically used to indicate the syntactic relation of the clause in which the verb occurs to other clauses in the sentence, or the attitude of the speaker toward what they are saying, such as certainty or uncertainty, wish or command, emphasis or hesitancy. B) a set of syntactic devices in some languages that is similar to this set in function or meaning, involving the use of auxiliary words, such as can, may, might. C) any of the categories of these sets. 154 (Italics) represent limitations already taught above 155 (Italics) represent limitations (“state”) already taught above 156 states: a “be”-word 157 reflection: the act of reflecting, as in casting back a light or heat, mirroring, or giving back or showing an image; the state of being reflected in this way, wherein act is defined: the process of doing, wherein doing is defined: to act or conduct oneself, wherein act is defined: to perform as an actor. (Dictionary.com) 158 reflection: the act of reflecting, as in casting back a light or heat, mirroring, or giving back or showing an image; the state of being reflected in this way, wherein act is defined: the process of doing, wherein doing is defined: to act or conduct oneself, wherein act is defined: to perform as an actor. (Dictionary.com) 159 This phrase-- using the one or more segmentation mask[[s]] confidences—is non-limiting under the broadest reasonable interpretation. 
160 of: (used to indicate possession, connection, or association) (Dictionary.com) 161 -ing (of “oncoming”): a suffix of nouns formed from verbs (come), expressing the action of the verb (come) or its result, product, material, etc. (the art of building; a new building; cotton wadding ), wherein action is defined: the process or state of acting or of being active, wherein acting is defined: to perform as an actor. (Dictionary.com) 162 vehicle: any means in or by which someone travels or something is carried or conveyed, wherein travel is defined: to go from one place to another, as by car, train, plane, or ship, wherein go is defined: to act so as to come into a certain (“oncoming”) state or condition, wherein act is defined: to perform as an actor, wherein as is defined: in the role, function, or status of (Dictionary.com) 163 area: region; district; locality (Dictionary.com) 164 compute: to determine by using a computer or calculator. (Dictionary.com) 165 shape: the outward form of an object defined by outline, wherein form is defined: the shape or configuration of something as distinct from its colour, texture, etc (Dictionary.com) 166 based: to have a basis; be based (usually followed by on orupon ). (Dictionary.com) 167 on: in connection, association, or cooperation with; as a part or element of (Dictionary.com) 168 (italics) represent claim limitations already taught above 169 ellipses (…) represent claim limitations already taught above 170 (italics) represent claim limitations already taught above 171 on: by the agency or means of (Dictionary.com) 172 believed: to have confidence or faith in the truth of (a positive assertion, story, etc.); give credence to. 
173 (italics) represent claim limitations already taught above 174 on: by the agency or means of (Dictionary.com) 175 label: a word or phrase indicating that what follows belongs in a particular category or classification, wherein classification is defined: one of the groups or classes into which things may be or have been classified. classify. (Dictionary.com) 176 -ing (of “oncoming”): a suffix of nouns formed from verbs (come), expressing the action of the verb (come) or its result, product, material, etc. (the art of building; a new building; cotton wadding ), wherein action is defined: the process or state of acting or of being active, wherein acting is defined: to perform as an actor. (Dictionary.com) 177 automatic: having the capability of starting, operating, moving, etc., independently 178 control: a device for regulating and guiding a machine, as a motor or airplane., wherein device is defined: an invention or contrivance, especially a mechanical or electrical one, wherein invention is defined: U.S. Patent Law. a new, useful process, machine, improvement, etc., that did not exist previously and that is recognized as the product of some unique intuition or genius, as distinguished from ordinary mechanical skill or craftsmanship.(Dictionary.com) 179 and: then (Dictionary.com) 180 and: also, at the same time. (Dictionary.com) 181 and: as a consequence (Dictionary.com) 182 and: afterwards, wherein afterward is defined: at a later or subsequent time; subsequently, wherein subsequently is defined: in a following or succeeding part of something (Dictionary.com) 183 on: by the agency or means of (Dictionary.com) 184 label: a word or phrase indicating that what follows belongs in a particular category or classification, wherein classification is defined: one of the groups or classes into which things may be or have been classified. classify. 
(Dictionary.com) 185 “represents” is directed to the claimed “pixels” 186 “that” does not limit claim 10 under the broadest reasonable interpretation of claim 10. 187 -ing (of “oncoming”): a suffix of nouns formed from verbs (come), expressing the action of the verb (come) or its result, product, material, etc. (the art of building; a new building; cotton wadding ), wherein action is defined: the process or state of acting or of being active, wherein acting is defined: to perform as an actor. (Dictionary.com) 188 -ing (of “oncoming”): a suffix of nouns formed from verbs (come), expressing the action of the verb (come) or its result, product, material, etc. (the art of building; a new building; cotton wadding ), wherein action is defined: the process or state of acting or of being active, wherein acting is defined: to perform as an actor. (Dictionary.com) 189 “represents” is directed to the claimed “pixels” 190 “represents” is directed to the claimed “pixels” 191 and: (used to connect grammatically coordinate words, phrases, or clauses) along or together with; as well as; in addition to; besides; also; moreover, wherein in is a preposition, wherein preposition is defined: any member of a class of words found in many languages that are used before nouns (“addition”), pronouns, or other substantives to form phrases (“in addition to”) functioning as modifiers of verbs (“represents”), nouns, or adjectives, and that typically express a spatial, temporal, or other relationship, as in, on, by, to, since, wherein addition is defined: something added , (Dictionary.com) 192 “output” is a verb (not a noun) 193 (italics) represent claim limitations already taught 194 ellipses (…) represent claim limitations already taught 195 CLAIM DIFFERENCE INTERPRETATION: ~output then inactive state stationary~ 196 -ly (of “temporally”): an adjective suffix meaning “-like”: saintly; cowardly. (Dictionary.com) 197 -ly (of “temporally”): an adjective suffix meaning “-like”: saintly; cowardly. 
(Dictionary.com) 198 become: to come into being, wherein come is defined: to approach or arrive in time, in succession, etc. (Dictionary.com) 199 after: later in time than; in succession to; at the close of (Dictionary.com) 200 selection: an act or instance of selecting or the state of being selected; choice. (Dictionary.com) 201 selection: an act or instance of selecting or the state of being selected; choice. (Dictionary.com) 202 control: to regulate or operate (a machine), wherein regulate is defined: to adjust (an instrument or appliance) so that it operates correctly (Dictionary.com) 203 on: in connection, association, or cooperation with; as a part or element of. (Dictionary.com) 204 correspond: to be similar or analogous; be equivalent in function, position, amount, etc. (usually followed byto ). (Dictionary.com) 205 assignments: an act of assigning; appointment (Dictionary.com) 206 same: agreeing in kind, amount, etc.; corresponding, wherein agreeing is defined: to be consistent; harmonize (usually followed bywith ). (Dictionary.com) 207 type: a number of things or persons sharing a particular characteristic, or set of characteristics, that causes them to be regarded as a group, more or less precisely defined or designated; class; category, wherein designated is defined: to nominate or select for a duty, office, purpose, etc.; appoint; assign. (Dictionary.com) 208 of: (used to indicate apposition or identity), wherein apposition is defined: Grammar. a syntactic relation between expressions (“consistent assignments of the one or more class labels”), usually consecutive, that have the same function and the same relation to other elements (“the one or more”) in the sentence, the second expression identifying or supplementing the first. In Washington, our first president, the phrase our first president is in apposition with Washington. (Dictionary.com) 209 posterior: coming after in time; later; subsequent (sometimes followed byto ). 
(Dictionary.com) 210 posterior: coming after in time; later; subsequent (sometimes followed byto ). (Dictionary.com)

Prosecution Timeline

Feb 27, 2023
Application Filed
Apr 23, 2025
Non-Final Rejection — §101, §102, §103
Jul 28, 2025
Applicant Interview (Telephonic)
Jul 28, 2025
Response Filed
Jul 28, 2025
Examiner Interview Summary
Aug 22, 2025
Examiner Interview (Telephonic)
Sep 02, 2025
Final Rejection — §101, §102, §103
Nov 28, 2025
Interview Requested
Dec 05, 2025
Applicant Interview (Telephonic)
Dec 05, 2025
Examiner Interview Summary
Dec 05, 2025
Request for Continued Examination
Dec 21, 2025
Response after Non-Final Action
Jan 03, 2026
Non-Final Rejection — §101, §102, §103
Mar 31, 2026
Interview Requested
Apr 07, 2026
Response Filed
Apr 07, 2026
Applicant Interview (Telephonic)
Apr 07, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586184
METHODS AND APPARATUS FOR ANALYZING PATHOLOGY PATTERNS OF WHOLE-SLIDE IMAGES BASED ON GRAPH DEEP LEARNING
2y 5m to grant Granted Mar 24, 2026
Patent 12585733
SYSTEMS AND METHODS OF SENSOR DATA FUSION
2y 5m to grant Granted Mar 24, 2026
Patent 12536786
IMAGE LOCALIZATION USING A DIGITAL TWIN REPRESENTATION OF AN ENVIRONMENT
2y 5m to grant Granted Jan 27, 2026
Patent 12518519
PREDICTOR CREATION DEVICE AND PREDICTOR CREATION METHOD
2y 5m to grant Granted Jan 06, 2026
Patent 12518404
SYSTEMS AND METHODS FOR MACHINE LEARNING BASED PHYSIOLOGICAL MOTION MEASUREMENT
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
98%
With Interview (+28.6%)
3y 8m
Median Time to Grant
High
PTA Risk
Based on 557 resolved cases by this examiner. Grant probability derived from career allow rate.
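The headline figures above are simple ratios over the examiner's disposal history. A minimal sketch of the arithmetic, assuming the dashboard computes interview lift as the difference in allow rates between resolved cases with and without an interview (the with/without split below is hypothetical; only the 385-granted / 557-resolved totals appear on this page):

```python
# Allow rate and interview lift from career disposal counts.
# Only `granted` and `resolved` come from the page; the interview
# split is illustrative, not this examiner's actual data.

granted = 385    # career grants (from the page)
resolved = 557   # career resolved cases (from the page)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 69.1%, shown as 69%

# Hypothetical split of the 557 resolved cases by interview status.
with_iv_granted, with_iv_resolved = 180, 200
without_iv_granted, without_iv_resolved = 205, 357

# Lift = allow rate with an interview minus allow rate without one.
lift = with_iv_granted / with_iv_resolved - without_iv_granted / without_iv_resolved
print(f"Interview lift: {lift:+.1%}")
```

The same difference-of-rates formula reproduces the page's +28.6% lift when fed the examiner's real with/without-interview counts.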
