Prosecution Insights
Last updated: April 19, 2026
Application No. 17/985,150

CONTROL DEVICE FOR PREDICTING A DATA POINT FROM A PREDICTOR AND A METHOD THEREOF

Final Rejection (§101, §102, §103)

Filed: Nov 11, 2022
Examiner: GALVIN-SIEBENALER, PAUL MICHAEL
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Signify Holding B.V.
OA Round: 2 (Final)

Grant Probability: 25% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 3m
Grant Probability With Interview: 0%

Examiner Intelligence

Grants only 25% of cases.

Career Allow Rate: 25% (1 granted / 4 resolved; -30.0% vs TC avg)
Interview Lift: -25.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications (career history): 43 across all art units, 39 currently pending

Statute-Specific Performance

§101: 29.8% (-10.2% vs TC avg)
§103: 36.8% (-3.2% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)

Tech Center average is an estimate • Based on career data from 4 resolved cases
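The "vs TC avg" deltas above are plain percentage-point differences between the examiner's per-statute allowance rate and the Tech Center average. A minimal sketch of that arithmetic (the TC averages here are back-derived from the listed deltas, not independently sourced):

```python
# Examiner allowance rate per statute (from the table above), in percent.
examiner_rate = {"101": 29.8, "103": 36.8, "102": 19.0, "112": 14.5}

# Tech Center average estimate, back-derived from the deltas shown above
# (every listed delta is consistent with a 40.0% TC average -- an assumption).
tc_average = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

def lift_vs_tc(statute: str) -> float:
    """Percentage-point delta: examiner rate minus TC average."""
    return round(examiner_rate[statute] - tc_average[statute], 1)
```

For example, `lift_vs_tc("101")` reproduces the -10.2 shown for §101.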

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the amendment filed on Dec. 11th, 2025. The amendments are linked to the original application filed on Nov. 11th, 2022. Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. EP21209737.2, filed on Nov. 23rd, 2021.

Response to Arguments

The Examiner thanks the applicant for the remarks, edits and arguments.

Regarding Specification Objections – Title

Applicant Remarks: The applicant has amended the title to be more descriptive. The title has been changed to: "A device for predicting labels of lighting pole(s) in an image from predictor and a method thereof". For this reason, the applicant requests the withdrawal of the specification objection.

Examiner Response: The examiner has considered the title amendment and believes the new title is descriptive and clearly indicative of the claimed invention. Therefore, the examiner has withdrawn the specification objection to the title.

Regarding Drawing Objection under 37 CFR 1.83(b)

Applicant Remarks: The applicant argues that the previously submitted drawings disclose a new and novel system that seeks to improve the overall usability of trained models to improve accuracy. The applicant recites the first claims as an example of this improvement. The applicant believes that the previously submitted drawings are complete and has requested that the objection under 37 CFR 1.83(b) be withdrawn.

Examiner Response: The examiner has considered the remarks, the specification and the drawings previously submitted. In light of the remarks and the amended claims, the examiner does believe the drawings comply with 37 CFR 1.83(b) and is withdrawing the drawing objection.

Regarding Claim Rejections – 35 U.S.C. 101

Applicant Remarks: The applicant has amended the claims to no longer recite abstract ideas without significantly more. The amendments made further disclose a system that relates to a physical environment to detect physical objects. Therefore, the applicant believes the amendments no longer recite abstract ideas without significantly more and requests that the rejection under 35 U.S.C. 101 be withdrawn.

Examiner Response: The examiner does recognize that the claimed invention is executed in real physical environments outside of the human mind. However, MPEP 2106.04(a)(2)(III)(C) states, "Claims can recite a mental process even if they are claimed as being performed on a computer." The process disclosed in the claims is claimed to execute on a control device. The specification provided by the applicant states, "The control device 400 may be implemented in a unit separate from the lighting poles 222a-d, 212a-d, such as wall panel, desktop computer terminal, or even a portable terminal such as a laptop, tablet or smartphone." (Specification, pp. 14, ln. 21-22). This would lead a person having ordinary skill in the art to recognize that this system is executed on a generic computing device such as a personal computer or laptop. Further, MPEP 2106.04(a)(2)(III)(C)(3) states, "In evaluating whether a claim that requires a computer recites a mental process, examiners should carefully consider the broadest reasonable interpretation of the claim in light of the specification. For instance, examiners should review the specification to determine if the claimed invention is described as a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept. In these situations, the claim is considered to recite a mental process."

Using the MPEP as guidance, the examiner next evaluated the concepts of the claims and specification for abstract ideas. After reviewing these, the examiner still believes claim 1, in particular, discloses abstract ideas. For example: "determining an overlap of lighting poles between the image and the at least one labeled image based on a comparison of the first output and the second output and/or based on a comparison of the at least one labelled image and the image and determining a level of similarity based on side overlap;" Further, MPEP 2106.04(a)(2)(III)(C) states, "Accordingly, the 'mental processes' abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions." Taking these into consideration, together with the limitation of claim 1, it is reasonable to believe that a human is able to evaluate images and determine the similarity of the images using pen, paper and calculations. Therefore, per the MPEP, the disclosed subject matter does recite abstract ideas which are implemented on a generic computing device. As a result, and with consideration of the remarks and amendments, the rejection under 35 U.S.C. 101 is upheld; see the 101 rejection below.

Regarding Claim Rejections – 35 U.S.C. 102

Applicant Remarks: The applicant has made amendments to claim 1 and believes the art Li fails to anticipate each and every limitation of the claim. According to the claim dependency structure, since Li is unable to anticipate each and every limitation in claim 1, Li would fail to teach the limitations of claims 11-14. Therefore, the applicant requests that the rejection under 35 U.S.C. 102 be withdrawn.

Examiner Response: The examiner has considered the arguments and the amended claims. The art Li does disclose an object detection method using machine learning and image data; however, it does not explicitly state it is able to detect the amended "lighting poles" as claimed. After each amendment a complete search is completed on the amended claims. During this search, no single art was found which could anticipate the current amended claims in accordance with 35 U.S.C. 102. As a result, the examiner is withdrawing the rejection under 35 U.S.C. 102. It is noted that Li is still able, in combination with other arts, to teach limitations of the amended claims.

Regarding Claim Rejections – 35 U.S.C. 103

Applicant Remarks: The applicant has amended claim 1 and, as stated above, the applicant believes that the art Li fails to anticipate each and every limitation of this claim. Because of this, the art proposed for the remaining dependent claims fails to teach the missing elements of Li in claim 1. Therefore, by virtue of dependency, the applicant believes the remaining claims should be allowed because the art proposed fails to teach each and every limitation of the independent and dependent claims.

Examiner Response: After consideration of the remarks and the amended claims it was found, as stated above, that the art Li does not teach the current amended claims. The examiner, as stated above, performed a complete search of the amended claims and has found new material. The examiner believes the new material, in combination with the previously proposed art, teaches the missing elements of Li and the amended claims in accordance with 35 U.S.C. 103. Therefore, the rejection under 35 U.S.C. 103 is upheld; see the 103 rejections below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5, 7, and 10-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
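For context on the method the eligibility analysis below dissects: claim 1 combines a trained machine's prediction with overlap-based similarity information under adjustable weights, and claim 3 defines the level of similarity as p = k/N. A minimal sketch under assumed names (none of these identifiers come from the application, and the 0.3/0.7 weight split is illustrative only):

```python
def similarity_level(area_a_samples: set, area_c_samples: set) -> float:
    """Claim 3's level of similarity: p = k / N, where k is the number of
    samples common to areas A and C, and N is the number of samples in C."""
    k = len(area_a_samples & area_c_samples)
    n = len(area_c_samples)
    return k / n if n else 0.0

def combined_prediction(machine_pred: float, similarity_info: float,
                        p: float, first_threshold: float = 0.5) -> float:
    """Claims 1, 4 and 5: if p exceeds the threshold, the first weight is
    adjusted to be smaller than the second (and vice versa), then the two
    sources are combined. The concrete weight values are arbitrary choices."""
    if p > first_threshold:
        w1, w2 = 0.3, 0.7   # claim 4: favor the similarity information
    else:
        w1, w2 = 0.7, 0.3   # claim 5: favor the trained machine
    return w1 * machine_pred + w2 * similarity_info
```

With four samples in area C, two of which also fall in area A, `similarity_level` returns p = 2/4 = 0.5, matching the claim 3 formula.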
The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 ("2019 PEG").

Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because it recites, "A computer product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method of claim 1" (emphasis added). The submitted specification recites, "Storage media suitable for storing computer program instructions include all forms of nonvolatile memory, including but not limited to EPROM, EEPROM and flash memory devices, magnetic disks such as the internal and external hard disk drives, removable disks and CD-ROM disks. The computer program product may be distributed on such a storage medium, or may be offered for download through HTTP, FTP, email or through a server connected to a network such as the Internet." (pp. 15, ln. 26-31) (emphasis added), which states this system could be "software only" and therefore would not fall under the four categories of patent eligible subject matter per MPEP 2106.06(1): "Even when a product has a physical or tangible form, it may not fall within a statutory category. For instance, a transitory signal, while physical and real, does not possess concrete structure that would qualify as a device or part under the definition of a machine, is not a tangible article or commodity under the definition of a manufacture (even though it is man-made and physical in that it exists in the real world and has tangible causes and effects), and is not composed of matter such that it would qualify as a composition of matter."

Claim 1

Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
Claim 1 recites "A method of predicting labels of one or more lighting poles in an image from a predictor," therefore it is directed to the statutory category of a process.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?

The claim recites, inter alia:

"determining an overlap of lighting poles between the image and the at least one labeled image based on a comparison of the first output and the second output and/or based on a comparison of the at least one labelled image and the image and determining a level of similarity based on side overlap;" Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate images to determine a level of similarity. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(C).

"determining a number of one or more lighting poles in the overlapped region;" Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate an image for objects. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(C).

"determining a similarity information between the at least one labelled image and the image, wherein the similarity information comprises labels of the one or more lighting poles in the overlapped region;" Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate images to determine a level of similarity. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(C).

"determining an adjustment to the first and/or the second weight based on the determined number of the one or more lighting poles; and" Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate an image and make judgements on how to update values. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(C).

"determining a prediction of labels of one or more lighting poles in the image, wherein the prediction is based on combining the prediction from the trained machine with the adjusted first weight and the similarity information with the adjusted second weight." Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate an image and locate objects. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(C).

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?

The claim recites the additional element "wherein the predictor comprises a trained machine which has been trained based on a training dataset comprising at least one labelled image;" which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

The additional element "wherein the method comprises the steps executed by a control device: assigning a first function to the at least one labelled image to obtain a first output and a second function to the image to obtain a second output;" amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

The additional element "assigning a first weight to a prediction from the trained machine for the data point, and a second weight to the similarity information;" amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?

Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional elements, quoted above in the Prong 2 analysis, each amount to generic computer components used as a tool to perform an existing process, and to no more than a recitation of the words "apply it" (or an equivalent) or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.

Claim 2

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?

The claim recites the additional element "wherein the sum of the first and the second weight is less than or equal to a predetermined maximum value of the first or the second weight." which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?

Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, quoted above in the Prong 2 analysis, amounts to generic computer components used as a tool to perform an existing process and to no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.

Claim 3

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?

The claim recites, inter alia:

"wherein the at least one labelled image comprises an area A;" Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mathematical concept of utilizing a mathematical formula to perform calculations. This limitation discloses the initialization of a variable for a mathematical formula. This claim discloses a math operation and therefore is ineligible.

"and the image comprises N samples from an area C;" Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mathematical concept of utilizing a mathematical formula to perform calculations. This limitation discloses the initialization of a variable for a mathematical formula. This claim discloses a math operation and therefore is ineligible.

"wherein k be the number of samples that are common to A and C, and wherein the level of similarity (p) comprises: p=k/N." Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mathematical concept of utilizing a mathematical formula to perform calculations. This claim discloses a math operation and therefore is ineligible.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? This claim does not recite any additional limitations which integrate the abstract idea into a practical application.

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea, and thus the claim is subject-matter ineligible.

Claim 4

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?

The claim recites the additional element "if the level of similarity exceeds a first threshold, determining an adjustment for the first weight to be smaller than the second weight." which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?

Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, quoted above in the Prong 2 analysis, amounts to generic computer components used as a tool to perform an existing process and to no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.

Claim 5

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?

The claim recites the additional element "if the level of similarity does not exceed the first threshold, determining an adjustment for the first weight to be larger than the second weight." which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?

Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, quoted above in the Prong 2 analysis, amounts to generic computer components used as a tool to perform an existing process and to no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.

Claim 6 (Cancelled)

Claim 7

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element "wherein the first and the second function comprise a hash function based on latitude and longitude information of the one or more satellite images." which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?

Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, quoted above in the Prong 2 analysis, amounts to generic computer components used as a tool to perform an existing process and to no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.

Claims 8-9 (Cancelled)

Claim 10

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites the abstract ideas of the preceding claims from which it depends.

Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?

The claim recites the additional element "if the level of similarity exceeds the first threshold and if a confidence of prediction from the trained machine does not exceed a confidence threshold, retraining the trained machine." which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?

Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, quoted above in the Prong 2 analysis, amounts to generic computer components used as a tool to perform an existing process and to no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.

Claim 11

Step 1 – Is the claim to a process, machine, manufacture or composition of matter? A process, as above.

Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites, inter alia: “determining a level of test similarity based on a comparison of the second function and the third function;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate data and determine a level of similarity. The limitation is merely applying an abstract idea on generic computer system. See MPEP 2106.04(a)(2)(III)(c). “determining a test similarity information between the at least one labelled test image and the image;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate an image and determine a level of similarity. The limitation is merely applying an abstract idea on generic computer system. See MPEP 2106.04(a)(2)(III)(c). “determining an adjustment to the second and/or the third weight as a function of the level of test similarity; and” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to take evaluated images and adjust a model accordingly. The limitation is merely applying an abstract idea on generic computer system. See MPEP 2106.04(a)(2)(III)(c). 
“determining a prediction for the image, wherein the prediction is based on combining the prediction from the trained machine with the adjusted first weight and the test similarity information with the adjusted third weight.” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate an image and detect objects in that image. The limitation is merely applying an abstract idea on generic computer system. See MPEP 2106.04(a)(2)(III)(c). Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites the additional elements, “wherein the trained machine has been further trained based on a test dataset; and” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). “wherein the test dataset comprises at least on labelled test image;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). “wherein the method comprises: assigning a third function to the at least one labelled test image;” amounts to generic computer components used as a tool to perform an existing process. 
Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). The element “assigning a third weight to the test similarity information;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The claim recites additional elements. The element “wherein the trained machine has been further trained based on a test dataset; and” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). The element “wherein the test dataset comprises at least one labelled test image;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). The element “wherein the method comprises: assigning a third function to the at least one labelled test image;” amounts to generic computer components used as a tool to perform an existing process.
Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). The element “assigning a third weight to the test similarity information;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible. Claim 12 Step 1 – Is the claim to a process, machine, manufacture or composition of matter? Claim 12 recites “A control device for predicting labels of one or more lighting poles in an image from a predictor,” and is therefore directed to the statutory category of a machine. Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? The claim recites, inter alia: “wherein the control device comprises a processor arranged for executing at least some of the steps of the method according to claim 1.” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to use a computer to execute a method. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c). Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element “wherein the predictor comprises a trained machine which has been trained based on a training dataset comprising at least one labelled image;” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element “wherein the predictor comprises a trained machine which has been trained based on a training dataset comprising at least one labelled image;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible. Claim 13 Step 1 – Is the claim to a process, machine, manufacture or composition of matter? Claim 13 recites “A system for predicting labels of one or more lighting poles in an image from a predictor,” and is therefore directed to the statutory category of a machine. Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites, inter alia: “a comparator for determining a level of similarity and similarity information between the at least one labelled image and the image;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate images and determine a level of similarity between the images. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c). Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? The claim recites additional elements. The element “wherein the predictor comprises a trained machine which has been trained based on a training dataset comprising at least one labelled image;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). The element “wherein the system comprises: the training dataset and/or a test dataset;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). The element “a control device according to claim 12.” amounts to generic computer components used as a tool to perform an existing process.
Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)). Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and thus the claim is subject-matter ineligible. Claim 14 Step 1 – Is the claim to a process, machine, manufacture or composition of matter? Claim 14 fails to fall within the four statutory categories of patent-eligible subject matter as stated above. Therefore, the claim is rejected as not being patent eligible. See MPEP 2106.03. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1, 11, 12, 13, and 14 are rejected under 35 U.S.C.
103 as being unpatentable over Gomes et al., (Gomes et al., “Mapping Utility Poles in Aerial Orthoimages Using ATSS Deep Learning Method”, 2020, hereinafter “Gomes”) in view of Li et al., (Li et al., “Highly Efficient Forward and Backward Propagation of Convolutional Neural Networks for Pixelwise Classification”, 2014, hereinafter “Li”). Regarding claim 1, Gomes discloses, “A method of predicting labels of one or more lighting poles in an image from a predictor,” (Introduction, pp. 3; “Developing automatic methods to detect poles is crucial to maintain the sustainability of the growing power distribution network, face the increase of extreme weather events and maintain the quality of service. The objective of this study was to evaluate the performance of ATSS to detect and map utility poles in aerial images.” This article discloses a system that evaluates aerial images and detects utility poles. This model is able to evaluate many different types of objects in aerial images, including street lights. Using the broadest reasonable interpretation, a utility pole is a pole which is connected to a power grid to carry electricity, and a lighting pole is also a pole which carries electricity and uses that power to emit light. A utility pole would therefore be comparable to a lighting pole. This method uses ground truth bounding boxes to train the model as well.) “wherein the predictor comprises a trained machine which has been trained based on a training dataset comprising at least one labelled image;” (Experimental Setup, pp. 6; “For our experimental setup, we divided the RGB orthoimages into training, validation and test sets. We used 634 (60%), 212 (20%) and 211 (20%) orthoimages from different areas for training, validation and testing, respectively.”) And (Study Area, pp. 4; “In total, 1057 orthoimages with dimensions 5619 x 5946 pixels were used in the experiments.
These images were split in 99,473 patches, where 111,583 utility poles were identified as ground-truth.” This article discloses a machine learning model that is trained using labeled training data.) “Wherein the method comprises the steps executed by a control device: assigning a first function to the at least one labelled image to obtain a first output and a second function to the image to obtain a second output;” (Pole Detection Approach, pp. 6; “The ATSS also differs from other state-of-the-art methods by using an individual IoU threshold for each ground-truth during the training, obtained by adding the average and standard deviation of the anchor box IoU’s proposals in relation to the ground-truth. Bounding boxes with IoU greater than the threshold calculated for the respective ground-truth are considered positive samples.” This model will evaluate an image to produce an output or prediction. During the training the ground truth images are input into the model. The ground truth is used by the model to compare with the prediction output using the Intersection over Union method.) “determining an overlap of lighting poles between the image and the at least one labelled image based on a comparison of the first output and the second output and/or based on a comparison of the at least one labelled image and the image and determining a level of similarity based on said overlap;” (Method Assessment, pp. 7; “To obtain the precision and recall metrics, the Intersection Over Union (IoU) was calculated as the overlapping area between the predicted and the ground-truth bounding boxes divided by the area of union between them.” This model will perform a specified IoU method to determine how similar the prediction was to the ground truth. The model will determine the overlap in the bounding boxes between the ground truth and the predicted output.) “determining a number of one or more lighting poles in the overlapped region;” (Method Assessment, pp.
7; “In the experiments, we used common IoU values of 0.5 and 0.75. If the prediction obtains IoU greater than the threshold, the prediction is considered as true positive (TP); otherwise, it is a false positive (FP). A false negative (FN) occurs when ground-truth boxes are not detected by any prediction.” This will evaluate the predicted image and determine if the utility pole was correctly labeled in the image. This will evaluate the utility pole using a specified IoU model to compare the ground truth bounding box to the predicted bounding box.) “determining a similarity information between the at least one labelled image and the image, wherein the similarity information comprises labels of the one or more lighting poles in the overlapped region;” (Method Assessment, pp. 7; “If the prediction obtains IoU greater than the threshold, the prediction is considered as true positive (TP); otherwise, it is a false positive (FP). A false negative (FN) occurs when ground-truth boxes are not detected by any prediction. Using the metrics described above, precision and recall are estimated using Equations (1) and (2), respectively. The area under the precision–recall curve represents the average precision (AP).” This model uses a specified IoU method to compare the predicted bounding box location and the ground truth bounding box. This model uses a threshold amount to determine the accuracy and performance of the output result.) “determining an adjustment to the first and/or the second weight based on the determined number of the one or more lighting poles; and” (Experimental Setup, pp. 6; “We applied a Stochastic Gradient Descent optimizer with a momentum equal to 0.9. For this, we used the validation set to adjust the learning rate and the number of epochs to reduce the risk of overfitting.
We empirically assessed learning rates (0.0001, 0.001 and 0.01) and found that the convergence of the loss function is better for 0.001 and stabilized over 24 epochs.” The model is initially trained using forward and back propagation, similar to the process disclosed in Li. The model uses the SGD optimizer to determine a gradient to backpropagate, and it will adjust the weights accordingly.) “determining a prediction of labels of one or more lighting poles in the image, wherein the prediction is based on combining the prediction from the trained machine with the adjusted first weight and the similarity information with the adjusted second weight.” (Qualitative Analysis, pp. 9; “Regarding the different sizes and types of poles, Figure 7 shows the detections and the corresponding terrestrial image obtained from Google Street View. ATSS obtained a good performance, detecting utility poles of the most varied types and sizes. Faster R-CNN and RetinaNet methods obtained good assertiveness by processing images of smaller scale poles, such as lighting poles or low voltage electric poles, but failed to locate larger poles, such as the models used for high voltage transmission of electrical energy, shown in the last row of Figure 7.” The models in this article are trained using conventional methods of updating weights through back propagation. As stated above, the ATSS model uses a specified IoU module during training. After training, the models were used in experiments to locate utility poles in aerial images, and their results were compared.) Gomes fails to explicitly disclose the remaining limitations of this claim. However, Li discloses, “assigning a first weight to a prediction from the trained machine for the data point, and a second weight to the similarity information;” (Backward propagation of convolution layers, pp.
7; “For each non-zero entry W_{k,d}^{i} in the convolution kernel W_{k,d}, its gradient is calculated as the sum of all the weighted errors that are calculated with the entry. The weights for errors are determined by the input values from the input feature map x_k for convolution: ∇W_{k,d}^{i} = Σ_{u,v} δ_{k+1}^{u,v} x_k^{u,v,i}, where x_k^{u,v,i} are the input values in x_k and are multiplied elementwise by W_{k,d}^{i} during convolution to compute the entry at (u, v) of the output feature map x_{k+1}.” As stated in Gomes, “For the training process, we initialized the backbone of all object detection methods with pre-trained weights from ImageNet” (Gomes, Experimental Setup, pp. 6). The models use a conventional training method, based on the SGD optimizer, to initially train the models. Li discloses a similar training process and uses backward propagation. During backpropagation the weights are adjusted throughout the training process. Li’s model will determine an error metric; to do this it will compare the initial feedforward weight with the adjusted weight.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Gomes and Li. Gomes teaches a model which is able to evaluate aerial images for objects such as utility poles using conventional machine learning training methods. Li teaches a machine learning model that is able to evaluate images using conventional machine learning training methods and is able to speed up the training process. One of ordinary skill would have had motivation to combine a machine learning model that detects objects in images with another machine learning model which is able to improve object-detecting machine learning models: “As shown by the layer wise and overall timing results, our proposed method achieves a speedup of over 1500 times compared with the traditional patch-by-patch approach.
Compared with the fast-scanning method [7], our proposed algorithm has a speedup over 10 times at the pool1 layer and a speedup over 2 times at the pool2 layer.” (Li, Running times of practical CNN models, pp. 7). Regarding claim 11, Li discloses, “wherein the trained machine has been further trained based on a test dataset; and” (Convolutional neural network, pp. 4; "The parameters of a K-layer CNN can be optimized by gradient descent. For pixelwise classification tasks, patches centered at pixels of training images are cropped as training samples. For each patch in an input image I, the CNN outputs a prediction or a score. When feeding the image I as the input of the CNN by setting x1 = I, forward propagation cannot generate a prediction at every pixel location due to the greater-than-1 strides in convolution and pooling layers." Training of the model is required in this article. The method is trained with images in a particular orientation, size and location in the original image. The models used in the experiments in Gomes are trained using similar conventional methods.) “wherein the test dataset comprises at least one labelled test image;” (Convolutional neural network, pp. 4; "The parameters of a K-layer CNN can be optimized by gradient descent. For pixelwise classification tasks, patches centered at pixels of training images are cropped as training samples. For each patch in an input image I, the CNN outputs a prediction or a score. When feeding the image I as the input of the CNN by setting x1 = I, forward propagation cannot generate a prediction at every pixel location due to the greater-than-1 strides in convolution and pooling layers." This system does an initial pass of the image in order to train the model. This first pass will look at the training set and produce an output prediction in the forward pass.)
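For reference, the 60/20/20 division of the 1,057 orthoimages into training, validation and test sets that Gomes describes can be reproduced schematically. This is a sketch under stated assumptions: the shuffling, the seed, and the index arithmetic are illustrative, not Gomes's code, and integer truncation makes the exact per-set counts differ slightly from the paper's reported 634/212/211.

```python
# Illustrative sketch of a 60/20/20 train/validation/test split of a
# dataset, in the spirit of the split Gomes describes. The shuffle and
# index arithmetic are assumptions, not the reference's implementation.
import random

def split_dataset(items, train_frac=0.6, val_frac=0.2, seed=0):
    """Shuffle the items and cut them into train/val/test partitions."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train_frac)       # 60% for training
    n_val = int(n * val_frac)           # 20% for validation
    # Remainder (about 20%) is held out for testing.
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_dataset(range(1057))
print(len(train), len(val), len(test))
```

The held-out test set is what the claim's "test dataset comprising at least one labelled test image" corresponds to in this framing.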
“wherein the method comprises: assigning a third function to the at least one labelled test image;” (Algorithm 1: Efficient Forward Propagation of CNN, pp. 4; This method will use forward and back propagation to evaluate an image. During the forward propagation the system applies a function at each neuron of the network. At lines 7 and 10 the neurons perform the functions of the given layer of the network. This teaches the assigning of a function.) “determining a level of test similarity based on a comparison of the second function and the third function;” (Backward propagation of convolution layers, pp. 7; “Calculating the gradients of W_k can be converted into a convolution operation: ∇W_{k,d} = x_k * rot(δ_{k+1,d}), where δ_{k+1,d} denotes that the error map δ_{k+1} is inserted with all-zero rows and columns as the kernel W_{k,d} does at layer k. Similarly, the gradient for the bias b_k is calculated as the sum of all the entries in δ_{k+1}.” In this article the method uses a gradient to measure the error metric. The error metric determines how accurate the current weights are by comparing the output of the node with the correct output. This can also be performed with training data to represent third data and/or a third function.) “determining a test similarity information between the at least one labelled test image and the image;” (Backward propagation of convolution layers, pp. 7; “Calculating the gradients of W_k can be converted into a convolution operation: ∇W_{k,d} = x_k * rot(δ_{k+1,d}), where δ_{k+1,d} denotes that the error map δ_{k+1} is inserted with all-zero rows and columns as the kernel W_{k,d} does at layer k. Similarly, the gradient for the bias b_k is calculated as the sum of all the entries in δ_{k+1}.” In this article the method uses a gradient to measure the error metric. The error metric determines how accurate the current weights are by comparing the output of the node with the correct output.
This can also be performed with training data to represent third data and/or a third function.) “assigning a third weight to the test similarity information;” (Backward propagation of convolution layers, pp. 7; “For each non-zero entry W_{k,d}^{i} in the convolution kernel W_{k,d}, its gradient is calculated as the sum of all the weighted errors that are calculated with the entry. The weights for errors are determined by the input values from the input feature map x_k for convolution: ∇W_{k,d}^{i} = Σ_{u,v} δ_{k+1}^{u,v} x_k^{u,v,i}, where x_k^{u,v,i} are the input values in x_k and are multiplied elementwise by W_{k,d}^{i} during convolution to compute the entry at (u, v) of the output feature map x_{k+1}.” Since this method uses backward propagation, the weights are adjusted during the training process. The method will determine an error metric; to do this it will compare the initial feedforward weight with the adjusted weight. This can also be performed with training data to represent third data and/or a third function.) “determining an adjustment to the second and/or the third weight as a function of the level of test similarity; and” (Backward propagation of convolution layers, pp. 7; “The weights for errors are determined by the input values from the input feature map x_k for convolution: ∇W_{k,d}^{i} = Σ_{u,v} δ_{k+1}^{u,v} x_k^{u,v,i}, where x_k^{u,v,i} are the input values in x_k and are multiplied elementwise by W_{k,d}^{i} during convolution to compute the entry at (u, v) of the output feature map x_{k+1}.” The weights are adjusted during the back propagation process. The adjustments are based on the error function disclosed. This can also be performed with training data to represent third data and/or a third function.) “determining a prediction for the image, wherein the prediction is based on combining the prediction from the trained machine with the adjusted first weight and the test similarity information with the adjusted third weight.” (Our approach, pp.
3; “In our approach, the whole image is taken as the input of CNN which predicts the whole label map with only one pass of the modified forward propagation. At each training iteration, existing approaches predict the error of each sampled patch and use it to calculate gradients with backward propagation. If a mini-batch contains K training patches, both forward propagation and backward propagation are repeated for K times and the gradients estimated from the K patches are averaged to update CNN parameters.” This method performs a forward propagation to initialize a network and will repeatedly train using back propagation at the designated layers. The end result will be determined from the forward and backward propagation. During back propagation the weights are adjusted to give more refined results for the input image. This can also be performed with training data to represent third data and/or a third function.) Regarding claim 12, Gomes discloses, “A control device for predicting labels of one or more lighting poles in an image from a predictor, wherein the predictor comprises a trained machine which has been trained based on a training dataset comprising at least one labelled image; wherein the control device comprises a processor arranged for executing at least some of the steps of the method according to claim 1.” (Experimental Setup, pp. 7-8; “The proposed application was developed using MMDetection framework [26] on the Ubuntu 18.04 operating system. Training and testing procedures were conducted in a computer equipped with an Intel Xeon(E) E3-1270 @3.80 GHz CPU, 64 GB of RAM Memory along with Titan V graphics card, a GPU produced by NVIDIA containing 5120 CUDA (Compute Unified Device Architecture) cores and 12 GB of graphics memory.” This article discloses that the experiments and the model were executed on a computing device. This device will contain the model and execute its functions.
As stated above, this model on this system will be trained using labeled ground truth data.) Regarding claim 13, Gomes discloses, “A system for predicting labels of one or more lighting poles in an image from a predictor, wherein the predictor comprises a trained machine which has been trained based on a training dataset comprising at least one labelled image; wherein the system comprises:” (Experimental Setup, pp. 7-8; “The proposed application was developed using MMDetection framework [26] on the Ubuntu 18.04 operating system. Training and testing procedures were conducted in a computer equipped with an Intel Xeon(E) E3-1270 @3.80 GHz CPU, 64 GB of RAM Memory along with Titan V graphics card, a GPU produced by NVIDIA containing 5120 CUDA (Compute Unified Device Architecture) cores and 12 GB of graphics memory.” This article discloses that the experiments and the model were executed on a computing device. This device will contain the model instructions and execute the functions. As stated above, this model on this system will be trained using labeled data.) “the training dataset and/or a test dataset;” (Experimental Setup, pp. 6; “For our experimental setup, we divided the RGB orthoimages into training, validation and test sets.” This model uses training data to train the different models tested.) “a comparator for determining a level of similarity and similarity information between the at least one labelled image and the image;” (Pole Detection Approach, pp. 6; “The ATSS also differs from other state-of-the-art methods by using an individual IoU threshold for each ground-truth during the training, obtained by adding the average and standard deviation of the anchor box IoU’s proposals in relation to the ground-truth.” One of the models uses a conventional system for comparing images. This model will evaluate the difference between the ground truth and the predicted output and adjust the accuracy accordingly using a specified IoU method.)
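The IoU comparison relied on throughout this rejection (overlapping area between predicted and ground-truth boxes divided by the area of their union) can be sketched as follows. This is a minimal illustration; the (x1, y1, x2, y2) box format and the example coordinates are assumptions, not taken from Gomes.

```python
# Minimal sketch of Intersection over Union (IoU) between two
# axis-aligned bounding boxes given as (x1, y1, x2, y2) corner pairs.
# The box format and sample coordinates are illustrative assumptions.

def iou(box_a, box_b):
    """Overlap area divided by union area of two boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to zero if disjoint.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# Per the cited Method Assessment, a prediction whose IoU with a
# ground-truth box exceeds the 0.5 threshold counts as a true positive.
print(iou((0, 0, 10, 10), (2, 0, 12, 10)) >= 0.5)
```

Thresholding this ratio at 0.5 or 0.75 is what turns the continuous "level of similarity" into the TP/FP/FN decisions used for precision and recall.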
“a control device according to claim 12.” (Experimental Setup, pp. 7-8; “The proposed application was developed using MMDetection framework [26] on the Ubuntu 18.04 operating system. Training and testing procedures were conducted in a computer equipped with an Intel Xeon(E) E3-1270 @3.80 GHz CPU, 64 GB of RAM Memory along with Titan V graphics card, a GPU produced by NVIDIA containing 5120 CUDA (Compute Unified Device Architecture) cores and 12 GB of graphics memory.” This article discloses a computing system on which the experiments were executed.) Regarding claim 14, Gomes discloses, “A computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method of claim 1.” (Experimental Setup, pp. 7-8; “The proposed application was developed using MMDetection framework [26] on the Ubuntu 18.04 operating system. Training and testing procedures were conducted in a computer equipped with an Intel Xeon(E) E3-1270 @3.80 GHz CPU, 64 GB of RAM Memory along with Titan V graphics card, a GPU produced by NVIDIA containing 5120 CUDA (Compute Unified Device Architecture) cores and 12 GB of graphics memory.” This article discloses the computing system on which the experiments were conducted. This system contains processing systems coupled to memory, which contains the instructions of the method.) Claims 2, 4, 5, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Gomes and Li in view of Onwubolu (Onwubolu, "Manufacturing features recognition using backpropagation neural networks", 1999, hereinafter "Onwubolu"). Regarding claim 2, Onwubolu discloses, “wherein the sum of the first and the second weight is less than or equal to a predetermined maximum value of the first or the second weight.” (Backpropagation neural network algorithm, pp.
296; “In the generalized delta rule, w_{ij}(t) is the weight from hidden node j or from an input to node j at time t, y_j is either the output of node j or is an input, η is the learning-rate parameter, α (0 < α < 1) is the momentum constant to smooth out the weight change and to accelerate convergence of the network, and ε_j is an error term for node j given as: [see equation pp. 296 Col. 2] where k is over all nodes in the layers above node j and y_j is either the output of node j or is an input.” The method in this article discloses a feature recognition system that uses back propagation. This system will perform forward propagation similar to Li and will again perform back propagation on the initially trained data. This model teaches an error metric, denoted above as ε_j, which would serve as a threshold for the training adjustments.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Gomes, Li and Onwubolu. Gomes teaches a model which is able to evaluate aerial images for objects such as utility poles using conventional machine learning training methods. Li teaches a machine learning model that is able to evaluate images using conventional machine learning training methods and is able to speed up the training process. Onwubolu teaches a system that is also able to take an input image and define and label features of a 3D object using a neural network with backpropagation to further train the weights and biases.
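The generalized delta rule with momentum quoted from Onwubolu can be sketched numerically. This is an illustrative assumption, not Onwubolu's code: the update blends the current gradient step (η · ε_j · y_i) with the previous weight change scaled by the momentum constant α, and all numeric values below are hypothetical.

```python
# Hedged sketch of the generalized delta rule with momentum:
#   Delta_w(t) = eta * eps_j * y_i + alpha * Delta_w(t - 1)
# where eta is the learning rate, eps_j the node's error term, y_i the
# node input, and alpha (0 < alpha < 1) the momentum constant.
# The error terms and inputs below are hypothetical illustrations.

def delta_rule_step(prev_delta, error_term, input_value,
                    eta=0.1, alpha=0.9):
    """Return the momentum-smoothed weight change for one update."""
    return eta * error_term * input_value + alpha * prev_delta

# Three successive updates with shrinking error terms: momentum carries
# part of each earlier change forward, smoothing the trajectory.
delta = 0.0
for error, y in [(0.5, 1.0), (0.3, 1.0), (0.1, 1.0)]:
    delta = delta_rule_step(delta, error, y)
print(round(delta, 4))
```

The smoothing effect is visible in the final step: even with a small current error term, the accumulated momentum keeps the weight change comparatively large, which is the convergence-acceleration behavior the quoted passage attributes to α.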
One of ordinary skill would have had motivation to combine a machine learning model that detects objects in images with another machine learning model which is able to improve object-detecting machine learning models, and with a system, using similar neural network architecture and training, which is able to input an image, process it, and output a prediction of the features contained in the original image: “The results of the work reported in this paper reveal the potential of using the most popular neural network model in manufacturing application, backpropagation networks (BPN) for feature recognition from a B-rep solid model in order to automate process planning. The main advantages of the neural network approach over the rule-based systems include ability to recognize intermediate and complex features without feeding any previous knowledge into the system, high recognition speed, ease of computation, simplicity in implementation, and robustness because a neural network once developed can be used to solve a wide range of problems." (Onwubolu, Conclusions, pp. 298). Regarding claim 4, Onwubolu discloses, “wherein the method further comprises: if the level of similarity exceeds a first threshold, determining an adjustment for the first weight to be smaller than the second weight.” (Backpropagation neural network algorithm, pp. 296; “Starting at the output nodes and working back to the hidden layer, recursively adjust weights by computing the local gradients ε_s according to the generalized delta rule: [see equation 6] In the generalized delta rule, w_{ij}(t) is the weight from hidden node j or from an input to node j at time t, y_j is either the output of node j or is an input, η is the learning-rate parameter, α (0 < α < 1) is the momentum constant to smooth out the weight change and to accelerate convergence of the network, and ε_j is an error term for node j given as [See equation on page 296, Col.
2] where k is over all nodes in the layers above node j and y_j is either the output of node j or is an input.” This article discloses a method for forward and backward propagation. In this method the weights are evaluated and changed during the backpropagation process. The weights are changed based on the learning-rate parameter. This parameter can be used as an upper or lower threshold to adjust the weights. Also, this system discloses different weight adjustments based on the layer they originate from, which, under the broadest reasonable interpretation, can mean adjusting one weight by one value and another weight by a different value.) Regarding claim 5, Onwubolu discloses, “wherein the method further comprises: if the level of similarity does not exceed the first threshold, determining an adjustment for the first weight to be larger than the second weight.” (Backpropagation neural network algorithm, pp. 296; “Starting at the output nodes and working back to the hidden layers, recursively adjust the weights by computing the local gradients ε_j according to the generalized delta rule: [see equation 6] In the generalized delta rule, w_ij(t) is the weight from hidden node j or from an input to node j at time t, y_j is either the output of node j or is an input, η is the learning-rate parameter, α (0 < α < 1) is the momentum constant to smooth out the weight change and to accelerate convergence of the network, and ε_j is an error term for node j given as [See equation on page 296, Col. 2] where k is over all nodes in the layers above node j and y_j is either the output of node j or is an input.” This article discloses a method for forward and backward propagation. In this method the weights are evaluated and changed during the backpropagation process. The weights are changed based on the learning-rate parameter. This parameter can be used as an upper or lower threshold to adjust the weights. 
Also, this system discloses different weight adjustments based on the layer they originate from, which, under the broadest reasonable interpretation, can mean adjusting one weight by one value and another weight by a different value.) Regarding claim 10, Onwubolu discloses, “wherein the method further comprises: if the level of similarity exceeds the first threshold and if a confidence of prediction from the trained machine does not exceed a confidence threshold, retraining the trained machine.” (Backpropagation neural network algorithm, pp. 296; “Starting at the output nodes and working back to the hidden layers, recursively adjust the weights by computing the local gradients ε_j according to the generalized delta rule: [see equation 6] In the generalized delta rule, w_ij(t) is the weight from hidden node j or from an input to node j at time t, y_j is either the output of node j or is an input, η is the learning-rate parameter, α (0 < α < 1) is the momentum constant to smooth out the weight change and to accelerate convergence of the network, and ε_j is an error term for node j given as [See equation on page 296, Col. 2] where k is over all nodes in the layers above node j and y_j is either the output of node j or is an input.” This article discloses a method for forward and backward propagation. In this method the weights are evaluated and changed during the backpropagation process. The weights are changed based on the learning-rate parameter. This parameter can be used as an upper or lower threshold to adjust the weights. This learning-rate parameter can be used as an upper threshold for retraining.) Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Gomes, Li and Onwubolu in view of Weisstein (Weisstein, "Arithmetic Mean", first published 2000, hereinafter "Weisstein"). Regarding claim 3, Gomes discloses, “wherein the at least one labelled image comprises an area A; and the image comprises N samples from an area C;” (Study Area, pp. 
4; “The aerial RGB orthoimages were provided by the city hall of Campo Grande, state of Mato Grosso do Sul, Brazil. The orthoimages have a ground sample distance (GSD) equal to 10 cm. In total, 1057 orthoimages with dimensions 5619 x 5946 pixels were used in the experiments. These images were split in 99,473 patches, where 111,583 utility poles were identified as ground-truth. Details regarding the experimental setup are presented in Section 2.3.” (Emphasis added) The ground truths were identified for the training images. During the training phase the model attempts to predict the location of an object, and, using a specified IoU method, the prediction is compared to the ground truths. Each ground-truth bounding box is interpreted as the area A. The prediction of the model would be interpreted as the N samples in the given image C.) Gomes fails to explicitly disclose the remaining limitations of this claim. However, Weisstein discloses, “wherein k be the number of samples that are common to A and C, and wherein the level of similarity (p) comprises: p=k/N.” (Arithmetic Mean; “The arithmetic mean of a set of values is the quantity commonly called "the" mean or the average. Given a set of samples x_i, the arithmetic mean is: x̄ = (1/N) Σ_{i=1}^{N} x_i” This is the basis for determining an average. In this case, k would be the sum of the terms, which comprises values from A and C. Then the number of terms would be viewed as N, which is the total number of data points in sample C.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Gomes, Li and Weisstein. Gomes teaches a model which is able to evaluate aerial images for objects such as utility poles using conventional machine learning training methods. Li teaches a machine learning model that is able to evaluate images using conventional machine learning training methods and is able to speed up the training process. 
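The claimed similarity level p = k/N, as mapped in the rejection, can be sketched with plain Python sets. This is an illustrative sketch of the claim language only; the function name and sample values are hypothetical and do not come from any cited reference.

```python
# Illustrative sketch of the claimed similarity level p = k/N
# (not code from any cited reference): k is the number of samples
# common to labelled area A and image area C, and N is the number
# of samples drawn from C.

def similarity_level(samples_a, samples_c):
    """Return p = k / N, where k = |A ∩ C| and N = |C|."""
    n = len(samples_c)
    if n == 0:
        raise ValueError("no samples from area C")
    k = len(set(samples_a) & set(samples_c))
    return k / n

# Example: 3 of the 4 samples from C also fall in A, so p = 3/4.
p = similarity_level(samples_a={1, 2, 3, 5, 8}, samples_c={1, 2, 3, 4})
```

Note that p is an overlap fraction rather than an arithmetic mean in the usual sense; the rejection reads the k/N form onto Weisstein's mean formula as the ratio of a sum of terms to a count of terms.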
Weisstein teaches the definition of the arithmetic mean, which is used in many different equations to find the average of a set of numbers. One of ordinary skill would have motivation to combine a system that is able to take in image data, process it, and output a prediction of the objects in the image with the average formula for finding the average of a set of numbers: "The arithmetic mean of a set of values is the quantity commonly called "the" mean or the average." (Weisstein). Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Gomes and Li in view of Moussalli et al. (Moussalli et al., "Fast and Flexible Conversion of Geohash Codes to and from Latitude/Longitude Coordinates", 2015, pp. 179-186). Regarding claim 7, Moussalli discloses, “wherein the first and the second function comprise a hash function based on latitude and longitude information of the one or more satellite images.” (Converting from Lat./Long. to Geohash code, pp. 180; “The longitude space is bounded by the initial interval {-180, 0, +180} being the min, mid, and max. If the longitude of the point of interest is greater than the mid of the interval (i.e. if the point resides in the upper subinterval), then a geohash bit of “1” is produced, and the new interval becomes {mid, (mid+max)/2, max}. On the other hand, if the longitude of the point of interest is smaller or equal to the mid of the interval, then a geohash bit of “0” is produced, and the new interval becomes {min, (min+mid)/2, mid}. This process is repeated up to the desired precision (number of geohash bits), in a bit-serial fashion. The same method is applied to the latitude, where the initial interval is {-90, 0, +90}. Finally, the longitude and latitude bits are interleaved in the geohash code. Figure 1(b) lists the actual binary geohash code of the red dot depicted in Figure 1(a).” 
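The interval-bisection procedure quoted above can be sketched in Python. This is an illustrative reconstruction of the described bit-serial encoding, not code from the Moussalli article; the function names and the per-axis bit count are chosen for this sketch.

```python
# Illustrative sketch of bit-serial geohash encoding as described in the
# quoted passage (not code from the cited article). Longitude bits start
# from the interval (-180, 180), latitude bits from (-90, 90), and the
# two bit streams are interleaved, longitude first.

def coordinate_bits(value, lo, hi, n_bits):
    """Bisect (lo, hi) n_bits times, emitting 1 for the upper half."""
    bits = []
    for _ in range(n_bits):
        mid = (lo + hi) / 2
        if value > mid:
            bits.append(1)
            lo = mid          # keep the upper subinterval
        else:
            bits.append(0)
            hi = mid          # keep the lower subinterval
    return bits

def geohash_bits(lat, lon, n_bits_per_axis=5):
    lon_bits = coordinate_bits(lon, -180.0, 180.0, n_bits_per_axis)
    lat_bits = coordinate_bits(lat, -90.0, 90.0, n_bits_per_axis)
    # Interleave: even positions longitude, odd positions latitude.
    interleaved = []
    for lon_b, lat_b in zip(lon_bits, lat_bits):
        interleaved += [lon_b, lat_b]
    return interleaved
```

Each bit halves the interval on its axis, so nearby coordinates share long bit prefixes, which is what makes the code useful as a spatial hash.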
This article discloses a method which is able to hash lat./long. pairs. The method proposed in this article improves system performance by reducing the computational overhead of handling large amounts of location data.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Gomes, Li and Moussalli. Gomes teaches a model which is able to evaluate aerial images for objects such as utility poles using conventional machine learning training methods. Li teaches a machine learning model that is able to evaluate images using conventional machine learning training methods and is able to speed up the training process. Moussalli teaches a system which is able to hash location data to improve the performance of a system. One of ordinary skill would have motivation to combine a system that is able to take in image data, process it, and output a prediction of the objects in the image with a system that is able to detect light posts in aerial images using machine learning, as well as a system that is able to hash and simplify complex location data using hardware: "In summary, the hardware converter achieves [a speedup over the] best run of the software converter on the CPU socket. As noted in Section IV-C, the speedup shown here can be potentially more than doubled simply by attaching the hardware converter to a higher bandwidth PCIe platform (no modifications to the converter core required)." (Moussalli, Hardware Converter vs. Multi-Threaded Software: a Socket-to-Socket Comparison, pp. 185-186). Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). 
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL MICHAEL GALVIN-SIEBENALER whose telephone number is (571)272-1257. The examiner can normally be reached Monday - Friday 8AM to 5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PAUL M GALVIN-SIEBENALER/Examiner, Art Unit 2147 /VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147

Prosecution Timeline

Nov 11, 2022
Application Filed
Sep 04, 2025
Non-Final Rejection — §101, §102, §103
Dec 11, 2025
Response Filed
Feb 24, 2026
Final Rejection — §101, §102, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
25%
Grant Probability
0%
With Interview (-25.0%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
