Prosecution Insights
Last updated: April 19, 2026
Application No. 18/157,535

DEVICE AND COMPUTER-IMPLEMENTED METHOD FOR OPERATING A MACHINE

Non-Final Office Action — §103, §112

Filed: Jan 20, 2023
Examiner: MOUNDI, ISHAN NMN
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Robert Bosch GmbH
OA Round: 1 (Non-Final)
Grant Probability: 12% (At Risk)
Expected OA Rounds: 1-2
Median Time to Grant: 4y 6m
Grant Probability With Interview: 46%

Examiner Intelligence

This examiner grants only 12% of cases.

Career Allow Rate: 12% (2 granted / 16 resolved; -42.5% vs TC avg)
Interview Lift: +33.3% in resolved cases with an interview (strong)
Avg Prosecution: 4y 6m
Currently Pending: 41
Total Applications: 57 (across all art units)

Statute-Specific Performance

§101: 37.7% (-2.3% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 9.7% (-30.3% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 16 resolved cases

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “An object detector configured to detect…”, “A classifier configured to classify…”, and “An answer set solver configured to determine…” in claim 13. This interpretation applies to all claims depending therefrom.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C.
112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention. The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention. Claim 13 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA ), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The specification (see page 8 lines 10-15) does not disclose sufficient corresponding structure for the claimed functions of an object detector and classifier (see MPEP 2181 (IV)). The specification (see page 22 lines 14-20) does not disclose sufficient corresponding structure for the claimed functions of an answer set solver (see MPEP 2181 (IV)). Thus, a person of ordinary skill in the art cannot determine how to perform the claimed functions, and the specification fails to demonstrate that the inventor was in possession of the claimed invention at the time of filing. 
Claim 14 incorporates by reference all limitations of claim 13 and is rejected under 35 U.S.C. 112(a) for similar reasons. The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claim limitations “An object detector configured to detect…”, “A classifier configured to classify…”, and “An answer set solver configured to determine…” in claim 13 invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. No association between the structure and the functions can be found in the specification (see page 8 lines 10-15 and page 22 lines 14-20). The specification fails to clearly link the claimed functions to disclosed structures, materials, or acts (see MPEP 2181 (III)). Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Claim 14 incorporates by reference all limitations of claim 13 and is rejected under 35 U.S.C. 112(b) for similar reasons. Applicant may: (a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 
132(a)); or (c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Objections

Claim 3 is objected to because of the following informalities: The claim recites “wherein the digital image the digital image includes…”. This appears to be a mistake. Examiner suggests amending the claim to instead recite “wherein the digital image includes…”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-4 and 6-15 are rejected under 35 U.S.C. 103 as being unpatentable over Yu et al (Pub. No.: US 12456055 B2), hereafter Yu, in view of Ekambaram (Pub. No.: US 11487949 B2), hereafter Ekambaram, and Mohanty et al (Pub. No.: US 11562554 B1), hereafter Mohanty. Regarding claim 13, claim limitations “An object detector configured to detect…”, “A classifier configured to classify…”, and “An answer set solver configured to determine…” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. These elements are interpreted under 35 U.S.C. 112(f) as processor(s) with the algorithm described in the specification (the algorithms to detect objects in digital images, classify objects, and determine answers to questions) that causes the processor(s) to perform the claimed function. 
Regarding claims 1, 13, and 15, Yu teaches a computer-implemented method for operating a machine (the present invention may be a method for operating autonomous vehicles, C59:L7-10, C59:34-37), a device for operating a machine (Method for operating autonomous vehicles may include the use of PPU 3200 which is a GPU (graphics processing unit) and display device, C59:L18-23), and a non-transitory computer-readable medium on which is stored a computer program for operating a machine comprising (non-transitory computer-readable storage medium may be used to carry out processes of the current invention, C69:L67, C70:L1-5): providing a digital image (“detect object in images including digital representations of those objects”, abstract); providing a structured representation of a question (users may submit requests using client device 702, C11:L55-58); predicting with an object detector and depending on at least a part of the digital image an area of the digital image, wherein an object is depicted in the digital image (“classification and location information can be used to detect and/or segment objects 182 represented in an image”, C2:L47-49); predicting, with a classifier and depending on at least a part of the digital image within the area, a first score indicating a likelihood that the object is of a first class and a second score indicating a likelihood that the object is of a second class, wherein the first score indicates a higher likelihood than the second score (Different elements of a matrix may have a confidence score associated with a class. 
The confidence score indicates the confidence that an object in an image belongs to a given class, C6:L62-65);… determining if the second score meets the condition or not (Scores are compared to a threshold value, with being above the threshold being interpreted as meeting the condition, C7:L13-19);… adding at least one fact to the answer set programming program depending on the structured representation of the question (“request can include, for example, input data to be processed using a neural network to obtain one or more inferences or other output values, classifications, or predictions…Relevant data, which may include at least some of input or inference data, may also be stored to a local database 720 for processing future requests”, C12:L34-36, C12:L52-55. Relevant data, including inference or input data based on the request, may be stored in a local database to be used by the machine learning model responsible for answering/responding to requests. The relevant data is interpreted as at least one fact, the request is interpreted as the structured representation of the question, and the machine learning model is interpreted as the answer set programming program)… determining, with an answer set solver, an answer to the answer set programming program (Machine learning model may be used to respond to a user request with one or more inferences, classifications, or predictions, C12:L34-46),… operating the machine depending on the answer (Machine learning model may analyze training data, including input, prediction, and answer data, and navigate a vehicle or device on behalf of a user based on the inferences made, C11:L55-58, C14:L3-10). 
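Because claims 1, 13, and 15 recite an answer set programming (ASP) workflow, a rough sketch may help readers follow the mapping above: an object detector proposes an area, a classifier scores candidate classes, and facts are asserted into a program depending on the question and a score condition. Everything below is an illustrative toy, not the applicant's implementation or Yu's system — the function names, class labels, and 0.5 condition are invented for the example, and a real system would hand the facts to an ASP solver such as clingo rather than collect them in a Python list.

```python
# Toy sketch of the claimed flow: detector -> classifier -> ASP-style facts.
# All names and values here are hypothetical illustrations, not from the record.

def detect_area(image):
    # Stand-in for the object detector: returns a bounding box (x, y, w, h).
    return (40, 60, 32, 32)

def classify(image, area):
    # Stand-in for the classifier: per-class likelihood scores.
    return {"stop_sign": 0.81, "speed_limit": 0.12}

def build_program(scores, question_fact, condition):
    # Assert one fact per class whose score meets the condition, mirroring
    # "adding at least one fact ... depending on the structured representation
    # of the question".
    facts = [question_fact]
    for cls, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        if condition(score):
            facts.append(f"detected({cls}).")
    return facts

image = None  # placeholder for a digital image
scores = classify(image, detect_area(image))
program = build_program(scores, "asked(sign_type).", lambda s: s >= 0.5)
print(program)  # only the first-class fact survives the 0.5 condition
```

In the claims, the condition is not a fixed 0.5 but is derived from the score distribution (see the mean/standard-deviation discussion of Mohanty below in the record).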
Yu does not appear to explicitly teach “providing at least one attribute value for the first class; adding to an answer set programming program a first rule including the at least one attribute value of the first class and/or a first constraint including the at least one attribute value of the first class,… based on the second score meeting the condition, providing at least one attribute value for the second class, and adding, to the answer set programming program, a second rule including the at least one attribute value of the second class and/or a second constraint including the at least one attribute value of the second class;…wherein the answer includes at least one attribute value of the first class and/or at least one attribute value of the second class;”. Ekambaram teaches providing at least one attribute value for the first class (Properties are associated with classification labels. Under the broadest reasonable interpretation, the properties are interpreted as attributes belonging to a class based on their classification label, C9:L29-31); adding to an answer set programming program a first rule including the at least one attribute value of the first class and/or a first constraint including the at least one attribute value of the first class (Examiner notes that the term “and/or” is interpreted as “or” in this context and in future limitations. A word web is updated including a set of answers corresponding to a set of properties, the properties being classified. Nodes are added to this word web in accordance to various embedding scores and belief scores. Under the broadest reasonable interpretation, the basis of which nodes are added are interpreted as rules for adding answers to the answer set. 
C9:L31-64),… based on the second score meeting the condition, providing at least one attribute value for the second class, and adding, to the answer set programming program, a second rule including the at least one attribute value of the second class and/or a second constraint including the at least one attribute value of the second class (Context 108 may be updated and image labels corresponding to a specific class or properties may be updated based on the context. If a confidence score is equal to or exceeds a threshold value, an object is determined to belong to a specific class, C5:L22-35); … wherein the answer includes at least one attribute value of the first class and/or at least one attribute value of the second class (If an answer has a belief score above a threshold value, at least one of the first or second classification labels are removed and the answer is verified. C9:L15-26). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Yu and Ekambaram before them, to include Ekambaram’s specific teachings of associating classification labels with properties, updating an answer set based on properties associated with classes, determining an object belongs to a class based on a confidence score exceeding a threshold value, and verifying an answer has appropriate classification labels in Yu’s system of Weakly-supervised Object Detection Using One Or More Neural Networks. One would have been motivated to make such a combination of determining an object belongs to a class based on a confidence score exceeding a threshold value, updating context and image labels according to the classification (see Ekambaram C5:L22-35), and using a softmax function to evaluate whether or not elements of an image belong to a specific class (see Yu C6:L62-65). 
Yu in view of Ekambaram does not appear to explicitly teach “…with a condition determined depending on a mean value and a standard deviation of a distribution of scores that indicate for a plurality of classes their respective likelihood that an object of the respective class is detected by the object detector”. Mohanty teaches …with a condition determined depending on a mean value and a standard deviation of a distribution of scores that indicate for a plurality of classes their respective likelihood that an object of the respective class is detected by the object detector (Mean and standard deviation may be calculated corresponding to candidate predictions, C11:L61-67, C12:L1-12. A classifier then operates on the remaining candidate predictions to determine the likelihood that an object belongs to a specific class, C12:L37-46). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Yu, Ekambaram, and Mohanty before them, to include Mohanty’s specific teachings of mean and standard deviation being calculated corresponding to candidate predictions to determine the likelihood that an object belongs to a specific class in Yu’s system of Weakly-supervised Object Detection Using One Or More Neural Networks. One would have been motivated to make such a combination of mean and standard deviation being calculated corresponding to candidate predictions to determine the likelihood that an object belongs to a specific class (see Mohanty C11:L61-67, C12:L1-12, C12:L37-46), and using a softmax function to evaluate whether or not elements of an image belong to a specific class (see Yu C6:L62-65). Regarding claim 2, Yu in view of Ekambaram and Mohanty teaches the limitations of claim 1 as outlined above. 
Yu further teaches wherein the machine is a robot and/or a vehicle, and wherein the method further comprises detecting the digital image with at least one sensor, the at least one sensor including a camera, or a radar sensor, or a lidar sensor, or a ultrasonic sensor, or an infrared sensor, or a motion sensor (“PPU 3200 is configured to accelerate deep learning systems and applications including following non-limiting examples: autonomous vehicle platforms, … robotics”, C59:L34-37, 41. PPU may include or be coupled to a digital camera, C68:L24-28). Regarding claim 3, Yu in view of Ekambaram and Mohanty teaches the limitations of claim 1 as outlined above. Yu further teaches wherein the digital image the digital image includes at least one object representing a traffic sign, or a traffic surface, or a pedestrian, or a vehicle (Deep neural network analyzes an input image that may include automobiles, pedestrians, road hazards, C10:L61-67, C11:L1-14). Ekambaram further teaches wherein the at least one attribute value of the first class and the at least one attribute value of the second class indicates a type thereof (Objects may be classified according to more broad properties. For example, a tiger may receive the classification labels for both being a mammal and for having four legs. Figure 2, C4:L11-23), and wherein the structured representation of the question includes at least one attribute value of the at least one attribute value of the first class and the at least one attribute value of the second class (Questions are formed based on selected entity nodes which include various properties and classifications. Figure 2, C4:L11-28). Regarding claim 4, Yu in view of Ekambaram and Mohanty teaches the limitations of claim 3 as outlined above. 
Yu further teaches determining an action depending on the at least one attribute value that the answer includes (Trained neural network may provide instructions for navigating a vehicle or device based on inferences or predictions made in response to user input, C11:L55-58). Regarding claim 6, Yu in view of Ekambaram and Mohanty teaches the limitations of claim 1 as outlined above. Mohanty further teaches providing a set of classes including the first class and the second class (“A classifier can then operate on the remaining candidate predictions to assign a class to each of the remaining candidate predictions. For example, in object detection applications, the classifier can determine which class of objects that each of the remaining candidate regions contains”, C12:L40-46); providing a set of digital images (“The data stores can include permanent or transitory data used and/or operated on by the operating system, application programs, or drivers. Examples of such data include… images”, C21:L33-36); determining with the object detector for the digital images in the set of digital images their respective area (Regions of images may be processed to determine the likelihood of a specific object being present, C12:L46-59); determining with the classifier for areas of the digital images in the set of digital images, respective scores for the classes in the set of classes (A classifier operates on the remaining candidate predictions to determine the likelihood that an object in a specific area belongs to a specific class, C12:L37-46), wherein each of the scores indicates a likelihood that an object that is depicted in the respective area is of one of the set of classes (Candidate regions may be associated with regions of images indicating the likelihood of a specific object is present, C12:L46-59); and determining the mean value depending on a sum of one score per area, that are assigned to the classes in the area (The mean of confidence scores corresponding to 
candidate regions of an image is determined, C12:L60-67, C13:L1-8). Regarding claim 7, Yu in view of Ekambaram and Mohanty teaches the limitations of claim 6 as outlined above. Mohanty further teaches determining for the one scores per area their respective difference to the mean, and determining the standard deviation depending on the differences (Standard deviation of scores are calculated based on the difference between scores and the calculated mean, C13:L9-19). Regarding claim 8, Yu in view of Ekambaram and Mohanty teaches the limitations of claim 7 as outlined above. Mohanty further teaches determining a threshold depending on a difference between the mean and the standard deviation weighted with a parameter, and determining that the condition is met, when the second score is equal to or larger than the threshold (A threshold is determined based on the difference between the mean and standard deviation multiplied by a multiplier factor. Under the broadest reasonable interpretation, the standard deviation multiplied by a multiplied factor is interpreted as a weighted standard deviation. Confidence scores that are beneath the calculated threshold are discarded, meaning only scores above the threshold meet the condition. C13:L20-27). Regarding claim 9, Yu in view of Ekambaram and Mohanty teaches the limitations of claim 1 as outlined above. Ekambaram further teaches adding the second rule and/or the second constraint to the answer set programming program based on the second score failing to meet the condition and based on the second score being within a predetermined set of scores (In response to confidence values being below a threshold value, the wordweb adds properties to more accurately determine which class an object belongs to, C9:L3-8. Properties are adjusted in the wordweb based on the confidence score values corresponding to entity nodes, C4:L48-59). 
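The mean/standard-deviation condition mapped to claims 6-8 above reduces to simple arithmetic: take the mean of one score per area, take the standard deviation of the scores' differences from that mean, and compare the second score against a threshold of mean minus a weighted standard deviation. A minimal numeric sketch, where the scores and the weight parameter k are made up for illustration:

```python
import statistics

# Hypothetical per-area scores and weight parameter; not from the record.
scores = [0.81, 0.12, 0.45, 0.30]
k = 0.5

mean = statistics.fmean(scores)            # claim 6: mean over one score per area
std = statistics.pstdev(scores, mu=mean)   # claim 7: deviation of scores from that mean
threshold = mean - k * std                 # claim 8: mean minus weighted standard deviation

second_score = 0.45
condition_met = second_score >= threshold  # claim 8: met at or above the threshold
print(round(threshold, 3), condition_met)  # 0.293 True
```

Mohanty's "multiplier factor" plays the role of k here; in this toy only the 0.12 score falls below the threshold and would be discarded.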
Regarding claim 10, Yu in view of Ekambaram and Mohanty teaches the limitations of claim 9 as outlined above. Mohanty further teaches determining a plurality of scores for the plurality of classes, wherein each score indicates a likelihood that an object that is depicted in the area is of one of the classes (“The convolution operation 412 may include generating candidate predictions (e.g., candidate regions such as boundary boxes in the case of object detection), and assessing for each candidate prediction a confidence score (e.g., representing the likelihood that an object of interest is in a candidate region)”, C9:L56-61); and adding to the set of scores an amount of highest scores from the plurality of scores (Candidate predictions associated with high confidence scores are kept for analysis with remaining candidate predictions associated with lower scores discarded, C4:L54-67, C5:L1-3). Regarding claim 11, Yu in view of Ekambaram and Mohanty teaches the limitations of claim 1 as outlined above. Yu further teaches … and determining the answer depending on the first constraint weighted with the first weight; … and determining the answer depending on the second constraint weighted with the second weight (Scores may be calculated using weighted values or weighted functions corresponding to objects. Objects may be classified as belonging to a relevant class depending on whether or not their corresponding score is above or below a confidence threshold. C8:L56-67) Mohanty further teaches providing the first constraint with a first weight for weighting the first constraint… and/or providing the second constraint with a second weight for weighting the second constraint (K candidate predictions have a corresponding multiplier factor used to weight standard deviations, C3:L11-19). Regarding claim 12, Yu in view of Ekambaram and Mohanty teaches the limitations of claim 11 as outlined above. 
Mohanty further teaches determining the first weight depending on the first score and/or determining the second weight depending on the second score (Multiplier factors may be dependent upon K number candidate predictions that are above a threshold, with the candidate predictions having confidence scores being above the threshold. C3:L11-19). Regarding claim 14, Yu in view of Ekambaram and Mohanty teaches the limitations of claim 13 as outlined above. Yu further teaches at least one sensor configured to capture the digital images and/or at least one actuator configured to operate the machine according to the instructions (PPU may include or be coupled to a digital camera, C68:L24-28. “PPU 3200 is configured to accelerate deep learning systems and applications including following non-limiting examples: autonomous vehicle platforms, … robotics”, C59:L34-37, 41). Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Yu in view of Ekambaram and Mohanty and further in view of Martin et al (Pub. No.: US 11194330 B1), hereafter Martin. Regarding Claim 5, Yu in view of Ekambaram and Mohanty teaches the limitations of claim 4 as outlined above. Yu further teaches wherein the at least one object includes the traffic sign or the pedestrian (Objects in digital images may include pedestrians, C11:L12-13). Yu in view of Ekambaram and Mohanty does not appear to explicitly teach “…and the action includes performing a stop, when the attribute value of the object representing the sign indicates that the sign is a stop sign or the attribute value of the object representing the pedestrian indicates that the pedestrian is a child”. 
Martin teaches …and the action includes performing a stop, when the attribute value of the object representing the sign indicates that the sign is a stop sign or the attribute value of the object representing the pedestrian indicates that the pedestrian is a child (“a drone or other autonomous vehicle may be controlled to move based on the classification. For example, if implemented in an autonomous vehicle, the vehicle may be caused to maneuver based on a particular classification, non-limiting examples of which include driving away from “explosive” sounds, or applying brakes to slow down when a classification such as “children playing” is detected”, C10:L20-27). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Yu, Ekambaram, Mohanty, and Martin before them, to include Martin’s specific teachings of stopping a vehicle in response to detecting children in Yu’s system of Weakly-supervised Object Detection Using One Or More Neural Networks. One would have been motivated to make such a combination of stopping a vehicle in response to detecting children (see Martin C10:L20-27) and using a deep neural network to identify and classify objects including pedestrians (see Yu C11:L5-14).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 10282616 B2 (Gong et al) teaches a method and system for finding targets within visual data. US 20230244981 A1 (Cella et al) teaches a method and system including a classification model designed to answer questions regarding a machine and attributes of the machine.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISHAN MOUNDI whose telephone number is (703)756-1547. The examiner can normally be reached 8:30 A.M. - 5 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Ell can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /I.M./Examiner, Art Unit 2141 /MATTHEW ELL/Supervisory Patent Examiner, Art Unit 2141

Prosecution Timeline

Jan 20, 2023
Application Filed
Jan 15, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561970
METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR IMAGE RECOGNITION
2y 5m to grant — Granted Feb 24, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 12% (46% with interview, +33.3%)
Median Time to Grant: 4y 6m
PTA Risk: Low

Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
