Prosecution Insights
Last updated: April 19, 2026
Application No. 18/699,701

Training Models for Object Detection

Non-Final OA: §101, §102, §112
Filed: Apr 09, 2024
Examiner: ORANGE, DAVID BENJAMIN
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: Hewlett-Packard Development Company, L.P.
OA Round: 1 (Non-Final)
Grant Probability: 34% (At Risk)
OA Rounds: 1-2
To Grant: 3y 7m
With Interview: 63%

Examiner Intelligence

Career Allow Rate: 34% (51 granted / 151 resolved; -28.2% vs TC avg). Grants only 34% of cases.
Interview Lift: +29.4% among resolved cases with interview (strong lift)
Avg Prosecution: 3y 7m (typical timeline)
Currently Pending: 51
Total Applications: 202 (career history, across all art units)
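The headline numbers above are simple arithmetic on the career counts. A quick check, under the assumption (not stated by the dashboard) that the interview lift is a percentage-point difference added on top of the career allow rate:

```python
granted, resolved = 51, 151

# Career allow rate: granted / resolved, shown rounded to a whole percent.
allow_rate = 100 * granted / resolved
print(round(allow_rate))        # 34

# Reading "+29.4% interview lift" as percentage points on top of the
# career rate reproduces the "with interview" figure shown.
with_interview = allow_rate + 29.4
print(round(with_interview))    # 63
```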

Statute-Specific Performance

§101: 13.1% (-26.9% vs TC avg)
§103: 29.0% (-11.0% vs TC avg)
§102: 20.2% (-19.8% vs TC avg)
§112: 32.0% (-8.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 151 resolved cases
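The deltas shown are percentage-point differences, so subtracting each delta from the examiner's rate recovers the implied Tech Center baseline. This is a consistency check on the displayed figures, not data from the source:

```python
# Examiner's per-statute rates and displayed deltas vs the Tech Center
# average, both in percentage points (copied from the chart above).
rate = {"101": 13.1, "103": 29.0, "102": 20.2, "112": 32.0}
delta = {"101": -26.9, "103": -11.0, "102": -19.8, "112": -8.0}

# Implied TC baseline per statute: rate minus delta.
baseline = {s: round(rate[s] - delta[s], 1) for s in rate}
print(baseline)  # every statute implies the same 40.0% baseline
```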

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-15 (all claims) are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites “a processor resource,” but this is new terminology. MPEP 2173.05(a). Reciting “a processor” is expected to overcome this rejection. If Applicant wishes to identify an electronic circuit (as per specification [0039]) that is a processor resource but not a processor, the examiner will consider it.

Claim 1 recites “a non-transitory memory resource,” which, as above, is new terminology. MPEP 2173.05(a).

Claim 1 recites instructions that cause a processor resource to “cause” various actions. The broadest reasonable interpretation of this second “cause” includes putting a message on a screen asking a user to do the following. Additionally, in light of the recent In re Blue Buffalo (Fed. Cir. January 14, 2026, non-precedential, slip opinion retrieved from https://www.cafc.uscourts.gov/opinions-orders/24-1611.OPINION.1-14-2026_2632686.pdf), it is unclear if this language is interpreted as limiting the structure, or if it instead means “capable of.” Claims 8 and 12 recite corresponding language that raises the same issue.

Claims 1, 8, and 12 recite “object,” but it is unclear if this is intended to refer to one instance of an object (the plain meaning) or a type of object (as used in, for example, claims 3 and 5).

Claims 4, 8, and 12 recite a “threshold amount,” but this is subjective because different people can have different opinions as to the threshold. MPEP 2173.05(b)(IV). Providing an objective basis, such as storing the threshold in memory, is expected to overcome this rejection.

Claim 7 recites “face of a subject,” but this is subjective because there is not an objective standard for determining whether someone is a subject. MPEP 2173.05(b)(IV). Removing the phrase “of a subject” is expected to overcome this rejection.

Claims 9 and 11 recite “objects intended for detection,” but this is subjective because there is not an objective standard for determining the intent. MPEP 2173.05(b)(IV).

Claims 9 and 11 recite “a category of objects,” but this is subjective because different people can have different ideas as to what the category is and what constitutes a category. MPEP 2173.05(b)(IV).

Dependent claims are likewise rejected.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 (all claims) are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.

Step 1: Claim 1 (and its dependents) recites a device, and machines satisfy Step 1 of the eligibility test. Claim 8 (and its dependents) recites a non-transitory computer readable storage medium, and manufactures satisfy Step 1 of the eligibility test. Claim 12 (and its dependents) recites a method, and processes satisfy Step 1 of the eligibility test.

Step 2A, prong one: All of the elements of claims 1-15 are a mental process because a person can check their work and study harder. Further, the various models are also mental processes; see example 47, claim 2, element (d) (from the July 2024 AI subject matter eligibility examples). MPEP 2106.04(a)(2)(III)(C) explains that use of a generic computer or in a computer environment is still a mental process. In particular, this section begins by citing Gottschalk v. Benson, 409 U.S. 63 (1972): “The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer’s shift register was an abstract idea.” In Benson the Supreme Court did not separately analyze the computer hardware at issue; the specifics of what hardware was claimed are included only in an appendix to the decision.

Because there are no additional elements, no further analysis is required for Step 2A, prong two, or Step 2B. Amending the claims to be more technical, such that the analogy to what a human would do is less apparent, is expected to assist in overcoming this rejection.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-15 (all claims) are rejected under 35 U.S.C. 102(a)(1) and/or (a)(2) as being anticipated by US20200134375A1 (“Zhan”).

1. A computing device, comprising: (Zhan, claim 12, “A semantic segmentation model training apparatus”)

a processor resource; and (Zhan, claim 12, “a processor”)

a non-transitory memory resource storing machine-readable instructions stored thereon that, when executed, cause the processor resource to: (Zhan, claim 12, “a memory storing processor-executable instructions”)

cause a convolutional neural network (CNN) model to be trained with an initial training data set to detect an object included in annotated images included in the initial training data set; (Zhan, claim 12, “obtaining, by a convolutional neural network based on the category of the at least one unlabeled image and a category of at least one labeled image, sub-images respectively corresponding to at least two images and features corresponding to the sub-images.” See also [0047], “training the semantic segmentation model by a gradient back propagation algorithm, so as to minimize an error of the convolutional neural network.” Fig. 3 shows that Zhan’s CNN is part of Zhan’s semantic segmentation model.)

cause the trained CNN model to perform inferencing on unannotated images included in an inference data set to detect the object in the unannotated images; (Zhan, claim 12, “obtaining, by a convolutional neural network based on the category of the at least one unlabeled image and a category of at least one labeled image, sub-images respectively corresponding to at least two images and features corresponding to the sub-images”)

determine an error rate of the trained CNN model, wherein the error rate is a rate of misdetection of the object in the unannotated images; and (Zhan, [0047], “training the semantic segmentation model by a gradient back propagation algorithm, so as to minimize an error of the convolutional neural network”)

cause the trained CNN model to be further trained based on the error rate. (Zhan, claim 16, “iteratively implementing following operations until the maximum error is lower than or equal to a preset value: … .”)

2. The computing device of claim 1, wherein the processor resource is to cause the trained CNN model to be further trained with a revised training data set to revise the CNN model. (Zhan, claim 12, “training the semantic segmentation model on the basis of the categories of the at least two sub-images and feature distances between the at least two sub-images.”)

3. The computing device of claim 2, wherein the revised training data set includes annotated images having objects that were mis-detected during the inferencing on the set of unannotated images. (Zhan, claim 15, “training the semantic segmentation model by a gradient back propagation algorithm, so as to minimize an error of the convolutional neural network, wherein the error is a triplet loss of the features of the corresponding sub-images obtained based on the convolutional neural network.” Zhan’s triplet loss teaches the claimed mis-detection.)

4. The computing device of claim 1, wherein the processor resource is to cause the trained CNN model to be further trained in response to the error rate being greater than a threshold amount. (Zhan, claim 16, “iteratively implementing following operations until the maximum error is lower than or equal to a preset value: … .”)

5. The computing device of claim 1, wherein the annotated images in the initial training data set are annotated with bounding boxes around the object. (Zhan, [0062], “outputting the image in the select box as a sub-image, and labeling the sub-image as said category.” Zhan’s output images are considered part of the initial training data because Zhan describes them as part of the initial learning. See, e.g., Zhan, [0075]. Zhan Fig. 2, box 211, shows that the select box teaches the claimed bounding box around the object.)

6. The computing device of claim 1, wherein the unannotated images in the inference data set include the object without bounding boxes around the object. (Zhan, Fig. 3.)

7. The computing device of claim 1, wherein the object is a face of a subject. (Zhan, Fig. 2, box 211.)

Claim 8 is rejected as per claim 3.

9. The non-transitory memory resource of claim 8, wherein the object is included in a category of objects intended for detection. (Zhan, Fig. 2, box 211.)

10. The non-transitory memory resource of claim 8, wherein misdetection of the object includes an image included in the inference data set having an object to be detected that was not detected. (Zhan, [0034], “both the labeled image and the unlabeled image are applied to training, thereby achieving self-supervised training.” Zhan’s self-supervised learning teaches the claimed misdetection.)

11. The non-transitory memory resource of claim 8, wherein misdetection of the object includes an image included in the inference data set having an object that was detected, but not being of a category of objects intended for detection. (Zhan, [0034], “both the labeled image and the unlabeled image are applied to training, thereby achieving self-supervised training.” Zhan’s self-supervised learning teaches the claimed misdetection.)

Claims 12-14 are rejected as per claim 3. Claim 15 is rejected as per claim 4.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID ORANGE, whose telephone number is (571) 270-1799. The examiner can normally be reached Mon-Fri, 9-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID ORANGE/
Primary Examiner, Art Unit 2663
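The claim 1 limitations mapped in the §102 chart (train a CNN, run inference on unannotated images, compute a misdetection rate, and retrain while that rate exceeds a threshold) describe a simple control loop. The sketch below is purely illustrative and is not code from either application: the "model" is a single confidence cutoff standing in for a CNN, and every name and number is an assumption.

```python
def detect(model, image):
    # Stand-in for CNN inference: "detect the object" when the image's
    # confidence score clears the model's cutoff.
    return image >= model["cutoff"]

def error_rate(model, inference_set):
    # Claimed "error rate": rate of misdetection on unannotated images.
    misses = sum(1 for img in inference_set if not detect(model, img))
    return misses / len(inference_set)

def train(model):
    # Stand-in for a round of CNN training: pretend each round halves
    # the cutoff, so the detector misses fewer objects.
    model["cutoff"] *= 0.5
    return model

# Unannotated "images" reduced to confidence scores for illustration.
inference_set = [0.1, 0.4, 0.6, 0.9]
threshold = 0.25                      # claimed "threshold amount"

model = train({"cutoff": 1.6})        # initial training
while error_rate(model, inference_set) > threshold:
    model = train(model)              # further training based on error rate

print(error_rate(model, inference_set))   # 0.25
```

Note the stopping condition: like the iteration the examiner maps to Zhan's claim 16, training repeats only until the error is at or below the preset value.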

Prosecution Timeline

Apr 09, 2024
Application Filed
Feb 04, 2026
Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12567126
INFRASTRUCTURE-SUPPORTED PERCEPTION SYSTEM FOR CONNECTED VEHICLE APPLICATIONS
2y 5m to grant; granted Mar 03, 2026

Patent 11300964
METHOD AND SYSTEM FOR UPDATING OCCUPANCY MAP FOR A ROBOTIC SYSTEM
2y 5m to grant; granted Apr 12, 2022

Patent 10816794
METHOD FOR DESIGNING ILLUMINATION SYSTEM WITH FREEFORM SURFACE
2y 5m to grant; granted Oct 27, 2020

Patent 10433126
METHOD AND APPARATUS FOR SUPPORTING PUBLIC TRANSPORTATION BY USING V2X SERVICES IN A WIRELESS ACCESS SYSTEM
2y 5m to grant; granted Oct 01, 2019

Patent 10285010
ADAPTIVE TRIGGERING OF RTT RANGING FOR ENHANCED POSITION ACCURACY
2y 5m to grant; granted May 07, 2019
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 34% (63% with interview, +29.4%)
Median Time to Grant: 3y 7m
PTA Risk: Low
Based on 151 resolved cases by this examiner. Grant probability derived from career allow rate.
