Prosecution Insights
Last updated: April 19, 2026
Application No. 18/430,997

METHOD AND SYSTEM FOR GENERATING A DETECTOR FOR PROCESS MONITORING

Status: Non-Final OA §101
Filed: Feb 02, 2024
Examiner: YENTRAPATI, AVINASH
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: The Boeing Company
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 69%

Examiner Intelligence

Career Allow Rate: 74% (499 granted / 671 resolved) • +12.4% vs TC avg • above average
Interview Lift: -5.0% (resolved cases with interview) • minimal
Avg Prosecution: 2y 11m • 27 currently pending
Total Applications: 698 across all art units

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§102: 23.9% (-16.1% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)
Tech Center average is an estimate • Based on career data from 671 resolved cases
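The "vs TC avg" figures above are simple percentage-point differences between the examiner's per-statute rate and the Tech Center average. A minimal sketch of that arithmetic; the `tc_avg` values here are back-derived from the displayed deltas for illustration (each delta is consistent with a 40.0% average), not published USPTO figures:

```python
# Per-statute rates for this examiner, taken from the figures above
examiner = {"101": 11.1, "103": 52.0, "102": 23.9, "112": 11.2}

# Tech Center averages back-derived from the displayed deltas; these are
# assumptions for illustration, not published figures.
tc_avg = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

# "vs TC avg" delta in percentage points, rounded to one decimal place
deltas = {s: round(examiner[s] - tc_avg[s], 1) for s in examiner}
```

Under those assumed averages, the computed deltas match each displayed card (e.g. §101: 11.1 - 40.0 = -28.9).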

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without reciting significantly more.

Independent claims 1, 12 and 17 recite the limitations “obtaining a first training dataset including a first image sequence having a first set of object tags identifying at least one first object class in a corresponding image of the first image sequence; obtaining a first set of ground truth tags based on a ground truth timeline identifying when the at least one first object class appeared in the first image sequence”, which are merely data gathering steps and insignificant extra-solution activities. The limitation “discarding images from the first training dataset by either identifying object tags by class from the first set of object tags without a corresponding ground truth tag from the first set of ground truth tags or identifying object tags by class from the first set of ground truth tags without a corresponding object tag from the first set of object tags to generate a first verified training dataset” falls under the grouping of Mental Processes because a person can mentally determine images to be discarded by visual inspection. The claims further recite the limitation “training a first parts-level detector based on the first verified training dataset”. The detection function can be performed mentally by visual inspection of images. The stipulation of training a machine learning algorithm merely indicates a field of use or technological environment in which the judicial exception is performed; this type of limitation merely confines the use of the abstract idea to a particular technological environment (machine learning) described at a high level of generality and thus fails to add an inventive concept to the claims.

Dependent claim 2 recites “wherein the at least one first object class includes a first object in a plurality of different configurations”, which merely describes the first object in the images that are gathered in the data gathering step.

Dependent claim 3 recites “wherein the first image sequence includes a second set of object tags identifying at least one second object class in a corresponding image of the first image sequence”, which merely describes the images that are gathered in the data gathering step.

Dependent claim 4 recites “receiving the first image sequence with the at least one first object class identified in at least one image of the first image sequence”, which is merely a data gathering step and thus insignificant extra-solution activity. The claim further recites “tracking the at least one first object class identified in the at least one image through the first image sequence; tagging a region of interest in each image of the first image sequence where the at least one first object class was tracked”, which fall under the grouping of Mental Processes because a person can mentally track the object in the image sequence and tag or identify a region of interest by visually inspecting the images. Finally, the limitation “creating the first training dataset by collecting the region of interest from each image in the first image sequence where the at least one first object class was tracked” is merely a data gathering or output step, i.e., generating or outputting a training dataset from the images that were gathered, and is thus insignificant extra-solution activity.

Dependent claim 5 recites “obtaining an additional training dataset including an additional image sequence having an additional set of object tags identifying at least one additional object class in a corresponding image of the additional image sequence; obtaining an additional set of ground truth tags based on a ground truth timeline identifying when the at least one additional object class appeared in the additional image sequence”, which are merely data gathering steps and therefore insignificant extra-solution activities. The claim further recites the limitation “discarding images from the additional training dataset by identifying object tags by class from the additional set of object tags without a corresponding ground truth tag from the additional set of ground truth tags to generate an additional verified training dataset”, which falls under the grouping of Mental Processes because a person can mentally determine images to be discarded by visual inspection. The claim further recites the limitation “training an additional parts-level detector based on the additional verified training dataset”. The detection function can be performed mentally by visual inspection of images, and the stipulation of training a machine learning algorithm merely confines the use of the abstract idea to a particular technological environment (machine learning) described at a high level of generality, failing to add an inventive concept to the claims.

Dependent claim 6 recites “tracking the at least one additional object class identified in the at least one image through the additional image sequence; tagging a region of interest in each image of the additional image sequence where the at least one additional object class was tracked”, which fall under the grouping of Mental Processes because a person can mentally track the object in the image sequence and tag or identify a region of interest by visually inspecting the images. The claim further recites “receiving the additional image sequence with the at least one additional object class identified in at least one image of the additional image sequence”, which is merely a data gathering step and thus insignificant extra-solution activity. Finally, the claim recites “creating the additional training dataset by collecting the region of interest from each image in the additional image sequence where the at least one additional object class was tracked”, which is merely a data gathering or output step, i.e., generating or outputting a training dataset from the images that were gathered, and is thus insignificant extra-solution activity.

Dependent claim 7 recites “including training a unified detector utilizing the first parts-level detector on the additional verified training dataset and the additional parts-level detector on the first verified training dataset”. The detection function can be performed mentally by visual inspection of images, and the stipulation of training a machine learning algorithm merely confines the use of the abstract idea to a particular technological environment (machine learning) described at a high level of generality, failing to add an inventive concept to the claims.

Dependent claim 8 recites “wherein training the unified detector by utilizing the additional parts-level detector on the first verified training dataset includes tagging a region of interest corresponding to where the at least one additional object class appeared in each image of the first verified training dataset to create an updated additional training dataset”. The detection function can be performed mentally by visual inspection of images, and the stipulation of training a machine learning algorithm merely confines the use of the abstract idea to a particular technological environment (machine learning) described at a high level of generality, failing to add an inventive concept to the claims.

Dependent claim 9 recites the limitation “including discarding images from the updated additional training dataset by identifying object tags by class from the updated additional training dataset without a corresponding ground-truth tag from an updated set of ground truth tags”, which falls under the grouping of Mental Processes because a person can mentally determine images to be discarded by visual inspection.

Dependent claim 10 recites the limitation “wherein training the unified detector by utilizing the first parts-level detector on the additional verified training dataset includes tagging a region of interest corresponding to where the at least one first object class appeared in each image of the additional verified training dataset to create an updated first training dataset”. The detection function can be performed mentally by visual inspection of images, and the stipulation of training a machine learning algorithm merely confines the use of the abstract idea to a particular technological environment (machine learning) described at a high level of generality, failing to add an inventive concept to the claims.

Dependent claim 11 recites the limitation “including discarding images from the updated first training dataset by identifying object tags by class from the updated first training dataset without a corresponding ground-truth tag from an updated set of ground truth tags”, which falls under the grouping of Mental Processes because a person can mentally determine images to be discarded by visual inspection.

With regard to claims 13-16 and 18-20, see the discussion of the corresponding claims above. The claims do not recite additional limitations that would integrate the abstract idea into a practical application, nor do they provide an inventive concept.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AVINASH YENTRAPATI, whose telephone number is (571) 270-7982. The examiner can normally be reached 8AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AVINASH YENTRAPATI/
Primary Examiner, Art Unit 2672

Prosecution Timeline

Feb 02, 2024
Application Filed
Dec 27, 2025
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602803
HEAD-MOUNTED DISPLAY AND METHOD FOR DEPTH PREDICTION
2y 5m to grant • Granted Apr 14, 2026
Patent 12579791
AUTOMATED METHODS FOR GENERATING LABELED BENCHMARK DATA SET OF GEOLOGICAL THIN-SECTION IMAGES FOR MACHINE LEARNING AND GEOSPATIAL ANALYSIS
2y 5m to grant • Granted Mar 17, 2026
Patent 12562264
METHOD FOR THE RECOMPOSITION OF A KIT OF SURGICAL INSTRUMENTS AND CORRESPONDING APPARATUS
2y 5m to grant • Granted Feb 24, 2026
Patent 12536646
STRUCTURE DAMAGE CAUSE ESTIMATION SYSTEM, STRUCTURE DAMAGE CAUSE ESTIMATION METHOD, AND STRUCTURE DAMAGE CAUSE ESTIMATION SERVER
2y 5m to grant • Granted Jan 27, 2026
Patent 12536654
THE SYSTEM AND METHOD FOR STOOL IMAGE ANALYSIS
2y 5m to grant • Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 69% (-5.0%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 671 resolved cases by this examiner. Grant probability derived from career allow rate.
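The projection figures appear to follow directly from the examiner's career numbers: 499 grants out of 671 resolved cases rounds to 74%, and the -5.0 point interview lift yields the 69% figure. A minimal sketch of that arithmetic (variable names are illustrative, not from any published API):

```python
# Career totals shown above for this examiner
granted, resolved = 499, 671

# Career allow rate, rounded to the nearest whole percent
grant_prob = round(100 * granted / resolved)

# Interview-adjusted probability: career rate plus the -5.0 point lift
interview_lift = -5.0
with_interview = grant_prob + interview_lift
```

With these inputs, `grant_prob` comes out to 74 and `with_interview` to 69, matching the cards above.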
