Prosecution Insights
Last updated: April 19, 2026
Application No. 18/624,001

SYSTEMS AND METHODS FOR IMAGE LABELING UTILIZING MULTI-MODEL LARGE LANGUAGE MODELS

Non-Final OA (§101, §103)
Filed
Apr 01, 2024
Examiner
BEZUAYEHU, SOLOMON G
Art Unit
2674
Tech Center
2600 — Communications
Assignee
Plainsight Technologies Inc.
OA Round
1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (464 granted / 618 resolved; +13.1% vs TC avg; above average)
Interview Lift: +30.9% (strong; allow rate with vs. without an interview, across resolved cases with interview)
Typical Timeline: 3y 4m average prosecution; 30 currently pending
Career History: 648 total applications across all art units
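The headline figures above can be reproduced from the raw counts. A minimal sketch in Python, assuming the allow rate is simply granted ÷ resolved and that the "+13.1% vs TC avg" delta is the allow rate minus the Tech Center average (the TC average itself is derived here, not stated in the source):

```python
# Career allow rate from the examiner's resolved-case counts.
granted = 464
resolved = 618
allow_rate = granted / resolved * 100  # percent

# The reported delta implies the Tech Center average baseline.
delta_vs_tc = 13.1
tc_avg = allow_rate - delta_vs_tc

print(f"allow rate: {allow_rate:.1f}%")        # ≈ 75.1%
print(f"implied TC average: {tc_avg:.1f}%")    # ≈ 62.0%
```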

Statute-Specific Performance

§101: 16.0% (-24.0% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 618 resolved cases
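The per-statute deltas are internally consistent: subtracting each reported delta from its rate recovers the same Tech Center baseline for every statute. A quick check in Python (the 40% baseline is derived from the figures above, not stated in the source):

```python
# Per-statute rates and their reported deltas vs. the Tech Center average.
stats = {
    "101": (16.0, -24.0),
    "103": (49.7, +9.7),
    "102": (13.4, -26.6),
    "112": (11.7, -28.3),
}

# delta = rate - tc_avg, so the implied TC average is rate - delta.
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: implied TC avg = {tc_avg:.1f}%")  # 40.0% for each statute
```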

Office Action

DETAILED ACTION

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. When reviewing independent claim 1, and based upon consideration of all of the relevant factors with respect to the claim as a whole, claims 1-20 are held to claim an abstract idea without reciting elements that amount to significantly more than the abstract idea, and are therefore rejected as ineligible subject matter under 35 U.S.C. 101. The Examiner will analyze claim 1; similar rationale applies to independent claims 8 and 15. The rationale for this finding, under MPEP § 2106, is explained below. The claimed invention (1) must be directed to one of the four statutory categories, and (2) must not be wholly directed to subject matter encompassing a judicially recognized exception, as defined below. The following two-step analysis is used to evaluate these criteria.

Step 1: Is the claim directed to one of the four patent-eligible subject matter categories: process, machine, manufacture, or composition of matter? When examining the claim under 35 U.S.C. 101, the Examiner interprets that the claim is related to a machine, since the claim is directed to a non-transitory computer-readable medium.

Step 2A, Prong 1: Does the claim wholly embrace a judicially recognized exception, which includes laws of nature, physical phenomena, and abstract ideas, or is it a particular practical application of a judicial exception? The Examiner interprets that the judicial exception applies, since the claim 1 limitation "determining, based on the multiple responses, a label for the first image" is directed to an abstract idea. The claim is related to a mental process, in that a person could label the images. Training a computer vision model [algorithm] based on the model training data set and applying the computer vision model to the second image are a mathematical concept. If/when the claim recites a judicial exception (i.e., an abstract idea enumerated in MPEP § 2106.04(a), a law of nature, or a natural phenomenon), the claim requires further analysis in Prong Two.

Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? The additional claim limitations "receiving a set of first images; for each first image in a subset of the set of first images: generating multiple inputs to multiple artificial intelligence model systems; providing the multiple inputs and the first image to the multiple artificial intelligence model systems; adding the first image and the label to a model training data set; and receiving a second image" are data gathering, which is insignificant extra-solution activity. The artificial intelligence model systems and computer vision model are used to generally apply the abstract idea without limiting how it functions. "Receiving multiple responses (output) from the multiple artificial intelligence model systems" and "receiving an output from the computer vision model" are mere output recited at a high level of generality.

Step 2B: If integration of the judicial exception into a practical application is not recited in the claim, the Examiner must determine whether the claim recites additional elements that amount to significantly more than the judicial exception. The Examiner interprets that the claims do not amount to significantly more, since the claims state labeling images using multiple artificial intelligence models and training a computer vision model. Furthermore, the generic computer components or machine learning algorithm of the processor/memory are recited as performing generic computer or machine learning functions that are well-understood, routine, and conventional activities, amounting to no more than implementing the abstract idea with a computerized system. The Examiner finds that claims 2-7 do not recite significantly more, since they only recite additional steps for analyzing and labeling images using an artificial intelligence model. Thus, claims 1-20 recite the same abstract idea and are not drawn to eligible subject matter, as they are directed to the abstract idea without significantly more. Therefore, all claims are rejected under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8-12, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over McKay et al. (Pub. No. US 2021/0192394) in view of Shekhar et al. (Pub. No. US 2022/0253630).

Regarding claim 1, McKay teaches a non-transitory computer-readable medium comprising executable instructions, the executable instructions being executable by one or more processors to perform a method [Para. 14 and 233], the method comprising: receiving a set/plurality of first images [Para. 5 “machine learning (ML) labeler that receives a plurality of labeling requests, each of which includes a data item to be labeled.”; Para. 59 “During execution of a graph, the same data item to be labeled (e.g., image, video, word document, or other discrete unit to be labeled) may be sent to one or more ML labeling platforms to be processed by one or more ML models 120 and to one or more human labeler computer systems 140 to be labeled by one or more human users”]; for each first image in a subset of the set of first images: generating multiple inputs (requests) to multiple artificial intelligence model systems [Para. 122 “Here the splitter splits a request to label an image (or image training data) into requests to constituent ML labelers 1002a, 1002b, 1002c, 1002d, where each constituent ML labeler is trained for a particular product category”]; providing the multiple inputs (requests) and the first image to the multiple artificial intelligence model systems (ML labelers) [Para. 122 “here the splitter splits a request to label an image (or image training data) into requests to constituent ML labelers 1002a, 1002b, 1002c, 1002d, where each constituent ML labeler is trained for a particular product category” and “splitter 1004 routes the labeling request to i) labeler 1002a to label the image with any tools that labeler 1002a detects in the image, ii) labeler 1002b to label the image with any vehicles that labeler 1002b detects in the image, iii) labeler 1002c to label the image with any clothing items that labeler 1002c detects in the image, and iv) labeler 1002d to label the image with any food items that labeler 1002d detects in the image”]; receiving multiple responses (labeling results) from the multiple artificial intelligence model systems [Para. 125]; determining, based on the multiple responses (labels), a label (final) for the first image [Para. 59 “Based on the labels output for the data item by one or more labelers 110, the workflow can output a final labeled result”]; training a computer vision model (ML model) based on the model training data set [Para. 5 “the first portion of the augmented results are provided to an experiment coordinator, which iteratively trains the ML model using this portion of the augmented results”].

McKay doesn't explicitly teach the rest of the claim limitations. However, Shekhar teaches adding (including) the first (sample) image and the label (annotations) to a model training data set [Para. “The training set includes the newly annotated or labeled sample images, in which the annotations have been verified or corrected at previous operation 215”; it is clear that the image and tags/annotations/labels are included in the training data]; training a computer vision model (object detection network) based on the model training data set [Para. 52 “The training set is used to retrain the object detection network.”; Para. 2 “In the field of computer vision, object detection refers to the task of identifying objects or object instances from a digital photograph”]; receiving a second image [Para. 30 “the object detection apparatus 110 receives an image including one or more of instances of an object”; Para. 89 “the system receives an image including a set of instances of an object.”]; applying the computer vision model (object detection network) to the second image [Para. 49 and Para. 92 “the system generates annotation data for the image using an object detection network that is trained at least in part together with a policy network that selects predicted output from the object detection network for use in training the object detection network”]; and receiving an output (identify instances) from the computer vision model [Para. 95].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify McKay by incorporating the missing features as mapped above, as taught by Shekhar, because the modification enables the system to implement policy-based active learning, which improves training efficiency by selecting the most informative images for labeling and retraining, thereby reducing human labeling effort while improving model accuracy.

Claim 8 is rejected for the same reasons as claim 1. Furthermore, McKay teaches a method to perform all claim limitations [Abstract and Para. 5]. Claim 15 is rejected for the same reasons as claim 1. Furthermore, McKay teaches a system comprising at least one processor and memory containing executable instructions, the executable instructions being executable by the at least one processor [Abstract and Para. 4].

Regarding claims 2, 9, and 16, McKay in view of Shekhar teaches all claim limitations above. Furthermore, Shekhar teaches wherein the computer vision model includes an image classification model (pre-trained classifier), and the output includes a class (instance) of an object in the second image [Para. 32, and Fig. 6, step 610 and related description].

Regarding claims 3, 10, and 17, McKay teaches wherein the computer vision model includes an object detection model, and the output includes a location (bounding box) of an object in the second image [Para. 2].

Regarding claims 4, 11, and 18, McKay teaches wherein the multiple artificial intelligence model systems include the computer vision model (object detection network) [Para. 5 and 7].

Regarding claims 5, 12, and 19, McKay in view of Shekhar teaches all claim limitations above. Furthermore, Shekhar teaches wherein the set of first images is a first set of first images, and wherein the method further comprises: receiving a second set of first images, the second set of first images being a superset of the first set of first images [Figs. 1, 6, 7 and related description]; and selecting the first set of first images from the second set of first images [Figs. 1, 6, 7 and related description].

Claims 6, 7, 13, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over McKay in view of Shekhar, further in view of Anderson (Pub. No. US 2013/0124474).

Regarding claims 6, 13, and 20, McKay in view of Shekhar doesn't explicitly teach the rest of the claim limitations. However, Anderson teaches wherein for each first image in a subset of the set of first images, determining, based on the multiple responses, the label (match code) for the first image includes performing one or more of a strict comparison, a fuzzy comparison (fuzzy match), and a semantic comparison (semantic for quality) of the multiple responses, and determining, based on the performance (similar score) of one or more of the strict comparison, the fuzzy comparison, and the semantic comparison of the multiple responses, the label for the first image [Para. 226 “Short field-values consisting of one or two characters may often only be compared for equality as there may be no basis for distinguishing error from intent.”; Para. 96 “Identifying such false positives is useful as they represent tokens paired on the basis of similarity that should not be paired on the basis of semantic meaning”; Para. 106 “match quality states within a match code for a pair of compared field values might include "exact match" if the values were identical or "fuzzy match" if the similarity score were greater than a fuzzy match threshold”].
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify McKay in view of Shekhar by incorporating the missing features as mapped above, as taught by Anderson, because the modification enables the system to improve the scalability and speed of large-scale record clustering by reducing expensive comparisons using search-based candidate selection and enabling efficient parallel processing.

Regarding claims 7 and 14, McKay in view of Shekhar doesn't explicitly teach the claim limitations. However, Anderson teaches determining that the performance (best score) of one or more of the strict comparison, the fuzzy comparison, and the semantic comparison of the multiple responses exceeds a threshold (match threshold) [Para. 236 “If the best score is above a match threshold, the query record is added to the corresponding cluster”]. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify McKay in view of Shekhar by incorporating the missing features as mapped above, as taught by Anderson, for the same reasons given for claims 6, 13, and 20.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOLOMON G BEZUAYEHU, whose telephone number is (571) 270-7452. The examiner can normally be reached Monday-Friday, 10 AM-7 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, O'Neal Mistry, can be reached at 313-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.
Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-0101 (in USA or Canada) or 571-272-1000.

/SOLOMON G BEZUAYEHU/
Primary Examiner, Art Unit 2666

Prosecution Timeline

Apr 01, 2024
Application Filed
Feb 21, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602717
APPARATUS, METHOD, AND COMPUTER-READABLE STORAGE MEDIUM FOR CONTEXTUALIZED EQUIPMENT RECOMMENDATION
2y 5m to grant • Granted Apr 14, 2026
Patent 12602946
DOCUMENT CLASSIFICATION USING UNSUPERVISED TEXT ANALYSIS WITH CONCEPT EXTRACTION
2y 5m to grant • Granted Apr 14, 2026
Patent 12591350
TECHNIQUES FOR POSITIONING SPEAKERS WITHIN A VENUE
2y 5m to grant • Granted Mar 31, 2026
Patent 12586355
ROAD AND INFRASTRUCTURE ANALYSIS TOOL
2y 5m to grant • Granted Mar 24, 2026
Patent 12561852
Cross-Modal Contrastive Learning for Text-to-Image Generation based on Machine Learning Models
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 99% (+30.9%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 618 resolved cases by this examiner. Grant probability derived from career allow rate.
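The source doesn't define how the 99% with-interview figure, the 75% base rate, and the +30.9% lift relate arithmetically. One plausible reading (an assumption, not stated above) is that the lift is the percentage-point gap between interviewed and non-interviewed outcomes, which implies a no-interview rate of about 68%:

```python
# Reported figures from the projection panel.
with_interview = 99.0   # projected grant probability after an interview, %
interview_lift = 30.9   # reported lift, read here as percentage points (assumption)

# Under that reading, the implied rate without an interview:
without_interview = with_interview - interview_lift
print(f"implied without-interview rate: {without_interview:.1f}%")  # ≈ 68.1%
```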
