Prosecution Insights
Last updated: April 19, 2026
Application No. 18/333,198

APPARATUS, METHOD, AND COMPUTER PROGRAM PRODUCT FOR MULTI-LABEL CLASSIFICATION USING ADAPTED MULTI-CLASS CLASSIFICATION MODEL

Non-Final OA (§103)
Filed: Jun 12, 2023
Examiner: VAUGHAN, MICHAEL R
Art Unit: 2431
Tech Center: 2400 — Computer Networks
Assignee: Honeywell International Inc.
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable; 99% with interview)
OA Rounds: 1-2
To Grant: 3y 0m

Examiner Intelligence

Career Allow Rate: 78% (above average; 626 granted / 799 resolved; +20.3% vs TC avg)
Interview Lift: +31.1% on resolved cases with interview (strong)
Avg Prosecution: 3y 0m
Currently Pending: 23
Total Applications: 822 (across all art units)

Statute-Specific Performance

§101: 16.3% (-23.7% vs TC avg)
§103: 35.5% (-4.5% vs TC avg)
§102: 23.2% (-16.8% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)
Baseline: Tech Center average estimate • Based on career data from 799 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The instant application, having Application No. 18/333,198, is presented for examination by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over the NPL entitled "Multi-label Classification with Meta-labels" by Read et al. (hereinafter Read), listed on Applicant's IDS filed 11/25/24, in view of U.S. Patent Application Publication 2021/0035556 to Shen et al. (hereinafter Shen).
As per claims 1, 11, and 20, Read teaches a computer-implemented method comprising: transforming an original training data set [examples {(x_i, y_i)}, i = 1..N] corresponding to a multi-label classification task [label powerset (LP)] into an adapted training data set corresponding to a multi-class classification task [treat label combinations as multi-class values; pg. 1] based at least in part on a predefined transformation protocol [partitioning], wherein a multi-class classification model [the multi-class classifier produced by LP] configured to perform the multi-class classification task is trained based at least in part on the adapted training data set [LP (a multi-class classifier) can now be applied to learn each meta-label; see Relabeling section, pgs. 2-3]; transforming multi-class output data generated with respect to an input data set using the trained multi-class classification model into multi-label output data corresponding to the multi-label classification task based at least in part on the predefined transformation protocol [y = [yAB, yC] = [01, 1] recombines to y = [yA, yB, yC] = [0, 1, 1]; see Recombination section, pg. 3].

Read is silent in explicitly teaching causing at least one enterprise management operation based at least in part on the multi-label output data. Shen, on the other hand, teaches at least one enterprise management operation based at least in part on the multi-label output data (0076). While Read is more directed to the algorithm of using multi-class labels for a multi-label data set, he does not place the protocol within an enterprise example. However, Shen, which also produces a model, explicitly teaches that the model can then be stored and used to perform a trained task (0076). Shen also embodies the machine learning process within a hardware computing entity (Fig. 4). Taking a model and using it within an enterprise is one obvious result of developing the model in the first place.
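The label powerset (LP) transformation the rejection attributes to Read can be sketched in a few lines. This is an illustrative reconstruction, not code from Read or the application: each distinct combination of binary labels is treated as a single class for an ordinary multi-class classifier, and predictions are recombined back into multi-label vectors. The data and function names are hypothetical.

```python
# Sketch of the label-powerset (LP) transformation: a multi-label problem
# becomes multi-class by treating each label combination as one class.

def to_label_powerset(y_multilabel):
    """Map each multi-label vector (e.g. [0, 1, 1]) to a single class string."""
    return ["".join(str(b) for b in row) for row in y_multilabel]

def from_label_powerset(y_classes):
    """Recombine predicted class strings back into multi-label vectors."""
    return [[int(c) for c in s] for s in y_classes]

# Label vectors over three labels (A, B, C), mirroring the {010, 101, 011}
# example cited from Read pg. 1.
y = [[0, 1, 0], [1, 0, 1], [0, 1, 1]]
classes = to_label_powerset(y)            # ['010', '101', '011']
assert from_label_powerset(classes) == y  # round-trip recovers the labels
```

Any off-the-shelf multi-class learner can then be trained on `classes`, which is the adaptation step the claims describe.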
The hope is that the invested resources will pay dividends by performing an actual, tangible action. The claim is obvious because one of ordinary skill in the art could combine methods known before the effective filing date which produce predictable results. Using the multi-label output data to perform at least one enterprise management operation produces a predictable result.

As per claims 2 and 12, Read teaches that the original training data set comprises a multi-label classification of training objects each into at least one original classification set of a plurality of original classification sets, in which at least one training object of the training objects is classified into a plurality of different original classification sets [Table 1].

As per claims 3 and 13, Read teaches that the adapted training data set comprises a multi-class classification of the training objects each into a corresponding combined classification set of at least one combined classification set, in which the combined classification set corresponding to each particular training object of the training objects represents a combination of each original classification set into which the particular training object is classified in the multi-label classification of the original training data set [LP creates one multi-class classifier, which assigns values ∈ {010, 101, 011}; pg. 1].

As per claims 4 and 14, Read teaches that the multi-class output data comprises a classification of production objects from the input data set each into a corresponding combined classification set of the at least one combined classification set from the adapted training data set [LP creates one multi-class classifier, which assigns values ∈ {010, 101, 011}; pg. 1].
As per claims 5 and 15, Read teaches that the multi-label output data comprises a classification of the production objects from the input data set each into at least one original classification set of the plurality of original classification sets from the original training data set [y = [yAB, yC] = [01, 1] recombines to y = [yA, yB, yC] = [0, 1, 1]; pg. 3].

As per claims 6 and 16, Read teaches training the multi-class classification model based at least in part on the adapted training data set [the general multi-label task is to learn from examples, and LP creates one multi-class classifier, which assigns values ∈ {010, 101, 011}; pg. 1].

As per claims 7 and 17, Read teaches generating the multi-class output data with respect to the input data set using the trained multi-class classification model [LP creates one multi-class classifier, which assigns values ∈ {010, 101, 011}; pg. 1].

As per claims 8 and 18, Read is silent in explicitly teaching, for each particular training object of training objects classified in the original training data set, combining all original classification labels corresponding to original classification sets into which the particular training object is classified in the original training data set into a combined classification label comprising all of the original classification labels for the particular training object delimited via a label combiner operator defined according to the predefined transformation protocol. Read combines the labels but does not explicitly teach using a delimiter via a label combiner operator.
However, Shen teaches, for each particular training object of training objects classified in the original training data set, combining all original classification labels corresponding to original classification sets into which the particular training object is classified in the original training data set into a combined classification label comprising all of the original classification labels for the particular training object delimited via a label combiner operator defined according to the predefined transformation protocol (0017 and 0066). The delimiter is used so that multi-class labels can later be identified separately when recombining back into multi-label sets. Read needs a way to perform this function, as he combines labels and then separates them back into their constituent parts (section C and Table IV). Read could have used a delimiter with predictable results, which would allow the labels to be parsed out as Shen teaches. Using the delimiters would have worked the same way in Read as in Shen. The claim is obvious because one of ordinary skill in the art could combine methods known before the effective filing date which produce predictable results.
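The delimiter-based "label combiner operator" discussed above can be sketched as follows. This is a hypothetical illustration of the general technique, not an implementation from Shen: original labels are joined with a delimiter so the combined multi-class label can later be parsed back into its constituent labels. The delimiter character and the example labels are assumptions.

```python
# Sketch of a delimiter-based label combiner operator: combine original
# labels into one class label, then parse the class label back apart.

DELIM = "|"  # assumed delimiter; any character absent from the labels works

def combine_labels(labels):
    """Join a set of original labels into one combined class label."""
    return DELIM.join(sorted(labels))

def parse_labels(combined):
    """Split a combined class label back into its original labels."""
    return combined.split(DELIM)

combined = combine_labels(["defect", "packaging"])   # 'defect|packaging'
assert parse_labels(combined) == ["defect", "packaging"]
```

Sorting before joining makes the combined label canonical, so the same label set always maps to the same class regardless of input order.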
As per claims 9 and 19, the combined system of Read and Shen teaches that transforming the multi-class output data into the multi-label output data comprises, for each particular production object of production objects classified in the multi-class output data, parsing, based at least in part on a label combiner operator defined according to the predefined transformation protocol, a combined classification label corresponding to a combined classification set into which the particular production object is classified in the multi-class output data into at least one original classification label corresponding to an original classification set from the original training data set into which the particular production object is to be classified in the multi-label output data, wherein the combined classification label comprises the at least one original classification label delimited via the label combiner operator [Shen: the delimiters are what allow the labels to be uniquely identified in both input and output; 0017, 0018, and 0066].

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Read and Shen as applied to claim 1 above, and further in view of the NPL entitled "Data mining model to predict Fosamax adverse events" by Belal et al. (hereinafter Belal).

As per claim 10, Read and Shen are silent in explicitly teaching that the multi-label classification task includes classification of text objects containing text characterizing quality management events associated with products produced by an enterprise into classification sets corresponding to medical terms from a medical dictionary. Belal teaches a multi-label classification task [§III.A.3] that includes classification of text objects [AERS; section C] containing text characterizing quality management events associated with products produced by an enterprise [AERS for Fosamax, pg. 2] into classification sets corresponding to medical terms from a medical dictionary [MedDRA; §III.D.3].
Belal shows a practical example of placing multi-labels into multi-class sets in the field of medicine [§IV.D.3]. Thus, the use of this powerset technique was known before the effective filing date as applied to medical terms. The protocol of Read and Shen could then have been applied to the same type of medical data. The claim is obvious because one of ordinary skill in the art could combine methods known before the effective filing date which produce predictable results.

Conclusion

The prior art made of record and not relied upon that is considered pertinent to applicant's disclosure is listed on the enclosed PTO-892 form.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL R. VAUGHAN, whose telephone number is (571) 270-7316. The examiner can normally be reached Monday - Friday, 9:30am - 5:30pm, EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lynn Feild, can be reached at (571) 272-2092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL R VAUGHAN/
Primary Examiner, Art Unit 2431

Prosecution Timeline

Jun 12, 2023
Application Filed
Feb 25, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598464
POLICIES RELATED TO NON-PUBLIC NETWORKS
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12580933
CORRELATING FIREWALL AND ZERO TRUST DATA TO MONITOR REMOTE AND HYBRID WORKER SESSIONS
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12561488
SYSTEMS AND METHODS FOR CONTEXTUAL ACTIVATION OF ONLOOKER DETECTION
Granted Feb 24, 2026 • 2y 5m to grant
Patent 12563100
RESOURCE-MONITORING TELEMETRY IN A ZERO-TRUST COMPUTING ENVIRONMENT
Granted Feb 24, 2026 • 2y 5m to grant
Patent 12556587
SYSTEM AND METHOD FOR MANAGING SECURITY MODELS THROUGH SCENARIO GENERATION AND EVALUATION
Granted Feb 17, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78% (99% with interview; +31.1%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 799 resolved cases by this examiner. Grant probability derived from career allow rate.
