Prosecution Insights
Last updated: April 19, 2026
Application No. 18/184,938

SELF-LEARNING ONTOLOGY-BASED COGNITIVE ASSIGNMENT ENGINE

Status: Non-Final OA (§103)
Filed: Mar 16, 2023
Examiner: MEYER, JACQUELINE CHRISTINE
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Kyndryl Inc.
OA Round: 1 (Non-Final)
Grant Probability: 62% (Moderate)
OA Rounds: 1-2
To Grant: 4y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (grants 62% of resolved cases; 8 granted / 13 resolved; +6.5% vs TC avg)
Interview Lift: +67.5% (strong; allowance among resolved cases with vs. without an interview)
Avg Prosecution: 4y 3m (typical timeline); 24 currently pending
Total Applications: 37 (career history, across all art units)
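As a quick sanity check on the figures above (an editor's sketch, not part of the dashboard; the assumption is that the displayed 62% is simply grants divided by resolved cases, rounded):

```python
# Career-allow-rate arithmetic from the card above.
granted = 8    # granted cases
resolved = 13  # resolved cases

allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 61.5%, shown as 62% after rounding
```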

Statute-Specific Performance

§101: 30.1% (-9.9% vs TC avg)
§103: 44.5% (+4.5% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 13 resolved cases.
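The per-statute deltas above imply a Tech Center average via tc_avg = rate − delta. As an editor's consistency check (not part of the dashboard), all four rows recover the same ≈40.0% estimate:

```python
# Each row: statute -> (examiner rate %, delta vs TC avg %).
rows = {"101": (30.1, -9.9), "103": (44.5, +4.5),
        "102": (9.9, -30.1), "112": (12.7, -27.3)}

for statute, (rate, delta) in rows.items():
    # Implied TC average = rate - delta; 40.0% for every statute.
    print(f"§{statute}: implied TC avg = {rate - delta:.1f}%")
```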

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on March 16, 2023, March 16, 2023, and July 10, 2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner, except as noted on the IDS filed on 3/16/2023, where an invalid US patent number is listed and applicant failed to provide copies of the non-patent literature documents.

Specification

The disclosure is objected to because of the following informalities: Paragraph 0065 reads "In one embodiment, program code 307 for a method for a self-learning ontology-based cognitive assignment engine_is integrated into a client" but should read "In one embodiment, program code 307 for a method for a self-learning ontology-based cognitive assignment engine is integrated into a client." Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 5, 7, 8, 10, 11, 13-15, 16, 17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Buhler et al. (US10769503), hereinafter Buhler, in view of Mandal et al. (Cognitive system to achieve human-level accuracy in automated assignment of helpdesk email tickets), hereinafter Mandal.

Regarding claim 1, Buhler teaches:

a processor device operatively coupled to a memory, the processor device being configured for: intercepting an incoming message that comprises graphical content; (Buhler, column 20, lines 25-28: "The method 700 includes receiving (708) an input image having content. For example, the client device 104-1 receives an input image 108-1 via an image or document search application 106, according to some implementations.")

identifying characteristics of one or more ontological structures from the graphical content; (Buhler, column 20, lines 32-36: "Referring next to FIG. 7B, the method 700 also includes generating (712) a feature vector corresponding to the input image using a trained classifier model (e.g., a trained CNN, such as the classifier described above in reference to FIG. 5). The feature vector has a plurality of components." – The feature vector being generated using the trained classifier model is analogous to identifying characteristics of one or more ontological structures, as the feature vector has a plurality of components.)

associating one or more numeric values with each of the identified characteristics; (Buhler, column 20, lines 36-44: "The feature vector has a plurality of components. In some implementations, each component is (714) represented using a floating-point number. In some implementations, a majority of the components range (716) between 0.0 and 1.0. In some implementations, a first integer value corresponding to a first component of the plurality of components has (718) a length that is distinct from a second integer value corresponding to a second component of the plurality of components" – The characteristics are analogous to the plurality of components; thus each component being represented using a floating-point number is analogous to the one or more numeric values associated with the characteristics.)

computing a hash code of the incoming message by performing a hash function upon a subset of the numeric values; (Buhler, column 20, lines 51-54: "Referring next to FIG. 7C, the method 700 also includes encoding (724) the feature vector as a similarity hash (e.g., the hash patterns 112 generated by the hashing engine 328) by quantizing each component." – Encoding the feature vector as a similarity hash is analogous to computing a hash code of the incoming message.)

retrieving, as a function of the hash code, one or more matching templates of a set of previously stored templates; (Buhler, column 21, lines 5-14: "Referring next to FIG. 7D, the method 700 also includes performing (734) a sequence of steps for each reference image in a plurality of reference images (e.g., the images in the image library 120). The sequence of steps includes obtaining (736) a reference hash (e.g., from the hash patterns 112) corresponding to the respective reference image. The sequence of steps also includes computing (738) similarity between the input image and the respective reference image by computing a distance between the reference hash and the similarity hash." – The reference hash corresponding to the reference image is analogous to the stored templates. Computing similarity against the reference hash is analogous to retrieving the one or more matching templates.)

organizing the matching templates into a hierarchical structure as a function of the ontological structures; and (Buhler, column 21, lines 23-29: "In some implementations, the method 700 further includes grouping (746) the input image with one or more images of the plurality of reference images that are similar to the input image to form a cluster of images, as illustrated in FIGS. 6A-6D. Some implementations also assign a label to the cluster. Examples of clustering in hash space are described above in the section on Clustering in Hash Space." – Clustering the input image with the one or more reference images that are similar is analogous to organizing the matching templates into a hierarchical structure.)

Buhler does not explicitly teach: assigning the incoming message to an appropriate responder of a plurality of responders as a function of a semantic meaning associated with the matching templates.

However, Mandal teaches: assigning the incoming message to an appropriate responder of a plurality of responders as a function of a semantic meaning associated with the matching templates. (Mandal, abstract: "In this paper, we present an end-to-end automated helpdesk email ticket assignment system, which is also offered as a service. The objective of the system is to determine the nature of the problem mentioned in an incoming email ticket and then automatically dispatch it to an appropriate resolver group (or team) for resolution." – Buhler performs the semantic similarity (see Buhler, column 19, lines 6-16), which matches the semantic meaning of the incoming message with the matching template. This is analogous to Mandal's determining the nature of the problem, which then gets dispatched to an appropriate resolver group, which is analogous to assigning the incoming message to an appropriate responder.)

Mandal is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Buhler, which already teaches an image classification system that classifies incoming documents and messages but does not explicitly teach that the classified documents and messages are routed to an appropriate responder, to include the teachings of Mandal, which does teach that the classified documents and messages are routed to an appropriate responder, in order to "achieve human level accuracy with more than 90% coverage on all the datasets with the proposed system using minimal computational resources." (Mandal, page 2, section 1.1, paragraph 1)

Regarding claim 4, Buhler and Mandal teach the system of claim 1, as cited above. Buhler does not explicitly teach: the semantic meaning comprises information from which the reporting of the error condition can be inferred; and the responder is a technical-support resource configured to respond to the error condition. However, Mandal further teaches: the semantic meaning comprises information from which the reporting of the error condition can be inferred; and the responder is a technical-support resource configured to respond to the error condition.
(Mandal, page 2, paragraph 1: "The dispatch of a ticket to the correct group of practitioners is a critical step in the speedy resolution of a ticket. Incorrect dispatch decisions can significantly increase the total turnaround time for ticket resolution, as observed in a study of an actual production system [2]. Several factors make the dispatcher's job challenging such as requirement of knowledge of the IT portfolio being managed, roles and responsibilities of the individual groups, ability to quickly parse the ticket text describing the problem and map it to the right group, which is often not straightforward given the heterogeneous and informal nature of the problem description." – The IT portfolio being managed would include IT services. Thus, parsing the ticket text describing the problem and dispatching it to the correct group of practitioners indicates that the semantic meaning comprises information from which the reporting of error conditions can be inferred, and the ticket being dispatched to the correct group of practitioners is analogous to the responder being a technical-support resource.)

Regarding claim 5, Buhler and Mandal teach the system of claim 1, as cited above. Buhler further teaches: wherein the identified characteristics comprise an absolute location, within the graphical content, of a first structure of the one or more ontological structures. (Buhler, column 27, lines 20-29: "Some implementations use intra-document locational data generated from the OCR process to reconnect labels separated from their respective data. Some implementations use data returned from the OCR process (e.g., polygon vertex coordinates) specifying the location of the discovered text. Some implementations iteratively extend polygons (e.g., from output returned by the OCR process). For example, a polygon is extended in the direction of text until the polygon intersects with other OCR polygons and "join" the text from intersecting OCR polygons." – The intra-document locational data specifying the location of the discovered text is analogous to the identified characteristics comprising an absolute location of a first structure; e.g., the location of the discovered text is the absolute location of a first structure.)

Regarding claim 7, Buhler and Mandal teach the system of claim 1, as cited above. Buhler further teaches: wherein the identified characteristics characterize a graphical element comprised by a first structure of the one or more ontological structures. (Buhler, column 27, lines 20-29: "Some implementations use intra-document locational data generated from the OCR process to reconnect labels separated from their respective data. Some implementations use data returned from the OCR process (e.g., polygon vertex coordinates) specifying the location of the discovered text. Some implementations iteratively extend polygons (e.g., from output returned by the OCR process). For example, a polygon is extended in the direction of text until the polygon intersects with other OCR polygons and "join" the text from intersecting OCR polygons." – The location of the discovered text is the graphical element comprised by a first structure of one or more ontological structures.)

Regarding claim 8, claim 8 has all the same limitations as claim 1, which are taught by Buhler and Mandal. See claim 1 above.

Regarding claim 10, Buhler and Mandal teach the method of claim 8, as cited above. Claim 10 additionally has the same limitations as claim 4, which are taught by Buhler and Mandal; see claim 4 above.

Regarding claim 11, Buhler and Mandal teach the method of claim 8, as cited above. Claim 11 additionally has the same limitations as claim 5, which are taught by Buhler and Mandal; see claim 5 above.
Regarding claim 13, Buhler and Mandal teach the method of claim 8, as cited above. Claim 13 additionally has the same limitations as claim 7, which are taught by Buhler and Mandal; see claim 7 above.

Regarding claim 14, Buhler and Mandal teach the method of claim 8, as cited above. Buhler further teaches: providing at least one support service for at least one of creating, integrating, hosting, maintaining, and deploying computer-readable program code in the computer system, wherein the computer-readable program code in combination with the computer system is configured to implement the intercepting, the identifying, the associating, the computing, the retrieving, and the assigning. (Buhler, column 8, lines 26-32: "In some implementations, the memory 214, or the computer readable storage medium of the memory 214, stores the following programs, modules, and data structures, or a subset thereof: an operating system 216, which includes procedures for handling various basic system services and for performing hardware dependent tasks;" and column 8, lines 46-61: "an image or document search or organizer application 106, which enables a user to search and retrieve, or organize images or documents from one or more remote image or document libraries 120 and/or a local image or document library 234. The search or organizer application 106 provides a user interface 224. The image or document search application 106 also includes a retrieval module 226, which retrieves images or documents corresponding to a match identified by the server 110. The image or document search application 106 accesses one or more sample images or documents 108, which can be selected and/or identified by a user to be the basis for the search (e.g., to match the sample image or document 108 with one or more images or documents in remote image or document libraries 120 or a local image or document library 234)." – The basic system services stored on a computer readable storage medium are analogous to the computer-readable program code hosted in the computer system. The services, including searching, organizing, and retrieving documents from one or more remote image or document libraries, implement the steps recited.)

Regarding claim 15, claim 15 has the same limitations as claim 1, which are taught by Buhler and Mandal; see claim 1 above. Buhler further teaches: A computer program product including one or more computer readable storage mediums collectively storing program instructions for a self-learning ontology-based cognitive assignment engine that are executable by a processor or programmable circuitry to cause the processor or programmable circuitry to perform operations comprising: (Buhler, column 8, lines 26-32: "In some implementations, the memory 214, or the computer readable storage medium of the memory 214, stores the following programs, modules, and data structures, or a subset thereof: an operating system 216, which includes procedures for handling various basic system services and for performing hardware dependent tasks;")

Regarding claim 16, Buhler and Mandal teach the computer program product of claim 15, as cited above. Claim 16 additionally has the same limitations as claim 4, which are taught by Buhler and Mandal; see claim 4 above.

Regarding claim 17, Buhler and Mandal teach the computer program product of claim 15, as cited above. Claim 17 additionally has the same limitations as claim 5, which are taught by Buhler and Mandal; see claim 5 above.
Regarding claim 19, Buhler and Mandal teach the computer program product of claim 15, as cited above. Claim 19 additionally has the same limitations as claim 7, which are taught by Buhler and Mandal; see claim 7 above.

Regarding claim 20, Buhler and Mandal teach the computer program product of claim 15, as cited above. Claim 20 additionally has the same limitations as claim 14, which are taught by Buhler and Mandal; see claim 14 above.

Claims 2, 3, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Buhler in view of Mandal, further in view of Kulis et al. (Kernelized Locality-Sensitive Hashing for Scalable Image Search), hereinafter Kulis.

Regarding claim 2, Buhler and Mandal teach the system of claim 1, as cited above. Buhler and Mandal do not explicitly teach: wherein the hash codes are retrieved from a distributed hash table. However, Kulis teaches: wherein the hash codes are retrieved from a distributed hash table. (Kulis, page 2134, section 6, paragraph 2: "Throughout, we present results showing the percentage of database items searched with hashing as opposed to timing results, which are dependent on the particular optimizations of the code. In terms of additional overhead, finding the approximate nearest neighbors given the query hash key is very fast, particularly if the computation can be distributed across several machines (since the random permutations of the hash bits are independent of one another).")

Kulis is considered analogous to the claimed invention as it is reasonably pertinent to the problem faced by the invention, image processing and classification.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Buhler and Mandal, which already teach computing the hash codes by performing a hash function but do not explicitly teach that the hash codes are stored in a distributed hash table, to include the teachings of Kulis, which does teach that the hash codes are stored in a distributed hash table, in order to achieve "accurate and fast performance for example-based object classification, feature matching, and content-based retrieval." (Kulis, abstract)

Regarding claim 3, Buhler, Mandal, and Kulis teach the system of claim 2, as cited above. Buhler and Mandal do not explicitly teach: retrieving from the distributed hash table a matching value that most closely matches a value of the hash code; and determining that the matching value is an index value capable of enabling the system to retrieve the matching template. However, Kulis further teaches:

retrieving from the distributed hash table a matching value that most closely matches a value of the hash code; and (Kulis, page 2132, column 1, paragraph 3: "A query hash key indexes into each sorted order with a binary search, and the 2M nearest examples found are the approximate nearest neighbors." And paragraph 4: "In this work, the similarity function of interest is an arbitrary kernel function κ: sim(xi,xj)=κ(xi,xj)=φ(xi)Tφ(xj), for some (possibly unknown) embedding function φ(·)." – The binary search finding the examples that are the nearest neighbors is analogous to retrieving the matching value that most closely matches a value of the hash code.)

determining that the matching value is an index value capable of enabling the system to retrieve the matching template.
(Kulis, page 2136, column 1, paragraph 1: "However, we can use the data to qualitatively show the kinds of images that are retrieved to quantitatively show how well KLSH approximates a linear scan, and to confirm that our algorithm is amenable to rapidly searching very large image collections." – The query hash indexes discussed above show that the matching value is an index value. Thus, the algorithm searching the collections of images and retrieving them is analogous to using the matching value to retrieve the matching template.)

Regarding claim 9, Buhler and Mandal teach the method of claim 8, as cited above. Buhler and Mandal do not explicitly teach: retrieving from a distributed hash table a matching value that most closely matches a value of the hash code; and determining that the matching value is an index value capable of enabling the system to retrieve the matching template. However, Kulis teaches:

retrieving from a distributed hash table a matching value that most closely matches a value of the hash code; and (Kulis, page 2134, section 6, paragraph 2: "Throughout, we present results showing the percentage of database items searched with hashing as opposed to timing results, which are dependent on the particular optimizations of the code. In terms of additional overhead, finding the approximate nearest neighbors given the query hash key is very fast, particularly if the computation can be distributed across several machines (since the random permutations of the hash bits are independent of one another)." And page 2132, column 1, paragraph 3: "A query hash key indexes into each sorted order with a binary search, and the 2M nearest examples found are the approximate nearest neighbors." And paragraph 4: "In this work, the similarity function of interest is an arbitrary kernel function κ: sim(xi,xj)=κ(xi,xj)=φ(xi)Tφ(xj), for some (possibly unknown) embedding function φ(·)." – The binary search finding the examples that are the nearest neighbors is analogous to retrieving the matching value that most closely matches a value of the hash code.)

determining that the matching value is an index value capable of enabling the system to retrieve the matching template. (Kulis, page 2136, column 1, paragraph 1: "However, we can use the data to qualitatively show the kinds of images that are retrieved to quantitatively show how well KLSH approximates a linear scan, and to confirm that our algorithm is amenable to rapidly searching very large image collections." – The query hash indexes discussed above show that the matching value is an index value. Thus, the algorithm searching the collections of images and retrieving them is analogous to using the matching value to retrieve the matching template.)

Kulis is considered analogous to the claimed invention as it is reasonably pertinent to the problem faced by the invention, image processing and classification.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Buhler and Mandal, which already teach the method of using hash codes to determine a matching template for image classification and routing the messages to the appropriate responder but do not explicitly teach retrieving the matching values from a distributed hash table to enable the system to retrieve the matching template, to include the teachings of Kulis, which does teach retrieving the matching values from a distributed hash table to enable the system to retrieve the matching template, in order to achieve "accurate and fast performance for example-based object classification, feature matching, and content-based retrieval." (Kulis, abstract)

Claims 6, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Buhler in view of Mandal, further in view of Bloch et al. (Directional relative position between objects in image processing: a comparison between fuzzy approaches), hereinafter Bloch.

Regarding claim 6, Buhler and Mandal teach the system of claim 1, as cited above. Buhler and Mandal do not explicitly teach: wherein the identified characteristics comprise a set of relative locations, within the graphical content, of a plurality of structures of the one or more ontological structures. However, Bloch teaches: wherein the identified characteristics comprise a set of relative locations, within the graphical content, of a plurality of structures of the one or more ontological structures.

(Bloch, page 1563, column 2, paragraph 1: "In this paper, we consider only directional relative position, which provides an important information about the spatial arrangement of objects in the scene." And page 1564, column 1, paragraph 2: "Usually vision and image processing make use of quantitative representations of spatial relationships." – The relative position providing important information about the spatial arrangement of objects in the scene is analogous to the set of relative locations, where the objects in the scene are the one or more ontological structures.)

Bloch is considered analogous to the claimed invention as it is in the same field of endeavor, machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified Buhler and Mandal, which already teach the identified characteristics of one or more ontological structures but do not explicitly teach that the identified characteristics comprise a set of relative locations, to include the teachings of Bloch, which does teach that the identified characteristics comprise a set of relative locations, in order to "recognize objects based on comparison between the characteristics of objects in the scene and objects in the model, and on comparison between relationships of groups of two or more objects in the scene and in the model." (Bloch, page 1563, column 1, paragraph 1)

Regarding claim 12, Buhler and Mandal teach the method of claim 8, as cited above. Claim 12 additionally has the same limitations as claim 6, which are taught by Buhler, Mandal, and Bloch; see claim 6 above.

Regarding claim 18, Buhler and Mandal teach the computer program product of claim 15, as cited above. Claim 18 additionally has the same limitations as claim 6, which are taught by Buhler, Mandal, and Bloch; see claim 6 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Beneker and Gips (Using Clustering for Categorization of Support Tickets)
Tee and Harper (US20190036760)
Yoon et al. (US20180101423)
Avila et al. (US20190116265)
Jain and Potharaju (US20140006861)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACQUELINE MEYER, whose telephone number is (703) 756-5676. The examiner can normally be reached M-F 8:00 am - 4:30 pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tamara Kyle, can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.C.M./ Examiner, Art Unit 2144
/TAMARA T KYLE/ Supervisory Patent Examiner, Art Unit 2144
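The retrieval mechanism the rejection maps claim 1 onto in Buhler (quantize each feature-vector component into a similarity hash, then rank stored reference hashes by distance) can be sketched as follows. This is an illustrative sketch only: the function names, the sign-based 0.5 threshold, and the example templates are assumptions, not taken from the patent.

```python
# Minimal sketch of similarity-hash retrieval as described in the
# Buhler citations: quantize feature-vector components (roughly in
# 0.0-1.0) into bits, then rank reference hashes by Hamming distance.

def similarity_hash(features, threshold=0.5):
    """Quantize each component into one hash bit (illustrative rule)."""
    return tuple(1 if c >= threshold else 0 for c in features)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

def nearest_templates(query_features, references, k=2):
    """Return the k stored templates whose hashes are closest."""
    q = similarity_hash(query_features)
    return sorted(references,
                  key=lambda item: hamming(q, similarity_hash(item[1])))[:k]

# Hypothetical stored templates (name, feature vector).
references = [
    ("error_dialog", (0.9, 0.1, 0.8, 0.2)),
    ("login_form",   (0.1, 0.9, 0.2, 0.7)),
    ("stack_trace",  (0.8, 0.2, 0.9, 0.1)),
]

# Closest matches: error_dialog, then stack_trace.
print(nearest_templates((0.85, 0.15, 0.7, 0.3), references))
```

Kulis's kernelized LSH, cited for claims 2, 3, and 9, speeds up the same nearest-hash lookup by indexing hash keys into sorted orders with binary search rather than scanning every reference.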

Prosecution Timeline

Mar 16, 2023: Application Filed
Feb 25, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585981: MANAGING AN INSTALLED BASE OF ARTIFICIAL INTELLIGENCE MODULES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12468941: SYSTEMS AND METHODS FOR DYNAMICS-AWARE COMPARISON OF REWARD FUNCTIONS (granted Nov 11, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62%
With Interview: 99% (+67.5%)
Median Time to Grant: 4y 3m
PTA Risk: Low
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
