Prosecution Insights
Last updated: April 19, 2026
Application No. 18/531,772

DIGITAL WORKER SKILL OVERLAP MITIGATION

Current OA: Non-Final, §101 and §103
Filed: Dec 07, 2023
Examiner: BROCKINGTON III, WILLIAM S
Art Unit: 3623
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)
Grant Probability: 41% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 41% (grants 41% of resolved cases; 203 granted / 491 resolved; -10.7% vs TC avg)
Interview Lift: +54.3% (strong lift among resolved cases with an interview)
Typical Timeline: 3y 4m avg prosecution; 41 applications currently pending
Career History: 532 total applications across all art units
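The headline allow rate above follows directly from the raw counts shown (203 granted of 491 resolved). A minimal sketch of that arithmetic:

```python
# Reproduce the dashboard's career allow rate from the stated counts.
granted = 203
resolved = 491

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 41.3%, displayed as 41%
```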

Statute-Specific Performance

§101: 32.4% (-7.6% vs TC avg)
§103: 35.5% (-4.5% vs TC avg)
§102: 3.5% (-36.5% vs TC avg)
§112: 26.0% (-14.0% vs TC avg)

Deltas are measured against the Tech Center average estimate • Based on career data from 491 resolved cases
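The per-statute deltas imply the Tech Center baseline for each statute: subtracting each delta from the examiner's rate recovers the baseline estimate, which works out to 40.0% across the board. A quick check:

```python
# Recover the implied Tech Center average from each (examiner rate, delta) pair
# shown above. Each delta is examiner_rate - tc_avg, so tc_avg = rate - delta.
rates = {
    "101": (32.4, -7.6),
    "103": (35.5, -4.5),
    "102": (3.5, -36.5),
    "112": (26.0, -14.0),
}
for statute, (examiner_rate, delta) in rates.items():
    tc_avg = examiner_rate - delta
    print(f"§{statute}: TC average ≈ {tc_avg:.1f}%")  # 40.0% for every statute
```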

Office Action

Rejections: §101, §103
DETAILED ACTION

The following is a Non-Final Office Action in response to communications filed January 7, 2026. Claims 1–2, 8–9, and 15–20 are amended. Currently, claims 1–20 are pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 7, 2026 has been entered.

Response to Amendment/Argument

With respect to the previous rejection of claims 1–20 under 35 U.S.C. 101, Applicant’s remarks have been fully considered but are not persuasive. Applicant primarily asserts that the claims embody an improvement in the functioning of the computer or an improvement to other technology or technical field because the claims improve automated task execution and inefficient system operation by allowing the underlying computer system to reliably select a correct skill at run time. Examiner disagrees. As currently presented, the claims are limited to disambiguating skills by detecting skill similarities and evaluating an answer to a disambiguation question. The claims do not include any elements for executing a digital worker, invoking skills, or invoking one skill subsequent to disambiguating multiple skills. As a result, Applicant’s remarks are not persuasive because the remarks are neither commensurate with the scope of the claims nor commensurate with the technical improvements described in Applicant’s Specification. Applicant further asserts that the claims cannot recite a mental process because the claims explicitly require a computer implementation. Examiner disagrees. 
MPEP 2106.04(a)(2)(III)(C) indicates that a computer process may still recite mental processes when the underlying process can be performed via pen and paper. Here, Examiner maintains that the elements identified under Step 2A Prong One can be practically performed in the mind or by a human using pen and paper because the elements embody observations or evaluations for disambiguating skill functions. As a result, Applicant’s remarks are not persuasive. Applicant’s remaining arguments have been fully considered but are directed to amended subject matter, which is addressed for the first time herein. Accordingly, the previous rejection of claims 1–20 under 35 U.S.C. 101 is maintained and reasserted below. With respect to the previous rejections under 35 U.S.C. 103, Applicant’s remarks have been fully considered but are moot in view of the updated grounds of rejection asserted below.

Claim Objections

Claims 10–14 and 17–20 are objected to because of the following informalities: Applicant’s Response amends claims 8–9 and 15–16 to remove “-ing” verb forms. However, claims 10–14 and 17–20, which depend from independent claims 8 and 15, were not similarly amended. Examiner recommends amending claims 10–14 and 17–20 to remove the “-ing” language in order to facilitate claim consistency and clarity. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1–20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Specifically, claims 1–20 are directed to an abstract idea without additional elements amounting to significantly more than the abstract idea. With respect to Step 2A Prong One of the framework, claim 1 recites an abstract idea. 
Claim 1 includes elements for “in response to one of a first skill or a second skill being newly added to a digital worker: detecting a similarity between the first skill and the second skill assigned to the digital worker”; “generating a disambiguation question to disambiguate the first skill from the second skill”; “disambiguating the first skill from the second skill based on an answer to the disambiguation question”; and “generating one or more paraphrases of the disambiguation question”. The limitations above recite an abstract idea. More particularly, the elements above recite a mental process because the elements describe observations or evaluations that can be practically performed in the mind or by a human using pen and paper. As a result, claim 1 recites an abstract idea under Step 2A Prong One. Claims 8 and 15 include substantially similar limitations to those included with respect to claim 1. As a result, claims 8 and 15 recite an abstract idea under Step 2A Prong One for the same reasons as stated above with respect to claim 1. Claims 2–7, 9–14, and 16–20 further describe the process for disambiguating skills and further recite a mental process for the same reasons as stated above. Further, claims 6, 13, and 20 recite mathematical concepts because, in view of Applicant’s Specification, the elements describe mathematical relationships and calculations (See, e.g., Spec. ¶ 46). As a result, claims 2–7, 9–14, and 16–20 recite an abstract idea under Step 2A Prong One. With respect to Step 2A Prong Two of the framework, claim 1 does not include additional elements that integrate the abstract idea into a practical application. Claim 1 includes additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements include an implicit computer, a templatized model, a trained large language model, and a step for “storing”. 
When considered in view of the claim as a whole, the additional elements do not integrate the abstract idea into a practical application because the additional computer component is a generic computing element that is merely used as a tool to perform the recited abstract idea; the models do no more than generally link the use of the recited abstract idea to a particular technological environment; and the step for “storing” is an insignificant extrasolution activity to the recited abstract idea. As a result, claim 1 does not include any additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two. As noted above, claims 8 and 15 include substantially similar limitations to those included with respect to claim 1. Although claim 8 further includes a computer-readable medium and a processor and claim 15 further includes a memory and a processor, the additional elements, when considered in view of the claim as a whole, do not integrate the abstract idea into a practical application because the additional computing elements are generic computer components that are merely used as a tool to perform the recited abstract idea. As a result, claims 8 and 15 do not include any additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two. Claims 2–7, 9–14, and 16–20 do not include any additional elements beyond those included with respect to the claims from which claims 2–7, 9–14, and 16–20 depend. As a result, claims 2–7, 9–14, and 16–20 do not include any additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two for the same reasons as stated above. With respect to Step 2B of the framework, claim 1 does not include additional elements amounting to significantly more than the abstract idea. As noted above, claim 1 includes additional elements that do not recite an abstract idea under Step 2A Prong One. 
The additional elements include an implicit computer, a templatized model, a trained large language model, and a step for “storing”. The additional elements do not amount to significantly more than the recited abstract idea because the additional computer component is a generic computing element that is merely used as a tool to perform the recited abstract idea; the models do no more than generally link the use of the recited abstract idea to a particular technological environment; and the step for “storing” is a well-understood, routine, and conventional computer function in view of MPEP 2106.05(d)(II). Further, looking at the additional elements as an ordered combination adds nothing that is not already present when considering the additional elements individually. As a result, claim 1 does not include any additional elements that amount to significantly more than the recited abstract idea under Step 2B. As noted above, claims 8 and 15 include substantially similar limitations to those included with respect to claim 1. Although claim 8 further includes a computer-readable medium and a processor and claim 15 further includes a memory and a processor, the additional elements do not amount to significantly more than the recited abstract idea because the additional computing elements are generic computer components that are merely used as a tool to perform the recited abstract idea. Further, looking at the additional elements as an ordered combination adds nothing that is not already present when considering the additional elements individually. As a result, claims 8 and 15 do not include any additional elements that amount to significantly more than the recited abstract idea under Step 2B. Claims 2–7, 9–14, and 16–20 do not include any additional elements beyond those included with respect to the claims from which claims 2–7, 9–14, and 16–20 depend. 
As a result, claims 2–7, 9–14, and 16–20 do not include any additional elements that amount to significantly more than the recited abstract idea under Step 2B for the same reasons as stated above. Therefore, the claims are directed to an abstract idea without additional elements amounting to significantly more than the abstract idea. Accordingly, claims 1–20 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1–2, 5, 7–9, 12, 14–16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Will, IV et al. (U.S. 2024/0020484) in view of Beaver (U.S. 2020/0106881), and in further view of Latzina et al. (U.S. 2012/0166178) and Rofouei et al. (U.S. 2024/0289407).

Claims 1, 8, and 15: Will discloses a computer-implemented method for mitigating digital worker skill overlap, the method comprising: in response to one of a first skill or a second skill being newly added to a digital worker (See paragraphs 57–58, in view of paragraphs 60–62, wherein disambiguation processes are initiated in response to new user intent instances): detecting a similarity between the first skill and the second skill assigned to the digital worker (See FIG. 
3 and paragraphs 39–40 and 54, wherein intent similarity is detected with respect to a plurality of intents associated with a chatbot, and wherein the intents include answers and service information requests); generating a disambiguation question to disambiguate the first skill from the second skill (See FIG. 4 and paragraph 57, wherein intent options are presented to the user); and disambiguating the first skill from the second skill based on an answer to the disambiguation question (See FIG. 4 and paragraph 61, wherein, upon selection of an intent option, the chatbot provides a response to the user; see also paragraph 22, wherein the chatbot learns from intent interaction experiences to disambiguate intent options over time). Although Will implicitly discloses storing the disambiguation question as an artifact (See paragraph 22, wherein interactions are used for learning) and discloses a skill specification (See paragraphs 57–60), Will does not expressly disclose the remaining claim elements. Beaver discloses generating, using a templatized model, an output (See paragraphs 70–71, in view of paragraph 8, wherein the virtual assistant outputs responses using a templatized language model). Will discloses a system directed to disambiguating utterances using a chatbot. Beaver discloses a system directed to automating virtual assistant conversations. Each reference discloses a system directed to managing bot interactions. The technique of using a templatized model is applicable to the system of Will as they each share characteristics and capabilities; namely, they are directed to managing bot interactions. One of ordinary skill in the art would have recognized that applying the known technique of Beaver would have yielded predictable results and resulted in an improved system. 
It would have been recognized that applying the technique of Beaver to the teachings of Will would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate bot interaction management into similar systems. Further, applying a templatized model to Will would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow more detailed analysis and more reliable results. Will and Beaver do not expressly disclose the remaining claim elements. Latzina discloses generating one or more paraphrases of the disambiguation question (See paragraph 39, wherein the system rephrases disambiguating questions during a user interaction); and storing the disambiguation question and the one or more paraphrases of the disambiguation question as disambiguation artifacts in a skill specification associated with at least one of the first skill or the second skill (See paragraphs 51–52, wherein user interactions are logged and used to build connections between “actual user input, retrievals of service/function recommendations, and user selections”). As disclosed above, Will discloses a system directed to disambiguating utterances using a chatbot, and Beaver discloses a system directed to automating virtual assistant conversations. Latzina discloses a system directed to processing inputs during a digital assistant interaction. Each reference discloses a system directed to managing bot interactions. The technique of utilizing paraphrases and a skill specification is applicable to the systems of Will and Beaver as they each share characteristics and capabilities; namely, they are directed to managing bot interactions. One of ordinary skill in the art would have recognized that applying the known technique of Latzina would have yielded predictable results and resulted in an improved system. 
It would have been recognized that applying the technique of Latzina to the teachings of Will and Beaver would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate bot interaction management into similar systems. Further, applying paraphrases and a skill specification to Will and Beaver would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow more detailed analysis and more reliable results. Will, Beaver, and Latzina do not expressly disclose the remaining claim elements. Rofouei discloses generating, using a trained large language model, one or more disambiguation questions (See paragraphs 185–186, wherein clarification prompts are generated by an LLM). As disclosed above, Will discloses a system directed to disambiguating utterances using a chatbot, Beaver discloses a system directed to automating virtual assistant conversations, and Latzina discloses a system directed to processing inputs during a digital assistant interaction. Rofouei discloses a system directed to augmenting search sessions with a chatbot. Each reference discloses a system directed to managing bot interactions. The technique of utilizing a large language model is applicable to the systems of Will, Beaver, and Latzina as they each share characteristics and capabilities; namely, they are directed to managing bot interactions. One of ordinary skill in the art would have recognized that applying the known technique of Rofouei would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Rofouei to the teachings of Will, Beaver, and Latzina would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate bot interaction management into similar systems. 
Further, applying a large language model to Will, Beaver, and Latzina would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow more detailed analysis and more reliable results. With respect to claim 8, Will discloses a computer program product, the computer program product comprising: one or more non-transitory computer-readable storage media and program instructions stored on the one or more non-transitory computer-readable storage media capable of performing a method (See FIG. 2). With respect to claim 15, Will discloses a computer system, the system comprising: one or more computer processors, one or more computer-readable storage media, and program instructions stored on the one or more of the computer-readable storage media for execution by at least one of the one or more processors capable of performing a method (See FIG. 2).

Claims 2, 9, and 16: Will discloses the computer-implemented method of claim 1, further comprising: prompting at least one of the disambiguation question or the one or more paraphrases of the disambiguation question in response to at least one of the first skill or the second skill being subsequently invoked (See FIG. 4 and paragraph 61, wherein disambiguation is triggered in response to user intent utterances triggering more than one intent; see also paragraphs 54–57).

Claims 5, 12, and 19: Will discloses the computer-implemented method of claim 1, wherein the generating the disambiguation question further comprises: receiving a natural language explanation differentiating the first skill from the second skill (See FIG. 4 and paragraphs 37 and 70–71, wherein the user rephrases the utterance in response to detection of a low confidence score threshold level); identifying a key phrase within the natural language explanation (See FIG. 
4 and paragraphs 70–71, wherein a key phrase (“ETime”) of the explanation is identified); and generating the disambiguation question based on the key phrase (See FIG. 4 and paragraphs 70–71, wherein the disambiguation question is based on the key phrase, “ETime”).

Claims 7 and 14: Will discloses the computer-implemented method of claim 1, wherein the detecting the similarity between the first skill and the second skill further comprises: determining whether a similarity between at least one of a description, utterance, entity, input, and output corresponding to the first skill and the second skill exceeds a threshold (See FIG. 3 and paragraphs 37 and 54, wherein utterance similarity thresholds are implemented to determine intent options).

Claims 3–4, 10–11, and 17–18 are rejected under 35 U.S.C. 103 as being unpatentable over Will, IV et al. (U.S. 2024/0020484) in view of Beaver (U.S. 2020/0106881), and in further view of Latzina et al. (U.S. 2012/0166178), Rofouei et al. (U.S. 2024/0289407), and Weinstein (U.S. 2018/0300395).

Claims 3, 10, and 17: As disclosed above, Will, Beaver, Latzina, and Rofouei disclose the elements of independent claim 1. Although Will discloses identifying one or more entities of the first skill or the second skill and generating the disambiguation question based on the one or more entities (See FIG. 4 and paragraphs 35 and 38, wherein intent objectives are matched to intent identifiers), Will, Beaver, Latzina, and Rofouei do not expressly disclose the remaining claim elements. 
Weinstein discloses wherein the generating the disambiguation question further comprises: identifying one or more exclusive entities that are exclusive to the first skill or the second skill (See paragraphs 54–56 and 71, wherein mutually exclusive arguments are identified, including vans/sedans or red/green); and generating the disambiguation question based on the one or more exclusive entities (See paragraphs 54–56 and 71, wherein mutually exclusive arguments are identified, and wherein based on the mutually exclusive arguments, disambiguation questions are generated). As disclosed above, Will discloses a system directed to disambiguating utterances using a chatbot, Beaver discloses a system directed to automating virtual assistant conversations, Latzina discloses a system directed to processing inputs during a digital assistant interaction, and Rofouei discloses a system directed to augmenting search sessions with a chatbot. Weinstein discloses a system directed to processing queries using a conversational virtual assistant. Each reference discloses a system directed to managing bot interactions. The technique of utilizing exclusive entities is applicable to the systems of Will, Beaver, Latzina, and Rofouei as they each share characteristics and capabilities; namely, they are directed to managing bot interactions. One of ordinary skill in the art would have recognized that applying the known technique of Weinstein would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Weinstein to the teachings of Will, Beaver, Latzina, and Rofouei would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate bot interaction management into similar systems. 
Further, applying exclusive entities to Will, Beaver, Latzina, and Rofouei would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow more detailed analysis and more reliable results.

Claims 4, 11, and 18: Although Will discloses using logic operations (See paragraph 50), Will and Beaver do not expressly disclose the remaining claim elements. Weinstein discloses wherein the identifying the one or more exclusive entities further comprises: applying an exclusive or (XOR) logic operation to entities associated with the first skill and the second skill (See paragraphs 54 and 71, wherein exclusive or operations are implicitly applied to identified mutually exclusive arguments). One of ordinary skill in the art would have recognized that applying the known technique of Weinstein would have yielded predictable results and resulted in an improved system for the same reasons as stated above with respect to claim 3.

Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Will, IV et al. (U.S. 2024/0020484) in view of Beaver (U.S. 2020/0106881), and in further view of Latzina et al. (U.S. 2012/0166178), Rofouei et al. (U.S. 2024/0289407), and HILDICK-SMITH et al. (U.S. 2021/0390259).

Claims 6, 13, and 20: As disclosed above, Will, Beaver, Latzina, and Rofouei disclose the elements of claims 1 and 5. Although Will discloses identifying a key phrase (See citations above), Will, Beaver, Latzina, and Rofouei do not expressly disclose the remaining claim elements. 
Hildick-Smith discloses wherein the identifying the key phrase further comprises: generating first embeddings for the natural language explanation and second embeddings for one or more strings within the natural language explanation (See paragraph 239, wherein embeddings are generated for words and phrases of a natural language message); and identifying as the key phrase the one or more strings corresponding to the second embeddings having a shortest distance from the first embeddings (See FIG. 9 and paragraphs 249–250 and 253, in view of paragraphs 212–214, 219, and 224, wherein key phrases are identified according to a cosine distance between embeddings). As disclosed above, Will discloses a system directed to disambiguating utterances using a chatbot, Beaver discloses a system directed to automating virtual assistant conversations, Latzina discloses a system directed to processing inputs during a digital assistant interaction, and Rofouei discloses a system directed to augmenting search sessions with a chatbot. Hildick-Smith discloses a system directed to providing personalized responses using a digital assistant. Each reference discloses a system directed to managing bot interactions. The technique of utilizing embeddings to identify key phrases is applicable to the systems of Will, Beaver, Latzina, and Rofouei as they each share characteristics and capabilities; namely, they are directed to managing bot interactions. One of ordinary skill in the art would have recognized that applying the known technique of Hildick-Smith would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Hildick-Smith to the teachings of Will, Beaver, Latzina, and Rofouei would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate bot interaction management into similar systems. 
Further, applying embedding techniques to Will, Beaver, Latzina, and Rofouei would have been recognized by those of ordinary skill in the art as resulting in an improved system that would allow more detailed analysis and more reliable results.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM S BROCKINGTON III whose telephone number is (571)270-3400. The examiner can normally be reached M-F, 8am-5pm, EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao Wu, can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILLIAM S BROCKINGTON III/
Primary Examiner, Art Unit 3623
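For readers unfamiliar with the techniques the rejection maps to Weinstein (claims 4, 11, and 18) and Hildick-Smith (claims 6, 13, and 20), a minimal illustrative sketch follows. It is not drawn from the application: the skill names, entity names, and toy embedding vectors are hypothetical, chosen only to show an XOR (symmetric difference) over entity sets and a shortest-embedding-distance key-phrase selection.

```python
import math

def exclusive_entities(first: set[str], second: set[str]) -> set[str]:
    # XOR over entity sets: entities belonging to exactly one of the two skills.
    return first ^ second

def cosine_distance(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def key_phrase(explanation_vec: list[float],
               candidates: dict[str, list[float]]) -> str:
    # The candidate string whose embedding has the shortest distance from the
    # explanation's embedding is taken as the key phrase.
    return min(candidates,
               key=lambda s: cosine_distance(candidates[s], explanation_vec))

# Hypothetical skills sharing "amount" and "due_date" but not their ID entity:
invoice_skill = {"invoice_id", "amount", "due_date"}
payroll_skill = {"employee_id", "amount", "due_date"}
print(sorted(exclusive_entities(invoice_skill, payroll_skill)))
# ['employee_id', 'invoice_id']

# Toy 2-d embeddings: "billing" lies closest to the explanation vector.
print(key_phrase([0.9, 0.1], {"billing": [1.0, 0.0], "staffing": [0.0, 1.0]}))
```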

Prosecution Timeline

Dec 07, 2023
Application Filed
May 23, 2025
Non-Final Rejection — §101, §103
Aug 25, 2025
Response Filed
Oct 10, 2025
Final Rejection — §101, §103
Jan 07, 2026
Request for Continued Examination
Feb 12, 2026
Response after Non-Final Action
Feb 24, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602639: METHODS AND SYSTEMS FOR TIME-VARIANT VARIABLE PREDICTION AND MANAGEMENT FOR SUPPLIER PROCUREMENT
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12591829: SYSTEMS AND METHODS FOR VULNERABILITY ASSESSMENT AND REMEDY IDENTIFICATION
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12591825: Computing System and Method for Progress Tracking Using a Large Language Model
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586034: CONTROL TOWER AND ENTERPRISE MANAGEMENT PLATFORM FOR MANAGING VALUE CHAIN NETWORK ENTITIES FROM POINT OF ORIGIN OF ONE OR MORE PRODUCTS OF THE ENTERPRISE TO POINT OF CUSTOMER USE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12536480: METHOD FOR DETERMINING A TREATMENT SCHEDULE FOR TREATING A FIELD
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed in order to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 41%
With Interview: 96% (+54.3%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 491 resolved cases by this examiner. Grant probability derived from career allow rate.
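The with-interview figure appears to be the base grant probability plus the interview lift. A quick check (assuming a simple additive lift, which is an assumption on our part, not something the page states):

```python
# Base rate from the examiner's career counts, plus the stated interview lift.
base = 203 / 491 * 100   # ~41.3% career allow rate
lift = 54.3              # percentage-point interview lift

with_interview = base + lift
print(f"With interview: {with_interview:.1f}%")  # 95.6%, consistent with the displayed 96%
```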
