Prosecution Insights
Last updated: April 19, 2026
Application No. 17/167,868

SEMI-CROWDSOURCED EXPERT-IN-THE-LOOP METHOD FOR THE EXTRACTION AND COMPILATION OF DIAGNOSIS AND REPAIR KNOWLEDGE

Status: Non-Final OA (§103)
Filed: Feb 04, 2021
Examiner: GIULIANI, GIUSEPPI J
Art Unit: 2153
Tech Center: 2100 — Computer Architecture & Software
Assignee: Robert Bosch GmbH
OA Round: 3 (Non-Final)

Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 3m
Grant Probability With Interview: 65%

Examiner Intelligence

Career Allow Rate: 58% of resolved cases (162 granted / 279 resolved; +3.1% vs TC avg)
Interview Lift: +7.2% on resolved cases with interview (moderate, roughly +7%)
Avg Prosecution: 3y 3m typical timeline (25 applications currently pending)
Career History: 304 total applications across all art units

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)

Tech Center averages are estimates; figures based on career data from 279 resolved cases.
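The headline figures above are simple derived ratios. As a quick sanity check, a Python sketch, assuming the with-interview figure is just the career rate plus the interview lift (consistent with the rounded values shown):

```python
# Derived examiner statistics, using the values from the panels above.
granted, resolved = 162, 279

career_allow_rate = granted / resolved           # 162 / 279 ≈ 0.581
interview_lift = 0.072                           # +7.2 points with interview
with_interview = career_allow_rate + interview_lift

print(f"Career allow rate: {career_allow_rate:.1%}")   # Career allow rate: 58.1%
print(f"With interview:    {with_interview:.1%}")      # With interview:    65.3%
```

Both results round to the 58% and 65% shown in the summary.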

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination - 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. The applicant's submission for RCE filed on 22 October 2025 has been entered.

Remarks

This action is in response to the applicant's RCE filed 22 October 2025, which is in response to the USPTO office action mailed 22 July 2025. Claims 1, 8 and 15 are amended. Claims 1-20 are currently pending.

Response to Arguments

With respect to the 35 USC §103 rejections of claims 1-20, the applicant's arguments are moot in view of new grounds of rejection, as necessitated by the applicant's amendments.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4, 6-9, 11, 13-16, 18 and 20 are rejected under 35 U.S.C.
103 as being unpatentable over Namburu et al., US 2011/0119231 A1 (hereinafter “Namburu”) in view of Sandor et al., US 2017/0286396 A1 (hereinafter “Sandor”) in further view of Zhang, US 10,692,006 B1 (hereinafter “Zhang”). Claim 1: Namburu teaches a method for semi-crowdsourced expert-in-the-loop information capture, comprising: obtaining solution search criteria for a repair problem from one or more of an expert user, machine algorithm, or crowd worker (Namburu, [0035] note data-mining… Web crawlers 250 may comprise a software application which browses a wide-area network (WAN), such as the Internet, to extract information related to products (e.g., automotive) or diagnostic services (e.g., repairs)); conducting a search according to the solution search criteria to identify search results (Namburu, [Fig. 4], [0040] note original data 260 is collected and cleaned 410, such as by filtering, extraction, or connection of pertinent information); filtering the search results according to relevance and likelihood of containing confirmed solutions (Namburu, [0041] note Next, the data analysis 220 may involve weighing and ranking of sources 420. To this end, sources of the original data 260 are indexed and ranked); providing the search results to crowd workers for analysis to find and extract the confirmed repair solutions (Namburu, [0043] note Pre-processing 430 may further include indexing/ranking 436 the original data 260 whereby the original data 260 itself is prioritized. 
Applications 230 or users 470 may also provide feedback to ranking systems such that if in some cases where certain features play a major role, the adaptive service system 100 can send a message to the ranking system to place priority on the particular features, [0053] note For example, to diagnose brake failure found in a particular vehicle type, a technician may enter a keyword such as "brake" and input data 520 including the make and model of the vehicle to receive suggestions 530 on how to repair the brake failure); if the solutions do not exist in a knowledge base, adding the extracted confirmed solutions to the knowledge base (Namburu, [0041] note During the process of ranking/indexing original data 260 or data sources, should new knowledge be acquired (e.g., new trends in customer feedback with respect to vehicle features, new fault phenomenon, previously un-noticed trends in original data), new objectives may be formulated); and otherwise if the solutions already exist in the knowledge base, using the extracted confirmed solutions (Namburu, [0045] note results 460 of the data analysis 220 may be exhibited in various formats including through visuals 462 and/or reports 464. Suitable forms of, visuals may be presentation slides, graphs, or the like, while conventional reports 464 such as MS Excel, MS PowerPoint, or other report formats may be contemplated). 
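The final limitations of claim 1 (add newly extracted confirmed solutions to the knowledge base when absent, otherwise reuse the stored entry) reduce to a simple conditional upsert. A hypothetical sketch, not drawn from the application's actual implementation; all names are illustrative:

```python
# Hypothetical sketch of claim 1's knowledge-base branch: add an extracted
# confirmed solution if it is not already present, otherwise reuse it.

knowledge_base: dict[str, list[str]] = {}  # repair problem -> confirmed solutions

def apply_confirmed_solution(problem: str, solution: str) -> str:
    solutions = knowledge_base.setdefault(problem, [])
    if solution not in solutions:
        # "if the solutions do not exist in a knowledge base, adding the
        # extracted confirmed solutions to the knowledge base"
        solutions.append(solution)
    # "otherwise ... using the extracted confirmed solutions"
    return solution

apply_confirmed_solution("brake squeal", "replace worn brake pads")
apply_confirmed_solution("brake squeal", "replace worn brake pads")  # no duplicate
print(knowledge_base)  # {'brake squeal': ['replace worn brake pads']}
```

The second call illustrates the "otherwise" branch: the solution already exists, so the knowledge base is unchanged and the stored entry is reused.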
Namburu does not explicitly teach using a confirmed-solution classifier that operates on the search results to identify confirmed repair solutions, wherein the confirmed-solution classifier is trained using a training dataset that includes confirmed-fix posts that indicate positive sentiment about fixing a problem with a proposed solution; also to identify snippets that indicate confirmed-fix sentiment; providing the extracted confirmed solutions for expert review; and using the extracted confirmed solutions and the snippets that indicate confirmed-fix sentiment to retrain the confirmed-solution classifier. However, Sandor teaches using a confirmed-solution classifier that operates on the search results to identify confirmed repair solutions (Sandor, [0035] note sentence categorizer 56 inputs a set of features for the issue sentence, including features related to the discourse patterns, into a classifier 75, which has been trained on such features to output a most probable category or a probabilistic distribution over some or all categories, [0036] note a knowledge base (KB) update component which uses the identified issue category for selecting one of a plurality of knowledge bases 76, 78 to be updated with an issue (e.g. a question) and corresponding answer, which may be derived, at least in part, from the answer 34 in the post 30); wherein the confirmed-solution classifier is trained using a training dataset that includes confirmed-fix posts that indicate positive sentiment about fixing a problem with a proposed solution (Sandor, [0032] note The system has access to a collection 28 of threads obtained from web posts, which may be stored in memory 12 during processing. Each thread 30 in the collection generally includes an issue 32, includes one or more text sequences (e.g., sentences), in a natural language having a grammar, such as English, that was posted by a person seeking an answer. 
Each issue may include a description of an anomaly and/or request information, e.g., as a question. The issue may relate to a device. Each of the sentences of the issue may be processed by the system. The thread 30 also includes one or more answers 34, posted by another person or other people. Each answer 34 generally attempts to provide an answer the question 32. Each answer may be in natural language and/or include graphics which illustrate the answer. The thread 30 may have metadata, e.g., XML tags, which provide information, such as one or more of: tags 36, 38 indicating the parts of the post corresponding to an issue and an answer to that question, respectively, a title tag 40 for a title 42 of the post, keyword tags 44, voting tags by other users, a rank, and the like); also to identify snippets that indicate confirmed-fix sentiment (Sandor, [0225] note a CRF classifier, or other classifier may be trained to identify problem (issue) and solution (answer) parts of the thread; i.e. snippets); and using the extracted confirmed solutions and the snippets that indicate confirmed-fix sentiment to retrain the confirmed-solution classifier (Sandor, [0054] note the discourse patterns for one or more of the categories which fire on the collection of issue sentences are used as features, optionally together with other features extracted from the issue sentences, to train a classifier. The trained classifier(s) can be used in the categorization step S112 for making predictions for the respective category). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the knowledge base of Namburu with the knowledge base update based on a classifier of Sandor according to known methods (i.e. updating the knowledge base using a classifier trained to identify problems (issues) and associated solutions (answers)).
Motivation for doing so is that this allows the most useful types of user issues and requests to be identified and extracted, and the sentences that convey them are thus detected (Sandor, [0230]). Namburu and Sandor do not explicitly teach providing the extracted confirmed solutions for expert review. However, Zhang teaches providing the extracted confirmed solutions for expert review (Zhang, [Fig. 3]-[Fig. 6], [Col. 15 Lines 51-52] note FIG. 6 is an example 600 illustrating an implementation of a chatbot system, [Col. 15 Lines 65-67]-[Col. 16 Lines 1-2] note Example 600 begins at 650 with a user providing a question through question interface 602. At 652, the questions is provided to knowledge base 604 to determine whether a sufficiently similar question is already mapped to an answer in the knowledge base 604, [Col. 16 Lines 22-25] note At 664, each of the scores provided by the distance model 614 for each potential expert can be compared to a threshold to determine if the corresponding expert should be selected to provide an answer, [Col. 16 Lines 31-34] note At 668, answers from the selected experts can be grouped and the most common answer is provided to the knowledge base 604. The most common answer is then mapped to the question in knowledge base 604, [Col. 14 Lines 22-24] note FIG. 5 is a flow diagram illustrating a process 500 used in some implementations for training a distance model to identify experts to answer a question, [Col. 3 Lines 17-21] note training items can include a score for the best answer provided corresponding to the training item. Each training item can be used to partially train a distance model, such as a neural network). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the portal including a search window of Namburu and Sandor with the crowdsourced chatbot answers of Zhang according to known methods (i.e. 
providing a crowdsource based chatbot to answer searches related to service repair data and/or repair documentation). Motivation for doing so is that the chatbot system can improve question response systems by finding better sources for answers to questions (Zhang, [Col. 3 Lines 52-54]).

Claim 2: Namburu, Sandor and Zhang teach the method of claim 1, wherein the confirmed-solution classifier is trained to identify, in a search result, presence of (i) a problem post indicative of the repair problem, (ii) a solution post indicative of a solution to the repair problem, and (iii) a confirmed-fix indication validating that the solution post provides a working solution (Zhang, [Col. 10 Lines 52-58] note At block 406, process 400 can determine if the question received at block 404 is already in a knowledge base… the question can be reduced to a set of keywords or to a semantic representation such as a vector in a vector space, [Col. 11 Lines 10-12] note At block 408, process 400 can return the answer identified in the knowledge base and process 400 continues to block 438).

Claim 4: Namburu, Sandor and Zhang teach the method of claim 1, further comprising training the confirmed-solution classifier using a training dataset of positive and negative posts using a training model including one or more of a multi-layer perceptron (MLP), a random forest, a logistic regression, or a deep learning classification model (Zhang, [Col. 3 Lines 39-42] note Examples of models include: neural networks, support vector machines, decision trees, Parzen windows, Bayes, clustering, reinforcement learning, probability distributions, and others), wherein, in the training dataset, a first post of each forum thread is identified as being a problem post, one or more solution posts are identified as including matching terminology from a domain dictionary, and confirmed-fix indications are identified according to sentiment or occurrence of positive fix keywords (Namburu, [0035] note forums, discussion boards, [0036] note web data (e.g., discussion forums, blogs, search results), [0047] note Domain semantics, [Fig. 3] note an illustrative domain taxonomy structure, [0035] note various sources such as direct customer feedback regarding automobiles, automotive parts, or automotive services (e.g., repair services) provided to a dealer or through the use of web crawlers 250 or field data collected through dealer networks or through remote services).

Claim 6: Namburu, Sandor and Zhang teach the method of claim 1, further comprising providing a solution from the knowledge base to resolve the repair problem (Namburu, [0022] note Repair procedures, hierarchical domain structures, and information reflecting domain terminology may be maintained in the system knowledge, [0039] note knowledge 240, consisting of domain semantics 242 and business knowledge 244, along with original data 260, for bringing further domain driven adaptation in data analysis for applications).

Claim 7: Namburu, Sandor and Zhang teach the method of claim 1, wherein the repair problem relates to an automotive repair (Namburu, [0053] note For example, to diagnose brake failure found in a particular vehicle type, a technician may enter a keyword such as "brake" and input data 520 including the make and model of the vehicle to receive suggestions 530 on how to repair the brake failure).
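The labeling heuristic recited in claim 4 (first post of a thread is the problem post; posts matching a domain dictionary are solution posts; positive fix keywords mark confirmed-fix indications) can be sketched directly. This is a hypothetical illustration only; the dictionary terms, keyword list, and function name are invented for the example:

```python
# Hypothetical sketch of claim 4's training-data labeling heuristic.
DOMAIN_DICTIONARY = {"brake", "rotor", "caliper", "pad"}   # illustrative terms
POSITIVE_FIX_KEYWORDS = {"fixed", "solved", "worked"}      # illustrative terms

def label_thread(posts: list[str]) -> list[str]:
    labels = []
    for i, post in enumerate(posts):
        words = set(post.lower().split())
        if i == 0:
            labels.append("problem")          # first post = problem post
        elif words & POSITIVE_FIX_KEYWORDS:
            labels.append("confirmed-fix")    # positive fix keyword present
        elif words & DOMAIN_DICTIONARY:
            labels.append("solution")         # matches domain dictionary
        else:
            labels.append("other")
    return labels

thread = [
    "My brakes squeal when stopping",
    "Try replacing the pad and rotor",
    "That worked for me",
]
print(label_thread(thread))  # ['problem', 'solution', 'confirmed-fix']
```

A thread labeled this way would supply the positive examples ("confirmed-fix posts") in the training dataset the claim describes; a real system would add negative examples and a sentiment model rather than a bare keyword match.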
Claim 8: Namburu teaches a system for semi-crowdsourced expert-in-the-loop information capture, comprising: a computing platform including a hardware processor, programmed to: obtain solution search criteria for a repair problem from one or more of expert user, machine algorithm, or crowd worker (Namburu, [0035] note data-mining… Web crawlers 250 may comprise a software application which browses a wide-area network (WAN), such as the Internet, to extract information related to products (e.g., automotive) or diagnostic services (e.g., repairs)); conduct a search according to the solution search criteria to identify search results (Namburu, [Fig. 4], [0040] note original data 260 is collected and cleaned 410, such as by filtering, extraction, or connection of pertinent information); filter the search results according to relevance and likelihood of containing confirmed solutions (Namburu, [0041] note Next, the data analysis 220 may involve weighing and ranking of sources 420. To this end, sources of the original data 260 are indexed and ranked); provide the search results to crowd workers for analysis to find and extract confirmed repair solutions (Namburu, [0043] note Pre-processing 430 may further include indexing/ranking 436 the original data 260 whereby the original data 260 itself is prioritized. 
Applications 230 or users 470 may also provide feedback to ranking systems such that if in some cases where certain features play a major role, the adaptive service system 100 can send a message to the ranking system to place priority on the particular features); if the solutions do not exist in a knowledge base, adding the extracted confirmed solutions to the knowledge base (Namburu, [0041] note During the process of ranking/indexing original data 260 or data sources, should new knowledge be acquired (e.g., new trends in customer feedback with respect to vehicle features, new fault phenomenon, previously un-noticed trends in original data), new objectives may be formulated); and otherwise if the solutions already exist in the knowledge base, use the extracted confirmed solutions (Namburu, [0045] note results 460 of the data analysis 220 may be exhibited in various formats including through visuals 462 and/or reports 464. Suitable forms of, visuals may be presentation slides, graphs, or the like, while conventional reports 464 such as MS Excel, MS PowerPoint, or other report formats may be contemplated). Namburu does not explicitly teach using a confirmed-solution classifier that operates on the search results to identify confirmed repair solutions, wherein the confirmed-solution classifier is trained using a training dataset that includes confirmed-fix posts that indicate positive sentiment about fixing a problem with a proposed solution; also to identify snippets that indicate confirmed-fix sentiment; providing the extracted confirmed solutions for expert review; and using the extracted confirmed solutions and the snippets that indicate confirmed-fix sentiment to retrain the confirmed-solution classifier. 
However, Sandor teaches using a confirmed-solution classifier that operates on the search results to identify confirmed repair solutions (Sandor, [0035] note sentence categorizer 56 inputs a set of features for the issue sentence, including features related to the discourse patterns, into a classifier 75, which has been trained on such features to output a most probable category or a probabilistic distribution over some or all categories, [0036] note a knowledge base (KB) update component which uses the identified issue category for selecting one of a plurality of knowledge bases 76, 78 to be updated with an issue (e.g. a question) and corresponding answer, which may be derived, at least in part, from the answer 34 in the post 30); wherein the confirmed-solution classifier is trained using a training dataset that includes confirmed-fix posts that indicate positive sentiment about fixing a problem with a proposed solution (Sandor, [0032] note The system has access to a collection 28 of threads obtained from web posts, which may be stored in memory 12 during processing. Each thread 30 in the collection generally includes an issue 32, includes one or more text sequences (e.g., sentences), in a natural language having a grammar, such as English, that was posted by a person seeking an answer. Each issue may include a description of an anomaly and/or request information, e.g., as a question. The issue may relate to a device. Each of the sentences of the issue may be processed by the system. The thread 30 also includes one or more answers 34, posted by another person or other people. Each answer 34 generally attempts to provide an answer the question 32. Each answer may be in natural language and/or include graphics which illustrate the answer. 
The thread 30 may have metadata, e.g., XML tags, which provide information, such as one or more of: tags 36, 38 indicating the parts of the post corresponding to an issue and an answer to that question, respectively, a title tag 40 for a title 42 of the post, keyword tags 44, voting tags by other users, a rank, and the like); also to identify snippets that indicate confirmed-fix sentiment (Sandor, [0225] note a CRF classifier, or other classifier may be trained to identify problem (issue) and solution (answer) parts of the thread; i.e. snippets); and using the extracted confirmed solutions and the snippets that indicate confirmed-fix sentiment to retrain the confirmed-solution classifier (Sandor, [0054] note the discourse patterns for one or more of the categories which fire on the collection of issue sentences are used as features, optionally together with other features extracted from the issue sentences, to train a classifier. The trained classifier(s) can be used in the categorization step S112 for making predictions for the respective category). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the knowledge base of Namburu with the knowledge base update based on a classifier of Sandor according to known methods (i.e. updating the knowledge base using a classifier trained to identify problems (issues) and associated solutions (answers)). Motivation for doing so is that this allows the most useful types of user issues and requests to be identified and extracted, and the sentences that convey them are thus detected (Sandor, [0230]). Namburu and Sandor do not explicitly teach providing the extracted confirmed solutions for expert review. However, Zhang teaches providing the extracted confirmed solutions for expert review (Zhang, [Fig. 3]-[Fig. 6], [Col. 15 Lines 51-52] note FIG. 6 is an example 600 illustrating an implementation of a chatbot system, [Col. 15 Lines 65-67]-[Col.
16 Lines 1-2] note Example 600 begins at 650 with a user providing a question through question interface 602. At 652, the questions is provided to knowledge base 604 to determine whether a sufficiently similar question is already mapped to an answer in the knowledge base 604, [Col. 16 Lines 22-25] note At 664, each of the scores provided by the distance model 614 for each potential expert can be compared to a threshold to determine if the corresponding expert should be selected to provide an answer, [Col. 16 Lines 31-34] note At 668, answers from the selected experts can be grouped and the most common answer is provided to the knowledge base 604. The most common answer is then mapped to the question in knowledge base 604, [Col. 14 Lines 22-24] note FIG. 5 is a flow diagram illustrating a process 500 used in some implementations for training a distance model to identify experts to answer a question, [Col. 3 Lines 17-21] note training items can include a score for the best answer provided corresponding to the training item. Each training item can be used to partially train a distance model, such as a neural network). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the portal including a search window of Namburu and Sandor with the crowdsourced chatbot answers of Zhang according to known methods (i.e. providing a crowdsource based chatbot to answer searches related to service repair data and/or repair documentation). Motivation for doing so is that the chatbot system can improve question response systems by finding better sources for answers to questions (Zhang, [Col. 3 Lines 52-54]). 
Claim 9: Namburu, Sandor and Zhang teach the system of claim 8, wherein the confirmed-solution classifier is trained to identify, in a search result, presence of (i) a problem post indicative of the repair problem, (ii) a solution post indicative of a solution to the repair problem, and (iii) a confirmed-fix indication validating that the solution post provides a working solution (Zhang, [Col. 10 Lines 52-58] note At block 406, process 400 can determine if the question received at block 404 is already in a knowledge base… the question can be reduced to a set of keywords or to a semantic representation such as a vector in a vector space, [Col. 11 Lines 10-12] note At block 408, process 400 can return the answer identified in the knowledge base and process 400 continues to block 438).

Claim 11: Namburu, Sandor and Zhang teach the system of claim 8, wherein the computing platform is further programmed to train the confirmed-solution classifier using a training dataset of positive and negative posts using a training model including one or more of a multi-layer perceptron (MLP), a random forest, a logistic regression, or a deep learning classification model (Zhang, [Col. 3 Lines 39-42] note Examples of models include: neural networks, support vector machines, decision trees, Parzen windows, Bayes, clustering, reinforcement learning, probability distributions, and others), wherein, in the training dataset, a first post of each forum thread is identified as being a problem post, one or more solution posts are identified as including matching terminology from a domain dictionary, and confirmed-fix indications are identified according to sentiment or occurrence of positive fix keywords (Namburu, [0035] note forums, discussion boards, [0036] note web data (e.g., discussion forums, blogs, search results), [0047] note Domain semantics, [Fig. 3] note an illustrative domain taxonomy structure, [0035] note various sources such as direct customer feedback regarding automobiles, automotive parts, or automotive services (e.g., repair services) provided to a dealer or through the use of web crawlers 250 or field data collected through dealer networks or through remote services).

Claim 13: Namburu, Sandor and Zhang teach the system of claim 8, wherein the computing platform is further programmed to provide a solution from the knowledge base to resolve the repair problem (Namburu, [0022] note Repair procedures, hierarchical domain structures, and information reflecting domain terminology may be maintained in the system knowledge, [0039] note knowledge 240, consisting of domain semantics 242 and business knowledge 244, along with original data 260, for bringing further domain driven adaptation in data analysis for applications).

Claim 14: Namburu, Sandor and Zhang teach the system of claim 8, wherein the repair problem relates to an automotive repair (Namburu, [0053] note For example, to diagnose brake failure found in a particular vehicle type, a technician may enter a keyword such as "brake" and input data 520 including the make and model of the vehicle to receive suggestions 530 on how to repair the brake failure).
Claim 15: Namburu teaches a non-transitory computer readable medium comprising instructions for semi-crowdsourced expert-in-the-loop information capture that, when executed by a processor of a computing device, cause the computing device to perform operations including to: obtain solution search criteria for a repair problem from one or more of expert user, machine algorithm, or crowd worker (Namburu, [0035] note data-mining… Web crawlers 250 may comprise a software application which browses a wide-area network (WAN), such as the Internet, to extract information related to products (e.g., automotive) or diagnostic services (e.g., repairs)); conduct a search according to the solution search criteria to identify search results (Namburu, [Fig. 4], [0040] note original data 260 is collected and cleaned 410, such as by filtering, extraction, or connection of pertinent information); filter the search results according to relevance and likelihood of containing confirmed solutions (Namburu, [0041] note Next, the data analysis 220 may involve weighing and ranking of sources 420. To this end, sources of the original data 260 are indexed and ranked); provide the search results to crowd workers for analysis to find and extract confirmed repair solutions (Namburu, [0043] note Pre-processing 430 may further include indexing/ranking 436 the original data 260 whereby the original data 260 itself is prioritized. 
Applications 230 or users 470 may also provide feedback to ranking systems such that if in some cases where certain features play a major role, the adaptive service system 100 can send a message to the ranking system to place priority on the particular features); if the solutions do not exist in a knowledge base, adding the extracted confirmed solutions to the knowledge base (Namburu, [0041] note During the process of ranking/indexing original data 260 or data sources, should new knowledge be acquired (e.g., new trends in customer feedback with respect to vehicle features, new fault phenomenon, previously un-noticed trends in original data), new objectives may be formulated); and otherwise if the solutions already exist in the knowledge base, use the extracted confirmed solutions (Namburu, [0045] note results 460 of the data analysis 220 may be exhibited in various formats including through visuals 462 and/or reports 464. Suitable forms of, visuals may be presentation slides, graphs, or the like, while conventional reports 464 such as MS Excel, MS PowerPoint, or other report formats may be contemplated). Namburu does not explicitly teach using a confirmed-solution classifier that operates on the search results to identify confirmed repair solutions, wherein the confirmed-solution classifier is trained using a training dataset that includes confirmed-fix posts that indicate positive sentiment about fixing a problem with a proposed solution; also to identify snippets that indicate confirmed-fix sentiment; providing the extracted confirmed solutions for expert review; and using the extracted confirmed solutions and the snippets that indicate confirmed-fix sentiment to retrain the confirmed-solution classifier. 
However, Sandor teaches using a confirmed-solution classifier that operates on the search results to identify confirmed repair solutions (Sandor, [0035] note sentence categorizer 56 inputs a set of features for the issue sentence, including features related to the discourse patterns, into a classifier 75, which has been trained on such features to output a most probable category or a probabilistic distribution over some or all categories, [0036] note a knowledge base (KB) update component which uses the identified issue category for selecting one of a plurality of knowledge bases 76, 78 to be updated with an issue (e.g. a question) and corresponding answer, which may be derived, at least in part, from the answer 34 in the post 30); wherein the confirmed-solution classifier is trained using a training dataset that includes confirmed-fix posts that indicate positive sentiment about fixing a problem with a proposed solution (Sandor, [0032] note The system has access to a collection 28 of threads obtained from web posts, which may be stored in memory 12 during processing. Each thread 30 in the collection generally includes an issue 32, includes one or more text sequences (e.g., sentences), in a natural language having a grammar, such as English, that was posted by a person seeking an answer. Each issue may include a description of an anomaly and/or request information, e.g., as a question. The issue may relate to a device. Each of the sentences of the issue may be processed by the system. The thread 30 also includes one or more answers 34, posted by another person or other people. Each answer 34 generally attempts to provide an answer the question 32. Each answer may be in natural language and/or include graphics which illustrate the answer. 
The thread 30 may have metadata, e.g., XML tags, which provide information, such as one or more of: tags 36, 38 indicating the parts of the post corresponding to an issue and an answer to that question, respectively, a title tag 40 for a title 42 of the post, keyword tags 44, voting tags by other users, a rank, and the like); also to identify snippets that indicate confirmed-fix sentiment (Sandor, [0225] note a CRF classifier, or other classifier may be trained to identify problem (issue) and solution (answer) parts of the thread; i.e. snippets); and using the extracted confirmed solutions and the snippets that indicate confirmed-fix sentiment to retrain the confirmed-solution classifier (Sandor, [0054] note the discourse patterns for one or more of the categories which fire on the collection of issue sentences are used as features, optionally together with other features extracted from the issue sentences, to train a classifier. The trained classifier(s) can be used in the categorization step S112 for making predictions for the respective category). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the knowledge base of Namburu with the knowledge base update based on a classifier of Sandor according to known methods (i.e. updating the knowledge base using a classifier trained to identify problems (issues) and associated solutions (answers)). Motivation for doing so is that this allows the most useful types of user issues and requests to be identified and extracted, and the sentences that convey them are thus detected (Sandor, [0230]). Namburu and Sandor do not explicitly teach providing the extracted confirmed solutions for expert review. However, Zhang teaches providing the extracted confirmed solutions for expert review (Zhang, [Fig. 3]-[Fig. 6], [Col. 15 Lines 51-52] note FIG. 6 is an example 600 illustrating an implementation of a chatbot system, [Col. 15 Lines 65-67]-[Col.
16 Lines 1-2] note Example 600 begins at 650 with a user providing a question through question interface 602. At 652, the question is provided to knowledge base 604 to determine whether a sufficiently similar question is already mapped to an answer in the knowledge base 604, [Col. 16 Lines 22-25] note At 664, each of the scores provided by the distance model 614 for each potential expert can be compared to a threshold to determine if the corresponding expert should be selected to provide an answer, [Col. 16 Lines 31-34] note At 668, answers from the selected experts can be grouped and the most common answer is provided to the knowledge base 604. The most common answer is then mapped to the question in knowledge base 604, [Col. 14 Lines 22-24] note FIG. 5 is a flow diagram illustrating a process 500 used in some implementations for training a distance model to identify experts to answer a question, [Col. 3 Lines 17-21] note training items can include a score for the best answer provided corresponding to the training item. Each training item can be used to partially train a distance model, such as a neural network). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the portal including a search window of Namburu and Sandor with the crowdsourced chatbot answers of Zhang according to known methods (i.e. providing a crowdsource-based chatbot to answer searches related to service repair data and/or repair documentation). Motivation for doing so is that the chatbot system can improve question response systems by finding better sources for answers to questions (Zhang, [Col. 3 Lines 52-54]). 
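The classify-then-retrain loop the rejection maps onto Sandor (a confirmed-solution classifier trained on confirmed-fix posts, with newly extracted confirmed snippets folded back into the training set) can be sketched in a few lines. This is a minimal illustration only, not code from any cited reference: the Naive Bayes model, the toy forum posts, and the label convention (1 = confirmed fix, 0 = unresolved) are all assumptions made here for concreteness.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def train(posts, labels):
    """Fit a multinomial Naive Bayes model over post text.
    Label 1 marks a confirmed-fix post, 0 an unresolved one."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for post, y in zip(posts, labels):
        counts[y].update(tokenize(post))
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab

def predict(model, post):
    """Return 1 if the post most likely reports a confirmed fix."""
    counts, priors, vocab = model
    total = sum(priors.values())
    best, best_score = None, -math.inf
    for y in (0, 1):
        n = sum(counts[y].values())
        # log prior + Laplace-smoothed log likelihood of each token
        score = math.log(priors[y] / total)
        for tok in tokenize(post):
            score += math.log((counts[y][tok] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = y, score
    return best

# Seed training set: confirmed-fix posts vs. unresolved posts.
posts = [
    "replaced the o2 sensor and that fixed it, running fine now",
    "swapped the relay, problem solved, thanks",
    "tried cleaning the throttle body but still stalling",
    "no luck, the fault code came back after a week",
]
labels = [1, 1, 0, 0]
model = train(posts, labels)

# Retraining step: fold newly extracted confirmed-fix snippets back
# into the training data, as the claimed loop describes.
new_snippets = ["new battery fixed it, no more misfire"]
model = train(posts + new_snippets, labels + [1])
```

Any of the model families recited in claim 18 (MLP, random forest, logistic regression, deep learning) could stand in for the Naive Bayes used here; the retraining step is the same regardless of the underlying model.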
Claim 16: Namburu, Sandor and Zhang teach the medium of claim 15, wherein the confirmed-solution classifier is trained to identify, in a search result, presence of (i) a problem post indicative of the repair problem, (ii) a solution post indicative of a solution to the repair problem, and (iii) a confirmed-fix indication validating that the solution post provides a working solution (Zhang, [Col. 10 Lines 52-58] note At block 406, process 400 can determine if the question received at block 404 is already in a knowledge base… the question can be reduced to a set of keywords or to a semantic representation such as a vector in a vector space, [Col. 11 Lines 10-12] note At block 408, process 400 can return the answer identified in the knowledge base and process 400 continues to block 438). Claim 18: Namburu, Sandor and Zhang teach the medium of claim 15, further comprising instructions that, when executed by the processor of the computing device, cause the computing device to perform operations including to train the confirmed-solution classifier using a training dataset of positive and negative posts using a training model including one or more of a multi-layer perceptron (MLP), a random forest, a logistic regression, or a deep learning classification model (Zhang, [Col. 3 Lines 39-42] note Examples of models include: neural networks, support vector machines, decision trees, Parzen windows, Bayes, clustering, reinforcement learning, probability distributions, and others), wherein, in the training dataset, a first post of each forum thread is identified as being a problem post, one or more solution posts are identified as including matching terminology from a domain dictionary, and confirmed-fix indications are identified according to sentiment or occurrence of positive fix keywords (Namburu, [0035] note forums, discussion boards, [0036] note web data (e.g., discussion forums, blogs, search results), [0047] note Domain semantics, [Fig. 
3] note an illustrative domain taxonomy structure, [0035] note various sources such as direct customer feedback regarding automobiles, automotive parts, or automotive services (e.g., repair services) provided to a dealer or through the use of web crawlers 250 or field data collected through dealer networks or through remote services). Claim 20: Namburu, Sandor and Zhang teach the medium of claim 15, further comprising instructions that, when executed by the processor of the computing device, cause the computing device to perform operations including to provide a solution from the knowledge base to resolve the repair problem (Namburu, [0022] note Repair procedures, hierarchical domain structures, and information reflecting domain terminology may be maintained in the system knowledge, [0039] note knowledge 240, consisting of domain semantics 242 and business knowledge 244, along with original data 260, for bringing further domain driven adaptation in data analysis for applications). Claims 3, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Namburu, Sandor and Zhang in further view of ALKAN et al., US 2021/0097097 A1 (hereinafter “Alkan”). Claim 3: Namburu, Sandor and Zhang do not explicitly teach the method of claim 2, further comprising: identifying, for each search result, a confirmed-fix probability score for the search result; and responsive to the confirmed-fix probability score being greater than a predefined threshold adding the extracted confirmed solutions to the knowledge base skipping the expert review. However, Alkan teaches this (Alkan, [0013] note chat management, and in particular, to chat management to address queries, [0068] note If the query has been answered, then a determination can be made whether the query was correctly answered. This is illustrated at operation 420. Validating the answer can be completed using the methods described with respect to the answer retriever 320 of FIG. 3. 
For example, the query can be compared to an external source or validated by an expert. In some embodiments, the answer is determined to be correct in response to a confidence that the answer is correct exceeding a confidence threshold. If a determination is made that the answer to the query is correct, then method 400 proceeds to operation 425, where the correct answer is provided to the user). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the crowdsourced chatbot answers of Namburu, Sandor and Zhang with the chat management to address queries of Alkan according to known methods (i.e. determining a correct answer in response to a confidence of that answer exceeding a confidence threshold). Motivation for doing so is that answer quality is improved by determining whether the answer is likely accurate (Alkan, [0043]). Claim 10: Namburu, Sandor and Zhang do not explicitly teach the system of claim 9, wherein the computing platform is further programmed to: identify, for each search result, a confirmed-fix probability score for the search result; and responsive to the confirmed-fix probability score being greater than a predefined threshold add the extracted confirmed solutions to the knowledge base skipping the expert review. However, Alkan teaches this (Alkan, [0013] note chat management, and in particular, to chat management to address queries, [0068] note If the query has been answered, then a determination can be made whether the query was correctly answered. This is illustrated at operation 420. Validating the answer can be completed using the methods described with respect to the answer retriever 320 of FIG. 3. For example, the query can be compared to an external source or validated by an expert. In some embodiments, the answer is determined to be correct in response to a confidence that the answer is correct exceeding a confidence threshold. 
If a determination is made that the answer to the query is correct, then method 400 proceeds to operation 425, where the correct answer is provided to the user). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the crowdsourced chatbot answers of Namburu, Sandor and Zhang with the chat management to address queries of Alkan according to known methods (i.e. determining a correct answer in response to a confidence of that answer exceeding a confidence threshold). Motivation for doing so is that answer quality is improved by determining whether the answer is likely accurate (Alkan, [0043]). Claim 17: Namburu, Sandor and Zhang do not explicitly teach the medium of claim 16, further comprising instructions that, when executed by the processor of the computing device, cause the computing device to perform operations including to: identify, for each search result, a confirmed-fix probability score for the search result; and responsive to the confirmed-fix probability score being greater than a predefined threshold add the extracted confirmed solutions to the knowledge base skipping the expert review. However, Alkan teaches this (Alkan, [0013] note chat management, and in particular, to chat management to address queries, [0068] note If the query has been answered, then a determination can be made whether the query was correctly answered. This is illustrated at operation 420. Validating the answer can be completed using the methods described with respect to the answer retriever 320 of FIG. 3. For example, the query can be compared to an external source or validated by an expert. In some embodiments, the answer is determined to be correct in response to a confidence that the answer is correct exceeding a confidence threshold. If a determination is made that the answer to the query is correct, then method 400 proceeds to operation 425, where the correct answer is provided to the user). 
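The Alkan-style confidence gate that the rejection maps onto claims 3, 10 and 17 (add an extracted solution directly to the knowledge base when its confirmed-fix probability clears a threshold, otherwise send it to expert review) reduces to a one-line routing decision. A minimal sketch; the 0.9 threshold, the tuple return shape, and the example data are assumptions of this illustration, not values from any cited reference or claim.

```python
def route_solution(solution, confirmed_fix_prob, threshold=0.9):
    """Route a scored solution: straight to the knowledge base when the
    confirmed-fix probability exceeds the threshold, skipping expert
    review; otherwise into the expert-review queue."""
    if confirmed_fix_prob > threshold:
        return ("knowledge_base", solution)
    return ("expert_review", solution)

# Example: only the high-confidence result skips the expert queue.
scored_results = [
    ("replace fuel pump relay", 0.97),
    ("tighten gas cap", 0.55),
]
routed = [route_solution(s, p) for s, p in scored_results]
```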
It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the crowdsourced chatbot answers of Namburu, Sandor and Zhang with the chat management to address queries of Alkan according to known methods (i.e. determining a correct answer in response to a confidence of that answer exceeding a confidence threshold). Motivation for doing so is that answer quality is improved by determining whether the answer is likely accurate (Alkan, [0043]). Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Namburu, Sandor and Zhang in further view of Wong et al., US 7,672,943 B2 (hereinafter “Wong”). Claim 5: Namburu, Sandor and Zhang teach the method of claim 1, wherein conducting the search includes: identifying a plurality of initial search results based on the solution search criteria (Namburu, [0035] note data-mining… Web crawlers 250 may comprise a software application which browses a wide-area network (WAN), such as the Internet, to extract information related to products (e.g., automotive) or diagnostic services (e.g., repairs), [Fig. 4], [0040] note original data 260 is collected and cleaned 410, such as by filtering, extraction, or connection of pertinent information); scoring the initial search results according to predefined rules, the rules relating to occurrence of one or more of keywords from a domain dictionary in content or URL of the initial search results (Namburu, [0041] note Next, the data analysis 220 may involve weighing and ranking of sources 420. 
To this end, sources of the original data 260 are indexed and ranked, [0046] note Aspects of system knowledge 240 utilized may include domain semantics 242); and crawling the initial search results with the highest scores to extract posts from the content of the URLs to generate the search results (Namburu, [0035] note data-mining… Web crawlers 250 may comprise a software application which browses a wide-area network (WAN), such as the Internet, to extract information related to products (e.g., automotive) or diagnostic services (e.g., repairs). Various sources such as consumer blogs, automobile websites, forums, discussion boards, search engines, or the like, may provide part of the original data 260 to be retrieved by web crawlers 250). Namburu, Sandor and Zhang do not explicitly teach by universal resource locator (URL). However, Wong teaches this (Wong, [Col. 2 Lines 65-67]-[Col. 3 Lines 1-3] note an example embodiment is directed to web crawling techniques and technologies that control a web crawler application to download web pages in a prioritized order. The downloading order is influenced by URL scoring that is performed for outlinks (and their corresponding URLs) contained in downloaded and analyzed web pages, [Fig. 3], [Col. 7 Lines 50-52] note anchor text metric is used to predict the likelihood that the anchor text of a URL will lead to a web page of the desired type/category). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the web crawlers of Namburu, Sandor and Zhang with the web crawling techniques of Wong according to known methods (i.e. prioritizing a downloading order influenced by URL scoring). Motivation for doing so is that the URL scoring makes crawling and indexing of web pages of a desired or specified type more efficient, thus enabling targeted web crawling to be performed with less computation and hardware (Wong, [Col. 3 Lines 3-7]). 
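The score-then-crawl flow mapped from Namburu and Wong for claim 5 (score initial results by domain-dictionary keyword occurrence in content or URL, then crawl only the highest-scored URLs for posts) can be sketched as follows. The per-hit weights, the extra credit for a URL match, and the toy data are assumptions of this illustration, not taken from either reference.

```python
def score_result(url, content, domain_dictionary):
    """Score a search result by domain-dictionary keyword occurrences
    in its content and URL; the weights here are illustrative only."""
    text = content.lower()
    score = sum(text.count(kw) for kw in domain_dictionary)
    # A keyword appearing in the URL itself is weighted more heavily,
    # loosely mirroring Wong's URL-based crawl prioritization.
    score += sum(5 for kw in domain_dictionary if kw in url.lower())
    return score

def select_for_crawl(results, domain_dictionary, top_k=2):
    """Rank initial results and keep the top_k URLs to crawl for posts."""
    ranked = sorted(
        results,
        key=lambda r: score_result(r["url"], r["content"], domain_dictionary),
        reverse=True,
    )
    return [r["url"] for r in ranked[:top_k]]

domain_dictionary = ["misfire", "sensor", "relay"]
results = [
    {"url": "https://example.org/forum/misfire-thread",
     "content": "misfire after replacing the o2 sensor, sensor wiring checked"},
    {"url": "https://example.org/blog/road-trip",
     "content": "scenic drive, no car trouble at all"},
    {"url": "https://example.org/forum/relay-swap",
     "content": "main relay swap cured the intermittent stall"},
]
to_crawl = select_for_crawl(results, domain_dictionary)
```

The off-topic blog page scores zero and is never crawled, while the two forum threads whose URLs and content hit the dictionary are kept.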
Claim 12: Namburu, Sandor and Zhang teach the system of claim 8, wherein the computing platform is further programmed to: identify a plurality of initial search results based on the solution search criteria (Namburu, [0035] note data-mining… Web crawlers 250 may comprise a software application which browses a wide-area network (WAN), such as the Internet, to extract information related to products (e.g., automotive) or diagnostic services (e.g., repairs), [Fig. 4], [0040] note original data 260 is collected and cleaned 410, such as by filtering, extraction, or connection of pertinent information); score the initial search results according to predefined rules, the rules relating to occurrence of one or more of keywords from a domain dictionary in content or URL of the initial search results (Namburu, [0041] note Next, the data analysis 220 may involve weighing and ranking of sources 420. To this end, sources of the original data 260 are indexed and ranked, [0046] note Aspects of system knowledge 240 utilized may include domain semantics 242); and crawl the initial search results with the highest scores to extract posts from the content to generate the search results (Namburu, [0035] note data-mining… Web crawlers 250 may comprise a software application which browses a wide-area network (WAN), such as the Internet, to extract information related to products (e.g., automotive) or diagnostic services (e.g., repairs). Various sources such as consumer blogs, automobile websites, forums, discussion boards, search engines, or the like, may provide part of the original data 260 to be retrieved by web crawlers 250). Namburu, Sandor and Zhang do not explicitly teach by universal resource locator (URL). However, Wong teaches this (Wong, [Col. 2 Lines 65-67]-[Col. 3 Lines 1-3] note an example embodiment is directed to web crawling techniques and technologies that control a web crawler application to download web pages in a prioritized order. 
The downloading order is influenced by URL scoring that is performed for outlinks (and their corresponding URLs) contained in downloaded and analyzed web pages, [Fig. 3], [Col. 7 Lines 50-52] note anchor text metric is used to predict the likelihood that the anchor text of a URL will lead to a web page of the desired type/category). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the web crawlers of Namburu, Sandor and Zhang with the web crawling techniques of Wong according to known methods (i.e. prioritizing a downloading order influenced by URL scoring). Motivation for doing so is that the URL scoring makes crawling and indexing of web pages of a desired or specified type more efficient, thus enabling targeted web crawling to be performed with less computation and hardware (Wong, [Col. 3 Lines 3-7]). Claim 19: Namburu, Sandor and Zhang teach the medium of claim 15, further comprising instructions that, when executed by the processor of the computing device, cause the computing device to perform operations including to: identify a plurality of initial search results based on the solution search criteria (Namburu, [0035] note data-mining… Web crawlers 250 may comprise a software application which browses a wide-area network (WAN), such as the Internet, to extract information related to products (e.g., automotive) or diagnostic services (e.g., repairs), [Fig. 4], [0040] note original data 260 is collected and cleaned 410, such as by filtering, extraction, or connection of pertinent information); score the initial search results according to predefined rules, the rules relating to occurrence of one or more of keywords from a domain dictionary in content or URL of the initial search results (Namburu, [0041] note Next, the data analysis 220 may involve weighing and ranking of sources 420. 
To this end, sources of the original data 260 are indexed and ranked, [0046] note Aspects of system knowledge 240 utilized may include domain semantics 242); and crawl the initial search results with the highest scores to extract posts from the content to generate the search results (Namburu, [0035] note data-mining… Web crawlers 250 may comprise a software application which browses a wide-area network (WAN), such as the Internet, to extract information related to products (e.g., automotive) or diagnostic services (e.g., repairs). Various sources such as consumer blogs, automobile websites, forums,

Prosecution Timeline

Feb 04, 2021 — Application Filed
Mar 03, 2025 — Non-Final Rejection (§103)
Jun 30, 2025 — Response Filed
Jul 18, 2025 — Final Rejection (§103)
Oct 22, 2025 — Request for Continued Examination
Oct 24, 2025 — Response after Non-Final Action
Nov 06, 2025 — Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602410
MULTIMODAL CONTEXT SELECTION FOR LARGE LANGUAGE MODEL BASED RESOLUTIONS ADDRESSING TECHNICAL ISSUES
2y 5m to grant Granted Apr 14, 2026
Patent 12585649
CONDITIONAL BRANCHING FOR A FEDERATED GRAPH QUERY PLAN
2y 5m to grant Granted Mar 24, 2026
Patent 12561368
METHODS AND SYSTEMS FOR TENSOR NETWORK CONTRACTION BASED ON LOCAL OPTIMIZATION OF CONTRACTION TREE
2y 5m to grant Granted Feb 24, 2026
Patent 12561363
Visual Search Determination for Text-To-Image Replacement
2y 5m to grant Granted Feb 24, 2026
Patent 12536151
ACCURATE AND QUERY-EFFICIENT MODEL AGNOSTIC EXPLANATIONS
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
58%
Grant Probability
65%
With Interview (+7.2%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 279 resolved cases by this examiner. Grant probability derived from career allow rate.
