DETAILED ACTION
This Office Action is in response to Application No. 19/025,190, filed on January 16, 2025, in which claims 1-20 are presented for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-20 are pending. Claims 1-20 are rejected under 35 U.S.C. 101, and claims 1-20 are rejected under 35 U.S.C. 103.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claims are directed to an abstract idea, and because the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than the abstract idea. See Alice Corp. Pty. Ltd. v. CLS Bank International, 573 U.S. 208 (2014). In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidelines (2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019).
Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes—independent claims 1, 8, and 15 recite a computer-readable medium, a method, and a system, respectively.
The analysis of claims 1, 8, and 15 is as follows:
Step 2A, prong one: Does claim 1 recite an abstract idea, law of nature or natural phenomenon? Yes—the limitations of “training a machine learning model to determine a predictive score based on a similarity level between a third-party-specific question and an existing standardized question from the standardized question-and-answer set;
receiving a third-party-specific question-and-answer set from a third-party provider;
determining, using a trained machine learning model, the predictive score for the third-party-specific question from the third-party-specific question and answer set, the predictive score associated with a similarity between the third-party-specific question and the existing standardized question from the standardized question-and-answer set;
determining if the predictive score exceeds a predetermined threshold;
in response to the predictive score exceeding the predetermined threshold, mapping the third-party-specific question to the existing standardized question; and
responsive to the predictive score being less than the predetermined threshold, generating a new standardized question corresponding to the third-party-specific question” as drafted, are mental processes that can practically be performed in the human mind (acts of thinking and decision making, such as retrieving an answer to a question). These limitations therefore fall within the mental processes grouping and could be performed with pen and paper.
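For illustration of the recited threshold logic only, and not as a characterization of the applicant's actual implementation, the steps above can be sketched as follows. All names in the sketch (map_or_create, score_fn, threshold) are hypothetical placeholders, and score_fn stands in for the trained machine learning model's predictive score.

```python
# Illustrative sketch only; not part of the claim record. All names are
# hypothetical placeholders, not the applicant's implementation.
def map_or_create(question, standardized_questions, score_fn, threshold):
    """Map a third-party-specific question to the best-matching existing
    standardized question when the predictive score exceeds the
    predetermined threshold; otherwise generate a new standardized
    question corresponding to the third-party-specific question."""
    best = max(standardized_questions, key=lambda q: score_fn(question, q))
    if score_fn(question, best) > threshold:
        return ("mapped", best)   # score exceeds threshold: map
    return ("new", question)      # score below threshold: new question
```

The sketch makes explicit that the same comparison drives both branches of the claim: one predictive score, one predetermined threshold, two outcomes.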
Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No—the judicial exception is not integrated into a practical application. Although the claim recites a “method”, a “system”, and a “readable medium”, these computer components are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components. In addition, the claim recites “training a machine learning model to determine a predictive score based on a similarity level between a third-party-specific question and an existing standardized question from the standardized question-and-answer set;
receiving a third-party-specific question-and-answer set from a third-party provider;
determining, using a trained machine learning model, the predictive score for the third-party-specific question from the third-party-specific question and answer set, the predictive score associated with a similarity between the third-party-specific question and the existing standardized question from the standardized question-and-answer set;
determining if the predictive score exceeds a predetermined threshold;
in response to the predictive score exceeding the predetermined threshold, mapping the third-party-specific question to the existing standardized question; and
responsive to the predictive score being less than the predetermined threshold, generating a new standardized question corresponding to the third-party-specific question” amount to mere data gathering and processing steps (i.e., identifying an answer); the computers that perform those functions and the mental steps are recited at a high level of generality, do not impose a meaningful limitation on the judicial exception, and are insufficient to integrate the mental steps into a practical application. Although the claim recites the additional element “the predetermined threshold”, the gathering and determining steps are likewise recited at a high level of generality and merely generally link the exception to a technological environment (e.g., obtaining questions by mapping between provider and user); they therefore amount to no more than mere instructions to apply the exception using generic computer components and are insufficient to integrate the steps into a practical application.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No—the recitation in the preamble is insufficient to transform a judicial exception into a patent-eligible invention because the preamble elements are recited at a high level of generality that simply links the claim to a field of use; see MPEP 2106.05(h). The claimed extra-solution activity of comparing the score to the predetermined threshold is acknowledged to be well-understood, routine, conventional activity (see, e.g., the court-recognized examples in MPEP 2106.05(d)(II)). Similarly, the gathering and determining steps are recited at a high level of generality and merely generally link the exception to the respective technological environment. The claim thus recites computing components only at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
Taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. The collective functions merely provide a conventional computer implementation.
For the reasons above, claims 1, 8, and 15 are rejected under 35 U.S.C. § 101 as being directed to patent-ineligible subject matter.
The analysis of claims 2-7, 9-14, and 16-20 is as follows:
Step 2A, prong one: Do claims 2-7, 9-14, and 16-20 recite an abstract idea, law of nature, or natural phenomenon? Yes—the limitations of “
Claim 2 responsive to the predictive score being less than the predetermined threshold, generating a new standardized question corresponding to the third-party-specific question, wherein the standardized question is an existing standardized question.
Claim 3 wherein the method further comprises: developing an ordering of the standardized question and answer set such that one or more standardized questions from the standardized question and answer set are presented to a user in the ordering.
Claim 4 wherein the ordering minimizes a number of standardized questions asked to the user.
Claim 5 wherein the machine learning model is trained using prompt engineering.
Claim 6 wherein the method further comprises: responsive to the predictive score being less than the predetermined threshold, providing information indicative of the predictive score, wherein the information is provided to a system administrator.
Claim 7 wherein the method further comprises: responsive to providing the information indicative of the predictive score, receiving, from the system administrator a manual mapping of the third-party-specific question.
Claim 9 determining, using the trained machine learning model, a second predictive score for a third-party-specific answer to the third-party-specific question, the second predictive score associated with the similarity between the third- party-specific answer and a standardized answer to the existing standardized question, wherein the predictive score is a first predictive score.
Claim 10 monitoring a communication channel associated with the third-party provider; and receiving, through the communication channel, information indicative of an error.
Claim 11 refining the machine learning model, wherein the machine learning model is refined based on the error.
Claim 12 wherein the machine learning model implements a large language model for determining requested information associated with the third-party-specific question.
Claim 13 determining, using the trained machine learning model, a second predictive score for a second third-party-specific question from the third-party-specific question-and-answer set, the predictive score associated with the similarity between the second third-party-specific question and the existing standardized question from the standardized question-and-answer set, wherein the predictive score is a first predictive score and the third-party-specific question is a first third-party-specific question; determining whether a difference between the first predictive score and the second predictive score is less than a second predetermined threshold, wherein the predetermined threshold is a first predetermined threshold; and responsive to the difference being less than the second predetermined threshold, providing an indication to a system administrator.
Claim 14 wherein the standardized question-and-answer set is associated with insurance underwriting.
Claim 16 an error detection module operable to detect an error in the standardized question-and-answer set.
Claim 17 wherein the method further comprises: detecting, by the error detection module, the error in the standardized question-and-answer set after the standardized question-and-answer set, wherein the error is detected after a set of user's answers associated with the standardized question-and-answer set is presented to the third-party provider.
Claim 18 wherein detecting the error comprises: receiving, from the third-party provider, information indicative of the error in the standardized question-and-answer set.
Claim 19 responsive to the predictive score being less than the predetermined threshold, generating, by the rules mapping engine, a new standardized question corresponding to the third-party-specific question, wherein the standardized question is an existing standardized question.
Claim 20 wherein the third-party provider is an entity engaging in underwriting” as drafted, are mental processes that can practically be performed in the human mind (acts of thinking and decision making, such as retrieving an answer to a question). These limitations therefore fall within the mental processes grouping and could be performed with pen and paper.
Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No—the judicial exception is not integrated into a practical application. Although the claim recites a “method”, a “system”, and a “readable medium”, these computer components are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components. In addition, the claim recites “
Claim 2 responsive to the predictive score being less than the predetermined threshold, generating a new standardized question corresponding to the third-party-specific question, wherein the standardized question is an existing standardized question.
Claim 3 wherein the method further comprises: developing an ordering of the standardized question and answer set such that one or more standardized questions from the standardized question and answer set are presented to a user in the ordering.
Claim 4 wherein the ordering minimizes a number of standardized questions asked to the user.
Claim 5 wherein the machine learning model is trained using prompt engineering.
Claim 6 wherein the method further comprises: responsive to the predictive score being less than the predetermined threshold, providing information indicative of the predictive score, wherein the information is provided to a system administrator.
Claim 7 wherein the method further comprises: responsive to providing the information indicative of the predictive score, receiving, from the system administrator a manual mapping of the third-party-specific question.
Claim 9 determining, using the trained machine learning model, a second predictive score for a third-party-specific answer to the third-party-specific question, the second predictive score associated with the similarity between the third- party-specific answer and a standardized answer to the existing standardized question, wherein the predictive score is a first predictive score.
Claim 10 monitoring a communication channel associated with the third-party provider; and receiving, through the communication channel, information indicative of an error.
Claim 11 refining the machine learning model, wherein the machine learning model is refined based on the error.
Claim 12 wherein the machine learning model implements a large language model for determining requested information associated with the third-party-specific question.
Claim 13 determining, using the trained machine learning model, a second predictive score for a second third-party-specific question from the third-party-specific question-and-answer set, the predictive score associated with the similarity between the second third-party-specific question and the existing standardized question from the standardized question-and-answer set, wherein the predictive score is a first predictive score and the third-party-specific question is a first third-party-specific question; determining whether a difference between the first predictive score and the second predictive score is less than a second predetermined threshold, wherein the predetermined threshold is a first predetermined threshold; and responsive to the difference being less than the second predetermined threshold, providing an indication to a system administrator.
Claim 14 wherein the standardized question-and-answer set is associated with insurance underwriting.
Claim 16 an error detection module operable to detect an error in the standardized question-and-answer set.
Claim 17 wherein the method further comprises: detecting, by the error detection module, the error in the standardized question-and-answer set after the standardized question-and-answer set, wherein the error is detected after a set of user's answers associated with the standardized question-and-answer set is presented to the third-party provider.
Claim 18 wherein detecting the error comprises: receiving, from the third-party provider, information indicative of the error in the standardized question-and-answer set.
Claim 19 responsive to the predictive score being less than the predetermined threshold, generating, by the rules mapping engine, a new standardized question corresponding to the third-party-specific question, wherein the standardized question is an existing standardized question.
Claim 20 wherein the third-party provider is an entity engaging in underwriting” amount to mere data gathering and processing steps (i.e., identifying an answer); the computers that perform those functions and the mental steps are recited at a high level of generality, do not impose a meaningful limitation on the judicial exception, and are insufficient to integrate the mental steps into a practical application. Although the claim recites the additional element “the predetermined threshold”, the gathering and determining steps are likewise recited at a high level of generality and merely generally link the exception to a technological environment (e.g., obtaining questions by mapping between provider and user); they therefore amount to no more than mere instructions to apply the exception using generic computer components and are insufficient to integrate the steps into a practical application.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No—the recitation in the preamble is insufficient to transform a judicial exception into a patent-eligible invention because the preamble elements are recited at a high level of generality that simply links the claim to a field of use; see MPEP 2106.05(h). The claimed extra-solution activity of comparing the score to the predetermined threshold is acknowledged to be well-understood, routine, conventional activity (see, e.g., the court-recognized examples in MPEP 2106.05(d)(II)). Similarly, the gathering and determining steps are recited at a high level of generality and merely generally link the exception to the respective technological environment. The claim thus recites computing components only at a high level of generality, such that they amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
Taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. The collective functions merely provide a conventional computer implementation.
For the reasons above, claims 2-7, 9-14, and 16-20 are rejected under 35 U.S.C. § 101 as being directed to patent-ineligible subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Harsola et al. US 2025/0232111 A1 (hereinafter ‘Harsola’) in view of Blazek et al. US 2021/0158451 A1 (hereinafter ‘Blazek’).
As per claim 1, Harsola discloses, One or more non-transitory computer-readable media comprising computer-executable instructions that, when executed by at least one processor (Harsola: paragraph 0040: discloses a non-transitory computer-readable medium that provides instructions to processor(s) for execution), perform a method of managing a standardized question-and-answer set (Harsola: paragraph 0032: discloses that the mapping is between questions (corresponding to standard fields) and specific predictions (answers to the questions)), the method comprising:
training a machine learning model (Harsola: paragraph 0031: discloses training the DistilBERT model with (i) a question, (ii) a context, and (iii) an answer) to determine a predictive score based on a similarity level (Harsola: paragraph 0034: discloses indicating a percentage of similarity between two strings of texts) between a third-party-specific question and a standardized question (Harsola: paragraph 0017: discloses matching on the extracted text ‘third-party-specific’ to map different portions of the extracted text to a dictionary of standard ‘standardized’ fields recognized by a computer application. The examiner concedes that the prior art is silent on “third-party-specific” and will discuss this limitation in view of the secondary prior art below. The examiner equates the third-party-specific question to the extracted text and the standardized question to the standard fields);
receiving a third-party-specific question-and-answer set from a third-party provider, the third-party-specific question-and-answer set comprising the third-party-specific question (Harsola: paragraph 0032 and Fig. 5: disclose that the mapping is between questions 502 (corresponding to standard fields) and specific predictions 504 (answers to the questions). The examiner equates element 500 to the question-and-answer set, where the question is amountpaid and the answer is 337.0);
determining, using a trained machine learning model (Harsola: paragraph 0031: discloses training the DistilBERT model with (i) a question, (ii) a context, and (iii) an answer), the predictive score for the third-party-specific question from the third-party-specific question-and-answer set (Harsola: paragraph 0032 and Fig. 5: disclose that the mapping is between questions 502 (corresponding to standard fields) and specific predictions 504 (answers to the questions). The examiner equates element 500 to the question-and-answer set, where the question is amountpaid and the answer is 337.0), the predictive score associated with a similarity (Harsola: paragraph 0034: discloses indicating a percentage of similarity between two strings of texts) between the third-party-specific question and the standardized question from the standardized question-and-answer set (Harsola: paragraph 0017: discloses matching on the extracted text ‘third-party-specific’ to map different portions of the extracted text to a dictionary of standard ‘standardized’ fields recognized by a computer application. The examiner concedes that the prior art is silent on “third-party-specific” and will discuss this limitation in view of the secondary prior art below. The examiner equates the third-party-specific question to the extracted text and the standardized question to the standard fields);
determining if the predictive score exceeds a predetermined threshold (Harsola: paragraph 0035: discloses that maximum similarity scores greater than a threshold value are discarded); and
in response to the predictive score exceeding the predetermined threshold, mapping the third-party-specific question to the standardized question (Harsola: paragraph 0035: discloses that the threshold value is 0.5, and that this process of discarding errs on the side of a more precise DistilBERT mapping compared to an imprecise mapping to a generic “Others” field).
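As an aside, the kind of string-similarity percentage described in Harsola's paragraph 0034 can be approximated with a standard-library ratio. The sketch below is generic and is not Harsola's DistilBERT-based scoring; the 0.5 threshold is the example value cited from paragraph 0035, and the field names are hypothetical.

```python
from difflib import SequenceMatcher

# Generic string-similarity sketch; not Harsola's actual algorithm,
# only an approximation of a "percentage of similarity between two
# strings of texts."
def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.5  # example threshold value cited from Harsola, para. 0035

# A near-duplicate field name scores well above the threshold, while
# unrelated strings fall below it.
print(similarity("amount paid", "amountpaid") > THRESHOLD)
print(similarity("amount paid", "zip code") > THRESHOLD)
```

The point of the sketch is only that a score-versus-threshold comparison of this kind is a generic, well-understood computing operation.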
It is noted, however, that Harsola does not specifically detail the aspect of a third-party-specific question as recited in claim 1.
On the other hand, Blazek achieved the aforementioned limitation by providing mechanisms of a third-party-specific question (Blazek: paragraph 0043: discloses users of third-party partners associated with a coverage recommendation system. The examiner submits that the coverage recommendation requires questions specific to the third party to elicit an answer; see also paragraph 0054: presenting the user with questions configured to elicit additional information related to the users' business operations).
Harsola and Blazek are analogous art because they are from the same field of endeavor and address the same problem-solving area; namely, both are from the field of question-answer systems.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the systems of Harsola and Blazek because both are directed to question-answer systems and are from the same field of endeavor. The skilled person would therefore regard it as a normal option to include the features of Blazek in the method described by Harsola in order to solve the problem posed.
The motivation for doing so would have been to accurately determine what products and/or coverage levels would be appropriate for their specific business circumstances (Blazek: paragraph 0005).
Therefore, it would have been obvious to combine Blazek with Harsola to obtain the invention as specified in instant claim 1.
As per claim 2, most of the limitations of this claim have been noted in the rejection of claim 1 above. In addition, Harsola discloses, responsive to the predictive score being less than the predetermined threshold, generating a new standardized question corresponding to the third-party-specific question (Harsola: paragraph 0035: discloses that the threshold value is 0.5, and that this process of discarding errs on the side of a more precise DistilBERT mapping compared to an imprecise mapping to a generic “Others” field),
wherein the standardized question is an existing standardized question (Harsola: paragraph 0031: discloses training the DistilBERT model with (i) a question, (ii) a context, and (iii) an answer).
As per claim 3, most of the limitations of this claim have been noted in the rejection of claim 1 above. In addition, Harsola discloses, developing an ordering of the standardized question and answer set such that one or more standardized questions from the standardized question and answer set are presented to a user in the ordering (Harsola: Fig. 5, Element 500: discloses the order of the questions and the answers ‘prediction’).
As per claim 4, most of the limitations of this claim have been noted in the rejection of claims 1 and 3 above. In addition, Harsola discloses, wherein the ordering minimizes a number of standardized questions asked to the user (Harsola: Fig. 5, Element 500: discloses that the number of questions is 8, and the examiner submits that 8 is the minimum number of questions for this exercise).
As per claim 5, most of the limitations of this claim have been noted in the rejection of claim 1 above. In addition, Harsola discloses, wherein the machine learning model is trained using prompt engineering (Harsola: paragraph 0030: discloses that the model includes a pre-trained DistilBERT model that is further trained (i.e., fine-tuned) using specific data, and that the DistilBERT model may be trained with a predetermined number of invoices with labeled fields. A portion of the predetermined number of invoices may be randomly selected as training data and the remaining portion can be used as validation data. The examiner equates the predetermined number of invoices to prompt engineering).
As per claim 6, most of the limitations of this claim have been noted in the rejection of claim 1 above. In addition, Harsola discloses, responsive to the predictive score being less than the predetermined threshold, providing information indicative of the predictive score (Harsola: paragraph 0035: discloses that the threshold value is 0.5, and that this process of discarding errs on the side of a more precise DistilBERT mapping compared to an imprecise mapping to a generic “Others” field),
wherein the information is provided to a system administrator (Harsola: paragraph 0020: discloses receiving user inputs through the UIs, which can be graphical user interfaces or command line interfaces. The examiner equates the user to the system administrator).
As per claim 7, most of the limitations of this claim have been noted in the rejection of claims 1 and 6 above. In addition, Harsola discloses, responsive to providing the information indicative of the predictive score,
receiving, from the system administrator, a manual mapping of the third-party-specific question (Harsola: paragraph 0032 and Fig. 5: disclose that the mapping is between questions 502 (corresponding to standard fields) and specific predictions 504 (answers to the questions). The examiner equates element 500 to the question-and-answer set, where the question is amountpaid and the answer is 337.0).
As per claim 8, Harsola discloses, A method for managing a standardized question-and-answer set (Harsola: paragraph 0032: discloses that the mapping is between questions (corresponding to standard fields) and specific predictions (answers to the questions)), the method comprising:
responsive to the predictive score being less than the predetermined threshold, generating a new standardized question corresponding to the third-party-specific question (Harsola: paragraph 0035: discloses that discarding errs on the side of a more precise DistilBERT mapping compared to an imprecise mapping to a generic “Others” field). The remaining limitations of claim 8 are similar to the limitations of claim 1; therefore, the examiner rejects these remaining limitations under the same rationale as the limitations rejected under claim 1.
As per claim 9, most of the limitations of this claim have been noted in the rejection of claim 8 above. In addition, Harsola discloses, determining, using the trained machine learning model, a second predictive score for a third-party-specific answer to the third-party-specific question, the second predictive score associated with the similarity between the third-party-specific answer and a standardized answer to the existing standardized question, wherein the predictive score is a first predictive score (Harsola: paragraph 0032 and Fig. 5: disclose that the mapping is between questions 502 (corresponding to standard fields) and specific predictions 504 (answers to the questions). The examiner equates element 500 to the question-and-answer set, where the question is amountpaid and the answer is 337.0).
As per claim 10, most of the limitations of this claim have been noted in the rejection of claim 8 above. In addition, Harsola discloses, monitoring a communication channel associated with the third-party provider; and receiving, through the communication channel, information indicative of an error (Harsola: paragraph 0030: discloses that, for unmapped fields, step 212 may be executed to invoke a fine-tuned DistilBERT model with a question-answer head for non-extracted fields. The examiner submits that fine-tuning requires addressing errors such as unmapped fields).
As per claim 11, most of the limitations of this claim have been noted in the rejection of claims 8 and 10 above. In addition, Harsola discloses, refining the machine learning model, wherein the machine learning model is refined based on the error (Harsola: paragraph 0030: discloses that, for unmapped fields, step 212 may be executed to invoke a fine-tuned DistilBERT model with a question-answer head for non-extracted fields. The examiner submits that fine-tuning requires addressing errors such as unmapped fields).
As per claim 12, most of the limitations of this claim have been noted in the rejection of claim 8 above. In addition, Harsola discloses, wherein the machine learning model implements a large language model for determining requested information associated with the third-party-specific question (Harsola: paragraph 0031: discloses that a DistilBERT model is just an example and any kind of natural language processing model ‘LLM’ may be used).
As per claim 13, most of the limitations of this claim have been noted in the rejection of claim 8 above. In addition, Harsola discloses determining, using the trained machine learning model, a second predictive score for a second third-party-specific question from the third-party-specific question-and-answer set, the predictive score associated with the similarity between the second third-party-specific question and the existing standardized question from the standardized question-and-answer set, wherein the predictive score is a first predictive score and the third-party-specific question is a first third-party-specific question; determining whether a difference between the first predictive score and the second predictive score is less than a second predetermined threshold, wherein the predetermined threshold is a first predetermined threshold; and responsive to the difference being less than the second predetermined threshold, providing an indication to a system administrator (Harsola: paragraph 0035: discloses that the threshold value is 0.5. This process of discarding errs on the side of a more precise DistilBERT mapping compared to an imprecise mapping to a generic "Others" field. Examiner argues that the prior art teaches applying a threshold to select records, and examiner believes that this limitation is algorithmic logic programming within the applicant's implementation).
As per claim 14, most of the limitations of this claim have been noted in the rejection of claim 8 above.
It is noted, however, Harsola did not specifically detail the aspects of
wherein the standardized question-and-answer set is associated with insurance underwriting as recited in claim 14.
On the other hand, Blazek achieved the aforementioned limitations by providing mechanisms of
wherein the standardized question-and-answer set is associated with insurance underwriting (Blazek: paragraph 0036: discloses one or more underwriting platforms used by one or more insurance agencies or systems).
As per claim 15, Harsola discloses a system for managing a standardized question-and-answer set (Harsola: paragraph 0032: discloses that the mapping is between questions (corresponding to standard fields) and specific predictions (answers to the questions)), the system comprising:
a rules mapping engine operable to map a third-party-specific question-and-answer set to the standardized question-and-answer set (Harsola: paragraph 0034: discloses that extracted text mapped to any standard field using the fine-tuned DistilBERT model should not be mapped to the "Others" field using the fuzzy logic). The remaining limitations in claim 15 are similar to the limitations in claim 1. Therefore, examiner rejects these remaining limitations under the same rationale as the limitations rejected under claim 1.
As per claim 16, most of the limitations of this claim have been noted in the rejection of claim 15 above. In addition, Harsola discloses an error detection module operable to detect an error in the standardized question-and-answer set (Harsola: paragraph 0030: discloses that, for unmapped fields, step 212 may be executed to invoke a fine-tuned DistilBERT model with a question-answer head for non-extracted fields. Examiner equates an unmapped field to an error).
As per claim 17, most of the limitations of this claim have been noted in the rejection of claims 15 and 16 above. In addition, Harsola discloses detecting, by the error detection module, the error in the standardized question-and-answer set, wherein the error is detected after a set of user answers associated with the standardized question-and-answer set is presented to the third-party provider (Harsola: paragraph 0030: discloses that, for unmapped fields, step 212 may be executed to invoke a fine-tuned DistilBERT model with a question-answer head for non-extracted fields. Examiner equates an unmapped field to an error. Examiner believes the action after detecting an error in this limitation is applicant's choice of algorithmic logic in a particular order).
As per claim 18, most of the limitations of this claim have been noted in the rejection of claims 15, 16 and 17 above. In addition, Harsola discloses receiving, from the third-party provider, information indicative of the error in the standardized question-and-answer set (Harsola: paragraph 0030: discloses that, for unmapped fields, step 212 may be executed to invoke a fine-tuned DistilBERT model with a question-answer head for non-extracted fields. Examiner equates an unmapped field to an error. Examiner believes the action after detecting an error in this limitation is applicant's choice of algorithmic logic in a particular order).
As per claim 19, most of the limitations of this claim have been noted in the rejection of claim 15 above. In addition, Harsola discloses, responsive to the predictive score being less than the predetermined threshold (Harsola: paragraph 0035: discloses that the threshold value is 0.5. This process of discarding errs on the side of a more precise DistilBERT mapping compared to an imprecise mapping to a generic "Others" field. Examiner argues that the prior art teaches applying a threshold to select records, and examiner believes that this limitation is algorithmic logic programming within the applicant's implementation), generating, by the rules mapping engine, a new standardized question corresponding to the third-party-specific question, wherein the standardized question is an existing standardized question (Harsola: paragraph 0035: discloses that discarding errs on the side of a more precise DistilBERT mapping compared to an imprecise mapping to a generic "Others" field).
As per claim 20, most of the limitations of this claim have been noted in the rejection of claim 15 above.
It is noted, however, Harsola did not specifically detail the aspects of
wherein the third-party provider is an entity engaging in underwriting as recited in claim 20.
On the other hand, Blazek achieved the aforementioned limitations by providing mechanisms of
wherein the third-party provider is an entity engaging in underwriting (Blazek: paragraph 0036: discloses one or more underwriting platforms used by one or more insurance agencies ('third-party provider') or systems).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US Pub. US 2020/0356604 A1 discloses "QUESTION AND ANSWER SYSTEM AND ASSOCIATED METHOD"
US Pub. US 2020/0019876 A1 discloses "Method for predicting and presenting future question in automatic computer system in e.g. local area network, involves uploading question database to strengthen conditional probability associated with future question"
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAVAN MAMILLAPALLI, whose telephone number is (571) 270-3836. The examiner can normally be reached M-F, 8am - 4pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann J Lo, can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PAVAN MAMILLAPALLI/
Primary Examiner, Art Unit 2159