DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claim 1 is an apparatus claim. Claim 9 is a method claim. Claim 17 is an apparatus claim. Therefore, claims 1, 9, and 17 are each directed to one of the statutory categories: a process, machine, manufacture, or composition of matter.
With respect to Claim 1:
Step 2A Prong 1:
select an event group in which at least a part of a plurality of event ranges overlap, the event ranges being ranges of character sequences estimated by a plurality of different methods with respect to a document of the teaching data, and the event ranges being different from a first range of character sequences defined with respect to the document of the teaching data (mental process – user can manually select an event group in which at least a part of a plurality of event ranges overlap)
determine an event range with the largest number of event ranges having same character sequences among the plurality of event ranges from among the event group as an additional event which is an event range to be added to the teaching data (mental process – user can manually determine an additional event)
add the additional event to the teaching data (mental process – user can manually add the event to the teaching data)
Step 2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:
a storage storing teaching data for generating a trained model (mere instructions to apply the exception using a generic computer component)
a hardware processor operatively coupled to the storage (mere instructions to apply the exception using a generic computer component)
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception. Additional elements:
a storage storing teaching data for generating a trained model (mere instructions to apply the exception using a generic computer component)
a hardware processor operatively coupled to the storage (mere instructions to apply the exception using a generic computer component)
Conclusion: The claim is not patent eligible.
Claims 9 and 17 are rejected on the same grounds as claim 1. Claim 17 further recites the two additional elements of: train a model by using updated teaching data in which the additional event generated by the data generation apparatus according to claim 1 is added to the teaching data; and generate a trained model. Under Step 2A Prong 2, the judicial exception is not integrated into a practical application because these limitations amount to adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Under Step 2B, the claim does not include additional elements, considered individually and in combination, that are sufficient to amount to significantly more than the judicial exception, for the same reasons (see MPEP 2106.05(f)).
Regarding Claims 2-6: The limitations, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind. That is, other than the additional elements, nothing in the claim limitations precludes the steps from practically being performed in the mind.
For claim 2: the limitation encompasses the user manually selecting the event ranges, and the limitation includes the additional element of the processor.
For claim 3: the limitation encompasses the user manually determining the event range based on the number of overlaps being at or above a threshold, and the limitation includes the additional element of the processor.
For claim 4: the limitation encompasses the user manually estimating an event range, and the limitation includes the additional elements of the processor and using the plurality of models.
For claim 5: the limitation encompasses the user manually dividing the teaching data, estimating the event ranges, and iterating, and the limitation includes the additional elements of the processor and training a model.
For claim 6: the limitation encompasses the user manually generating a plurality of sets, generating a model set, and estimating the event ranges, and the limitation includes the additional elements of the processor and using the model set.
These judicial exceptions are not integrated into a practical application. The additional element of a processor (claims 2-6) is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component. The additional elements of training a model (claim 5) and using a model (claims 4 and 6) amount to merely adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Accordingly, these elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of a processor (claims 2-6) amounts to no more than mere instructions to apply the exception using a generic computer component or operation. Mere instructions to apply an exception using a generic computer component or operation cannot provide an inventive concept. The additional elements of training a model (claim 5) and using a model (claims 4 and 6) likewise amount to merely adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)).
Accordingly, the claims are not patent eligible.
Regarding Claims 7-8: These limitations, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind. That is, nothing in the claim limitations precludes the steps from practically being performed in the mind.
For claim 7: the limitation encompasses the user manually using each of the event ranges estimated by the different methods that are ranges which a plurality of users set for the document.
For claim 8: the limitation encompasses the user manually using a weight that is given to each of sentences or tokens, which constitute the document.
These judicial exceptions are not integrated into a practical application. In particular, the claims do not recite any additional elements. Accordingly, this does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, no additional elements are recited. Accordingly, the claims are not patent eligible.
Claims 10-16 are rejected on the same grounds as claims 2-8, respectively.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu et al. (hereinafter Zhu), Noun-Phrase Chunking Model Based on SBCB Ensemble Learning Algorithm, in view of Geadas et al. (hereinafter Geadas), Ensemble Learning for Keyword Extraction from Event Descriptions, and further in view of Hall et al. (hereinafter Hall), U.S. Patent 9,536,522.
Regarding Claim 1, Zhu discloses a data generation apparatus comprising:
a storage storing teaching data for generating a trained model [“the process of computer extraction meaningful information from natural language output” §1 ¶1; Note: computers have storage and processors]; and
a hardware processor operatively coupled to the storage [“the process of computer extraction meaningful information from natural language output” §1 ¶1; Note: computers have storage and processors] and configured to:
select an event group in which at least a part of a plurality of event ranges overlap [“input sentence” §2.1 ¶1; “ensemble learning algorithm employs multiple learners and integrates their prediction capabilities” §2.2 ¶1; Fig. 1; Note: multiple classifiers running on the same input sentence and selecting the same sequence of characters are overlapping on the sequence of characters they select], the event ranges being ranges of character sequences [“NP denotes noun phrase, VP represents verb phrase, and PP is the prepositional phrase” §2.1 ¶2] estimated by a plurality of different methods [“ensemble learning algorithm employs multiple learners and integrates their prediction capabilities” §2.2 ¶1; Fig. 1] with respect to a document of the teaching data [“dataset D” §2.2 ¶2; Fig. 1], and the event ranges being different from a first range of character sequences defined with respect to the document of the teaching data [“the words” §1 ¶2];
determine an event range with a largest number of event ranges having same character sequences among the plurality of event ranges from among the event group as an additional event [“prediction is given by the ensemble model Cf with the combination of a voting strategy” §2.2 ¶2; “an ensemble learning approach in identifying NP chunks” §1 ¶3] which is an event range to be added to the teaching data.
However, while Zhu does disclose a voting strategy in general, Zhu fails to explicitly disclose determine an event range with a largest number of event ranges having same character sequences among the plurality of event ranges from among the event group as an additional event which is an event range to be added to the teaching data.
Geadas discloses determine an event range with a largest number of event ranges having same character sequences among the plurality of event ranges [“Majority voting in particular its weighted version are the most widespread choices when the individual classifiers give label outputs” §2.C ¶1; “The plurality vote of Equation 2, is called in a wide sense the majority vote, and is the most often used rule from the majority vote group.” §III.C ¶3] from among the event group as an additional event which is an event range to be added to the teaching data.
It would have been obvious to one having ordinary skill in the art, having the teachings of Zhu and Geadas before them prior to the effective filing date of the claimed invention, to modify the apparatus of Zhu to incorporate the plurality voting of the ensemble of Geadas.
Given the advantage of deriving a final decision from an ensemble to produce more accurate results, one having ordinary skill in the art would have been motivated to make this obvious modification.
However, Zhu fails to explicitly disclose determine an event range with a largest number of event ranges having same character sequences among the plurality of event ranges from among the event group as an additional event which is an event range to be added to the teaching data; and
add the additional event to the teaching data.
Hall discloses determine an event range with a largest number of event ranges having same character sequences among the plurality of event ranges from among the event group as an additional event which is an event range to be added to the teaching data; and
add the additional event to the teaching data [“information retrieval model annotations may be added to the training data set. For example, the information retrieval model 130 may add information retrieval model annotations to the training examples in the training data set 10 141 to produce annotated training examples for the annotated training data set 142.” col. 5, lines 7-12].
It would have been obvious to one having ordinary skill in the art, having the teachings of Zhu, Geadas, and Hall before them prior to the effective filing date of the claimed invention, to modify the combination to incorporate Hall's addition of annotations to the training data.
Given the advantage of producing better training data which results in more accurate models, one having ordinary skill in the art would have been motivated to make this obvious modification.
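For illustration only, and not as a characterization of Applicant's claims or of any cited reference, the claim 1 flow as mapped above can be sketched in Python. The (start, end) character-offset representation of an event range, the greedy grouping heuristic, and every identifier below are assumptions of this sketch.

```python
# Sketch only: event ranges modeled as (start, end) character offsets.
from collections import Counter

def overlaps(a, b):
    """True when two (start, end) ranges share at least one character."""
    return a[0] < b[1] and b[0] < a[1]

def select_additional_event(candidate_ranges):
    """Group candidate ranges that at least partly overlap (the event group),
    then return the range proposed identically by the most estimators,
    i.e. a plurality vote over identical character sequences."""
    groups = []
    for r in candidate_ranges:
        for g in groups:
            if any(overlaps(r, other) for other in g):
                g.append(r)
                break
        else:
            groups.append([r])
    event_group = max(groups, key=len)              # largest overlapping group
    winner, votes = Counter(event_group).most_common(1)[0]
    return winner, votes

# Hypothetical usage: four different estimation methods propose ranges.
teaching_data = [((10, 24), "first range")]         # assumed existing entry
candidates = [(30, 41), (30, 41), (28, 41), (30, 41)]
additional_event, votes = select_additional_event(candidates)
teaching_data.append((additional_event, "additional event"))  # add to teaching data
```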
Regarding Claim 2, Zhu, Geadas, and Hall disclose the apparatus according to claim 1. Zhu further discloses wherein the processor selects, as the event group, the event ranges when an overlapping degree of the event ranges is a threshold or more [“the prediction is given by the ensemble model Cf with the combination of a voting strategy” §2.2 ¶2; Note: As each classifier in the ensemble selects an event range, a voting strategy is used to select the final event range. Voting selects overlapping ranges (i.e., the same chunk as selected by other classifiers), which is interpreted as the event group, as the final range using a threshold, such as simple majority.].
Regarding Claim 3, Zhu, Geadas, and Hall disclose the apparatus according to claim 1. Zhu further discloses wherein when a number of the event ranges which overlap each other is a threshold or more, the processor determines the event ranges as the additional event [“the prediction is given by the ensemble model Cf with the combination of a voting strategy” §2.2 ¶2; Note: As each classifier in the ensemble selects an event range, a voting strategy is used to select the final event range. Voting selects overlapping ranges as the final range using a threshold, such as simple majority voting.].
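A minimal sketch of the threshold variants of claims 2 and 3, under the same assumed (start, end) representation as above; a simple majority is only one example of "a threshold or more" and is not drawn from any cited reference.

```python
def meets_threshold(event_group, n_estimators, threshold=0.5):
    """Keep the overlapping ranges as the event group (claim 2) or as the
    additional event (claim 3) only when the number of overlapping ranges
    is at or above a threshold, e.g. a simple majority of the estimators."""
    return len(event_group) >= threshold * n_estimators
```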
Regarding Claim 4, Zhu, Geadas, and Hall disclose the apparatus according to claim 1. Zhu further discloses wherein the processor is further configured to estimate an event range in the document, with respect to each of a plurality of different trained models trained by using the teaching data [“the prediction is given by the ensemble model Cf with the combination of a voting strategy” §2.2 ¶2; “learning component of the training tagged corpus” §1 ¶2; “During the training phase, m classifiers are trained based on the learning algorithm L over the m resampled training datasets from the original datasets.” §2.2 ¶2].
Regarding Claim 5, Zhu, Geadas, and Hall disclose the apparatus according to claim 1. Zhu further discloses wherein the processor is further configured to divide the teaching data into a plurality of partial data [“an original dataset D = {D1, D2, D3, …, Dm}” §2.2 ¶2];
train a model by using a part of the plurality of partial data, and generate a trained model [“During the training phase, m classifiers are trained based on the learning algorithm L over the m resampled training datasets from the original datasets.” §2.2 ¶2]; and
estimate, by using the trained model, the event ranges in regard to a sentence corresponding to a remainder of the plurality of partial data other than the part thereof [“m classifiers C = {C1, C2, C3, …, Cm} are generated. After that, an optimization process is employed to select good classifiers” §2.2 ¶2; Fig. 1; “final classifiers Cf = {C1, C2, C3, …, Cn} (n≤m), are obtained to form a resultant ensemble model. Finally, the prediction is given by the ensemble model Cf with the combination of a voting strategy” §2.2 ¶2; Note: The classifiers find the event ranges.],
wherein the generation of the trained model and the estimation of the event ranges are repeated such that the event ranges are estimated for each of the plurality of partial data [“During the training phase, m classifiers are trained based on the learning algorithm L over the m resampled training datasets from the original datasets.” §2.2 ¶2].
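One plausible, cross-validation-style reading of the claim 5 procedure, sketched under the assumptions above; train() and estimate() are hypothetical stand-ins rather than functions disclosed by Zhu, and the part/remainder split shown is only one of several readings of the claim.

```python
def rotate_estimates(teaching_data, k, train, estimate):
    """Divide the teaching data into k partial data sets, generate a trained
    model from some of them, estimate event ranges on the held-out remainder,
    and repeat so that event ranges are estimated for every partial data set."""
    parts = [teaching_data[i::k] for i in range(k)]        # k partial data sets
    estimated = []
    for i, held_out in enumerate(parts):
        rest = [x for j, p in enumerate(parts) if j != i for x in p]
        model = train(rest)                                # generate a trained model
        estimated.append(estimate(model, held_out))        # estimate event ranges
    return estimated
```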
Regarding Claim 6, Zhu, Geadas, and Hall disclose the apparatus according to claim 5. Zhu further discloses wherein the processor is further configured to:
generate a plurality of sets of the plurality of partial data by varying division positions of the teaching data [“divide a sentence into labeled, non-overlapping and non-recursive chunks” §1 ¶1; “training a classifier to segment a sentence” §1 ¶2; Note: Sentences are not all the same length and would therefore have varying division positions based on the number of words; “an original dataset D = {D1, D2, D3, …, Dm}” §2.2 ¶2],
generate a trained model set including the plurality of trained models with respect to each of the sets of the plurality of partial data [“final classifiers Cf = {C1, C2, C3, …, Cn}” §2.2 ¶2], and
estimate the event ranges by using the trained model set with respect to each of the sets of the plurality of partial data [“prediction is given by the ensemble model Cf with the combination of a voting strategy” §2.2 ¶2].
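Extending the sketch above to claim 6: vary the division positions to generate several sets of partial data, then repeat the claim 5 procedure for each set. Rotating the start offset is merely one assumed way to vary division positions; multi_division_estimates() and its arguments are hypothetical.

```python
def multi_division_estimates(teaching_data, k, n_divisions, train, estimate):
    """Generate n_divisions sets of partial data with different division
    positions and estimate event ranges with a trained-model set for each."""
    results = []
    for s in range(n_divisions):
        shifted = teaching_data[s:] + teaching_data[:s]    # new division positions
        results.append(rotate_estimates(shifted, k, train, estimate))
    return results
```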
Regarding Claim 7, Zhu, Geadas, and Hall disclose the apparatus according to claim 1. Zhu further discloses wherein each of the event ranges estimated by the different methods are ranges which a plurality of users set for the document [“the supervised analysis” §1 ¶1].
Regarding Claim 8, Zhu, Geadas, and Hall disclose the apparatus according to claim 1. Zhu further discloses wherein a weight is given to each of sentences or tokens, which constitute the document [“NP denotes noun phrase, VP represents verb phrase, and PP is the prepositional phrase” §2.1 ¶2; Note: noun phrases have a weight of NP, verb phrases have a weight of VP, and prepositional phrases have a weight of PP].
Claims 9-16 are rejected on the same grounds as claims 1-8, respectively.
Regarding Claim 17, Zhu, Geadas, and Hall disclose a learning apparatus comprising: a processor configured to: …by using updated teaching data in which the additional event generated by the data generation apparatus according to claim 1 is added to the teaching data [See Claim 1 mapping above].
However, Zhu fails to explicitly disclose train a model…; and
generate a trained model.
Hall discloses train a model…; and
generate a trained model [“a natural language processing model may be trained with the annotated training data set” col. 5, lines 58-59].
It would have been obvious to one having ordinary skill in the art, having the teachings of Zhu, Geadas, and Hall before them prior to the effective filing date of the claimed invention, to modify the combination to incorporate the training of an NLP model as taught by Hall.
Given the advantage of providing annotated training data to a model for more accuracy and faster processing, one having ordinary skill in the art would have been motivated to make this obvious modification.
Examiner’s Note
The Examiner respectfully requests that Applicant, in preparing responses, fully consider the entirety of the reference(s) as potentially teaching all or part of the claimed invention. It is noted that REFERENCES ARE RELEVANT AS PRIOR ART FOR ALL THEY CONTAIN. “The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain.” In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). A reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including non-preferred embodiments (see MPEP 2123). The Examiner has cited particular locations in the reference(s) as applied to the claims above for the convenience of Applicant. Although the specified citations are representative of the teachings of the art and are applied to the specific limitations within the individual claims, other passages and figures typically will apply as well.
Additionally, any claim amendments for any reason should include remarks indicating clear support in the originally filed specification.
Response to Arguments
Regarding the §101 rejections, Applicant's arguments have been fully considered but have been found unpersuasive. Applicant argues that 1) the additional features cannot reasonably be performed in the mind, 2) the claims clearly integrate the abstract idea into a practical application, and 3) the claims recite technological improvements that are directed to significantly more than the abstract idea. Examiner disagrees with each point for at least the following reasons.
The first and second arguments are conclusory and no support is provided. Accordingly, they are not found persuasive.
In response to the third argument concerning significantly more, Applicant asserts that the claims provide an improved quality of a dataset so that a model with high recall is generated, and therefore provide an improvement. However, this alleged improvement is to the abstract idea itself. The claims recite abstract steps to determine event ranges to add to training data. The inventive concept must be tied to the additional elements, which in the instant claims are generic computer components (e.g., memory and a processor).
Accordingly, the rejections are maintained.
Regarding the prior art rejections, Applicant's arguments with respect to the claims have been considered but are moot because the arguments do not apply to the combination of references being used in the current rejection of the limitations at issue.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT H BEJCEK II whose telephone number is (571)270-3610. The examiner can normally be reached Monday - Friday: 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T. Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/R.B./ Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/ Supervisory Patent Examiner, Art Unit 2148