Prosecution Insights
Last updated: April 19, 2026
Application No. 18/211,650

SENTENCE CLASSIFICATION APPARATUS, SENTENCE CLASSIFICATION METHOD, AND SENTENCE CLASSIFICATION PROGRAM

Non-Final OA — §103, §DP
Filed
Jun 20, 2023
Examiner
MCCORD, PAUL C
Art Unit
2692
Tech Center
2600 — Communications
Assignee
Hitachi, Ltd.
OA Round
3 (Non-Final)
69%
Grant Probability
Favorable
3-4
OA Rounds
3y 5m
To Grant
96%
With Interview

Examiner Intelligence

Grants 69% — above average
69%
Career Allow Rate
393 granted / 569 resolved
+7.1% vs TC avg
Strong +27% interview lift
+26.6%
Interview Lift
resolved cases with interview
Typical timeline
3y 5m
Avg Prosecution
41 currently pending
Career history
610
Total Applications
across all art units

Statute-Specific Performance

§101
10.5%
-29.5% vs TC avg
§103
54.0%
+14.0% vs TC avg
§102
6.8%
-33.2% vs TC avg
§112
20.9%
-19.1% vs TC avg
Black line = Tech Center average estimate • Based on career data from 569 resolved cases

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-10, 12, and 13 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-12 of U.S. Patent No. 11,727,214. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of the '214 patent, in conjunction with claims 7 and 10 of the '214 patent, forms a species to which instant claim 1 can be considered generic. The remaining claims are considered obvious variants of claims 2-10 of the '214 patent.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-10, 12, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Burstein (US 2010/0223051, hereinafter "Bur") in view of Zhao (US 2017/0278510) and further in view of Duan (US 2005/0149518).
Regarding claim 1, Bur teaches: A sentence classification apparatus for classifying sentences into classified groups (Bur: Abstract; ¶ 2, 6-9, 67, 68: classifier model trained to predict relations among sentences), comprising:

a case sentence obtention unit for obtaining a plurality of case sentences which are associated with effect values that are values obtained by evaluating effects (Bur: ¶ 9-12: such as by determining word-by-document values of singular value decomposition (SVD) with respect to sentences within a document, wherein the sentences or other segments of the document are represented as vectors, such as by performance of latent semantic analysis (LSA), which determines a vector space comprising vectors for each term or sentence of a document, said space representative of effect values, vectors, etc. thereof);

a case value creation unit for creating case values obtained by numerizing the case sentences for each of a plurality of parameters with different values (Bur: Abstract; ¶ 9-12, 28-30, 52, 53: sparse random indexing vectors comprise a vocabulary reified with respect to text segments, sentences, etc. of an input document, thereby generating numerical values such as within a word-by-context co-occurrence matrix and within a group of words in the form of a text segment vector; from such structures similarity scores are derived);

a similarity calculation unit for scoring a correlation between the case values and the effect values for each of the values of the parameters (Bur: ¶ 28-30, 67-69, 72-77, 89: similarity scores used to determine relatedness of segments, sentences, etc., wherein parameters of case sentences determined in the document are compared with an effect, target, etc. sentence and parameters thereof); and

a parameter selection unit for selecting a parameter among the parameters with different values on the basis of the similarity score (Bur: ¶ 42-45: sentences are ranked with respect to a plurality of dimensions; classified, labelled, etc. with respect to similarity);

wherein the case value creation unit clusters sentences into a plurality of clusters (Bur: ¶ 8, 25, 70-77; Fig. 2: sentences classified and clustered by class based on vector values therein); wherein the similarity calculation unit calculates values for each cluster, and wherein the parameter selection unit selects parameters based on similarities for each cluster (id.); and wherein the plurality of case sentences are classified using the parameter selected by the parameter selection unit (Bur: ¶ 66-70, 107, etc.: sentences classified as related based on training a classifier on selected features, weights thereof, etc.);

and wherein the case value creation unit creates a word vector data structure comprising a vocabulary of words and at least a semantic vector for each word (Bur: ¶ 9-11, 54, 56, etc.: system generates a word vector vocabulary of input documents which functions similarly to a database of words and word vectors, tracking semantic relationships among words across documents) in which words are associated with word vectors respectively for each of the parameters with different values on the basis of the case sentences obtained by the case sentence obtention unit (Bur: Abstract; ¶ 9-11: word vectors of the input document(s) used to generate context vectors and thereby semantic segments of the input document(s), using parameters derived therefrom).

Bur thus teaches determining correlations among sentences obtained from a document and projected into a vector space representative of effect values thereof; however, Bur does not discuss the explicit calculation of correlation coefficients and selection of parameters, clustering, etc. based on the calculation of correlation coefficients among sentences obtained from a document and projected values thereof, wherein the plurality of case sentences are classified using a parameter selected by a parameter selection unit.

Bur teaches learning word vectors and generating sentence vectors therefrom, such as by latent semantic analysis, used for generating a word vector database in the form of a vectorized vocabulary for each of one or more documents (Bur: ¶ 9, 12, 25), but does not explicitly discuss the amended recited case value creation unit which creates a word vector database that creates … case values using the word vector database.

In a related field of endeavor, Zhao teaches a system and method for training a language processor by using mappings, embeddings, etc. of word vectors derived from input sentences using a word vector database such as word2vec, GloVe, etc., to thereby resolve attention parameters representing correlation between a word and each of one or more other words in each of the input sentences (Zhao: Abstract; ¶ 35, 39, 48: an input sentence segmented into words used to calculate word vectors, context vectors, etc.), said words associated with word vectors respectively for each of the parameters with different values on the basis of the input sentences, based on persisted mappings and embeddings derived therefrom (Zhao: ¶ 35, 39, 48: such as by maintaining a data structure mapping the word vectors and embeddings onto a word vector database such as word2vec; said embeddings and mappings comprising a persistent data structure by which the word vector database learns to represent documents in a corpus and encodes relationships therebetween).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to create an explicit vocabulary-related database to maintain the Bur taught or suggested encoded semantic relationships among words, word segment vectors, semantic vectors, etc. of the expanding Bur corpus-specific vocabulary, persisting the Zhao taught mappings or embeddings to a word vector database such as word2vec, for at least the purpose of maintaining the semantic structure accreted by the Bur vocabulary with respect to emergent meaning thereof upon word2vec or similar word vector databases; one of ordinary skill in the art would have expected only predictable results therefrom.

Bur in view of Zhao thus teaches determining correlations among sentences obtained from a document and projected into a vector space representative of effect values thereof; however, Bur in view of Zhao does not discuss the explicit calculation of correlation coefficients and selection of parameters, clustering, etc. based on the calculation of correlation coefficients among sentences obtained from a document and projected values thereof, wherein the plurality of case sentences are classified using a parameter selected by a parameter selection unit.

In a related field of endeavor, Duan teaches a system and method for determining relevant features by deriving a correlation metric to evaluate input and target features within a partitioned data space, comprising coded instructions for calculating a correlation coefficient for each of a cluster of data and parameters thereof, to thereby determine relevance within a data space based on a correlation metric between features thereof and target features, such as by mapping the relationship to a distance, thereby allowing the evaluation and selection of features relevant upon local regions of a data space (Duan: Abstract; ¶ 7, 8, 19-21, 33, 34; Figs. 1A, 1B; Claim 4).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to utilize a correlation metric such as that taught or suggested by Duan within the Bur in view of Zhao system and method, for at least the purpose of explicitly calculating correlations among words in input documents to thereby winnow the relevance of the Bur in view of Zhao embeddings by selection of features relevant upon clusters thereof; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 2, Bur in view of Zhao in view of Duan teaches or suggests: The sentence classification apparatus according to claim 1, wherein the parameter selection unit selects the parameter on the basis of the number of correlation coefficients that exceeds a certain threshold for each parameter (Bur: ¶ 61, 70; Fig. 2: word vector reified based on threshold semantic distance); (Zhao: ¶ 37, 53-55, 69: a decay parameter functions as a threshold limiting the number of correlations by limiting the allowable distance between words generative of an attention parameter); (Duan: ¶ 34; Fig. 2: maximum number of features used to parse the data space into smaller spaces). The claim is considered obvious over Bur as modified by Zhao and Duan as addressed in the base claim, as it would have been obvious to apply the further teaching of Bur, Zhao, and/or Duan to the modified device of Bur, Zhao, and Duan; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 3, Bur in view of Zhao in view of Duan teaches or suggests: The sentence classification apparatus according to claim 1, wherein the parameter selection unit selects the parameter on the basis of the average value of the correlation coefficients calculated for each parameter (Bur: ¶ 8: adjacency among vectors, etc. based on average values therebetween; based on an average distance, etc.); (Duan: ¶ 19, etc.: such as determining correlation, clustering, etc. based on a k-means clustering or similar algorithm). The claim is considered obvious over Bur as modified by Zhao and Duan as addressed in the base claim, as it would have been obvious to apply the further teaching of Bur, Zhao, and/or Duan to the modified device of Bur, Zhao, and Duan; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 4, Bur in view of Zhao in view of Duan teaches or suggests: The sentence classification apparatus according to claim 1, further comprising: a classification outcome display unit for displaying the outcome obtained by classifying the plurality of case sentences using the parameter selected by the parameter selection unit (Bur: ¶ 8, 25, 43-45, 70-77; Fig. 2: such as the display of classification data). The claim is considered obvious over Bur as modified by Zhao and Duan as addressed in the base claim, as it would have been obvious to apply the further teaching of Bur, Zhao, and/or Duan to the modified device of Bur, Zhao, and Duan; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 5, Bur in view of Zhao in view of Duan teaches or suggests: The sentence classification apparatus according to claim 4, wherein the classification outcome display unit displays representative sentences that are sentences representing classified groups respectively (Bur: ¶ 8, 25, 43-45, 70-77; Fig. 2: data points representative of sentence vectors represented as classified groups for display). The claim is considered obvious over Bur as modified by Zhao and Duan as addressed in the base claim, as it would have been obvious to apply the further teaching of Bur, Zhao, and/or Duan to the modified device of Bur, Zhao, and Duan; one of ordinary skill in the art would have expected only predictable results therefrom.
Regarding claim 6, Bur in view of Zhao in view of Duan teaches or suggests: The sentence classification apparatus according to claim 5, wherein the classification outcome display unit includes a selection unit for selecting a classified group among the classified groups and displays a case sentence included in the classified group selected by the selection unit (Bur: ¶ 8, 25, 43-45, 70-77; Fig. 2: such as selection for human annotation, labelling, etc. of particular groupwise data). The claim is considered obvious over Bur as modified by Zhao and Duan as addressed in the base claim, as it would have been obvious to apply the further teaching of Bur, Zhao, and/or Duan to the modified device of Bur, Zhao, and Duan; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 7, Bur in view of Zhao in view of Duan teaches or suggests: The sentence classification apparatus according to claim 1, wherein the case sentence obtention unit obtains a case sentence that meets a condition set on a condition setting screen where a condition for selecting the parameter is set (Bur: ¶ 8, 25, 43-45, 70-77; Fig. 2: labelling of sentences that have met conditions, threshold, distance, etc. necessary to each/any particular cluster). The claim is considered obvious over Bur as modified by Zhao and Duan as addressed in the base claim, as it would have been obvious to apply the further teaching of Bur, Zhao, and/or Duan to the modified device of Bur, Zhao, and Duan; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 8, Bur in view of Zhao in view of Duan teaches or suggests: The sentence classification apparatus according to claim 1, wherein the parameter selection unit selects a parameter that makes the absolute value of the correlation coefficient maximum (Bur: ¶ 8, 25, 43-45, 70-77; Fig. 2: dimensions, parameters, etc. selected as part of the labelling process and with respect to a maximum similarity score). The claim is considered obvious over Bur as modified by Zhao and Duan as addressed in the base claim, as it would have been obvious to apply the further teaching of Bur, Zhao, and/or Duan to the modified device of Bur, Zhao, and Duan; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 9, Bur in view of Zhao in view of Duan teaches or suggests: The sentence classification apparatus according to claim 1, wherein the case sentences include condition sentences showing the conditions for the cases and outcome sentences showing the outcomes of the cases respectively; and wherein the case value creation unit creates condition values obtained by numerizing the condition sentences and outcome values obtained by numerizing the outcome sentences (Bur: ¶ 12, 25, 43-45: sentences, segments, etc. labelled, such as a thesis or other conditional values, wherein each/any sentence or segment vector comprises a numerical value applied to the sentence or segment). The claim is considered obvious over Bur as modified by Zhao and Duan as addressed in the base claim, as it would have been obvious to apply the further teaching of Bur, Zhao, and/or Duan to the modified device of Bur, Zhao, and Duan; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claim 10, Bur in view of Zhao in view of Duan teaches or suggests: The sentence classification apparatus according to claim 8, wherein the case value creation unit clusters the condition sentences and the outcome sentences into the clusters of the condition sentences and the clusters of the outcome sentences respectively (Bur: ¶ 9, 43-45, 76, etc.: such as the extraction of particular words, segments, sentences, etc. determined based on each/any document), and at the same time extracts the clusters of the condition sentences associated with the clusters of the outcome sentences respectively (Bur: ¶ 9, 43-45, 76, etc.: such as by labelling of particular portions of a particular document, such as a thesis, etc., with respect to extracted values of each/any of the additional words, segments, sentences, etc.); wherein the correlation coefficient calculation unit calculates the correlation coefficient for each of the extracted clusters of the condition sentences (Bur: ¶ 9, 43-45, 76, etc.: such as by determination of similarities between a target sentence and values of the plurality of additional sentences); (Zhao: Abstract; ¶ 35, 39, 48); (Duan: Abstract; ¶ 7, 8, 19-21, 33, 34; Figs. 1A, 1B; Claim 4); and wherein the parameter selection unit selects the parameter that makes the correlation coefficients between effect values associated with the clusters of the outcome sentences and the clusters of the condition sentences maximum (Bur: ¶ 9, 42-45, 76, etc.: sentences are ranked with respect to a plurality of dimensions; classified, labelled, etc. with respect to similarities therebetween). The claim is considered obvious over Bur as modified by Zhao and Duan as addressed in the base claim, as it would have been obvious to apply the further teaching of Bur, Zhao, and/or Duan to the modified device of Bur, Zhao, and Duan; one of ordinary skill in the art would have expected only predictable results therefrom.

Regarding claims 12 and 13: the claims are considered to recite substantially similar subject matter to that of claim 1 as rejected supra and are similarly rejected.

Response to Arguments

Applicant's arguments and amendments, see Remarks and Claims, filed 2/3/26, with respect to the rejection(s) of claim(s) 1-13 under 35 USC 103 over Burstein, Sun, and Chormunge have been fully considered and are persuasive. Therefore, the rejection has been withdrawn.
However, upon further consideration, a new ground(s) of rejection is made in view of Burstein, Zhao, and Duan.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL C MCCORD, whose telephone number is (571) 270-3701. The examiner can normally be reached 7:30-6:30 M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, CAROLYN EDWARDS, can be reached at (571) 270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PAUL C MCCORD/
Primary Examiner, Art Unit 2692
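For readers parsing the §103 dispute: the contested limitation is a loop that numerizes case sentences under each candidate parameter, scores a correlation coefficient against known effect values, and selects the parameter with the maximum absolute coefficient (claims 1 and 8). A rough sketch of that loop follows; every name, the toy numerization, and the choice of Pearson correlation are illustrative assumptions, not code from the application or the cited references.

```python
# Hypothetical sketch of the claimed parameter-selection loop: among candidate
# "parameters" (here, embedding dimensionalities), pick the one whose numerized
# case values correlate most strongly (in absolute value) with the effect values.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient; 0.0 when either series is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def numerize(sentence, param):
    # Stand-in for the claimed "case value creation unit": map a sentence to a
    # single case value under a given parameter. A real system would use word
    # vectors / LSA; a hash projection is used here only to make this runnable.
    return sum(hash((w, param)) % 1000 for w in sentence.split()) / 1000.0

def select_parameter(case_sentences, effect_values, candidate_params):
    # "Parameter selection unit": maximize |correlation coefficient| (claim 8).
    return max(
        candidate_params,
        key=lambda p: abs(pearson(
            [numerize(s, p) for s in case_sentences], effect_values)),
    )

sentences = ["pump seal worn", "replaced bearing", "motor overheated",
             "lubricant low", "vibration detected"]
effects = [0.9, 0.2, 0.8, 0.4, 0.6]  # hypothetical effect values per sentence
chosen = select_parameter(sentences, effects, candidate_params=[50, 100, 200])
print("selected parameter:", chosen)
```

The sketch deliberately omits the clustering recited in the wherein clauses; per claim 10, the same correlation-and-select step would simply be repeated per extracted cluster of condition sentences.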

Prosecution Timeline

Jun 20, 2023
Application Filed
Aug 03, 2023
Response after Non-Final Action
May 29, 2025
Non-Final Rejection — §103, §DP
Aug 29, 2025
Response Filed
Nov 03, 2025
Final Rejection — §103, §DP
Feb 03, 2026
Request for Continued Examination
Feb 13, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603094
ADAPTIVE PROCESSING WITH MULTIPLE MEDIA PROCESSING NODES
2y 5m to grant Granted Apr 14, 2026
Patent 12592238
INFORMATION PROCESSING METHOD, INFORMATION PROCESSING DEVICE, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM
2y 5m to grant Granted Mar 31, 2026
Patent 12593192
MEDIA PLAYBACK BASED ON SENSOR DATA
2y 5m to grant Granted Mar 31, 2026
Patent 12572323
DYNAMIC AUDIO CONTENT GENERATION
2y 5m to grant Granted Mar 10, 2026
Patent 12567003
TECHNOLOGIES FOR DECENTRALIZED FLEET ANALYTICS
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
96%
With Interview (+26.6%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 569 resolved cases by this examiner. Grant probability derived from career allow rate.
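The headline figures can be recomputed from the stated counts; a quick check (assuming, as the dashboard presents it, that the interview lift is additive in percentage points):

```python
# Recompute the dashboard's headline figures from its stated counts.
granted, resolved = 393, 569                 # "393 granted / 569 resolved"
allow_rate = granted / resolved              # career allow rate
print(f"allow rate: {allow_rate:.1%}")       # ~69.1%, shown as 69%

interview_lift = 0.266                       # "+26.6% interview lift"
with_interview = allow_rate + interview_lift # assumes additive lift
print(f"with interview: {with_interview:.0%}")  # ~96%, matching the card
```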
