DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6 and 8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim(s) recite(s) the abstract idea of a mathematical algorithm for determining a degree of unexpectedness between time-series waveform data stored in matrices [See the steps of instant Fig. 9].
This judicial exception is not integrated into a practical application because no context for the implementation of the algorithm is recited.
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because no use of the algorithm result is recited. The recitation of computer processors and/or a computer program for implementing the algorithm amounts to the recitation of general-purpose computer elements for the implementation of the mathematical algorithm and does not serve to amount to significantly more than the recitation of the abstract idea itself (see Alice Corp. v. CLS Bank International, 573 U.S. 208 (2014)).
Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter because the claims are directed to a computer program per se [see MPEP 2106 – “Non-limiting examples of claims that are not directed to one of the statutory categories: … iv. a computer program per se, Gottschalk v. Benson, 409 U.S. at 72, 175 USPQ at 676-77”].
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1, 6, and 8 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Inose et al. (US 20190236056 A1) [hereinafter "Inose"].
Regarding Claims 1 and 6, Inose discloses an information processing method/device [Paragraph [0079]] comprising:

a calculation unit, including one or more processors [Paragraph [0079]], configured to calculate a degree of overlapping between a probability distribution of each probability value included in a row i or a column i of a semantic similarity matrix [see:

Paragraph [0035] – "The abovementioned system can include a data evaluation function. The data evaluation function is a function that analyzes and evaluates a large number of data to be evaluated (big data) on the basis of a small number of data (training data) that is manually classified. By including the data evaluation function, the abovementioned system can derive indicators (for example, values (for example, scores), words (for example, "high", "moderate", "low", and the like)), and/or symbols (for example, symbols indicating "⊚", "◯", "Δ", "×", and the like) that can rank the data to be evaluated indicating the degree of relevance between the data to be evaluated and the predetermined case, for example, and implement the abovementioned evaluation. The data evaluation function may be implemented by the controller of the server device 2, for example."

Paragraph [0038] – "As a more specific example, the abovementioned system calculates the evaluation values of the data element in accordance with the expression expressed by Expression 1 below by evaluating each of the data elements with use of a transferred information amount (for example, an information amount calculated from a predetermined expression with use of the probability of appearance of the training data element and the probability of appearance of the classification information)." See Expression 1.

Paragraph [0039] – "Here, when whether the data is related to a predetermined case is represented by a probability variable T, a case where the data is related to the predetermined case can be expressed by t=1, and a case where the data is not related to the predetermined case can be expressed by t=0. Meanwhile, when whether a predetermined data element is included in the data is represented by a probability variable M, a case in which the predetermined data element is included in the data can be expressed by m=1, and a case in which the predetermined data element is not included in the data can be expressed by m=0. Further, in the abovementioned Expression 1, p(t,m) represents a probability of t and m simultaneously occurring, p(t) represents a probability of t occurring, and p(m) represents a probability of m occurring. The abovementioned system can calculate, for example, the abovementioned transferred information amount for each data element, and set the calculated transferred information amount as the abovementioned weight. As a result, the abovementioned system can evaluate that the data element expresses the feature of the predetermined classification information more as the value of the calculated transferred information amount increases, for example."

Paragraph [0060] – "FIG. 4 is an example of a word-context matrix. The row of the matrix corresponds to the type of the morpheme (a first data element, hereinafter sometimes referred to as a "morpheme to be analyzed") included in the corpus, and the column corresponds to the type of the morpheme (a second data element, hereinafter sometimes referred to as a "co-occurring morpheme") co-occurring with the morpheme to be analyzed in the context of the corpus. The morphemes included in the corpus of the row of the word-context matrix include morphemes of the training data to which evaluation values are given, and morphemes of the data to be evaluated to which evaluation values are not given."

Paragraph [0061] – "When m represents the number of types of the morphemes to be analyzed, and n represents the number of types of the co-occurring morphemes, the word-context matrix is a matrix of m×n, and includes the number of appearances (co-occurrence frequency) of the co-occurring morphemes with respect to the morpheme to be analyzed as elements. The row of the word-context matrix becomes a row vector formed by the co-occurrence frequencies of a plurality of co-occurring morphemes for one morpheme to be analyzed out of the plurality of morphemes to be analyzed. Note that whether two morphemes are in a co-occurrence relationship can be determined by whether one morpheme appears within n (for example, n=2 to 10) morphemes before and after the other morpheme."

Paragraph [0064] – "When the abovementioned system transforms the elements of the word-context matrix shown in FIG. 4 by PMI, for example, the abovementioned system obtains the transformation matrix (C*) shown in FIG. 5."],

and a probability distribution of respective probability values included in a row j or a column j (j ≠ i) of a waveform similarity matrix [Paragraphs [0035] and [0038]-[0039], with Expression 1, as quoted above] as a degree of unexpectedness between an i-th word and a j-th word using the semantic similarity matrix in which elements are probability values in one row or one column of a semantic similarity between words of a plurality of words and the waveform similarity matrix [Paragraphs [0060]-[0061] and [0064], as quoted above] in which elements are probability values in one row or one column of a waveform similarity between time-series data of time-series data related to the words [see:

Paragraph [0074] – "According to FIG. 10, a weight is only given to "train" that is the training data element at first, but a weight is also given to "car" (related data element) that is a conceptionally close near-synonym. The abovementioned system records the weight of the near-synonym to which weight is newly given in the memory in accordance with FIG. 3 (S308), and, when the data to be evaluated is evaluated (S314), performs evaluation by also referring to the weight of the near-synonym (S314)."

Paragraph [0075] – "As a result of the above, the computer system according to this embodiment can correct the weight acquired from the original training data and expand the weight for the near-synonym, and hence can accurately evaluate the data to be analyzed without newly adding the training data including the near-synonym."].
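For clarity of the record, the calculation relied upon above can be sketched as follows. This sketch is illustrative only and is not part of Inose's disclosure: Expression 1 and FIGS. 4-5 are not reproduced in this action, so the mutual-information form of the "transferred information amount" and the PMI re-weighting below are assumptions consistent with the descriptions in Paragraphs [0039] and [0064], not reproductions of Inose's actual expressions.

```python
# Illustrative sketch (assumption, not Inose's Expression 1): Paragraph [0039]
# defines p(t,m), p(t), and p(m) for binary variables T (case relevance) and
# M (element presence), which is consistent with a mutual-information weight.
# Paragraph [0064] describes a PMI transform of a word-context count matrix.
import math

def transferred_information(p_tm):
    """Mutual information I(T; M) from a 2x2 joint table p(t, m), t, m in {0, 1}."""
    p_t = [p_tm[0][0] + p_tm[0][1], p_tm[1][0] + p_tm[1][1]]  # marginal p(t)
    p_m = [p_tm[0][0] + p_tm[1][0], p_tm[0][1] + p_tm[1][1]]  # marginal p(m)
    total = 0.0
    for t in (0, 1):
        for m in (0, 1):
            if p_tm[t][m] > 0:
                total += p_tm[t][m] * math.log2(p_tm[t][m] / (p_t[t] * p_m[m]))
    return total

def pmi_matrix(counts):
    """PMI re-weighting of an m x n word-context co-occurrence count matrix."""
    grand = sum(sum(row) for row in counts)
    row_sums = [sum(row) for row in counts]
    col_sums = [sum(col) for col in zip(*counts)]
    return [
        [math.log2((c * grand) / (row_sums[i] * col_sums[j])) if c > 0 else 0.0
         for j, c in enumerate(row)]
        for i, row in enumerate(counts)
    ]
```

Under this reading, a data element whose presence is more informative about case relevance receives a larger weight (e.g., a perfectly correlated joint table yields I(T; M) = 1 bit, an independent one yields 0).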
Regarding Claim 8, Inose discloses an information processing program which causes a computer to function as the information processing device according to claim 1 [Paragraph [0079] – “a CPU that executes a program (a control program of the data analysis system) that is software that implements the functions”].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 20200279079 A1 – PREDICTING PROBABILITY OF OCCURRENCE OF A STRING USING SEQUENCE OF VECTORS
US 10482183 B1 – Device And Method For Natural Language Processing Through Statistical Model Comparison
US 20120215523 A1 – TIME-SERIES ANALYSIS OF KEYWORDS
US 20040088308 A1 – Information Analysing Apparatus
US 20030065510 A1 – Similarity Evaluation Method, Similarity Evaluation Program And Similarity Evaluation Apparatus
US 20020032549 A1 – Determining And Using Acoustic Confusability, Acoustic Perplexity And Synthetic Acoustic Word Error Rate
US 6154579 A – Confusion Matrix Based Method And System For Correcting Misrecognized Words Appearing In Documents Generated By An Optical Character Recognition Technique
US 6094506 A – Automatic Generation Of Probability Tables For Handwriting Recognition Systems
US 5862259 A – Pattern Recognition Employing Arbitrary Segmentation And Compound Probabilistic Evaluation
US 4827521 A – Training Of Markov Models Used In A Speech Recognition System
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE ROBERT QUIGLEY whose telephone number is (313)446-4879. The examiner can normally be reached 11AM-9PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Arleen Vazquez, can be reached at (571) 272-2619. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KYLE R QUIGLEY/Primary Examiner, Art Unit 2857