Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-20 are pending.
This action is in response to the application filed on November 27, 2024.
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation-in-part of U.S. Patent Application Serial No. 18/779,727, filed July 22, 2024, which is a continuation-in-part of U.S. Patent Application Serial No. 18/541,890, filed December 15, 2023, both of which are hereby incorporated by reference in their entirety.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
As per claims 1-20:
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a method and a non-transitory computer-readable storage medium storing instructions for execution by one or more processors, the instructions comprising: ingesting raw data from a data shipper; preprocessing the raw data to generate processed data, wherein preprocessing the raw data comprises: removing stop words from the raw data; and suppressing duplicate information in the raw data; and feeding the processed data to a machine learning engine that executes a large language model algorithm on the processed data to identify one or more of: an anomaly in the processed data; or two or more correlated events in the processed data.
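For illustration only, and not as a characterization of the claims or the record, the recited preprocessing steps (stop-word removal followed by duplicate suppression) can be sketched as follows. The stop-word list and sample messages are hypothetical assumptions added for the example:

```python
# Hypothetical stop-word list; the claims do not define its contents.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "it"}

def remove_stop_words(message: str) -> str:
    """Drop stop words from a raw data message."""
    return " ".join(w for w in message.split() if w.lower() not in STOP_WORDS)

def preprocess(raw_messages: list[str]) -> list[str]:
    """Remove stop words, then suppress duplicate information (order kept)."""
    seen: set[str] = set()
    processed: list[str] = []
    for msg in raw_messages:
        cleaned = remove_stop_words(msg)
        if cleaned not in seen:  # suppress duplicate information
            seen.add(cleaned)
            processed.append(cleaned)
    return processed

print(preprocess(["error in the disk", "error in the disk", "a user login"]))
# → ['error disk', 'user login']
```

The processed output would then be fed to a downstream machine learning engine, which this sketch does not model.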
Step 1: Statutory Category:
Claims 1 and 16 are directed to a method and a non-transitory computer-readable storage medium, which fall within the statutory categories of a process and an article of manufacture, respectively, thereby meeting Step 1.
Step 2A – Prong 1: Judicial Exception Recited:
Claims 1 and 16 recite a “mental process” abstract idea that can be performed in the human mind or by a human using pen and paper. Specifically, claims 1 and 16 recite the following limitations that can be practically performed in the mind.
The claim limitations recite: ingesting raw data from a data shipper; preprocessing the raw data to generate processed data, wherein preprocessing the raw data comprises: removing stop words from the raw data; and suppressing duplicate information in the raw data; and feeding the processed data to a machine learning engine that executes a large language model algorithm on the processed data to identify one or more of: an anomaly in the processed data; or two or more correlated events in the processed data. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the “Mental Processes” grouping of abstract ideas.
“Ingesting” is data acquiring and can be done mentally.
“Preprocessing” is data managing and can be done mentally.
“Removing” is data managing and can be done mentally.
“Feeding” can also be done mentally. Feeding the processed data to a machine learning engine is considered insignificant extra-solution activity. The limitations are mere data converting, sending, and exporting recited at a high level of generality, which is well-understood, routine, and conventional activity. See MPEP 2106.05(d), subsection II. The limitations remain insignificant extra-solution activity even upon reconsideration. Even when considered in combination, the additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept.
As per MPEP 2106.04(a)(2), subsection III: “The courts consider a mental process (thinking) that ‘can be performed in the human mind, or by a human using a pen and paper’ to be an abstract idea.” CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). Accordingly, the claims recite an abstract idea.
Step 2A – Prong 2: Integrated into a Practical Application
This judicial exception is not integrated into a practical application. The additional elements of identifying “an anomaly in the processed data; or two or more correlated events in the processed data” do not improve the functioning of a computer or provide an improvement to other technology, and merely use a computer as a tool to perform the concept. Accordingly, there are no additional elements in the claim limitations that integrate the abstract idea into a practical application. The claims are directed to an abstract idea.
Step 2B: Claims provide an Inventive Concept (significantly more than the judicial exception).
Claims 1 and 16 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, there are no additional elements that are sufficient to amount to significantly more than the judicial exception. Claims 1-20 are not patent eligible.
Dependent claims 2-15 and 17-20 depend directly or indirectly from independent claims 1 and 16. Their additional claim elements do not provide meaningful limitations that transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself.
The dependent claims 2-15 and 17-20 recite the following additional steps:
2. deleting non-consequential log messages; and wherein the non-consequential log messages comprises a plurality of log messages received in sequence that repeat the same information a threshold number of times.
3. computing a first hash for a first data message; storing the first data message on a database; storing the first hash on the database, wherein the database is indexed based on hash values; ingesting a second data message that is identical to the first data message; and computing a second hash for the second data message, wherein the second hash is identical to the first hash due to the second data message being identical to the first data message; and mapping the second data message to the first hash stored on the database.
4. The method of claim 1, wherein the stop words comprises pronouns.
5. The method of claim 1, wherein the stop words comprises article words.
6. The method of claim 1, wherein the stop words comprises connective words.
7. The method of claim 1, wherein the stop word comprises propositional words.
8. The method of claim 1, further comprising storing the processed data on a database that is indexed based on hash values.
9. The method of claim 1, wherein the raw data comprises log messages for a network application.
10. The method of claim 1, wherein the raw data comprises transactional data.
11. The method of claim 1, wherein the raw data comprises social media data associated with a social media website or application.
12. The method of claim 1, further comprising: ingesting domain-specific data applicable to the raw data; and feeding the domain-specific data to the machine learning engine in addition to feeding the processed data to the machine learning engine; wherein the machine learning engine executes the large language model algorithm to assess the processed data in view of the domain-specific data to identify one or more of the anomaly in the processed data or the two or more correlated events in the processed data.
13. The method of claim 12, wherein the machine learning engine further executes the large language model algorithm to assess the processed data in view of the domain-specific data to determine a sentiment classification for at least one data event within the processed data.
14. The method of claim 1, wherein preprocessing the raw data comprises eliminating data to reduce one or more of a storage requirement or a processor requirement for assessing the raw data.
15. The method of claim 1, wherein preprocessing the raw data further comprises: abbreviating one or more terms in the raw data; and substituting a large length identifier in the raw data with a shorter length identifier.
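For illustration only, the hash-indexed deduplication recited in claim 3 can be sketched as follows. The use of a Python dictionary as the "database" and SHA-256 as the hash function are hypothetical assumptions; the claims do not specify either:

```python
import hashlib

# Hypothetical stand-in for a database indexed on hash values.
database: dict[str, str] = {}  # hash value -> stored data message

def ingest(message: str) -> str:
    """Compute a hash for the message; store the message only if the hash is new."""
    h = hashlib.sha256(message.encode()).hexdigest()
    if h not in database:   # first occurrence: store message and its hash
        database[h] = message
    return h                # an identical later message maps to the same hash

first = ingest("user 42 logged in")
second = ingest("user 42 logged in")  # identical second data message
assert first == second                # second hash is identical to the first
assert len(database) == 1             # only one copy is stored
```

Because identical messages produce identical hashes, the second message is mapped to the hash already stored rather than being stored again.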
Therefore, claims 1-20 are rejected under 35 U.S.C. 101 as being directed to an abstract idea without significantly more.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4-17, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Crouse et al. (US 2018/0253416 A1).
With respect to claims 1 and 16, Crouse et al. teaches
ingesting raw data from a data shipper (FIG. 1, FIG. 3, [0028] Words may be counted based on the raw input following data filtering 104);
preprocessing the raw data to generate processed data, wherein preprocessing the raw data comprises ([0134] At 602, a plurality of documents is received. The documents contain text):
removing stop words from the raw data (FIG. 1, [0024] Pre-processing 112 modifies the documents or portions of the documents for later processing. Pre-processing 112 may include stripping out punctuation, removing stop words 114, The list may be editable to add or remove stop words 114); and
suppressing duplicate information in the raw data ([0024] Pre-processing 112 may include stripping out punctuation, removing stop words 114, converting acronyms and abbreviations 116 to full words, stemming, and/or removing duplicate words); and
feeding the processed data to a machine learning engine that executes a large language model algorithm on the processed data to identify one or more of ([0231] NLP human language, NLP algorithms are typically based on machine learning algorithms. NLP can rely on machine learning to automatically learn these rules by analyzing a set of examples (i.e. a large corpus, like a book, down to a collection of sentences)):
an anomaly in the processed data; or two or more correlated events in the processed data ([0025] Anomaly detection 118 identifies portions of documents that likely include an anomaly. When this analysis is configured to recognize characteristics of dependent patent claims as being “anomalous,” anomaly detection 118. [0271] Correlation is a measure of how strongly two variables are related to each other. A correlation of +100% indicates a perfect positive correlation).
With respect to claims 2 and 17, Crouse et al. teaches deleting non-consequential log messages; and wherein the non-consequential log messages comprises a plurality of log messages received in sequence that repeat the same information a threshold number of times ([0009] FIG. 1 shows an example processing pipeline for generating a user interface showing the results of automatic document analysis).
With respect to claims 4-7, Crouse et al. teaches the stop words comprises pronouns, article words, connective words and propositional words ([0019] FIG. 1, FIG. 10, The documents may be any type of document such as issued patents, published patent applications, scholarly articles, news articles, financial statements, etc. The documents may also be available in any one of multiple different formats such as plaintext, hypertext markup language (HTML), comma separated values (CSV), or images such as portable document format (PDF) or Tag Image File Format (TIFF) files).
With respect to claims 8 and 19, Crouse et al. teaches storing the processed data on a database that is indexed based on hash values ([0009] FIG. 1 for generating a user interface showing the results of automatic document analysis).
With respect to claim 9, Crouse et al. teaches log messages for a network application ([0210] FIG. 10 computing device(s) 1000 may include a server, a desktop PC (personal computer), a notebook or portable computer, a workstation, a mainframe computer, a handheld device, a netbook, an Internet appliance, a portable reading device, an electronic book reader device, a tablet or slate computer, a game console, a mobile device (e.g., a mobile phone, a personal digital assistant, a smart phone, etc.), or a combination thereof. The computing device(s) 1000 may be implemented as a single device or as a combination of multiple physically distinct devices. computing device(s) 1000 may be implemented as a combination of a server and a client).
With respect to claims 10-11, Crouse et al. teaches raw data comprises transactional data and social media data associated with a social media website or application ([0210] FIG. 10 computing device(s) 1000 may include a server, a desktop PC (personal computer), a notebook or portable computer, a workstation, a mainframe computer, a handheld device, a netbook, an Internet appliance, a portable reading device, an electronic book reader device, a tablet or slate computer, a game console, a mobile device (e.g., a mobile phone, a personal digital assistant, a smart phone, etc.), or a combination thereof. The computing device(s) 1000 may be implemented as a single device or as a combination of multiple physically distinct devices. computing device(s) 1000 may be implemented as a combination of a server and a client).
With respect to claims 12 and 20, Crouse et al. teaches ingesting domain-specific data applicable to the raw data; and feeding the domain-specific data to the machine learning engine in addition to feeding the processed data to the machine learning engine; wherein the machine learning engine executes the large language model algorithm to assess the processed data in view of the domain-specific data to identify one or more of the anomaly in the processed data or the two or more correlated events in the processed data ([0133] FIG. 6 is a flowchart depicting an example method 600 of automatically processing documents to generate a UI that shows overall breadth scores for the documents. Portions of method 600 may be the same or similar to portions of methods 300-500).
With respect to claim 13, Crouse et al. teaches to assess the processed data in view of the domain-specific data to determine a sentiment classification for at least one data event within the processed data ([0133] FIG. 6 is a flowchart depicting an example method 600 of automatically processing documents to generate a UI that shows overall breadth scores for the documents. Portions of method 600 may be the same or similar to portions of methods 300-500).
With respect to claim 14, Crouse et al. teaches eliminating data to reduce one or more of a storage requirement or a processor requirement for assessing the raw data (FIG. 1, FIG. 3, [0028] Words may be counted based on the raw input following data filtering 104).
With respect to claim 15, Crouse et al. teaches abbreviating one or more terms in the raw data; and substituting a large length identifier in the raw data with a shorter length identifier ([0024] Pre-processing 112 may include stripping out punctuation, removing stop words 114, converting acronyms and abbreviations).
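For illustration only, the claim 15 steps of abbreviating terms and substituting a long identifier with a shorter one can be sketched as follows. The abbreviation table, the 16-character length threshold, and the `id_map` naming scheme are hypothetical assumptions, not drawn from the claims or the Crouse reference:

```python
# Hypothetical abbreviation table; the claims do not define its contents.
ABBREVIATIONS = {"application": "app", "database": "db", "message": "msg"}

def shorten(record: str, id_map: dict[str, str]) -> str:
    """Abbreviate known terms and swap long identifiers for short ones."""
    words = []
    for w in record.split():
        w = ABBREVIATIONS.get(w, w)  # abbreviate one or more terms
        if len(w) > 16:              # treat long tokens as identifiers
            # substitute a large length identifier with a shorter one,
            # reusing the same short identifier for repeated tokens
            w = id_map.setdefault(w, f"id{len(id_map)}")
        words.append(w)
    return " ".join(words)

ids: dict[str, str] = {}
print(shorten("application wrote message 9f1c2d3e4a5b6c7d8e9f0a1b", ids))
# → app wrote msg id0
```

The `id_map` is kept by the caller so the same long identifier maps to the same short identifier across records.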
Allowable Subject Matter
Claims 3 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISAAC M WOO whose telephone number is (571) 272-4043. The examiner can normally be reached from 9:00 to 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tony Mahmoudi, can be reached at 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ISAAC M WOO/Primary Examiner, Art Unit 2163