Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Introduction
This office action is in response to Applicant’s submission filed on 7/16/2024. As such, claims 1-20 have been examined.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 recites a method that, under the broadest reasonable interpretation, covers performance of the limitations in the human mind with the assistance of physical aids (e.g., pen and paper), but for the recitation of generic, well-known, or conventional computer components. That is, other than reciting “first language model, first electronic communication, second language model, and second electronic communications,” nothing in these claim limitations precludes the steps from practically being performed in the mind. As a whole, claim 1 pertains to processing text and verifying that the text was generated by AI by recreating the text and making a comparison, which is a mental process that a human can perform. Individually, each of the limitations also pertains to a mental process and/or insignificant extra-solution activity, for example:
prompting a first language model to extract from a first electronic communication a header and keywords from a body of the first electronic communication, wherein the first electronic communication has already been determined to be an attack; (e.g., data gathering step, a human can obtain or receive text that has been extracted by a computer, like receiving a printout of text with a header and keywords from a spam/spoofing email.)
searching publicly available information based on the keywords; (e.g., the human performing searching of information using the provided keywords.)
prompting a second language model to compose an electronic communication based, at least in part, on information acquired from the searching and a sender and a recipient indicated in the header; (e.g., processing or entry of information, the human entering a prompt, using information gathered from the search and the sender and recipient from the header, into a computer to generate text.)
prompting the second language model to determine whether the first and second electronic communications are similar; (e.g., processing information, the human using the computer to compare the texts, or the human can print out the texts and compare them side by side.)
and indicating the first electronic communication as generated by artificial intelligence if the second language model responds that the first and second electronic communications are similar. (e.g., evaluation of information, the human making a determination that the texts are similar.)
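For illustration of the claimed sequence as a whole (not part of the record), the recited steps can be sketched as a minimal pipeline, where `query_model` and `web_search` are hypothetical stand-ins for the recited language models and the searching step:

```python
def is_ai_generated(first_email, query_model, web_search):
    """Sketch of the claimed pipeline; query_model and web_search are
    hypothetical stand-ins, not functions disclosed in the application."""
    # Step 1: prompt the first language model to extract the header and keywords
    extraction = query_model(
        "first", f"Extract the header and keywords from the body:\n{first_email}")
    header, keywords = extraction["header"], extraction["keywords"]

    # Step 2: search publicly available information based on the keywords
    results = web_search(keywords)

    # Step 3: prompt the second language model to compose a communication
    second_email = query_model(
        "second",
        f"Compose an email from {header['sender']} to {header['recipient']} "
        f"using this information:\n{results}")

    # Step 4: prompt the second language model to compare the two emails
    verdict = query_model(
        "second",
        f"Are these two emails similar?\nA: {first_email}\nB: {second_email}")

    # Step 5: indicate AI-generated if the model responds that they are similar
    return verdict == "similar"
```

Each step maps to one of the limitations above; replacing `query_model` with a person reading and writing on paper yields the same sequence, which is the mental-process mapping applied in this rejection.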
The judicial exception is not integrated into a practical application. In particular, the claim recites only generic computing components. Such generic computing components are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of receiving, determining, or outputting information) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional limitations of using generic computer components amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Claim 1 is not patent eligible.
The examiner further notes that the use of claimed generic computer components (“first language model, first electronic communication, second language model, and second electronic communications,”) invokes such generic computer components “merely as a tool to perform an existing process”. MPEP 2106.05(f). MPEP 2106.05(f) further explains:
Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Auto, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). Similarly, "claiming the improved speed or efficiency inherent with applying the abstract idea on a computer" does not integrate a judicial exception into a practical application or provide an inventive concept. Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1367, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015).
Claim 1 recites generic computer components (“first language model, first electronic communication, second language model, and second electronic communications,”), with respect to performing tasks. MPEP 2106.05(d) and (f) further provides examples of court decisions where the courts found generic computing components to be mere instructions to apply a judicial exception, and further explains “increased speed” (e.g., using a computer to increase the speed of an otherwise mental process) does not provide an inventive concept. For example:
A commonplace business method or mathematical algorithm being applied on a general purpose computer, Alice Corp. Pty. Ltd. V. CLS Bank Int’l, 573 U.S. 208, 223, 110 USPQ2d 1976, 1983 (2014); Gottschalk v. Benson, 409 U.S. 63, 64, 175 USPQ 673, 674 (1972); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).
A process for monitoring audit log data that is executed on a general-purpose computer where the increased speed in the process comes solely from the capabilities of the general-purpose computer, FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016) (emphasis added).
Performing repetitive calculations. Bancorp Services v. Sun Life, 687 F.3d 1266, 1278, 103 USPQ2d 1425, 1433 (Fed. Cir. 2012) ("The computer required by some of Bancorp’s claims is employed only for its most basic function, the performance of repetitive calculations, and as such does not impose meaningful limits on the scope of those claims.")
Claim 8 recites a non-transitory machine-readable medium claim that corresponds to the method of claim 1 and is therefore rejected under the same grounds as claim 1 above. While claim 8 further recites a “machine readable medium” and “program codes”, these are merely generic computer components recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Claim 8’s only other difference from claim 1 is that the received communication has already been determined to be malicious, and claim 8 focuses on the steps to classify it; however, a human can nevertheless perform the task of classifying whether the malicious text is AI generated or not. Therefore, none of these limitations (a) integrate the abstract idea into a practical application, because they do not impose any meaningful limits on practicing the abstract idea, or (b) amount to significantly more than the judicial exception, because in either case the additional limitations merely utilize generic computer components that amount to no more than mere instructions to apply the exception using a generic computer function. Claim 8 is not patent eligible.
Claim 16 recites an apparatus claim that corresponds to the method of claim 1 and is therefore rejected under the same grounds as claim 1 above. While claim 16 further recites a “processor” and “machine-readable medium having instructions”, these are merely generic computer components recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Claim 16’s only other difference from claim 1 is that the received communication has already been determined to be malicious, and claim 16 focuses on the steps to classify it; however, a human can nevertheless perform the task of classifying whether the malicious text is AI generated or not. Therefore, none of these limitations (a) integrate the abstract idea into a practical application, because they do not impose any meaningful limits on practicing the abstract idea, or (b) amount to significantly more than the judicial exception, because in either case the additional limitations merely utilize generic computer components that amount to no more than mere instructions to apply the exception using a generic computer function. Claim 16 is not patent eligible.
Claims 2-7 depend, directly or indirectly, from independent claim 1, do not remedy any of the deficiencies of claim 1, and are therefore rejected on the same grounds as claim 1 above.
Claim 2 further recites: wherein prompting the second language model to determine whether the first and second electronic communications are similar is according to zero shot prompting. (e.g., processing information and evaluation of the text, the human comparing two texts without prior examples to see if they are similar. Asking an AI to do something a human can do is still considered an abstract idea and, in this case, can be performed as a mental process.)
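For illustration only, the distinction between zero-shot and few-shot prompting referenced in claim 2 reduces to whether the prompt string carries worked examples; the wording below is hypothetical, not taken from the claims:

```python
# Zero-shot: the prompt contains task instructions but no worked examples.
zero_shot = (
    "Determine whether the following two emails are similar. "
    "Answer 'similar' or 'not similar'.\n"
    "Email A: ...\nEmail B: ..."
)

# Few-shot (for contrast): the same task preceded by a labeled example.
few_shot = (
    "Example: Email A: 'Meet at 5'; Email B: 'See you at 5' -> similar\n"
    + zero_shot
)
```

The zero-shot variant mirrors how a person would compare two texts without being shown prior examples, which is the mental-process mapping applied above.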
Claim 3 further recites: further comprising removing personally identifiable information from the first and second electronic communications before prompting the second language model to determine whether the first and second electronic communications are similar. (e.g., processing information, the human removing or deleting PII before submitting the information to the computer.)
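For illustration only, the PII-removal step of claim 3 could be performed by any number of generic mechanisms; the regex-based sketch below is hypothetical and is not the claimed or disclosed method, which does not specify a mechanism:

```python
import re

def scrub_pii(text):
    """Replace simple PII patterns (email addresses, phone numbers,
    SSN-like strings) before a similarity comparison. A generic
    illustrative sketch, not the applicant's disclosed mechanism."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", text)
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)
    return text
```

A person could perform the same redaction by crossing out names and numbers on a printout, consistent with the mental-process mapping above.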
Claim 4 further recites: wherein searching publicly available information comprises prompting the first language model or a third language model to search publicly available information based on the keywords. (e.g., using generic computer component to search publicly available information based on keywords.)
Claim 5 further recites: wherein prompting the first language model comprises: generating a first prompt with one or more task instructions to extract a sender and a recipient from the first electronic communication, to extract the body from the first electronic communication, to remove indication of the sender and the recipient from the extracted body and identify keywords in the extracted body after removal of the sender and the recipient; (e.g., processing/identifying information, the human writing a prompt to tell the generic computer component to remove the sender’s and recipient’s names, or simply writing them on a piece of paper and crossing them out, then rewriting the body of the text on paper and identifying keywords from the body of the text.)
and submitting the first prompt to the first language model. (e.g., the human can then submit the instruction to the generic computer for processing.)
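For illustration only, the prompt generation recited in claim 5 amounts to assembling instruction strings; the wording below is hypothetical, as the claim does not fix any particular prompt text:

```python
def build_extraction_prompt(first_email):
    """Assemble a single prompt carrying task instructions of the kind
    recited in claim 5 (illustrative wording only)."""
    instructions = [
        "Extract the sender and the recipient from the communication.",
        "Extract the body of the communication.",
        "Remove any indication of the sender and the recipient from the body.",
        "Identify keywords in the body after that removal.",
    ]
    return "\n".join(instructions) + "\n---\n" + first_email

# Submitting the prompt to the first language model would then be a single
# call, e.g. first_model(build_extraction_prompt(email)), where first_model
# is a hypothetical stand-in for the recited first language model.
```

Writing out the same instruction list on paper and following it by hand yields the identical result, which is the basis for the mental-process mapping above.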
Claim 6 further recites: wherein prompting the second language model to determine whether the first and second electronic communications are similar comprises: generating a first prompt with a set of one or more task instructions to determine similarity based on topic of content in the bodies of the first and second electronic communications and disregard recipient and sender; (e.g., processing/identifying information, the human writing a prompt to tell the generic computer component to identify the topics and compare the topics of the two texts while removing the sender and recipient information, or the human can simply identify the topics from the two texts and compare whether they are the same or similar while disregarding the sender and recipient.)
and submit the first prompt to the second language model. (e.g., the human can then submit the instruction to the generic computer for processing.)
Claim 7 further recites: wherein generating the first prompt with the set of one or more task instructions to determine similarity comprises generate the first prompt with the set of one or more task instructions to also disregard style and parts of the electronic communications that are not the bodies. (e.g., processing/identifying information, the human writing a prompt to tell the generic computer component to ignore the writing style and non-body parts, or the human can simply focus on reading the text and ignore the fonts or formatting.)
Claims 9-10 recite machine-readable medium claims that correspond to the methods of claims 2-3 and are therefore rejected under the same grounds as claims 2-3 above.
Claim 11 further recites: wherein the instructions to prompt the second language model to determine whether the malicious communication is similar to the composed communication comprise instructions to generate a prompt with a set of one or more task instructions to remove personally identifiable information from the communications and then determine whether the malicious communication is similar to the composed communication. (e.g., processing/identifying information, the human writing a prompt to tell the generic computer component to remove PII, or just write it on a piece of paper and then crossing it out, compare the malicious text with the recreated text to determine if they are similar.)
Claims 12-15 recite machine-readable medium claims that correspond to the methods of claims 4-7 and are therefore rejected under the same grounds as claims 4-7 above.
Claim 17 recites an apparatus claim that corresponds to the method of claim 3 and is therefore rejected under the same grounds as claim 3 above.
Claim 18 recites an apparatus claim that corresponds to the method of claim 11 and is therefore rejected under the same grounds as claim 11 above.
Claims 19-20 recite apparatus claims that correspond to the methods of claims 5-6 and are therefore rejected under the same grounds as claims 5-6 above.
In sum, claims 2-7, 9-15 and 17-20 depend from claims 1, 8 and 16, respectively, and further recite mental processes as explained above. None of the additional limitations recited in claims 2-7, 9-15 and 17-20 amount to anything more than the same or a similar abstract idea as recited in claims 1, 8 and 16. Nor do any limitations in claims 2-7, 9-15 and 17-20: (a) integrate the abstract idea into a practical application, because they do not impose any meaningful limits on practicing the abstract idea, or (b) amount to significantly more than the judicial exception, because the additional limitations of using generic computer components amount to no more than mere instructions to apply the exception using generic computer components. Claims 2-7, 9-15 and 17-20 are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Sinks (US 20250071129), in view of Gupta (US 20250124236).
Sinks discloses: 1. A method comprising: prompting a first language model to extract from a first electronic communication a header and keywords from a body of the first electronic communication, wherein the first electronic communication has already been determined to be an attack; ([0005-0010] Use of AI engine to identify online content that is suspected of being maliciously created, also extraction of metadata corresponding to suspect content.)
searching publicly available information based on the keywords; ([0010] search-engine spider to crawl the Internet to identify posted content propagated across online sources)
prompting a second language model to compose an electronic communication based, at least in part, on information acquired from the searching and a sender and a recipient indicated in the header; ([0005-0010] determine the method of the malicious activity based on, inter alia, discernable data such as timing, frequency, dates, authors, frequency, screen names, usernames, email addresses, metadata, IP addresses, routing data, ownership/attribution information, and/or any other detected information or characteristics etc. relating to the posts; … online AI engines, hubs, or the like in an effort to recreate identical or substantially similar content in order to confirm that the malicious materials were generated by that particular AI engine, hub, or code base; [0010] utilized based on which of the online AI bots are able to successfully recreate the suspect content;)
prompting the second language model to determine whether the first and second electronic communications are similar; ([0005-0010] online AI engines, hubs, or the like in an effort to recreate identical or substantially similar content in order to confirm that the malicious materials were generated by that particular AI engine, hub, or code base; [0010] utilized based on which of the online AI bots are able to successfully recreate the suspect content;)
and indicating the first electronic communication as generated by artificial intelligence if the second language model responds that the first and second electronic communications are similar. ([0005-0010] online AI engines, hubs, or the like in an effort to recreate identical or substantially similar content in order to confirm that the malicious materials were generated by that particular AI engine, hub, or code base; [0010] utilized based on which of the online AI bots are able to successfully recreate the suspect content;)
Sinks does not explicitly disclose using an LLM to extract the header and keywords from the body of the communication.
Gupta discloses: using an LLM to extract the header and keywords from the body of the communication. ([0049] In some embodiments, the evaluation module 420 constructs a contextual prompt which provides contextual information to the evaluation LLM, which guides the evaluation LLM in generating evaluation results. For example, the contextual prompt provided to the evaluation LLM to extract keywords from the expected output may be:)
Sinks and Gupta are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sinks with the teachings of Gupta, because the contextual prompt guides the evaluation LLM in generating more accurate, relevant, and targeted evaluation results (Gupta, [0049]).
Regarding Claim 8, Sinks discloses: 8. A non-transitory, machine-readable medium having program code stored thereon, the program code comprising instructions to: ([0019] In some arrangements, one or more various steps or processes disclosed herein can be implemented in whole or in part as computer-executable instructions (or as computer modules or in other computer constructs) stored on computer-readable media.)
classify a malicious communication as artificial intelligence (AI) generated or not AI generated, ([0010] In some arrangements, an information-security process for detection, validation, and sourcing of malicious AI-generated content distributed on the Internet)
As for the rest of the claim limitations, they recite similar elements as claim 1; therefore, the rationale applied in the rejection of claim 1 is equally applicable.
Regarding Claim 16, Sinks discloses: 16. An apparatus comprising: a processor; ([0030] processors)
a machine-readable medium having instructions stored thereon, the instructions executable by the processor to cause the apparatus to: ([0019] In some arrangements, one or more various steps or processes disclosed herein can be implemented in whole or in part as computer-executable instructions (or as computer modules or in other computer constructs) stored on computer-readable media.)
classify a malicious communication as artificial intelligence (AI) generated or not AI generated, ([0010] In some arrangements, an information-security process for detection, validation, and sourcing of malicious AI-generated content distributed on the Internet)
As for the rest of the claim limitations, they recite similar elements as claim 1; therefore, the rationale applied in the rejection of claim 1 is equally applicable.
Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Sinks, in view of Gupta, and further in view of Padgett (US 20240160902).
Regarding claim 2, Sinks/Gupta disclose all of claim 1,
Sinks/Gupta does not explicitly disclose the following feature.
Padgett discloses: wherein prompting the second language model to determine whether the first and second electronic communications are similar is according to zero shot prompting. ([0059] Inputs to an LLM may be referred to as a prompt, which is a natural language input that includes instructions to the LLM to generate a desired output. A computing system may generate a prompt that is provided as input to the LLM via its API. As described above, the prompt may optionally be processed or pre-processed into a token sequence prior to being provided as input to the LLM via its API. A prompt can include one or more examples of the desired output, which provides the LLM with additional information to enable the LLM to better generate output according to the desired output. Additionally or alternatively, the examples included in a prompt may provide inputs (e.g., example inputs) corresponding to/as may be expected to result in the desired outputs provided. A one-shot prompt refers to a prompt that includes one example, and a few-shot prompt refers to a prompt that includes multiple examples. A prompt that includes no examples may be referred to as a zero-shot prompt.)
Sinks/Gupta/Padgett are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sinks/Gupta with the teachings of Padgett, because zero-shot prompting allows an AI model to perform a task without first needing specific predefined examples, which in essence increases speed and efficiency (Padgett, [0059]).
Claim 9 is a machine-readable medium claim that corresponds to claim 2 and is therefore also rejected under the same grounds as claim 2.
Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Sinks, in view of Gupta, and further in view of Gnanasekaran (US 20250131126).
Regarding claim 3, Sinks/Gupta disclose all of claim 1,
Sinks/Gupta does not explicitly disclose the following feature.
Gnanasekaran discloses: removing personally identifiable information from the first and second electronic communications before prompting the second language model to determine whether the first and second electronic communications are similar. ([0036] The image component 125 and the text component 127 may identify the PII information in the prompt. The image component 125 and the text component 127 may also sanitize (e.g., hide/remove) the PII information in the prompt before the prompt is sent to the LLM 134.)
Sinks/Gupta/Gnanasekaran are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sinks/Gupta with the teachings of Gnanasekaran, because sanitizing or removing PII helps protect sensitive information (Gnanasekaran, [0036]).
Claims 10 and 17 are machine-readable medium and apparatus claims, respectively, that correspond to claim 3 and are therefore also rejected under the same grounds as claim 3.
Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Sinks, in view of Gupta, and further in view of Wang (US 20250307318).
Regarding claim 4, Sinks/Gupta disclose all of claim 1,
Sinks/Gupta does not explicitly disclose the following feature.
Wang discloses: wherein searching publicly available information comprises prompting the first language model or a third language model to search publicly available information based on the keywords. ([0154] In a further implementation of the foregoing method, wherein said utilizing the LLM to generate the question comprises: determining additional context based on a domain of a search engine that received the first search query via user interaction, a filter applied to the first search query, or a keyword included in the first search query; and generating a prompt to cause the LLM to generate the question, the prompt comprising the first search term and the additional context.)
Sinks/Gupta/Wang are considered analogous art. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sinks/Gupta with the teachings of Wang, because keyword-based searching would improve the search results (Wang, [0154]).
Claim 12 is a machine-readable medium claim that corresponds to claim 4 and is therefore also rejected under the same grounds as claim 4.
Potentially Allowable Subject Matter
Claims 5-7, 11, 13-15 and 18-20 would be potentially allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and amended to overcome the pertinent rejections under 35 U.S.C. § 101.
The following is a statement of reasons for the indication of potentially allowable subject matter:
With respect to claim 5, Sinks (US 20250071129) teaches extraction of metadata, which includes post time, post date, posting IP address, posting user indicia, and post keywords; see para 0018. However, it does not teach removing indication of the sender and the recipient from the extracted body and identifying keywords in the extracted body after the removal of the sender and the recipient. Gupta (US 20250124236) teaches using an LLM to extract keywords (see para 0048-0049) and also teaches evaluation of similarity between texts (see para 0059-0061 and 0070-0075 and figs. 5a and 5b). However, it is silent on data sanitization or a removal-of-sender/recipient step prior to the identification of keywords in the extracted body. Accordingly, the prior art of record fails to explicitly teach or fairly suggest the invention set forth in claim 5.
Claims 13 and 19, although in different statutory categories, recite similar elements as set forth in claim 5; therefore, they also contain similar potentially allowable subject matter.
With respect to claim 6, Sinks (US 20250071129) teaches using public bots to recreate malicious online content (see para 0005); however, it does not explicitly disclose generating a prompt to determine similarity based on the topic of the content while disregarding the recipient and the sender. While Gupta teaches keyword extraction and similarity comparison (see para 0059), it also falls short of the claim as specifically claimed, as it does not disclose determining similarity based on the topic of the content while ignoring recipient/sender information. Accordingly, the prior art of record fails to explicitly teach or fairly suggest the invention set forth in claim 6. Further, dependent claim 7 inherits the potentially allowable subject matter from claim 6 and thus also contains potentially allowable subject matter by virtue of its dependency.
Claims 14 and 20, although in different statutory categories, recite similar elements as set forth in claim 6; therefore, they also contain similar potentially allowable subject matter. Claim 15 inherits the potentially allowable subject matter from claim 14 and thus also contains potentially allowable subject matter by virtue of its dependency.
As for claim 11, although its scope is slightly broader than claim 5, the combination of the cited prior art nevertheless does not teach prompting an LLM for the removal of PII before comparing the malicious and the regenerated text; therefore, it contains potentially allowable subject matter. As for claim 18, although in a different statutory category, it recites similar elements as set forth in claim 11; therefore, it also contains similar potentially allowable subject matter.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Royman (US 20250023913) – discloses a method for detection of malicious messages using a two-score system: a first score for the probability that the message is malicious, and a second score for the likelihood that the message was generated by AI. See Abstract for details.
Cheng (US 20240378380) – discloses a method to detect AI-generated content by taking the prefix or start of a text, feeding it into an LLM and asking it to complete the text, and then using n-gram similarity to determine how closely the word sequences match, to see how likely it is that the text was generated by the same or a similar AI model. See para 0004-0006 and fig. 9 for additional details.
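For illustration only, the n-gram comparison Cheng describes can be sketched as a word n-gram Jaccard overlap; this is a generic measure, not Cheng's actual implementation:

```python
def ngrams(words, n):
    """All contiguous word n-grams of a token list, as a set of tuples."""
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def ngram_similarity(text_a, text_b, n=2):
    """Jaccard overlap of word n-grams between an original continuation
    and an LLM-regenerated one; a generic sketch of the comparison
    Cheng describes, not the reference's actual implementation."""
    a, b = ngrams(text_a.split(), n), ngrams(text_b.split(), n)
    if not a and not b:
        return 1.0  # two empty/too-short texts are trivially identical
    return len(a & b) / len(a | b)
```

A score near 1.0 would indicate the completing model reproduced the original continuation almost verbatim, suggesting the text came from the same or a similar model.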
Bin Huraib (US 20240070261) – discloses malware identification and profiling, which includes identifying hidden content, such as metadata, and extracted keywords. See para 0015 and 0028 for additional details.
Chow (US 20090150365) – discloses a method for detection of email spam and performing website filtering using inference detection, which is based on search results received from keywords extracted from websites. See Abstract and para 0025 and 0043 for additional details.
Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems, 32. – discloses modeling conditional generation of AI-generated fake news. See section three for additional details.
Greco, F., Desolda, G., Esposito, A., & Carelli, A. (2024). David versus Goliath: can machine learning detect LLM-generated text? A case study in the detection of phishing emails. ITASEC. – discloses using an LLM to detect LLM-generated phishing emails. See Abstract and section three for additional details.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip H Lam whose telephone number is (571)272-1721. The examiner can normally be reached 9 AM-3 PM Pacific time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached on 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHILIP H LAM/ Examiner, Art Unit 2656