Prosecution Insights
Last updated: April 19, 2026
Application No. 18/369,789

RESPONSIVE ACTION PREDICTION BASED ON ELECTRONIC MESSAGES AMONG A SYSTEM OF NETWORKED COMPUTING DEVICES

Final Rejection: §101, §103, §DP
Filed: Sep 18, 2023
Examiner: FIORILLO, JAMES N
Art Unit: 2444
Tech Center: 2400 — Computer Networks
Assignee: Spredfast Inc.
OA Round: 2 (Final)

Grant Probability: 86% (Favorable)
OA Rounds: 3-4
To Grant: 2y 12m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (above average; 382 granted / 444 resolved; +28.0% vs TC avg)
Interview Lift: +36.9% (strong; allowance among resolved cases with vs. without an interview)
Typical Timeline: 2y 12m avg prosecution; 30 applications currently pending
Career History: 474 total applications across all art units

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§103: 55.5% (+15.5% vs TC avg)
§102: 8.6% (-31.4% vs TC avg)
§112: 7.9% (-32.1% vs TC avg)

Tech Center averages are estimates • Based on career data from 444 resolved cases
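As a cross-check, the headline allowance rate follows directly from the case counts reported above. A minimal sketch (the counts come from this report, not from live USPTO data; the Tech Center average is back-computed from the stated +28.0% delta):

```python
# Sanity-check the examiner statistics reported above.
# Inputs are the case counts stated in this report, not live USPTO data.
granted = 382
resolved = 444

career_allow_rate = granted / resolved
print(f"Career allowance rate: {career_allow_rate:.1%}")  # 86.0%

# The report states the rate sits +28.0% above the Tech Center average,
# which implies a TC 2400 average of roughly:
implied_tc_average = career_allow_rate - 0.280
print(f"Implied TC 2400 average: {implied_tc_average:.1%}")  # 58.0%
```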

Office Action

§101 §103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

This office correspondence is in response to the "Amendment and Response under 37 C.F.R. 1.111" filed on December 1, 2025 in response to a non-final office action dated May 30, 2025. Claims 2-21 are pending. Claims 2-21 are rejected.

Response to Arguments

Applicant's arguments filed on 12/1/2025 have been fully considered.

In regard to claims 2, 10, and 18, which were rejected on the ground of non-statutory double patenting as being unpatentable over claims 1 and 13 of U.S. Patent 11,765,248, and in regard to claims 2 and 10, which were rejected on the ground of non-statutory double patenting as being unpatentable over claims 1 and 16 of U.S. Patent 11,297,151, the applicant has indicated in the response that a terminal disclaimer will be filed to overcome the rejections. However, as of present, no terminal disclaimer has been filed, and thus the rejections are not withdrawn.

In regard to claims 2-21, which were rejected under 35 U.S.C. 103, the applicant's arguments were not persuasive, and the rejections are not withdrawn. The examiner responds to each argument below.

In regard to claims 2-6, 10-14, and 18-19, the applicant argues that the prior art combination of Rodriguez and Lee fails to teach, anticipate, or suggest: "characterizing the electronic messages to identify subsets of attributes; analyzing the subsets of attributes by matching patterns of attribute data against a data model stored in a repository to form subsets of matched attribute data; associating a classification value for each subset of matched attribute data;" (as recited in claim 2 and substantially replicated in claims 10 and 18).

The applicant states:

". . . Regarding "characterizing the electronic messages to identify subsets of attributes," the Examiner cites Rodriguez, [0116] . . .
Paragraph 0116 states that messages can have certain types of content data, such as themes, text, images, or videos from editing performed by the user in the embedded application. However, this is simply a statement of what can be present in the message. There is no teaching in 0116 of any type of active "characterizing to identify," and particularly no "characterizing the electronic messages to identify subsets of attributes."

Paragraph 0262 does teach "For example, chat messages can be parsed and one or more predetermined topics, words or keywords, and/or phrases can be detected to determine a suggestion event (e.g., "go out to eat," "let's invite User4," "let's watch MovieA," a particular name, address, location, etc.)." However, subsets of attributes of the "one or more predetermined topics, words or keywords, and/or phrases" are not detected. Accordingly, Rodriguez does not teach "characterizing the electronic messages to identify subsets of attributes."

Additionally, claim 2 requires: "analyzing the subsets of attributes by matching patterns of attribute data against a data model stored in a repository to form subsets of matched attribute data." The Examiner cites Rodriguez, [0262] . . . As discussed in the previous section of this response, Rodriguez does not teach this limitation. Referring to Rodriguez, 0262, there is no discussion of matching patterns of attribute data against a data model. Rodriguez teaches detecting "one or more predetermined topics, words or keywords, and/or phrases" to determine a suggestion event. Paragraph 0262 and other paragraphs in Rodriguez do not discuss attributes, and particularly not in the context of pattern matching. Rodriguez teaches detecting specific, predetermined topics, words or keywords, and/or phrases. There is no discussion of patterns, and particularly no matching of attribute data patterns. Accordingly, Rodriguez does not teach "analyzing the subsets of attributes by matching patterns of attribute data against a data model stored in a repository to form subsets of matched attribute data."

Claim 2 also requires: "associating a classification value for each subset of matched attribute data." Lee discusses classification in paragraph 0063. However, the classification disclosed by Lee is not "associating a classification value for each subset of matched attribute data." . . . The disclosure of "classification" in Lee discloses making classification decisions based on the actual content of the messages, which is not in the context of any subset of matched attribute data. Since Rodriguez does not disclose the subset of matched attribute data, and Lee's classification is not in the context of any subset of matched attribute data, Rodriguez in view of Lee do not disclose "associating a classification value for each subset of matched attribute data" as required by claim 2. Accordingly, Applicant respectfully requests withdrawal of the rejection. . . ." (Applicant's remarks, pages 7-10)

In response to applicant's argument: The applicant argues that the three limitations (characterizing . . . analyzing . . . associating . . . ) as an ordered combination are not taught by the cited prior art. The applicant's arguments are not persuasive.

For the "characterizing" limitation, the applicant's main objection is that there is no active "characterizing" to identify the subset of attributes in the cited disclosure of Rodriguez, and that the cited disclosure only lists what attributes could be in a message. This is not a persuasive argument, because the cited disclosure of Rodriguez is part of a process for user selection to open up specific content and themes found in the message (e.g., attributes), so there is activity being taught by the disclosure which, under the broadest reasonable interpretation, would be interpreted as "characterizing."

For the "analyzing" limitation, the applicant's main objection is that the cited disclosure of Rodriguez does not teach matching patterns of attribute data against a data model. This is not a persuasive argument, because the cited disclosure (Rodriguez ¶ [0262], ¶ [0470]) describes using a machine learning model that processes events, topics, words or keywords, and/or phrases from chat messages to predict particular suggestion events (e.g., matched attribute data), which is a type of matching patterns of data. Therefore, when analyzing the limitation under the broadest reasonable interpretation, the disclosure of Rodriguez teaches the limitation as currently recited.

For the "associating" limitation, the applicant's main objection is that while the prior art Lee performs a classification decision, the classification uses actual message content instead of matched attribute data, which the applicant contends was not taught by Rodriguez. This is not a persuasive argument, because Lee is provided in combination with Rodriguez, and Rodriguez has been shown to teach the matched attribute data under the broadest reasonable interpretation. See MPEP 2111 ("an examiner must construe claim terms in the broadest reasonable manner during prosecution as is reasonably allowed in an effort to establish a clear record of what applicant intends to claim. Thus, the Office does not interpret claims in the same manner as the courts. In re Morris, 127 F.3d 1048, 1054, 44 USPQ2d 1023, 1028 (Fed. Cir. 1997); In re Zletz, 893 F.2d 319, 321-22, 13 USPQ2d 1320, 1321-22 (Fed. Cir. 1989). Because applicant has the opportunity to amend the claims during prosecution, giving a claim its broadest reasonable interpretation will reduce the possibility that the claim, once issued, will be interpreted more broadly than is justified. In re Yamamoto, 740 F.2d 1569, 1571 (Fed. Cir. 1984); In re Zletz, 893 F.2d 319, 321, 13 USPQ2d 1320, 1322 (Fed. Cir. 1989) ("During patent examination the pending claims must be interpreted as broadly as their terms reasonably allow."); In re Prater, 415 F.2d 1393, 1404-05, 162 USPQ 541, 550-51 (CCPA 1969)).

The applicant is referred to the description of the rejected claims shown below. The examiner recommends that the applicant review the specification for disclosure that, if integrated into the independent claims, would distinguish the amended claims from the cited prior art. The applicant is invited to contact the examiner for an interview to discuss how to move the prosecution forward.

Authorization for Internet Communications

The examiner encourages Applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03): "Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file." Please note that the above statement can only be submitted via Central Fax (not Examiner's Fax), regular postal mail, or EFS-Web using form PTO/SB/439.

Priority

This application is a continuation claiming the benefit of prior-filed application No. 17/567073 (now U.S. Patent 11,765,248), which was filed on December 31, 2021 and which was co-pending with the instant application. That application in turn claimed benefit of prior-filed application No. 16/827625 (now U.S. Patent 11,297,151), filed on March 25, 2020, which claimed benefit of prior-filed application No. 15/821543 (now U.S. Patent 10,601,937), filed on November 22, 2017. The instant application is entitled to the priority date of November 22, 2017.
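For readers less familiar with the claim language at issue, the three disputed limitations describe a recognizable processing pipeline. The following is a purely illustrative sketch of such a pipeline; every name, the keyword-based "data model," and the sample message are hypothetical and are not drawn from the application, Rodriguez, or Lee:

```python
# Hypothetical sketch of the three disputed claim 2 limitations:
# "characterizing", "analyzing", and "associating". Illustrative only.

# A toy "data model stored in a repository": patterns of attribute data
# mapped (via position) to classification values.
DATA_MODEL = {
    frozenset({"refund", "charged"}): "payment_issue",
    frozenset({"broken", "defective"}): "quality_issue",
}

def characterize(message: str) -> set[str]:
    """'Characterizing' the message to identify a subset of attributes
    (here, simply the normalized words it contains)."""
    return {w.strip(".,!?").lower() for w in message.split()}

def analyze(attributes: set[str]) -> list[tuple[frozenset, set[str]]]:
    """'Analyzing' the attributes by matching patterns of attribute data
    against the data model to form subsets of matched attribute data."""
    return [(pattern, attributes & pattern)
            for pattern in DATA_MODEL
            if attributes & pattern]

def associate(matches: list[tuple[frozenset, set[str]]]) -> dict[str, set[str]]:
    """'Associating' a classification value with each subset of matched data."""
    return {DATA_MODEL[pattern]: matched for pattern, matched in matches}

msg = "I was charged twice and want a refund for the broken item."
classes = associate(analyze(characterize(msg)))
print(classes)  # both payment_issue and quality_issue subsets matched
```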
Double Patenting

The non-statutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A non-statutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on non-statutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA.

A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms.
The filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 2, 10, and 18 are rejected on the ground of non-provisional non-statutory anticipatory-type double patenting as being unpatentable over claims 1 and 13 of U.S. Patent 11,765,248. Although some of the conflicting claims are not identical, they are not patentably distinct from each other because both sets of claims are directed to the same invention. This is a non-provisional non-statutory anticipatory-type double patenting rejection since the claims directed to the same invention have been patented.

In regard to claim 2:

Application 18/369789, claim 2: A method comprising: receiving data representing electronic messages into an entity computing system associated with an electronic messaging account; parsing components of the electronic message; characterizing the electronic messages to identify subsets of attributes; analyzing the subsets of attributes by matching patterns of attribute data against a data model stored in a repository to form subsets of matched attribute data; associating a classification value for each subset of matched attribute data; determining a response for an electronic message as a function of the classification value for a subset of matched attribute data; and causing a computing device to transmit a response electronic message.

U.S. Patent 11,765,248, claim 13: A system, comprising: a memory configured to store data; and a processor configured to receive the data representing electronic messages into an entity computing system associated with an electronic messaging account, to determine one or more components of the electronic message and respective component characteristic values as attributes, wherein the one or more component characteristics are each represented by a component characteristic value comprising data representing one or more of a language, a word, and a topic specifying a product or a service, a quality issue, or a payment issue, to characterize the electronic message based on the one or more component characteristics during a first time interval as associated with a dataset formed by a data model, wherein to characterize the electronic message determines at least one classification value that specifies generation of a responsive electronic message, to match a subset of patterns of data from the electronic message against the dataset being associated with a likelihood that a specific pattern of data causes a response implemented in the responsive electronic message, to analyze a frequency with which response electronic messages based on an electronic message being associated with the dataset, to predict a value representing a likelihood of a response being generated based on the frequency, and the one or more component characteristics, wherein to predict the value further comprises retrieving a first threshold value from data storage against which to compare with a first value to classify the electronic message, wherein the first value represents a probability derived from the one or more component characteristics, to compare the first threshold value to the first value, to classify the first value as the predicted value, to classify the electronic message for a response as a classified message, to cause a computing device to transmit a response electronic message, the processor is further configured to characterize the electronic message to form the data model, to identify a pattern of data against which to compare the one or more component characteristics, and to match a pattern of data to a subset of component characteristics to determine the dataset.

It is clear that all of the elements of the instant application 18/369789 (herein '789) claim 2 are to be found in U.S. Patent 11,765,248 (herein '248) claim 13 (as the instant application '789 claim 2 fully encompasses Patent '248 claim 13). The difference between '789 claim 2 and '248 claim 13 lies in the fact that the '248 claim includes many more elements and is thus much more specific. Thus the invention of claim 13 of the '248 patent is in effect a "species" of the "generic" invention of '789 claim 2. It has been held that the generic invention is "anticipated" by the "species." See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since '789 claim 2 is anticipated by claim 13 of '248, it is not patentably distinct from '248 claim 13.

In regard to claim 10:

Application 18/369789, claim 10: A system comprising: a memory device configured to store executable instructions to predict one or more actions for electronic messages, and a processor configured to execute executable instructions; the processor configured to: receive data representing the electronic messages into an entity computing system associated with an electronic messaging account; parse components of the electronic message; characterize the electronic messages to identify subsets of attributes; analyze the subsets of attributes by matching patterns of attribute data against a data model stored in a repository to form subsets of matched attribute data; associate a classification value for each subset of matched attribute data; determine a response for an electronic message as a function of the classification value for a subset of matched attribute data; and cause a computing device to transmit a response electronic message.

U.S. Patent 11,765,248, claim 1: A system, comprising: a memory configured to store executable instructions; and a processor configured to receive data representing an electronic message including other data representing an item associated with an entity computing system associated with an electronic messaging account, to identify one or more component characteristics associated with one or more components of the electronic message, wherein the one or more component characteristics are each representative of a component characteristic value comprising data representing one or more of a language, a word, and a topic specifying a product or a service, a quality issue, or a payment issue, to characterize the electronic message based on the one or more component characteristics to classify the electronic message for a response as a classified message, to compare the first threshold value to the first value, and to classify the first value as a first classification value, wherein the first classification value represents a likelihood of an action, to match a subset of patterns of data from the electronic message against a data model being associated with a likelihood that a specific pattern of data causes a response implemented in the responsive electronic message, wherein the processor is further configured to characterize the electronic message to determine at least one classification value that specifies generation of a responsive electronic message, to retrieve a first threshold value from data storage against which to compare with a first value to classify the electronic message, wherein the first value represents a probability derived from the one or more component characteristics, wherein the data model includes at least patterns of data corresponding to the one or more component characteristics, to cause a computing device including a user interface to perform an action to facilitate the response to the classified message, and to present a user input on the user interface configured to accept a data signal to initiate the action.

It is clear that all of the elements of the instant application 18/369789 (herein '789) claim 10 are to be found in U.S. Patent 11,765,248 (herein '248) claim 1 (as the instant application '789 claim 10 fully encompasses Patent '248 claim 1). The difference between '789 claim 10 and '248 claim 1 lies in the fact that the '248 claim includes many more elements and is thus much more specific. Thus the invention of claim 1 of the '248 patent is in effect a "species" of the "generic" invention of '789 claim 10. It has been held that the generic invention is "anticipated" by the "species." See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since '789 claim 10 is anticipated by claim 1 of '248, it is not patentably distinct from '248 claim 1.

In regard to claim 18:

Application 18/369789, claim 18 (New): A non-transitory computer readable medium storing instructions that when executed by one or more processors perform a method, the method comprising: receiving data representing electronic messages into an entity computing system associated with an electronic messaging account; parsing components of the electronic message; characterizing the electronic messages to identify subsets of attributes; analyzing the subsets of attributes by matching patterns of attribute data against a data model stored in a repository to form subsets of matched attribute data; associating a classification value for each subset of matched attribute data; determining a response for an electronic message as a function of the classification value for a subset of matched attribute data; and causing a computing device to transmit a response electronic message.

U.S. Patent 11,765,248, claim 13: reproduced in full in the claim 2 comparison above.

It is clear that all of the elements of the instant application 18/369789 (herein '789) claim 18 are to be found in U.S. Patent 11,765,248 (herein '248) claim 13 (as the instant application '789 claim 18 fully encompasses Patent '248 claim 13). The difference between '789 claim 18 and '248 claim 13 lies in the fact that the '248 claim includes many more elements and is thus much more specific. Thus the invention of claim 13 of the '248 patent is in effect a "species" of the "generic" invention of '789 claim 18. It has been held that the generic invention is "anticipated" by the "species." See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since '789 claim 18 is anticipated by claim 13 of '248, it is not patentably distinct from '248 claim 13.

Claims 2 and 10 are rejected on the ground of non-provisional non-statutory anticipatory-type double patenting as being unpatentable over claims 1 and 16 of U.S. Patent 11,297,151. Although some of the conflicting claims are not identical, they are not patentably distinct from each other because both sets of claims are directed to the same invention.
This is a non-provisional non-statutory anticipatory-type double patenting rejection since the claims directed to the same invention have been patented.

In regard to claim 2:

Application 18/369789, claim 2: A method comprising: receiving data representing electronic messages into an entity computing system associated with an electronic messaging account; parsing components of the electronic message; characterizing the electronic messages to identify subsets of attributes; analyzing the subsets of attributes by matching patterns of attribute data against a data model stored in a repository to form subsets of matched attribute data; associating a classification value for each subset of matched attribute data; determining a response for an electronic message as a function of the classification value for a subset of matched attribute data; and causing a computing device to transmit a response electronic message.

U.S. Patent 11,297,151, claim 1: A method comprising: receiving data representing an electronic message including with data representing an item associated with an entity computing system associated with an electronic messaging account; identifying one or more component characteristics associated with each of one or more components of the electronic message, wherein the one or more component characteristics are each represented by a component characteristic value comprising data representing one or more of a word, a phrase, a media type, and a channel type; characterizing the electronic message based on the one or more component characteristics to classify the electronic message for a response as a classified message, including: characterizing the electronic message determines at least one classification value that specifies generation of a responsive electronic message including clustering data; retrieving a first threshold value from data storage against which to compare with a first value to classify the electronic message, wherein the first value represents a probability derived from the one or more component characteristics; comparing the first threshold value to the first value; and classifying the first value as a first classification value, wherein the first classification value represents a likelihood of one or more actions to match a subset of patterns of data from the electronic message against a data model being associated with a likelihood that a specific pattern of data causes a response implemented in the responsive electronic message, wherein the data model includes at least patterns of data corresponding to the one or more component characteristics; causing presentation of a user input on a user interface configured to accept a data signal to initiate an action.

It is clear that all of the elements of the instant application 18/369789 (herein '789) claim 2 are to be found in U.S. Patent 11,297,151 (herein '151) claim 1 (as the instant application '789 claim 2 fully encompasses Patent '151 claim 1). The difference between '789 claim 2 and '151 claim 1 lies in the fact that the '151 claim includes many more elements and is thus much more specific. Thus the invention of claim 1 of the '151 patent is in effect a "species" of the "generic" invention of '789 claim 2. It has been held that the generic invention is "anticipated" by the "species." See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since '789 claim 2 is anticipated by claim 1 of '151, it is not patentably distinct from '151 claim 1.

In regard to claim 10:

Application 18/369789, claim 10: A system comprising: a memory device configured to store executable instructions to predict one or more actions for electronic messages, and a processor configured to execute executable instructions; the processor configured to: receive data representing the electronic messages into an entity computing system associated with an electronic messaging account; parse components of the electronic message; characterize the electronic messages to identify subsets of attributes; analyze the subsets of attributes by matching patterns of attribute data against a data model stored in a repository to form subsets of matched attribute data; associate a classification value for each subset of matched attribute data; determine a response for an electronic message as a function of the classification value for a subset of matched attribute data; and cause a computing device to transmit a response electronic message.

U.S. Patent 11,297,151, claim 16: An apparatus comprising: a memory including executable instructions; and a processor, the executable instructions executed by the processor to: receive data representing electronic messages into an entity computing system associated with an electronic messaging account; determine one or more components of the electronic message and respective component characteristic values as attributes, wherein the one or more component characteristics are each represented by a component characteristic value comprising data representing one or more of a language, a word, and a topic specifying a product or a service, a quality issue, or a payment issue; characterize the electronic message based on the one or more component characteristics during a first time interval as associated with a dataset formed by a data model, wherein characterizing the electronic message determines at least one classification value that specifies generation of a responsive electronic message based on clustering data to match a subset of patterns corresponding to the one or more component characteristics; match a subset of patterns of data from the electronic message against the dataset being associated with a likelihood that a specific pattern of data causes a response implemented in the responsive electronic message; analyze a frequency with which response electronic messages based on an electronic message being associated with the dataset; predict a value representing a likelihood of a response being generated based on the frequency, and the one or more component characteristics, wherein the predicting further comprises: retrieving a first threshold value from data storage against which to compare with a first value to classify the electronic message, wherein the first value represents a probability derived from the one or more component characteristics; compare the first threshold value to the first value; classify the first value as the predicted value, the predicted value being configured to predict generation of the responsive electronic message; classify the electronic message for a response as a classified message; and cause a computing device to transmit a response electronic message.

It is clear that all of the elements of the instant application 18/369789 (herein '789) claim 10 are to be found in U.S. Patent 11,297,151 (herein '151) claim 16 (as the instant application '789 claim 10 fully encompasses Patent '151 claim 16). The difference between '789 claim 10 and '151 claim 16 lies in the fact that the '151 claim includes many more elements and is thus much more specific. Thus the invention of claim 16 of the '151 patent is in effect a "species" of the "generic" invention of '789 claim 10. It has been held that the generic invention is "anticipated" by the "species." See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since '789 claim 10 is anticipated by claim 16 of '151, it is not patentably distinct from '151 claim 16.

Claim 2 is rejected on the ground of non-provisional non-statutory anticipatory-type double patenting as being unpatentable over claim 1 of U.S. Patent 10,601,937. Although some of the conflicting claims are not identical, they are not patentably distinct from each other because both sets of claims are directed to the same invention.
This is a non-provisional non-statutory anticipatory-type double patenting rejection since the claims directed to the same invention have been patented.

Application 18/369789, claim 2:

2. A method comprising:
receiving data representing electronic messages into an entity computing system associated with an electronic messaging account;
parsing components of the electronic message;
characterizing the electronic messages to identify subsets of attributes;
analyzing the subsets of attributes by matching patterns of attribute data against a data model stored in a repository to form subsets of matched attribute data;
associating a classification value for each subset of matched attribute data;
determining a response for an electronic message as a function of the classification value for a subset of matched attribute data; and
causing a computing device to transmit a response electronic message.

U.S. Patent 10,601,937, claim 1:

1. A method comprising:
receiving data representing an electronic message including with data representing an item associated with an entity computing system associated with an electronic messaging account;
identifying one or more component characteristics associated with each of one or more components of the electronic message, wherein the one or more component characteristics are each represented by a component characteristic value comprising data representing one or more of a language, a word, and a topic specifying a product or a service, a quality issue, or a payment issue;
characterizing the electronic message based on the one or more component characteristics to classify the electronic message wherein: characterizing the electronic message determines at least one classification value that specifies generation of a responsive electronic message;
retrieving a first threshold value from data storage against which to compare with a first value to classify the electronic message, wherein the first value represents a probability derived from the one or more component characteristics;
comparing the first threshold value to the first value; and
classifying the first value as a first classification value, wherein the first classification value represents a likelihood of an action,
matching a subset of patterns of data from the electronic message against a data model being associated with a likelihood that a specific pattern of data causes a response implemented in the responsive electronic message wherein the data model includes at least patterns of data corresponding to the one or more component characteristics;
causing a computing device including a user interface to perform an action to facilitate the response to the classified message; and
presenting a user input on the user interface configured to accept a data signal to initiate the action.

It is clear that all of the elements of the instant application 18/369789 (herein ‘789) claim 2 are to be found in U.S. Patent 10,601,937 (herein ‘937) claim 1 (as the instant application ‘789 claim 2 fully encompasses Patent ‘937 claim 1). The difference between ‘789 claim 2 and ‘937 claim 1 lies in the fact that the ‘937 claim includes many more elements and is thus much more specific. Thus the invention of claim 1 of the ‘937 patent is in effect a “species” of the “generic” invention of ‘789 claim 2. It has been held that the generic invention is “anticipated” by the “species.” See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since ‘789 claim 2 is anticipated by claim 1 of ‘937, it is not patentably distinct from ‘937 claim 1.

35 USC § 101 Analysis

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 2 – 21 are directed to statutory subject matter and are not rejected under 35 U.S.C. 101 as being directed to a judicial exception.
The claimed subject matter is integrated into a practical application under Prong 2 of the Step 2A analysis described in MPEP 2106.04(d). The claims are directed to non-abstract improvements in computer-related technology. A claim is non-statutory when it is directed to a judicial exception (e.g., mathematical concepts, mental processes, or certain methods of organizing human activity) without significantly more. The claimed invention is not directed to a judicial exception. Instead, the claimed invention is directed to a technological improvement that predicts an action based on content in electronic messages through the deployment of a computerized infrastructure that uses data modeling to classify the incoming electronic message so that a receiving computing device can perform an action or facilitate a response. The ordered steps of the claimed invention provide a specific improvement for message processing and management within a business enterprise: components of the electronic message are parsed; the electronic messages are characterized to identify subsets of attributes; the subsets of attributes are analyzed by matching patterns of attribute data against a data model stored in a repository to form subsets of matched attribute data; and a classification value is associated with each subset of matched attribute data. Finally, a response for an electronic message is determined as a function of the classification value for a subset of matched attribute data, and a computing device is caused to transmit a response electronic message.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2 – 6, 10 – 14, and 18 – 19 are rejected under 35 U.S.C. 103 as being unpatentable over Rodriguez et al. (U.S. 2018/0367484 A1; herein referred to as Rodriguez) in view of Lee et al. (U.S. 2018/0253659 A1; herein referred to as Lee).

In regard to claim 2, Rodriguez teaches A method comprising (see ¶ [0004] “. . . a computer-implemented method to provide suggested items includes causing a chat interface to be displayed by a first user device, where the chat interface is generated by a messaging application, and the chat interface is configured to display one or more messages in a chat conversation. The one or more messages are provided by a plurality of user devices participating in the chat conversation over a network. The method causes an embedded interface to be displayed associated with the chat interface, where the embedded interface is provided by a first embedded application that executes at least in part on the first user device in association with the messaging application.
The method determines that a suggestion event has occurred in association based on received data that indicates that a particular event has occurred at one or more of the plurality of user devices that are participating in the chat conversation, where the one or more of the plurality of user devices are different than the first user device. The method obtains one or more suggested response items based on the suggestion event, and causes the one or more suggested response items to be displayed by the first user device. . . .”): receiving data representing electronic messages (e.g. chat messages) into an entity computing system (see Fig. 1, ¶ [0074] “. . . messaging application 103 may be an instant messaging application, a social network application, an email application, a multimedia messaging application, and the like. For example, if the messaging application is an instant messaging application, messages may be received as part of an instant messaging communication between a particular user 125a and one or more other users 125 of participating devices, e.g., in a messaging session (e.g., chat, group, or “chat conversation”) having two or more participants, etc. A chat, or chat conversation, is a messaging session in which multiple participating users communicate messages (e.g., including various types of content data) with each other. In some implementations, users may send messages to other users by inputting messages into a chat conversation implemented by a messaging application. In some implementations, users may send messages to particular other users by messaging a phone number (e.g., when the messaging application 103 works over SMS, or another messaging application that utilizes phone numbers) or selecting a recipient user from a contacts list (e.g., when the messaging application 103 works over rich communications services (RCS) or another chat interface) . . .”) associated with an electronic messaging account (e.g.
via a messaging application) (see Fig. 2, ¶ [0100] “. . . the user input can be received in the chat interface, e.g., text commands input as messages or selection of interface elements in the chat interface. Input received in the chat interface can be processed and/or conveyed from the messaging application to the embedded application. In some implementations, the embedded application can have access to chat conversation information (e.g., user names or chat identities, user icons, etc.) and/or access to user profile information (user name, profile picture, chat obfuscated ID) which can allows for a personalized experience. In some implementations, the embedded application does not have access to the chat stream (e.g., the chat messages input in the chat conversation). In other implementations, the embedded application can have full access to the chat stream, and/or the embedded application can request higher-level access to chat input (to be permitted by the user), and/or can be provided summaries by the messaging application of chat messages input by chat users in the chat conversation. In additional examples, the embedded application can directly read the input chat messages from a chat conversation database associated with the chat conversation (e.g., stored on the user device or a server), and/or can receive chat messages and other user input in the chat interface via a server or bot. . . “); parsing components of the electronic message (see ¶ [0108] “. . . the first embedded application can process content data from the chat conversation (e.g., parse and/or otherwise process chat messages provided by the messaging application) to present output data in the embedded interface, and/or an embedded interface itself, that is contextually relevant to one or more topics mentioned in the associated chat conversation. . . .”) ; characterizing the electronic messages to identify subsets of attributes (e.g. types of content) (see ¶ [0116] “ . . . 
Such chat messages can include themes, text, images, videos, or other type of content data resulting from editing performed by the user in the embedded application, web content snippets edited by a user in the embedded application, etc. In some examples, the attribution or name of the embedded application, as displayed in the chat interface of other chat devices, can be selectable by user input, and if selected, causes the embedded application to open to edit the same shared content data or edit another content data item . . .”); analyzing the subsets of attributes by matching patterns of attribute data against a data model stored in a repository to form subsets of matched attribute data (e.g. suggested events) (see ¶ [0262] “ . . . , chat messages can be parsed and one or more predetermined topics, words or keywords, and/or phrases can be detected to determine a suggestion event (e.g., “go out to eat,” “let's invite User4,” “let's watch MovieA,” a particular name, address, location, etc.). In some implementations, a machine learning model can be trained in a training stage with synthetic or actual training data and, in an inference stage used in method 700, can be used to process a set of chat messages to predict if a suggestion event has occurred. For example, the model can predict if any particular user actions (e.g., commands or further messages) are likely to be initiated or provided by users based on the content of the set of chat messages, and if such commands are within a threshold probability of occurrence, the input of the set of chat messages (e.g., the input of the last chat message of the set) can be considered a suggestion event. . . .” see ¶ [0470] “ . . . the training data may include any type of data such as text, images, audio, video, etc. Training data may be obtained from any source, e.g., a data repository specifically marked for training, data for which permission is provided for use as training data for machine-learning, etc. 
In implementations where one or more users permit use of their respective user data to train the trained model, training data may include such user data. In implementations where users permit use of their respective user data, the data may include permitted data such as images (e.g., photos or other user-generated images), communications (e.g., e-mail; chat data such as text messages, voice, video, etc.), documents (e.g., spreadsheets, text documents, presentations, etc.) . . “); causing a computing device to transmit a response electronic message (see ¶ [0006] “ . . . obtaining one or more suggested response items based on the suggestion event includes determining at least one suggested response item of the one or more suggested response items based on at least one of: one or more predefined associations of the at least one suggested response item with the suggestion event; a model trained with particular suggestion events and responses input in response to the particular suggestion events; and one or more of a plurality of rules and objectives used in a game provided by the embedded application. In some implementations, the method further includes receiving user input indicative of user selection of a selected suggested response item from the one or more suggested response items; and outputting a selected message in the chat conversation, the selected message associated with the selected suggested response item. . . .”; see ¶ [0328] “ . . . the method continues to block 732, where the messaging application outputs selected message(s) corresponding to the selected message item to the chat conversation in the chat interface. In some implementations, multiple messages can correspond to a single message item. 
In some examples, the selected message(s) can be displayed in the chat interface of the (first) device receiving the user selection of the message item, and the selected message(s) can be transmitted over the network to the chat devices participating in the chat conversation and displayed in the chat interfaces of those devices. In some implementations, the selected message(s) can be transmitted to a particular subset of the chat devices for display in their associated chat interfaces, e.g., to member devices in the same embedded session as the first device, to devices of particular chat users based on user selection or preferences, to devices of particular user statuses or roles in the embedded session, to devices on the same team or opposing team in a game of an embedded session, etc. . . .”). Rodriguez fails to teach but Lee teaches associating a classification value (see ¶ [0063] “ . . . message management computing platform 110 and/or machine learning engine 112e may use one or more machine learning models 112g to output scores and/or classifications that message management computing platform 110 may use to determine whether message management computing platform 110 will perform one or more automated message management actions. Such scores and/or classifications may include at least a message priority score, an automatic response classification, a category classification, and/or a group matching classification . . “) for each subset of matched attribute data (see ¶ [0067] “ . . . Machine learning engine 112e may classify a message as fitting one or more types of automatic response classifications. In some embodiments, machine learning engine 112e may output, for each message it analyzes, discrete value(s) indicating one or more classifications and optional confidence levels for the one or more classifications. 
For example, machine learning engine 112e may classify a message as an opportunity to automatically generate an “away message” response, a “please follow up later” response, and/or a “set up a meeting” response. Continuing the example, certain combinations of input features, such as scheduling information indicating the user is away, a high priority score, a certain identity of the sender, and/or the user as sole recipient (e.g., no other users in the “to:” or “cc:” fields in the case of an email) may tend to indicate an “away message” response classification with a high confidence (e.g., a response indicating the user is away but will reply soon). Similarly, certain combinations of input features, such as scheduling information indicating the user is busy and/or a low priority score, may tend to indicate a “please follow up later” response classification (e.g., a response requesting the sender to try messaging again later when the user is less busy). As another example, certain combinations of input features, such as keywords in the message including “meeting,” “availability,” and/or “schedule” may tend to indicate a “set up a meeting” response classification (e.g., a response indicating the user's availability and a request to select a time for a meeting . . .”); determining a response for an electronic message as a function of the classification value (see ¶ [0065] “ . . . Referring to FIG. 2D, at step 213, message management computing platform 110 and/or machine learning engine 112e may determine an automatic response classification for some or all of the messages retrieved at step 207 (e.g., for only the retrieved messages tagged as “unresponded”). In some embodiments, machine learning engine 112e may use different machine learning models 112g for generating the priority score, the automatic response classification, and other scores and/or classifications. 
In some embodiments, the output of one machine learning model 112g may be used as input for another machine learning model 112g. For example, machine learning engine 112e may accept, as input to the machine learning model 112g for generating an automatic response classification, the priority score outputted by another machine learning model 112g. Thus, for example, machine learning engine 112e may be more likely to recommend sending automatic responses to higher priority messages. . . .”) for a subset of matched attribute data (see ¶¶ [0074-0075] “ . . . At step 219, message management computing platform 110 and/or message server module 112a may generate automatic responses to messages based on the automatic response categorizations determined in step 213. In some embodiments, message management computing platform 110 and/or message server module 112a may generate automatic responses when a confidence associated with the automatic response categorization is above a certain threshold. Automatic responses may include pre-configured content and/or content taken from previous responses sent by a user. In some embodiments, message management computing platform 110 and/or message service module may access a scheduling system 130 to retrieve availability information for generating an automatic response that includes such information. In some embodiments, message management computing platform 110 and/or message server module 112a may transmit an automatic response to a user for review before sending the automatic response . .
.”); It would have been obvious to one with ordinary skill in the art before the effective filing date of the applicant’s application to incorporate a system and method for providing automated message management functions including generating an automatic response to a message that is categorized and classified by a model that has input groups of messages, as taught by Lee, into a system and method to enable a messaging application to generate a chat interface for enabling a user to participate in a network chat session and upon receipt of certain chat events or topics, suggested response items can be generated by the messaging application, as taught by Rodriguez. Such incorporation provides a dynamic model for automatically providing response content over multiple message platforms. In regard to claim 3, the combination of Rodriguez and Lee teaches wherein parsing the components (see Rodriguez ¶ [0108] as described for the rejection of claim 2 and is incorporated herein) comprises: ingesting data representing the electronic messages (see Rodriguez ¶ [0103] “ . . . the content data is received by the first messaging application of the first device, which provides the content data to the first embedded application. . . .” see Rodriguez ¶ [0406] “ . . . A bot may be implemented as a computer program or application (e.g., a software application) that is configured to interact with one or more users (e.g., any of the users 125a-n) via messaging application 103a/103b to provide information or to perform specific actions within the messaging application 103. As one example, an information retrieval bot may search for information on the Internet and present the most relevant search result within the messaging app . . .”); and applying a natural language processing algorithm to the data representing the electronic messages to parse the components (see Rodriguez ¶¶ [0411-0412] “ . . .
a bot may use a conversational interface, such as a chat interface, to use natural language to interact conversationally with a user. In certain embodiments, a bot may use a template-based format to create sentences with which to interact with a user, e.g., in response to a request for a restaurant address, using a template such as “the location of restaurant R is L.” In certain cases, a user may be enabled to select a bot interaction format, e.g., whether the bot is to use natural language to interact with the user, whether the bot is to use template-based interactions, etc. In cases in which a bot interacts conversationally using natural language, the content and/or style of the bot's interactions may dynamically vary based on one or more of: the content of the conversation determined using natural language processing, the identities of the users in the conversations, and one or more conversational contexts (e.g., historical information on the user's interactions, connections between the users in the conversation based on a social graph), external conditions (e.g., weather, traffic), the user's schedules, related context associated with the users, and the like. In these cases, the content and style of the bot's interactions is varied based on only such factors for which users participating in the conversation have provided consent . . .”). In regard to claim 4, the combination of Rodriguez and Lee teaches further comprising: detecting the response electronic message is transmitted (see Rodriguez ¶ [0240] “ . . . Suggested response items (also referred to as “suggested items” herein) may be generated and provided in the chat interface and/or the embedded interface for selection by a user in a variety of contexts. Suggested response items may be generated and provided to the user automatically, upon consent from the user and one or more other users that sent and/or received the image . . 
.”); modifying a value representing a frequency responsive to include the response electronic message (see Rodriguez ¶¶ [0243-0244] “ . . . A machine learning model can be created, based on training data, prior to receiving a suggestion event for which suggested response items are to be generated, so that upon receiving the indication of an suggestion event, suggested response items can be generated using the existing model. Machine-learning models may be trained using synthetic data or test data, e.g., data that is automatically generated by a computer, with no use of user information. Synthetic data can be based on simulated events occurring in embedded applications and embedded sessions, and responsive commands and messages, where no human users are participants. In some implementations, machine-learning models may be trained using sample data or training data, e.g., commands and messages actually provided by users in response to embedded application and session events and who consent to provide such data for training purposes. Training data is treated before use to remove user identifiers and other user-related information. In some implementations, machine-learning models may be trained based on sample data for which permissions to utilize user data for training have been obtained expressly from users. After the machine learning model is trained, a newly-occurring set of data can be input to the model and the model can provide suggested items based on its training with the sample data. Based on the sample data, the machine-learning model can predict messages and commands to occurring events in an embedded session, which may then be provided as suggested response items. User interaction is enhanced, e.g., by reducing burden on the user to determine a command or compose a message to an application event, by providing a choice of response items that are customized based on the occurring event and the user's context.
Some examples of machine-learning application and machine-learning features are described below with reference to FIG. 12. In some examples, when users provide consent, suggested response items may be customized based on the user's prior activity, e.g., earlier messages provided in a conversation, messages in different conversations, earlier commands provided by the user to the embedded application or to a different embedded application program, etc. For example, such activity may be used to determine an appropriate suggested item for the user, e.g., a playful message or command, a formal message, etc. based on the user's interaction style. In another example, when the user specifies one or more user-preferred languages and/or locales, messaging application 103a/103b may generate suggested items in the user's preferred language. In various examples, suggested items may be text messages, images, multimedia, encoded commands, etc. . . .”); and updating the data model to recalibrate the likelihood of transmitting the response electronic message for a subsequent electronic message associated with the subset of matched attribute data (see Rodriguez ¶ [0245] “ . . . machine learning may be implemented on one or more components of environment 100, e.g., suggestion server 156, messaging server 101, client devices 115, either or both messaging server 101 and client devices 115, etc. In some implementations, a simple machine learning model may be implemented on client device 115 (e.g., to permit operation of the model within memory, storage, and processing constraints of client devices) and a complex machine learning model may be implemented on messaging server 101, suggestion server 156, and/or a different server. If a user does not provide consent for use of machine learning techniques, such techniques are not implemented. In some implementations, a user may selectively provide consent for machine learning to be implemented only on a client device 115.
In these implementations, machine learning may be implemented on client device 115, such that updates to a machine learning model or user information used by the machine learning model are stored or used locally, and are not shared to other devices such as messaging server 101, servers 135 and 150-156, or other client devices 115 . . .”) In regard to claim 5, the combination of Rodriguez and Lee teaches further comprising: selecting one of a subset of actions to be performed including transmitting the response electronic message (see Rodriguez ¶ [0110] “ . . . the embedded application can be a “lightweight” (e.g., reduced-feature or reduced-functionality) version of a full application that executes on the user device or on a different devices. The lightweight version is a version of the full application that requires less storage space, memory to execute, etc., can be executed without launching the full application, and may have a subset (e.g., fewer than the full set) of the features and/or functions of the full application. In some examples, a full game application can be installed on a different device of the first user's (e.g., a desktop or laptop computer) and a lightweight version of the game application can be executed as an embedded application on the user device. Such a lightweight game application can allow the first user to provide user input in the embedded interface to change game settings or game data relating to the first user or the first user's account used in the full game application. Such changes can include managing game resources of the first user used in the full game application, e.g., organizing inventory of game items, buying or selling items within the game, allocating points for particular game abilities of the first user's account, changing preferences or display settings, perform simple game actions, etc. . .”). 
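Claim 4, as mapped above, adds a feedback loop: detect that a response electronic message was transmitted, modify a frequency value, and update the data model so the likelihood is recalibrated for a subsequent message with the same matched attribute subset. A minimal sketch of that loop follows; the counters and the add-one smoothing are illustrative assumptions, not the claimed or cited implementation:

```python
# Illustrative sketch of the claim 4 feedback loop: observe whether a
# response message was transmitted for a given matched attribute
# subset, update a frequency count, and recalibrate the stored
# likelihood used for subsequent messages. All names are hypothetical.

from collections import defaultdict

class DataModel:
    def __init__(self):
        self.responses = defaultdict(int)  # responses sent, per attribute subset
        self.messages = defaultdict(int)   # messages seen, per attribute subset

    def observe(self, attribute_subset: str, responded: bool) -> None:
        self.messages[attribute_subset] += 1
        if responded:  # "detecting the response electronic message is transmitted"
            # "modifying a value representing a frequency"
            self.responses[attribute_subset] += 1

    def likelihood(self, attribute_subset: str) -> float:
        # Recalibrated likelihood of transmitting a response for a
        # subsequent message with this matched attribute subset
        # (add-one smoothing so unseen subsets default to 0.5).
        n = self.messages[attribute_subset]
        return (self.responses[attribute_subset] + 1) / (n + 2)

model = DataModel()
for responded in (True, True, False):
    model.observe("quality_issue", responded)
print(model.likelihood("quality_issue"))  # -> 0.6
```

Each observation nudges the stored likelihood, which is the recalibration behavior the examiner maps onto Rodriguez's model retraining in ¶¶ [0243-0245].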
In regard to claim 6, the combination of Rodriguez and Lee teaches wherein associating a classification value (see Lee ¶ [0063] as described for the rejection of claim 2 and is incorporated herein) comprises: clustering data to match a pattern of attribute data to identify the subset of matched attribute data (see Lee ¶¶ [0068-0069] “ . . . machine learning engine 112e may use unsupervised machine learning techniques (e.g., clustering algorithms) to find potential new categories. Such new potential categories may be suggested to a user, as further discussed below. At step 215, message management computing platform 110 and/or machine learning engine 112e may compare messages to other messages and/or groups of messages to determine similarities between the message and the other messages and/or groups of messages. For example, in the case of email, the email may be compared to other emails to determine a similarity and group the emails into a single thread. In some embodiments, message management computing platform 110 and/or machine learning engine 112e may compare the message itself, metadata associated with the message, the outputs of analysis steps 208-211, the priority score generated in step 212, and/or information derived therefrom to comparable information for the other messages or groups of messages in order to determine a similarity. For example, messages sharing a certain number of keywords, a topic, a sentiment, senders/receivers, and/or other information may be designated as similar messages. Additionally or alternatively, the similarity between the message and the other message(s) may be determined using the clusters optionally generated in step 214. For example, messages appearing in the same clusters may be indicated as similar messages. 
Based on message management computing platform 110 and/or machine learning engine 112e indicating message similarity, message management computing platform 110 may group the messages together (e.g., into threads, topics, categorizations, and the like) . . .”). The motivation to combine Lee with Rodriguez is described for the rejection of claim 2 and is incorporated herein. Additionally, Lee clusters similar messages for determining classification based on the message attributes. In regard to claim 10, Rodriguez teaches A system (see ¶ [0004] as described for the rejection of claim 2 and is incorporated herein) comprising: a memory device configured to store executable instructions to predict one or more actions for electronic messages (see ¶ [0010] “ . . . a system includes a memory and at least one processor configured to access the memory and configured to perform operations including causing a chat interface to be displayed by a first user device, where the chat interface is generated by a messaging application. The chat interface is configured to display one or more messages in a chat conversation, where the one or more messages are provided by a plurality of user devices participating in the chat conversation over a network. The operations include causing an embedded interface to be displayed associated with the chat interface, where the embedded interface is provided by a first embedded application executing in association with the messaging application, and the first embedded application executes at least in part on the first user device. The operations include determining that a suggestion event has occurred in association with use of the first embedded application based on at least one of: user input received by the embedded interface, and event information from the first embedded application indicating that the suggestion event has occurred in the first embedded application.
The operations include obtaining one or more suggested response items responsive to the suggestion event, and causing to be displayed the one or more suggested response items by the first user device. . . .”), and a processor configured to execute executable instructions (see ¶ [0467] “ . . . device 1200 includes a processor 1202, a memory 1204, and input/output (I/O) interface 1206. Processor 1202 can be one or more processors and/or processing circuits to execute program code and control basic operations of the device 1200. A “processor” includes any suitable hardware and/or software system, mechanism or component that processes data, signals or other information. A processor may include a system with a general-purpose central processing unit (CPU), multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a particular geographic location, or have temporal limitations. For example, a processor may perform its functions in “real-time,” “offline,” in a “batch mode,” etc. Portions of processing may be performed at different times and at different locations, by different (or the same) processing systems. A computer may be any processor in communication with a memory. . . .”); the processor configured to: receive data representing the electronic messages (e.g. chat messages) into an entity computing system (see Fig. 1, ¶ [0074] as described for the rejection of claim 2 and is incorporated herein) associated with an electronic messaging account (e.g. via a messaging application) (see Fig. 2, ¶ [0100] as described for the rejection of claim 2 and is incorporated herein); parse components of the electronic message (see ¶ [0108] as described for the rejection of claim 2 and is incorporated herein); characterize the electronic messages to identify subsets of attributes (e.g.
types of content) (see ¶ [0116] as described for the rejection of claim 2 and is incorporated herein); analyze the subsets of attributes by matching patterns of attribute data against a data model stored in a repository to form subsets of matched attribute data (e.g. suggested events) (see ¶ [0262] as described for the rejection of claim 2 and is incorporated herein); cause a computing device to transmit a response electronic message (see ¶ [0006] and ¶ [0328] as described for the rejection of claim 2 and is incorporated herein); Rodriguez fails to teach but Lee teaches associate a classification value (see ¶ [0063] as described for the rejection of claim 2 and is incorporated herein) for each subset of matched attribute data (see ¶ [0067] as described for the rejection of claim 2 and is incorporated herein); determine a response for an electronic message as a function of the classification value (see ¶ [0065] as described for the rejection of claim 2 and is incorporated herein) for a subset of matched attribute data (see ¶¶ [0074-0075] as described for the rejection of claim 2 and is incorporated herein). The motivation to combine Lee with Rodriguez is described for the rejection of claim 2 and is incorporated herein.
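For illustration, the message-similarity grouping Lee describes in ¶¶ [0068-0069] (messages sharing keywords, a topic, or cluster membership are grouped into a single thread) can be sketched in a few lines. This is a hypothetical sketch only: the tokenizer, the Jaccard similarity measure, the seed-comparison strategy, and the 0.3 threshold are all illustrative assumptions, not details taken from Lee or Rodriguez.

```python
# Hypothetical sketch of keyword-based message grouping in the spirit of
# Lee ¶¶ [0068-0069]; all names and the 0.3 threshold are assumptions.

def keyword_set(message: str) -> set:
    """Naive keyword extraction: lowercase tokens longer than 3 characters."""
    return {w for w in message.lower().split() if len(w) > 3}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: shared keywords over total distinct keywords."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def group_similar(messages, threshold=0.3):
    """Place each message in the first group whose seed message is
    sufficiently similar; otherwise start a new group (thread/topic)."""
    groups = []
    for msg in messages:
        ks = keyword_set(msg)
        for grp in groups:
            if jaccard(ks, keyword_set(grp[0])) >= threshold:
                grp.append(msg)
                break
        else:
            groups.append([msg])
    return groups
```

Messages sharing most of their keywords land in one group, loosely mirroring Lee's grouping of similar emails into a single thread; a real system would also use the richer signals Lee lists (metadata, priority scores, cluster assignments).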
In regard to claim 11, the combination of Rodriguez and Lee teaches wherein the processor configured to parse the components (see Rodriguez ¶ [0108] as described for the rejection of claim 2 and is incorporated herein) is further configured to: ingest data representing the electronic messages (see Rodriguez ¶ [0103], ¶ [0406] as described for the rejection of claim 3 and is incorporated herein); and apply a natural language processing algorithm to the data representing the electronic messages to parse the components (see Rodriguez ¶¶ [0411-0412] as described for the rejection of claim 3 and is incorporated herein).

In regard to claim 12, the combination of Rodriguez and Lee teaches the processor is further configured to: detect the response electronic message is transmitted (see Rodriguez ¶ [0240] as described for the rejection of claim 4 and is incorporated herein); modify a value representing a frequency responsive to include the response electronic message (see Rodriguez ¶¶ [0243-0244] as described for the rejection of claim 4 and is incorporated herein); and update the data model to recalibrate the likelihood of transmitting the response electronic message for a subsequent electronic message associated with the subset of matched attribute data (see Rodriguez ¶ [0245] as described for the rejection of claim 4 and is incorporated herein).

In regard to claim 13, the combination of Rodriguez and Lee teaches the processor is further configured to: select one of a subset of actions to be performed including transmitting the response electronic message (see Rodriguez ¶ [0110] as described for the rejection of claim 5 and is incorporated herein).

In regard to claim 14, the combination of Rodriguez and Lee teaches wherein the processor configured to associate a classification value (see Lee ¶ [0063] as described for the rejection of claim 2 and is incorporated herein) is further configured to: cluster data to match a pattern of attribute data to identify the subset of
matched attribute data (see Lee ¶¶ [0068-0069] as described for the rejection of claim 6 and is incorporated herein). The motivation to combine Lee with Rodriguez is described for the rejection of claim 6 and is incorporated herein.

In regard to claim 18, Rodriguez teaches A non-transitory computer readable medium storing instructions that when executed by one or more processors (see ¶ [0012] “ . . . , a non-transitory computer readable medium has stored thereon software instructions that, when executed by a processor, cause the processor to perform operations. . . .”) perform a method (see ¶ [0004] as described for the rejection of claim 2 and is incorporated herein), the method comprising: receiving data representing electronic messages (e.g. chat messages) into an entity computing system (see Fig. 1, ¶ [0074] as described for the rejection of claim 2 and is incorporated herein) associated with an electronic messaging account (e.g. via a messaging application) (see Fig. 2, ¶ [0100] as described for the rejection of claim 2 and is incorporated herein); parsing components of the electronic message (see ¶ [0108] as described for the rejection of claim 2 and is incorporated herein); characterizing the electronic messages to identify subsets of attributes (e.g. types of content) (see ¶ [0116] as described for the rejection of claim 2 and is incorporated herein); analyzing the subsets of attributes by matching patterns of attribute data against a data model stored in a repository to form subsets of matched attribute data (e.g.
suggested events) (see ¶ [0262] as described for the rejection of claim 2 and is incorporated herein); causing a computing device to transmit a response electronic message (see ¶ [0006] and ¶ [0328] as described for the rejection of claim 2 and is incorporated herein); Rodriguez fails to teach but Lee teaches associating a classification value (see ¶ [0063] as described for the rejection of claim 2 and is incorporated herein) for each subset of matched attribute data (see ¶ [0067] as described for the rejection of claim 2 and is incorporated herein); determining a response for an electronic message as a function of the classification value (see ¶ [0065] as described for the rejection of claim 2 and is incorporated herein) for a subset of matched attribute data (see ¶¶ [0074-0075] as described for the rejection of claim 2 and is incorporated herein). The motivation to combine Lee with Rodriguez is described for the rejection of claim 2 and is incorporated herein.

In regard to claim 19, the combination of Rodriguez and Lee teaches wherein parsing the components (see Rodriguez ¶ [0108] as described for the rejection of claim 2 and is incorporated herein) comprises: ingesting data representing the electronic messages (see Rodriguez ¶ [0103], ¶ [0406] as described for the rejection of claim 3 and is incorporated herein); and applying a natural language processing algorithm to the data representing the electronic messages to parse the components (see Rodriguez ¶¶ [0411-0412] as described for the rejection of claim 3 and is incorporated herein).

Claims 7 – 9, 15 – 17, and 20 – 21 are rejected under 35 U.S.C. 103 as being unpatentable over Rodriguez et al. (U.S. 2018/0367484 A1; herein referred to as Rodriguez) in view of Lee et al. (U.S. 2018/0253659 A1; herein referred to as Lee) as applied to claims 2 – 6, 10 – 14, and 18 – 19, in further view of Caballaro et al. (U.S. 2018/0129960 A1; herein referred to as Caballaro).
In regard to claim 7, the combination of Rodriguez and Lee fails to explicitly teach but Caballaro teaches further comprising: applying one or more machine learning algorithms to correlate vector data to a cluster of data, each cluster representing one of the subsets of attributes (see Caballaro ¶ [0039] “ . . . Classification is the correlation of an output to a given input (e.g., confidence score to the positive and negative signals). Classification may be performed using a predictor function that is constructed using a set of “training” data that includes an input (or feature) vector and an answer (or verification) vector. In particular embodiments, the predictor function is constructed using machine-learning (ML) algorithms trained using historical actions and past user responses, or data farmed from users by exposing them to various options and measuring the responses. As an example and not by way of limitation, ML classification algorithms may include support vector machine (SVM), Naive Bayes, Adaptive Boosting (AdaBoost), Random Forest, Gradient Boosting, K-means clustering, Density-based Spatial Clustering of Applications with Noise (DBSCAN), or Neural Network algorithms. In particular embodiments, the ML classifier algorithm may combine (e.g., through a dot product) the input vector with one or more weights to construct a predictor function to best fit the input vector to the answer vector. Although this disclosure describes particular ML classifiers with linear predictor functions, this disclosure contemplates any suitable ML classifier based on the classifier that provides the best performance (e.g., time or correlation between the input vector to the answer vector) . . .”).
It would have been obvious to one with ordinary skill in the art before the effective filing date of the applicant’s application to incorporate a system and method for identifying information about a user by calculating a confidence score computed using a machine learning classifier that applies a set of training data and operates on correlated vector data, as taught by Caballaro, into a system and method that enables a messaging application to generate a chat interface for a user to participate in a network chat session and, upon receipt of certain chat events or topics, to generate suggested response items, thereby providing automated message management functions including generating an automatic response to a message that is categorized and classified by a model that takes groups of messages as input, as taught by the combination of Rodriguez and Lee. Such incorporation provides better training data for generating the automatic response messages.

In regard to claim 8, the combination of Rodriguez, Lee, and Caballaro teaches further comprising: forming correlated datasets (see Caballaro ¶ [0025] “ . . . Data stores 164 may be used to store various types of information. In particular embodiments, the information stored in data stores 164 may be organized according to specific data structures. In particular embodiments, each data store 164 may be a relational, columnar, correlation, or other suitable database. . . .”), each correlated dataset being formed based on a set of clustered data (see Caballaro ¶ [0039] as described for the rejection of claim 7 and is incorporated herein). The motivation to combine Caballaro with the combination of Rodriguez and Lee is described for the rejection of claim 7. Additionally, Caballaro provides correlated datasets for efficient training of models.
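For illustration, the linear predictor function Caballaro ¶ [0039] describes (an input feature vector combined with one or more weights through a dot product, constructed to best fit the input vector to the answer vector) can be sketched as a minimal perceptron-style classifier. The update rule, learning rate, and training data below are illustrative assumptions and are not taken from Caballaro or the other cited references.

```python
# Hypothetical sketch of the dot-product linear predictor Caballaro
# ¶ [0039] alludes to; the perceptron update rule and all data here are
# illustrative assumptions, not taken from any cited reference.

def dot(ws, xs):
    """Combine weights and inputs through a dot product."""
    return sum(w * x for w, x in zip(ws, xs))

def train_perceptron(samples, epochs=100, lr=0.1):
    """samples: list of (feature_vector, label) pairs, label in {0, 1}.
    Learns weights (plus a trailing bias term) fitting inputs to answers."""
    n = len(samples[0][0])
    weights = [0.0] * (n + 1)  # last entry acts as the bias
    for _ in range(epochs):
        for xs, label in samples:
            pred = 1 if dot(weights, xs + [1.0]) > 0 else 0
            err = label - pred
            for i, x in enumerate(xs + [1.0]):
                weights[i] += lr * err * x  # nudge weights toward the answer
    return weights

def classify(weights, xs):
    """The raw dot product serves as a confidence score; its sign is the class."""
    score = dot(weights, xs + [1.0])
    return score, (1 if score > 0 else 0)
```

On linearly separable training data the weights converge, after which the score's magnitude can be read as a rough confidence, loosely analogous to Caballaro's confidence score; a production system would instead use one of the classifiers Caballaro actually lists (SVM, Naive Bayes, Gradient Boosting, and so on).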
In regard to claim 9, the combination of Rodriguez, Lee, and Caballaro teaches further comprising: associating a correlated dataset to a classification value (see Caballaro ¶ [0041] “ . . . a number of variables may be considered for both determining the weights for the predictor function and calculating the confidence score. Constructing the predictor function may be an optimization of a weighted function of the positive signals and negative signals of the users from the sample group to the verified results obtained from the sample group. As described above, the predictor function may include one or more weights or coefficients to the positive signals and negative signals. In particular embodiments, the signals from the sample group of users may be accessed to populate the values of the feature vector used to construct the predictor function. A feature vector is a vector of numerical “features” or independent variables that represent an output, in this case a probabilistic-based estimate of whether the identifying information for a particular endpoint should be used to communicate with the user. As an example and not by way of limitation, the features may correspond to observable signals that may be used to predict an outcome. The output vector of the ML classifier may be the confidence score and the output vector may be compared to the answer vector to train the predictor function of the machine-learning classifier. In particular embodiments, the feature vector of a particular user may be processed using the predictor function that is constructed using a set of training data, described above. The input vector may also include information about the user (e.g., demographics), and the value of the weights of the predictor function determined by the ML classifier may take this or other suitable information into account . . .”). The motivation to combine Caballaro with the combination of Rodriguez and Lee is described for the rejection of claim 7. 
Additionally, Caballaro uses the correlated datasets to calculate a confidence score for determining the automatic response.

In regard to claim 15, the combination of Rodriguez, Lee, and Caballaro teaches the processor is further configured to: apply one or more machine learning algorithms to correlate vector data to a cluster of data, each cluster representing one of the subsets of attributes (see Caballaro ¶ [0039] as described for the rejection of claim 7 and is incorporated herein). The motivation to combine Caballaro with the combination of Rodriguez and Lee is described for the rejection of claim 7 and is incorporated herein.

In regard to claim 16, the combination of Rodriguez, Lee, and Caballaro teaches the processor is further configured to: form correlated datasets (see Caballaro ¶ [0025] as described for the rejection of claim 7 and is incorporated herein), each correlated dataset being formed based on a set of clustered data (see Caballaro ¶ [0039] as described for the rejection of claim 8 and is incorporated herein). The motivation to combine Caballaro with the combination of Rodriguez and Lee is described for the rejection of claim 8.

In regard to claim 17, the combination of Rodriguez, Lee, and Caballaro teaches the processor is further configured to: associate a correlated dataset to a classification value (see Caballaro ¶ [0041] as described for the rejection of claim 9 and is incorporated herein). The motivation to combine Caballaro with the combination of Rodriguez and Lee is described for the rejection of claim 9.

In regard to claim 20, the combination of Rodriguez, Lee, and Caballaro teaches further comprising: applying one or more machine learning algorithms to correlate vector data to a cluster of data, each cluster representing one of the subsets of attributes (see Caballaro ¶ [0039] as described for the rejection of claim 7 and is incorporated herein).
The motivation to combine Caballaro with the combination of Rodriguez and Lee is described for the rejection of claim 7 and is incorporated herein.

In regard to claim 21, the combination of Rodriguez, Lee, and Caballaro teaches further comprising: forming correlated datasets (see Caballaro ¶ [0025] as described for the rejection of claim 7 and is incorporated herein), each correlated dataset being formed based on a set of clustered data (see Caballaro ¶ [0039] as described for the rejection of claim 8 and is incorporated herein). The motivation to combine Caballaro with the combination of Rodriguez and Lee is described for the rejection of claim 8.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure; it is listed on the PTO-892 accompanying this action.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES N FIORILLO whose telephone number is (571) 272-9909. The examiner can normally be reached Mon - Fri, 7:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John A. Follansbee can be reached on 571-272-3964. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JAMES N FIORILLO/Examiner, Art Unit 2444

Prosecution Timeline

Sep 18, 2023
Application Filed
May 28, 2025
Non-Final Rejection — §101, §103, §DP
Dec 01, 2025
Response Filed
Mar 13, 2026
Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602457
PREVENTING ACCIDENTAL PASSWORD DISCLOSURE
2y 5m to grant Granted Apr 14, 2026
Patent 12585739
IMAGE FORMING DEVICE TRANSMITTING DATA FOR DISPLAYING AUTHENTICATION CHANGING WEB PAGE
2y 5m to grant Granted Mar 24, 2026
Patent 12572631
System and Method for Watermarking Data for Tracing Access
2y 5m to grant Granted Mar 10, 2026
Patent 12562921
CERTIFICATE ENROLLMENT FOR SHARED NETWORK ELEMENT
2y 5m to grant Granted Feb 24, 2026
Patent 12554557
PRECISION GEOMETRY CLIENT FOR THIN CLIENT APPLICATIONS
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
86%
Grant Probability
99%
With Interview (+36.9%)
2y 12m
Median Time to Grant
Moderate
PTA Risk
Based on 444 resolved cases by this examiner. Grant probability derived from career allow rate.
