DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to Amendment filed 01/22/2026.
Claims 1, 9 and 17 have been amended, and new claims 21-23 have been added. Currently, claims 1-23 are pending.
Response to Arguments
Applicant's arguments filed 01/22/2026 have been fully considered but they are not fully persuasive.
Regarding Applicant’s amendments with respect to independent claims 1, 9 and 17, the amendments appear to reword or rearrange the previously recited limitations without specifying how a primary machine-learning model or at least one secondary machine-learning model (see claims 1 and 9) or a plurality of narrative machine-learning models (see claim 17) works, functions, or is trained in processing input and generating output (e.g., no details on how the primary machine-learning model determines whether one or more content items are associated with the one or more predefined influence operations, etc.). As such, those machine-learning models, recited at a high level of generality, amount to no more than mere instructions to implement or apply the abstract idea (e.g., mental process).
Regarding Applicant’s arguments (see Remarks, pages 10-11) that claim 1 does not recite a mental process or does not fall into the grouping of mental processes, Examiner respectfully disagrees.
Consider claim 1 as currently amended as follows:
1. (Currently Amended) A method for detecting influence operations, the method comprising:
generating, using a primary machine-learning model on each content item of a plurality of content items received from at least one internet source, a first indication that at least one content item of the plurality of content items is associated with one or more predefined influence operations, wherein the primary machine-learning model is trained to determine whether one or more content items are associated with the one or more predefined influence operations;
generating, using at least one secondary machine-learning model on the at least one content item, a second indication of one or more predefined diverse narratives that are associated with one or more content items of the at least one content item, wherein the at least one secondary machine-learning model is trained to determine whether the one or more content items are associated with the one or more predefined diverse narratives for advancing the one or more predefined influence operations; and
generating an output based on the second indication of the one or more predefined diverse narratives.
All of the underlined recitations/limitations as presented above can be performed in the human mind (i.e., mental processes). For instance, given a set of content items and a set of predefined influence operations and/or predefined narratives/topics, a human can determine whether one or more content items are associated with one or more predefined influence operations and/or predefined narratives/topics through observation, evaluation, judgment and opinion. Thus, claim 1 recites a mental process, i.e., an abstract idea.
Regarding Applicant’s arguments (see Remarks, pages 12-13) that even assuming, arguendo, amended claim 1 falls into the category of a mental process, amended claim 1 is still not directed to an abstract idea because amended claim 1 as a whole integrates the alleged judicial exception into a practical application (e.g., machine learning-assisted data classification), Examiner respectfully disagrees.
Referring to amended claim 1 as illustrated above, the portion of claim 1 that is not underlined can be interpreted as being directed to additional elements. These additional elements, including “using a primary machine-learning model on each content item of a plurality of content items received from at least one internet source”, “the primary machine-learning model is trained”, “using at least one secondary machine-learning model on the at least one content item”, and “the at least one secondary machine-learning model is trained”, are recited at a high level of generality without providing details on how a model works to generate its output or how it is trained to perform its function. They are directed to insignificant extra-solution activity (e.g., mere data gathering such as receiving content items from at least one internet source, receiving the at least one content item, etc.) and mere instructions for implementing or applying the abstract idea (e.g., a primary machine-learning model and at least one secondary machine-learning model). Therefore, claim 1 as amended does not recite any additional element that integrates the abstract idea into a practical application.
Regarding Applicant’s arguments (see Remarks, pages 12-13) that even assuming, arguendo, amended claim 1 is directed to the judicial exception of an abstract idea, amended claim 1 recites additional elements that are unconventional or otherwise more than what is well-understood, routine, conventional activity in the field, Examiner respectfully disagrees.
Referring to amended claim 1 as illustrated above, the portion of claim 1 that is not underlined can be interpreted as being directed to additional elements. These additional elements, including “using a primary machine-learning model on each content item of a plurality of content items received from at least one internet source”, “the primary machine-learning model is trained”, “using at least one secondary machine-learning model on the at least one content item”, and “the at least one secondary machine-learning model is trained”, are recited at a high level of generality without providing details on how a model works to generate its output or how it is trained to perform its function. They are directed to insignificant extra-solution activity (e.g., mere data gathering such as receiving content items from at least one internet source, receiving the at least one content item, etc.) and mere instructions for implementing or applying the abstract idea (e.g., a primary machine-learning model and at least one secondary machine-learning model). As explained above, the primary machine-learning model and the at least one secondary machine-learning model are at best the equivalent of merely adding the words “apply it” to the judicial exception. Therefore, claim 1 as amended does not recite additional elements that are unconventional or an inventive concept significantly more than the abstract idea.
Regarding Applicant’s arguments (see Remarks, pages 14-16), with respect to independent claim 1 and similarly applied to independent claims 9 and 17, that Smith fails to cure the deficiencies of Alizadeh because the functionality of the prediction of Smith is different from that of the second indication as recited in claim 1, Examiner respectfully disagrees.
Claim 1 broadly recites using a primary machine-learning model to generate a first indication of whether or not content item(s) are associated with any predefined influence operation(s) and using at least one secondary machine-learning model to generate a second indication of whether or not the at least one content item is associated with any predefined diverse narratives. However, claim 1 does not provide details on how the primary machine-learning model and the at least one secondary machine-learning model work and/or are trained together, and/or how the one or more predefined influence operations and the one or more predefined diverse narratives are generated and associated/related (e.g., whether a particular influence operation is associated with a particular set of diverse narratives). As broadly recited, the claimed invention cannot distinguish its recited “second indication” (i.e., an indication/determination/prediction of whether a content item is associated with one or more predefined diverse narratives) from the prediction of whether a given content item is related to the issue of gun control, abortion, foreign policy, etc. (see [0031]) as disclosed by Smith. The recited “predefined diverse narratives” can be broadly interpreted as any set of predefined classifications/categories/topics/concepts.
Regarding Applicant’s arguments (see Remarks, page 16) regarding new claims 21-23, these new claims are rejected in view of a new reference, Wang et al. (CN-103913721).
Claim Objections
Claims 17-20 and 23 are objected to because of the following informalities:
Regarding claim 17, the phrase “that are associated one or more content items” in line 7 should be “that are associated with one or more content items”.
Other dependent claims 18-20 and 23 are objected to as incorporating the informality of objected claim 17 upon which they depend.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea of labeling/tagging data without significantly more.
The claims recite an abstract idea of labeling/tagging content items based on broadly recited steps of determining and indicating, which are steps/concepts that can be performed in the human mind and/or with the aid of pencil and paper and fall within the mental processes grouping of abstract ideas. This judicial exception is not integrated into a practical application because the other additional elements, including generic computer components and common computer functionality (e.g., accessing, storing, displaying, etc.) and/or insignificant extra-solution activity (e.g., mere data gathering and outputting) for implementing the abstract idea, are not sufficient to integrate the abstract idea into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements include only generic/common computer components (e.g., memory, processor, program instructions, etc.) and generic/common computer functions (e.g., accessing, storing, displaying, etc.) and/or insignificant extra-solution activity (e.g., mere data gathering and outputting), which are not sufficient to amount to significantly more than the recited abstract idea.
Abstract idea analysis as follows:
Step 1:
According to the first part of the analysis, in the instant claims, claims 1-8 and 21 are directed to a method (i.e., a process), claims 9-16 and 22 are directed to a system (i.e., a machine), and claims 17-20 and 23 are directed to a method (i.e., a process). Thus, each of the claims falls within one of the four statutory categories (e.g., process, machine, manufacture or composition of matter).
Step 2a Prong 1 (claims 1, 9 and 17):
The following limitations recited in claims 1, 9, and 17 are abstract ideas that fall under mental processes:
generating, using a primary machine-learning model on each content item of a plurality of content items received from at least one internet source, a first indication that at least one content item of the plurality of content items is associated with one or more predefined influence operations, wherein the primary machine-learning model is trained to determine whether one or more content items are associated with the one or more predefined influence operations (wherein the underlined portion presents a limitation that can be performed by the human mind or with the aid of pencil and paper; this limitation broadly recites using a primary machine-learning model to generate a first indication and/or to determine whether one or more content items are associated with one or more predefined influence operations, wherein the step of generating a first indication and/or determining, as broadly recited, can be mentally performed in the human mind or with the aid of pencil and paper, e.g., observing the content items to make a determination about the association between the content items and predefined influence operations; such mental observations or evaluations fall within the “mental processes” grouping of abstract ideas);
generating, using at least one secondary machine-learning model on the at least one content item, a second indication of one or more predefined diverse narratives that are associated with one or more content items of the at least one content item, wherein the at least one secondary machine-learning model is trained to determine whether the one or more content items are associated with the one or more predefined diverse narratives for advancing the one or more predefined influence operations (wherein the underlined portion presents a limitation that can be performed by the human mind or with the aid of pencil and paper; this limitation broadly recites using at least one secondary machine-learning model to generate a second indication and/or determine whether one or more content items are associated with one or more predefined diverse narratives for the one or more predefined influence operations, wherein the step of generating a second indication and/or determining, as broadly recited, can be mentally performed in the human mind or with the aid of pencil and paper, e.g., observing the content items to make a determination about the association between the content items and predefined narratives; such mental observations or evaluations fall within the “mental processes” grouping of abstract ideas); and
generating an output based on the second indication of the one or more predefined diverse narratives (this step of generating can be performed in the human mind or with the aid of pencil and paper).
All the limitations above are mental steps that can be performed in the human mind or with the aid of pencil and paper.
Step 2a Prong 2 (Claims 1, 9 and 17):
The following limitations in claims 1, 9 and 17 are additional elements:
using a primary machine-learning model on each content item of a plurality of content items received from at least one internet source (the element of using the primary machine-learning model is interpreted as “apply it” to the abstract idea (i.e., mental process), and the element of content items received from at least one internet source is directed to mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity);
the primary machine-learning model is trained to determine (reciting that the machine-learning model is trained to perform a function or generate an outcome, without any details about how the outcome is accomplished, provides nothing more than mere instructions for implementing the abstract idea);
using at least one secondary machine-learning model on the at least one content item (the element of using at least one secondary machine-learning model is interpreted as “apply it” to the abstract idea (i.e., mental process), and the at least one content item as input to the at least one secondary machine-learning model is directed to mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity); and
the at least one secondary machine-learning model is trained to determine (reciting that the machine-learning model is trained to perform a function or generate an outcome, without any details about how the outcome is accomplished, provides nothing more than mere instructions for implementing the abstract idea);
a processor (see claim 9) (this limitation is directed to a generic computer component);
memory storing instructions that, when executed by the processor, cause the system to perform a set of operations (see claim 9) (this limitation is directed to generic computer components or a generic computer for implementing or applying the abstract idea); and
wherein each predefined diverse narrative of the one or more predefined diverse narratives correspond to one or more selected from the group of: a diagnostic frame and a prognostic frame (see claim 17) (this limitation is directed to mere data/information).
These are a generic computer and/or generic computer components used to perform generic computer functions or insignificant extra-solution activity, such that they amount to no more than components used to execute mere instructions for implementing or applying the abstract idea. Accordingly, these additional elements do not integrate the abstract idea(s) into a practical application because they do not impose any meaningful limits on practicing the abstract idea(s).
Step 2b (Claims 1, 9 and 17):
The following limitations in claims 1, 9 and 17 are additional elements:
using a primary machine-learning model on each content item of a plurality of content items received from at least one internet source (the element of using the primary machine-learning model is interpreted as “apply it” to the abstract idea (i.e., mental process), and the element of content items received from at least one internet source is directed to mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity);
the primary machine-learning model is trained to determine (reciting that the machine-learning model is trained to perform a function or generate an outcome, without any details about how the outcome is accomplished, provides nothing more than mere instructions for implementing the abstract idea);
using at least one secondary machine-learning model on the at least one content item (the element of using at least one secondary machine-learning model is interpreted as “apply it” to the abstract idea (i.e., mental process), and the at least one content item as input to the at least one secondary machine-learning model is directed to mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity); and
the at least one secondary machine-learning model is trained to determine (reciting that the machine-learning model is trained to perform a function or generate an outcome, without any details about how the outcome is accomplished, provides nothing more than mere instructions for implementing the abstract idea);
a processor (see claim 9) (this limitation is directed to a generic computer component);
memory storing instructions that, when executed by the processor, cause the system to perform a set of operations (see claim 9) (this limitation is directed to generic computer components or a generic computer for implementing or applying the abstract idea); and
wherein each predefined diverse narrative of the one or more predefined diverse narratives correspond to one or more selected from the group of: a diagnostic frame and a prognostic frame (see claim 17) (this limitation is directed to mere data/information).
These are a generic computer and/or generic computer components used to perform generic computer functions and/or insignificant extra-solution activity or well-understood, routine, conventional activity, and do not amount to significantly more, see MPEP 2106.05(d)(II).
Regarding claims 2 and 10, claims 2 and 10 depend on claims 1 and 9 respectively. As such, claims 2 and 10 recite the abstract idea as presented in claims 1 and 9.
In addition, claims 2 and 10 include additional elements:
wherein the one or more predefined influence operations each correspond to a respective influence entity (this limitation specifies the one or more predefined influence operations, which is directed to mere data/information).
These are additional elements directed to mere data/information, which do not integrate the judicial exception into a practical application and do not amount to significantly more, see MPEP 2106.05(d)(II).
Regarding claims 3 and 11, claims 3 and 11 depend on claims 1 and 9 respectively. As such, claims 3 and 11 recite the abstract idea as presented in claims 1 and 9.
In addition, claims 3 and 11 include additional elements:
wherein the plurality of content items include one or more long-form content items (this limitation specifies the plurality of content items, which is directed to mere data/information).
These are additional elements directed to mere data/information, which do not integrate the judicial exception into a practical application and do not amount to significantly more, see MPEP 2106.05(d)(II).
Regarding claims 4 and 12, claims 4 and 12 depend on claims 1 and 9 respectively. As such, claims 4 and 12 recite the abstract idea as presented in claims 1 and 9.
In addition, claims 4 and 12 include additional elements:
aggregating a plurality of training content items (this step of aggregating as broadly recited is directed to mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity);
labelling each training content item of the plurality of training content items to be associated with a respective one or more predefined or new influence operations (this step of labelling as broadly recited can be mentally performed in the human mind or with the aid of pencil and paper, e.g., by observing a set of content items and deciding on the association between a content item and the predefined influence operations (i.e., labels)); and
outputting the plurality of training content items with corresponding indications of the associated one or more predefined or new influence operations (this step of outputting as broadly recited is directed to mere data gathering or outputting recited at a high level of generality, and thus is insignificant extra-solution activity).
These are additional elements directed to mental process (i.e., abstract idea) and insignificant extra-solution activities, which do not integrate the judicial exception into a practical application and do not amount to significantly more, see MPEP 2106.05(d)(II).
Regarding claims 5, 13 and 18, claims 5, 13 and 18 depend on claims 1, 9 and 17 respectively. As such, claims 5, 13 and 18 recite the abstract idea as presented in claims 1, 9 and 17.
In addition, claims 5, 13 and 18 include additional elements:
aggregating a plurality of training content items (this step of aggregating as broadly recited is directed to mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity);
labelling each training content item of the plurality of training content items to be associated with a respective one or more predefined diverse narratives (this step of labelling as broadly recited can be mentally performed in the human mind or with the aid of pencil and paper, e.g., by observing a set of content items and deciding on the association between a content item and the predefined diverse narratives (i.e., labels)); and
outputting the plurality of training content items with corresponding indications of the associated one or more predefined diverse narratives (this step of outputting as broadly recited is directed to mere data gathering or outputting recited at a high level of generality, and thus is insignificant extra-solution activity).
These are additional elements directed to mental process (i.e., abstract idea) and insignificant extra-solution activities, which do not integrate the judicial exception into a practical application and do not amount to significantly more, see MPEP 2106.05(d)(II).
Regarding claims 6 and 14, claims 6 and 14 depend on claims 1 and 9 respectively. As such, claims 6 and 14 recite the abstract idea as presented in claims 1 and 9.
In addition, claims 6 and 14 include additional elements:
wherein each predefined diverse narrative of the one or more predefined diverse narratives correspond to one or more selected from the group of: a diagnostic frame and a prognostic frame (this limitation specifies the predefined diverse narratives and is directed to mere data/information).
These are additional elements directed to mere data/information, which do not integrate the judicial exception into a practical application and do not amount to significantly more, see MPEP 2106.05(d)(II).
Regarding claims 7, 15 and 19, claims 7, 15 and 19 depend on claims 1, 9 and 18 respectively. As such, claims 7, 15 and 19 recite the abstract idea as presented in claims 1, 9 and 18.
In addition, claims 7, 15 and 19 include additional elements:
wherein, prior to providing the at least one content item to at least one secondary machine-learning model, the at least one content item is converted to text, and wherein the text is provided to the at least one secondary machine-learning model (this step of converting as broadly recited can be mentally performed in the human mind or with the aid of pencil and paper, and the at least one secondary machine-learning model, as broadly recited without any limitation on how it operates, is directed to nothing more than mere instructions for implementing or applying the abstract idea).
These are additional elements directed to mental process (i.e., abstract idea) and mere instructions for implementing the abstract idea, which do not integrate the judicial exception into a practical application and do not amount to significantly more, see MPEP 2106.05(d)(II).
Regarding claims 8 and 16, claims 8 and 16 depend on claims 1 and 9 respectively. As such, claims 8 and 16 recite the abstract idea as presented in claims 1 and 9.
In addition, claims 8 and 16 include additional elements:
wherein, prior to providing each content item of the plurality of content items to a primary machine-learning model, a language of at least one content item of the plurality of content items is identified, and wherein the primary machine-learning model is selected from a plurality of machine-learning models, based on the identified language of the at least one content item (this step of identifying a language of a content item and selecting a machine-learning model based on the identified language as broadly recited can be mentally performed in the human mind or with the aid of pencil and paper).
These are additional elements directed to mental process (i.e., abstract idea), which do not integrate the judicial exception into a practical application and do not amount to significantly more, see MPEP 2106.05(d)(II).
Regarding claim 20, claim 20 depends on claim 19. As such, claim 20 recites the abstract idea as presented in claim 19.
In addition, claim 20 includes additional elements:
wherein the plurality of content items include one or more long-form content items (this limitation specifies the plurality of content items, which is directed to mere data/information).
These are additional elements directed to mere data/information, which do not integrate the judicial exception into a practical application and do not amount to significantly more, see MPEP 2106.05(d)(II).
Regarding claims 21-23, claims 21-23 depend on claims 1, 9 and 18 respectively. As such, claims 21-23 recite the abstract idea as presented in claims 1, 9 and 18.
In addition, claims 21-23 include additional elements:
the second indication comprising a continuous variable output defining a first range indicative of the one or more predefined diverse narratives not being associated with the one or more content items of the at least one content item and a second range indicative of the one or more predefined diverse narratives being associated with the one or more content items of the at least one content item (this feature of providing model output as continuous variables/values and/or ranges is well-understood and widely used in the art; see Wijshoff et al. (WO2017/211814), [0035] and [0063]; Kobayashi (U.S. Publication 2018/0285694), [0068]; and Wang et al. (CN-103913721), [0052] and [0111]).
These are additional elements directed to well-understood, routine, conventional activity for implementing the abstract idea, which do not integrate the judicial exception into a practical application and do not amount to significantly more, see MPEP 2106.05(d)(II).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6 and 9-14 (effective filing date 10/16/2023) are rejected under 35 U.S.C. 103 as being unpatentable over Alizadeh et al. (U.S. Publication No. 2022/0383142, published 12/01/2022) in view of Smith et al. (U.S. Publication No. 2018/0285461, published 10/04/2018).
As to claim 1, Alizadeh et al. teaches:
“A method for detecting influence operations” (see Alizadeh et al., Abstract), the method comprising:
“generating, using a primary machine-learning model on each content item of a plurality of content items received from at least one internet source, a first indication that at least one content item of the plurality of content items is associated with one or more predefined influence operations, wherein the primary machine-learning model is trained to determine whether one or more content items are associated with the one or more predefined influence operations” (see Alizadeh et al., [0091] for receiving posts on a given social media platform to be assessed by classifiers and for using classifiers trained on human-interpretable features to assess whether posts on a given social media platform are part of a previously observed coordinated influence operation, wherein each previously observed coordinated influence operation is interpreted as a predefined influence operation as recited; and also see [0092] for applying a classification threshold to label posts as being part of the influence campaign/operation or not, wherein the label is an indication as recited).
In addition, Alizadeh et al. teaches events/narratives/topics associated with influence campaigns (see Alizadeh et al., [0086] for a high confidence set of labels for certain posts being part of a coordinated influence campaign/operations; also see [0102] and [0103]) and a feature of identifying events/narratives/topics associated with posts or content items (see Alizadeh et al., [0103] and [0106]).
However, Alizadeh et al. does not explicitly teach a feature of using at least one machine-learning model to classify and label posts or content items with narratives/topics and outputting/displaying the identified topic as equivalently recited as follows:
“generating, using at least one secondary machine-learning model on the at least one content item, a second indication of one or more predefined diverse narratives that are associated with one or more content items of the at least one content item, wherein the at least one secondary machine-learning model is trained to determine whether the one or more content items are associated with the one or more predefined diverse narratives for advancing the one or more predefined influence operations; and
generating an output based on the second indication of the one or more predefined diverse narratives”.
On the other hand, Smith et al. explicitly teaches a feature of using at least one machine-learning model to classify and label posts or content items with narratives/topics and outputting/displaying the identified topic (see Smith et al., [0024], [0031] and Fig. 1C for determining a topic of a content (e.g., a newsfeed item or article) and displaying along with related contents/opinions) as equivalently recited as follows:
“generating, using at least one secondary machine-learning model on the at least one content item, a second indication of one or more predefined diverse narratives that are associated with one or more content items of the at least one content item, wherein the at least one secondary machine-learning model is trained to determine whether the one or more content items are associated with the one or more predefined diverse narratives for advancing the one or more predefined influence operations” (see Smith et al., [0031] for providing a content to the classifier model to determine which of a plurality of predetermined topics is related/associated with the content, wherein a plurality of predetermined topics can be interpreted as equivalent to one or more predefined diverse narratives as recited; also see [0031] for receiving a prediction that the content is related/associated with one of the plurality of predetermined topics (e.g., gun control, abortion, foreign policy, etc.), wherein the prediction can be interpreted as equivalent to an indication as recited; also see [0024]); and
“generating an output based on the second indication of the one or more predefined diverse narratives” (see Smith et al., Fig. 1C, [0024] and [0026] for displaying the anchor topic (e.g., guns or gun control) associated with the content (e.g., a newsfeed item or article) and a set of related contents/opinions).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Smith et al.'s teaching into Alizadeh et al.’s system by implementing a feature for classifying the content items in the influence operations according to the topics/narratives associated with the influence operations and displaying the classified narratives/topics. An ordinarily skilled artisan would have been motivated to do so to provide Alizadeh et al.’s system with an alternative way to identify narratives/topics associated with content items that are associated with influence operations. In addition, both references (Alizadeh et al. and Smith et al.) are analogous art directed to the same field of endeavor, namely, systems using machine-learning model(s) to identify topics/events/operations associated with content items. This close relation between the references strongly suggests a reasonable expectation of success in combining them.
As to claim 9, Alizadeh et al. teaches:
“A system for detecting influence operations” (see Alizadeh et al., Abstract), the system comprising:
“a processor” (see Alizadeh et al., Fig. 1 for processor(s) 14); and
“memory storing instructions that, when executed by the processor, cause the system to perform a set of operations, the set of operations comprising” (see Alizadeh et al., Fig. 1 for memory 16):
“generating, using a primary machine-learning model on each content item of a plurality of content items received from at least one internet source, a first indication that at least one content item of the plurality of content items is associated with one or more predefined influence operations, wherein the primary machine-learning model is trained to determine whether one or more content items are associated with the one or more predefined influence operations” (see Alizadeh et al., [0091] for receiving posts on a given social media platform to be assessed by classifiers and for using classifiers trained on human-interpretable features to assess whether posts on a given social media platform are part of a previously observed coordinated influence operation, wherein each previously observed coordinated influence operation is interpreted as a predefined influence operation as recited; also see [0092] for applying a classification threshold to label posts as being part of the influence campaign/operation or not, wherein the label is an indication as recited).
In addition, Alizadeh et al. teaches events/narratives/topics associated with influence campaigns (see Alizadeh et al., [0086] for a high confidence set of labels for certain posts being part of a coordinated influence campaign/operation; also see [0102] and [0103]) and a feature of identifying events/narratives/topics associated with posts or content items (see Alizadeh et al., [0103] and [0106]).
However, Alizadeh et al. does not explicitly teach a feature of using at least one machine-learning model to classify and label posts or content items with narratives/topics and outputting/displaying the identified topic as equivalently recited as follows:
“generating, using at least one secondary machine-learning model on the at least one content item, a second indication of one or more predefined diverse narratives that are associated with one or more content items of the at least one content item, wherein the at least one secondary machine-learning model is trained to determine whether the one or more content items are associated with the one or more predefined diverse narratives for advancing the one or more predefined influence operations; and
generating an output based on the second indication of the one or more predefined diverse narratives”.
On the other hand, Smith et al. explicitly teaches a feature of using at least one machine-learning model to classify and label posts or content items with narratives/topics and outputting/displaying the identified topic (see Smith et al., [0024], [0031] and Fig. 1C for determining a topic of a content item (e.g., a newsfeed item or article) and displaying it along with related contents/opinions) as equivalently recited as follows:
“generating, using at least one secondary machine-learning model on the at least one content item, a second indication of one or more predefined diverse narratives that are associated with one or more content items of the at least one content item, wherein the at least one secondary machine-learning model is trained to determine whether the one or more content items are associated with the one or more predefined diverse narratives for advancing the one or more predefined influence operations” (see Smith et al., [0031] for providing a content to the classifier model to determine which of a plurality of predetermined topics is related/associated with the content, wherein a plurality of predetermined topics can be interpreted as equivalent to one or more predefined diverse narratives as recited; also see [0031] for receiving a prediction that the content is related/associated with one of the plurality of predetermined topics (e.g., gun control, abortion, foreign policy, etc.), wherein the prediction can be interpreted as equivalent to an indication as recited; also see [0024]); and
“generating an output based on the second indication of the one or more predefined diverse narratives” (see Smith et al., Fig. 1C, [0024] and [0026] for displaying the anchor topic (e.g., guns or gun control) associated with the content (e.g., a newsfeed item or article) and a set of related contents/opinions).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Smith et al.'s teaching into Alizadeh et al.’s system by implementing a feature for classifying the content items in the influence operations according to the topics/narratives associated with the influence operations and displaying the classified narratives/topics. An ordinarily skilled artisan would have been motivated to do so to provide Alizadeh et al.’s system with an alternative way to identify narratives/topics associated with content items that are associated with influence operations. In addition, both references (Alizadeh et al. and Smith et al.) are analogous art directed to the same field of endeavor, namely, systems using machine-learning model(s) to identify topics/events/operations associated with content items. This close relation between the references strongly suggests a reasonable expectation of success in combining them.
As to claims 2 and 10, these claims are rejected for the same reasons set forth above with respect to claims 1 and 9, respectively, and further based on the following:
Alizadeh et al. as modified by Smith et al. teaches:
“wherein the one or more predefined influence operations each correspond to a respective influence entity” (see Alizadeh et al., [0004] for coordinated influence operations each associated with a country (i.e., a respective influence entity)).
As to claims 3 and 11, these claims are rejected for the same reasons set forth above with respect to claims 1 and 9, respectively, and further based on the following:
Alizadeh et al. as modified by Smith et al. teaches:
“wherein the plurality of content items include one or more long-form content items” (see Alizadeh et al., [0075] for social media posts, which can be long-form content items; also see Smith et al., [0024] and [0048] wherein content items include newsfeed items or articles, news stories, etc.).
As to claims 4 and 12, these claims are rejected for the same reasons set forth above with respect to claims 1 and 9, respectively, and further based on the following:
Alizadeh et al. as modified by Smith et al. teaches:
“wherein training the primary machine-learning model comprises” (see Alizadeh et al., [0009]-[0010] for training and/or retraining a classifier):
“aggregating a plurality of training content items” (see Alizadeh et al., [0092] for collecting/aggregating posts over a given period to form the training data);
“labelling each training content item of the plurality of training content items to be associated with a respective one or more predefined or new influence operations” (see Alizadeh et al., [0173] and [0176] for training a classifier based on labeled training data including posts (i.e., content items) associated with a given coordinated influence operation (i.e., the positive class) and samples (e.g., posts) associated with organic user activity (i.e., the negative class)); and
“outputting the plurality of training content items with corresponding indications of the associated one or more predefined or new influence operations” (see Alizadeh et al., [0171]-[0173] for training classifiers to assess whether posts on a given social media platform are part of a previously observed coordinated influence operation, wherein the prediction/output from the classifiers can be interpreted as indications as recited).
As to claims 5 and 13, these claims are rejected for the same reasons set forth above with respect to claims 1 and 9, respectively, and further based on the following:
Alizadeh et al. as modified by Smith et al. teaches:
“wherein training the at least one secondary machine-learning model” (see Alizadeh et al., [0009]-[0010] for training and/or retraining a classifier; also see Smith et al., [0024] for training a supervised classification model) comprises:
“aggregating a plurality of training content items” (see Alizadeh et al., [0092] for collecting/aggregating posts over a given period to form the training data; also see Smith et al., [0024] for a corpus or data set of articles);
“labelling each training content item of the plurality of training content items to be associated with a respective one or more predefined diverse narratives” (see Alizadeh et al., [0173] and [0176] for training a classifier based on labeled training data; also see Smith et al., [0024] for training a supervised classification model using a corpus or data set of articles with known topic labels, wherein the known topic labels can be interpreted as equivalent to labels associated with predefined narratives as recited); and
“outputting the plurality of training content items with corresponding indications of the associated one or more predefined diverse narratives” (see Alizadeh et al., [0171]-[0173] and [0176] for training classifiers using labeled training data to output the correct label for each content item; also see Smith et al., [0024] for training the supervised classification model to predict/output the correct topic associated with each content item, wherein each identified topic can be interpreted as an indication of the associated one or more predefined diverse narratives as recited).
As to claims 6 and 14, these claims are rejected for the same reasons set forth above with respect to claims 1 and 9, respectively, and further based on the following:
Alizadeh et al. as modified by Smith et al. teaches:
“wherein each predefined diverse narrative of the one or more predefined diverse narratives correspond to one or more selected from the group of: a diagnostic frame and a prognostic frame” (see Alizadeh et al., [0087] wherein “Groundtruth” refers to a high confidence set of labels for certain posts being part of a coordinated campaign, and wherein the set of labels as disclosed can be interpreted as topics/narratives associated with the coordinated influence campaign/operation; also see Smith et al., [0026] for classifying newsfeed items or articles into target topics/issues, wherein issues can be interpreted as diagnostic frames (i.e., related to identifying a problem/issue)).
Claims 7 and 15 (effective filing date 10/16/2023) are rejected under 35 U.S.C. 103 as being unpatentable over Alizadeh et al. (U.S. Publication No. 2022/0383142, Publication date 12/01/2022), in view of Smith et al. (U.S. Publication No. 2018/0285461, Publication date 10/04/2018), and further in view of Pandey et al. (U.S. Publication No. 2025/0119494, effective filing date 10/04/2023).
As to claims 7 and 15, Alizadeh et al. as modified by Smith et al. teaches all limitations as recited in claims 1 and 9, respectively, including identifying an operation/event/topic associated with a content item using a machine learning model (see Alizadeh et al., [0171]; also see Smith et al., [0024] for determining a topic associated with a newsfeed item or article).
However, Alizadeh et al. as modified by Smith et al. does not explicitly teach a feature for converting a content item into text before classifying or identifying topics/labels associated with the content item using a machine learning model as equivalently recited as follows:
“wherein, prior to providing the at least one content item to at least one secondary machine-learning model, the at least one content item is converted to text, and wherein the text is provided to the at least one secondary machine-learning model”.
On the other hand, Pandey et al. explicitly teaches a feature for converting a content item into text before classifying or identifying topics/labels associated with the content item using a machine learning model (see Pandey et al., [0162] for converting an audio call (i.e., a content item) into text before executing the GenAI model on the text of the audio call to identify the topic of the conversation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Pandey et al.'s teaching into Alizadeh et al.’s system (as modified by Smith et al.) by implementing a feature for converting a content item into text before classifying or identifying topics/labels associated with the content item using a machine learning model. An ordinarily skilled artisan would have been motivated to do so to provide Alizadeh et al.’s system with an effective way to identify tags/topics for content items of different forms (e.g., audio content) using text-based models/classifiers. In addition, both references (Alizadeh et al. and Pandey et al.) are analogous art directed to the same field of endeavor, namely, systems using machine-learning model(s) to identify topics/events/operations associated with content items. This close relation between the references strongly suggests a reasonable expectation of success in combining them.
Claims 8 and 16 (effective filing date 10/16/2023) are rejected under 35 U.S.C. 103 as being unpatentable over Alizadeh et al. (U.S. Publication No. 2022/0383142, Publication date 12/01/2022), in view of Smith et al. (U.S. Publication No. 2018/0285461, Publication date 10/04/2018), and further in view of Cakaloglu et al. (U.S. Publication No. 2022/0414123, Publication date 12/29/2022).
As to claims 8 and 16, Alizadeh et al. as modified by Smith et al. teaches all limitations as recited in claims 1 and 9, respectively, including identifying an operation/event/topic associated with a content item using a machine learning model (see Alizadeh et al., [0171]; also see Smith et al., [0024] for determining a topic associated with a newsfeed item or article).
However, Alizadeh et al. as modified by Smith et al. does not explicitly teach a feature for identifying a language associated with a content item and selecting a machine learning model from a plurality of machine learning models based on the identified language, as equivalently recited as follows:
“wherein, prior to providing each content item of the plurality of content items to a primary machine-learning model, a language of at least one content item of the plurality of content items is identified, and wherein the primary machine-learning model is selected from a plurality of machine-learning models, based on the identified language of the at least one content item”.
On the other hand, Cakaloglu et al. explicitly teaches a feature for identifying a language associated with a content item and selecting a machine learning model from a plurality of machine learning model based on the identified language (see Cakaloglu et al., [0062] for selecting a topic or categorization model to apply to the data items based on the identified language; also see [0059]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Cakaloglu et al.'s teaching into Alizadeh et al.’s system (as modified by Smith et al.) by implementing a plurality of machine learning models for different languages and a feature for identifying a language associated with a content item and selecting a machine learning model from the plurality of machine learning models based on the identified language. An ordinarily skilled artisan would have been motivated to do so to provide Alizadeh et al.’s system with an improved framework for processing data items in a variety of languages. In addition, both references (Alizadeh et al. and Cakaloglu et al.) are analogous art directed to the same field of endeavor, namely, systems using machine-learning model(s) to identify topics/events/operations associated with content items. This close relation between the references strongly suggests a reasonable expectation of success in combining them.
Claims 21-22 (effective filing date 10/16/2023) are rejected under 35 U.S.C. 103 as being unpatentable over Alizadeh et al. (U.S. Publication No. 2022/0383142, Publication date 12/01/2022), in view of Smith et al. (U.S. Publication No. 2018/0285461, Publication date 10/04/2018), and further in view of Wang et al. (CN-103913721-A, Publication date 07/09/2014).
As to claims 21-22, Alizadeh et al. as modified by Smith et al. teaches all limitations as recited in claims 1 and 9, respectively, including identifying an operation/event/topic associated with a content item using a machine learning model (see Alizadeh et al., [0171]; also see Smith et al., [0024] for determining a topic associated with a newsfeed item or article).
However, Alizadeh et al. as modified by Smith et al. does not explicitly teach a feature for generating/receiving model output as continuous values that define a first range for “1” or “yes” and a second range for “0” or “no”, as equivalently recited as follows:
“the second indication comprising a continuous variable output defining a first range indicative of the one or more predefined diverse narratives not being associated with the one or more content items of the at least one content item and a second range indicative of the one or more predefined diverse narratives being associated with the one or more content items of the at least one content item”.
On the other hand, Wang et al. explicitly teaches a feature for generating/receiving model output as continuous values that define a first range for “1” or “yes” and a second range for “0” or “no” (see Wang et al., [0052] and [0111] wherein the predicted result of the neural network (i.e., a machine-learning model) is a continuous value, which defines a first range [0.5, 1) mapping to 1 or “yes” and a second range (0, 0.5) mapping to 0 or “no”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Wang et al.'s teaching into Alizadeh et al.’s system (as modified by Smith et al.) by implementing a feature of generating/receiving a model output (e.g., the second indication) as a continuous value defined by a first range and a second range. An ordinarily skilled artisan would have been motivated to do so to provide Alizadeh et al.’s system with an alternative effective way to provide model output. In addition, both references (Alizadeh et al. and Wang et al.) are analogous art directed to the same field of endeavor, namely, systems using a model to generate a prediction. This close relation between the references strongly suggests a reasonable expectation of success in combining them.
Claims 17-18 (effective filing date 10/16/2023) are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (U.S. Publication No. 2018/0285461, Publication date 10/04/2018), in view of Cakaloglu et al. (U.S. Publication No. 2022/0414123, Publication date 12/29/2022).
As to claim 17, Smith et al. teaches:
“A method for identifying diverse narratives” (see Smith et al., Abstract and [0024] for identifying topics associated with newsfeed items or article), the method comprising:
“generating, using a plurality of narrative machine-learning models on at least one content item of a plurality of content items received from at least one internet source, an indication of one or more predefined diverse narratives that are associated with one or more content items of the at least one content item, wherein the plurality of narrative machine-learning models are trained to determine whether the one or more content items are associated with the one or more predefined diverse narratives” (see Smith et al., [0024] and [0026] for receiving the newsfeed items or articles (i.e., content items); and see [0031] for providing a content to the classifier model trained to determine which of a plurality of predetermined topics is related/associated with the content, wherein a plurality of predetermined topics can be interpreted as equivalent to one or more predefined diverse narratives as recited; also see [0031] for receiving a prediction that the content is related/associated with one of the plurality of predetermined topics (e.g., gun control, abortion, foreign policy, etc.), wherein the prediction can be interpreted as equivalent to an indication as recited; also see [0024] and [0026] wherein the computer or model(s) for identifying topics and/or entities associated with newsfeed items or articles can be interpreted as equivalent to narrative machine-learning models as recited),
“wherein each predefined diverse narrative of the one or more predefined diverse narratives correspond to one or more selected from the group of: a diagnostic frame and a prognostic frame” (see Smith et al., [0026] for classifying newsfeed items or articles into target topics/issues, wherein issues can be interpreted as diagnostic frames (i.e., related to identifying a problem/issue)); and
“generating an output based on the indication of one or more predefined diverse narratives” (see Smith et al., Fig. 1C, [0024] and [0026] for displaying the anchor topic (e.g., guns or gun control) associated with the content (e.g., a newsfeed item or article) and a set of related contents/opinions).
However, Smith et al. may not explicitly teach a feature of using a plurality of machine learning models (i.e., narrative machine-learning models) for determining topics (i.e., narratives) associated with content items as recited.
On the other hand, Cakaloglu et al. explicitly teaches using a plurality of machine learning models (i.e., narrative machine learning models) for determining topics (i.e., narratives) associated with content items (see Cakaloglu et al., [0059] and [0062] for using multiple topic or categorization models for identifying topics associated with data items).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Cakaloglu et al.'s teaching into Smith et al.’s system by implementing a plurality of machine learning models for processing data items in different languages. An ordinarily skilled artisan would have been motivated to do so to provide Smith et al.’s system with an improved framework for processing data items in a variety of languages. In addition, both references (Smith et al. and Cakaloglu et al.) are analogous art directed to the same field of endeavor, namely, systems using machine-learning model(s) to identify topics/events/operations associated with content items. This close relation between the references strongly suggests a reasonable expectation of success in combining them.
As to claim 18, this claim is rejected for the same reasons set forth above with respect to claim 17, and further based on the following:
Smith et al. as modified by Cakaloglu et al. teaches:
“wherein training the plurality of narrative machine-learning models comprises” (see Smith et al., [0024] and [0026] for training a supervised classification model or computer models; also see Cakaloglu et al., [0028]-[0029] for training machine learning models to identify topics of each of data items):
“aggregating a plurality of training content items” (see Smith et al., [0024] for a corpus or data set of articles);
“labelling each training content item of the plurality of training content items to be associated with a respective one or more predefined diverse narratives” (see Smith et al., [0024] for training a supervised classification model using a corpus or data set of articles with known topic labels, wherein known topic labels can be interpreted as equivalent to labels associated with predefined narratives as recited); and
“outputting the plurality of training content items with corresponding indications of the associated one or more predefined diverse narratives” (see Smith et al., [0024] for training the supervised classification model to predict/output the correct topic associated with each content item wherein each identified topic can be interpreted as an indication of associated one or more predefined diverse narratives as recited).
Claims 19 and 20 (effective filing date 10/16/2023) are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (U.S. Publication No. 2018/0285461, Publication date 10/04/2018), in view of Cakaloglu et al. (U.S. Publication No. 2022/0414123, Publication date 12/29/2022), and further in view of Pandey et al. (U.S. Publication No. 2025/0119494, effective filing date 10/04/2023).
As to claim 19, Smith et al. as modified by Cakaloglu et al. teaches all limitations as recited in claim 18, including identifying an operation/event/topic associated with a content item using a plurality of machine learning models (see Smith et al., [0024] for determining a topic associated with a newsfeed item or article; also see Cakaloglu et al., [0027]-[0028] for implementing a plurality of trained machine learning models to identify topics for each of the content/data items).
However, Smith et al. as modified by Cakaloglu et al. does not explicitly teach a feature for converting a content item into text before classifying or identifying topics/labels associated with the content item using machine learning models as equivalently recited as follows:
“wherein, prior to providing the at least one content item to a plurality of narrative machine-learning models, the at least one content item is converted to text, and wherein the text is provided to the plurality of narrative machine-learning models”.
On the other hand, Pandey et al. explicitly teaches a feature for converting a content item into text before classifying or identifying topics/labels associated with the content item using machine learning models (see Pandey et al., [0162] for converting an audio call (i.e., a content item) into text before executing the GenAI model on the text of the audio call to identify the topic of the conversation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Pandey et al.'s teaching into Smith et al.’s system (as modified by Cakaloglu et al.) by implementing a feature for converting a content item into text before classifying or identifying topics/labels associated with the content item using a machine learning model. An ordinarily skilled artisan would have been motivated to do so to provide Smith et al.’s system with an effective way to identify tags/topics for content items of different forms (e.g., audio content) using text-based models/classifiers. In addition, both references (Smith et al. and Pandey et al.) are analogous art directed to the same field of endeavor, namely, systems using machine-learning model(s) to identify topics/events/operations associated with content items. This close relation between the references strongly suggests a reasonable expectation of success in combining them.
As to claim 20, this claim is rejected for the same reasons set forth above with respect to claim 19, and further based on the following:
Smith et al. as modified by Cakaloglu et al. and Pandey et al. teaches:
“wherein the plurality of content items include one or more long-form content items” (see Smith et al., [0024] and [0048] wherein content items include a newsfeed item or article, news stories, etc., which are examples of long-form content items as recited).
Claim 23 (effective filing date 10/16/2023) is rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (U.S. Publication No. 2018/0285461, Publication date 10/04/2018), in view of Cakaloglu et al. (U.S. Publication No. 2022/0414123, Publication date 12/29/2022), and further in view of Wang et al. (CN-103913721-A, Publication date 07/09/2014).
As to claim 23, Smith et al. as modified by Cakaloglu et al. teaches all limitations as recited in claim 17, including identifying an operation/event/topic associated with a content item using a machine learning model (see Smith et al., [0024] for determining a topic associated with a newsfeed item or article).
However, Smith et al. as modified by Cakaloglu et al. does not explicitly teach a feature for generating/receiving model output as continuous values that define a first range for “1” or “yes” and a second range for “0” or “no”, as equivalently recited as follows:
“the second indication comprising a continuous variable output defining a first range indicative of the one or more predefined diverse narratives not being associated with the one or more content items of the at least one content item and a second range indicative of the one or more predefined diverse narratives being associated with the one or more content items of the at least one content item”.
On the other hand, Wang et al. explicitly teaches a feature for generating/receiving model output as continuous values that define a first range for “1” or “yes” and a second range for “0” or “no” (see Wang et al., [0052] and [0111] wherein the predicted result of the neural network (i.e., a machine-learning model) is a continuous value, which defines a first range [0.5, 1) mapping to 1 or “yes” and a second range (0, 0.5) mapping to 0 or “no”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Wang et al.'s teaching into Smith et al.’s system (as modified by Cakaloglu et al.) by implementing a feature of generating/receiving a model output (e.g., the second indication) as a continuous value defined by a first range and a second range. An ordinarily skilled artisan would have been motivated to do so to provide Smith et al.’s system with an alternative, effective way to provide model output. In addition, both references (Smith et al. and Wang et al.) teach features directed to analogous art in the same field of endeavor, such as a system for using a model to generate a prediction. This close relation between the references highly suggests an expectation of success when combined.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHUONG THAO CAO whose telephone number is (571)272-2735. The examiner can normally be reached Monday - Friday: 9:00 am - 6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy Ng can be reached at 571-270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Phuong Thao Cao/Primary Examiner, Art Unit 2164