Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Acknowledgments
The submission filed on 01/02/26 is acknowledged.
Status of Claims
Claims 9-14 and 16-24 are pending.
In the Amendment filed on 01/02/26, claims 9, 16, 18, 19 and 21-23 were amended, claim 15 was cancelled, and no claims were added.
Claims 9-14 and 16-24 are rejected.
Response to Arguments
Regarding the objections to the specification
The objections to the specification are withdrawn in view of the instant amendments to the specification.
Regarding the claim objections
The claim objections are withdrawn in view of the instant amendments to the claims. Applicant's attention is directed to the new claim objections.
Regarding the rejections under 35 U.S.C. 101
Applicant’s arguments have been fully considered but are not persuasive.
The Examiner responds to Applicant's arguments below. (Headings and page numbers refer to Applicant's Response, unless otherwise indicated.)
Step 2A, Prong One: Not a mental process (pp. 34-35)
Applicant argues that the claims improve computer functioning/technology, analogous to software improvements in McRO and Enfish.
The Examiner respectfully disagrees.
First, the subject matter in McRO (e.g., automating 3D facial animation) and Enfish (e.g., self-referential database) is not comparable to that of the instant claims.
Second, the additional elements in claim 9 include a processor, machine learning, and a platform. All of these are merely generic computer elements and/or merely generally link the claims to a particular field of use. They are not described but are recited at a high level of generality. Further, they are merely added on to a conceptually prior commercial/business process of determining whether a creator's social media activity is fake, classifying creators in a risk category, and controlling access to resources; in other words, they are merely used as a tool to apply the abstract idea or generally link the abstract idea to a particular field of use. (Note that the limitations related to the machine learning model, such as training, validating, etc., constitute merely standard, basic, and/or ordinary training of a machine learning model and do not represent improvements in machine learning or computer functioning/technology.)
Applicant also argues that the claims do not recite a mental process. However, the claims were not rejected as reciting a mental process. Accordingly, this argument is moot.
Finally, Applicant also argues that "the claim's recited internal constraints … cabin preemption." The Examiner responds that preemption is not a standalone test for eligibility.
Step 2A, Prong Two: Integrated into a practical application (p. 35)
Applicant presents the same substantive argument as was presented for Step 2A, Prong One, above. This argument has been addressed at Step 2A, Prong One, above. Note that the putative improvements mentioned by Applicant at Step 2A, Prong Two, namely, "improve the accuracy and reliability of social-network data ingestion and prevent fraudulent amplification that would otherwise corrupt analytics and waste computing and storage resources," are merely the expected result of using/training the model as recited (abstract idea content) together with the processor and machine learning as recited (additional elements). They do not reflect any improvement in technology/computer functioning. Rather, the additional elements operate, as off-the-shelf elements, in their ordinary capacities.
Step 2B (pp. 35-36)
Applicant argues the claims are not well-understood, routine and conventional. However, the claims were not rejected as being well-understood, routine and conventional. Therefore, this argument is moot.
Rather, the claims were rejected under Step 2B because, when the additional elements are included in the consideration of the claims as a combination, they are merely generic computer elements recited at a high level of generality that are used to apply the abstract idea, or they merely generally link the abstract idea to a particular field of use. However, "adding the words 'apply it' (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer" and "generally linking the use of the judicial exception to a particular technological environment or field of use" are "[l]imitations that the courts have found not to be enough to qualify as 'significantly more' when recited in a claim with a judicial exception." MPEP 2106.05 (Eligibility Step 2B: Whether a Claim Amounts to Significantly More), I. (THE SEARCH FOR AN INVENTIVE CONCEPT), A. (Relevant Considerations For Evaluating Whether Additional Elements Amount To An Inventive Concept).
Regarding the rejections under 35 U.S.C. 103
Applicant’s arguments have been fully considered but are moot in view of the new combinations of references currently cited as teaching the claims.
Claim Objections
Claims 9 and 23 are objected to because of the following informalities:
Claim 9 recites:
train at least one machine learning model using a first labeled dataset of creators characterized by fake activity and a second labeled dataset of creators characterized by non-fake activity;
…
execute a first one of the at least one machine learning model configured to receive, as inputs, data associated with followers of the creator and predict, as outputs, bought followers by estimating values of the first plurality of features; and
…
execute a second one of the at least one machine learning model configured to receive, as inputs, data associated with the creator and predict, as output, bought likes by estimating values of the second plurality of features, wherein activities of the creator are determined to be fake based on the estimated values of the first plurality of features and/or the estimated values of the second plurality of features;
Claim 23 recites:
wherein the first one of the at least one machine learning model is different from the second one of the at least one machine learning model.
Claim 9 is understood on its face to cover not only the case in which the "first one of the at least one machine learning model" (hereafter, first machine learning model) and the "second one of the at least one machine learning model" (hereafter, second machine learning model) are different machine learning models but also the case in which they are the same machine learning model. Note that, on its face, the use of the terms "first" and "second" indicates that the two models can be different models. Evidence that the two models can be the same machine learning model is found in the training step above: if the first and second machine learning models were necessarily different, the claim would require at least two machine learning models; the recitation in the training step of "at least one machine learning model" leaves open the possibility that the first and second machine learning models are the same. Further evidence that the two models can be the same machine learning model is found in claim 23: if the first and second machine learning models were necessarily different in claim 9, then claim 23 would appear to be redundant with content already present in claim 9 and, as such, would be non-compliant with 35 U.S.C. 112(d).
Note, however, that Applicant's arguments for patentability of claim 9 as set forth in the instant Response repeatedly cite the "multiple models" of claim 9 (e.g., "at least two machine learning models" (p. 34); "different models" (p. 36); "multi-model" (p. 38)) as a reason for patentability (in respect of both 35 U.S.C. 101 and 35 U.S.C. 103). Accordingly, it is not clear if Applicant is arguing (1) that claim 9 requires two distinct models, or if Applicant holds (2) that two uses, or two instances, of a single model, one use/instance involving inputs x1 and outputs y1 and the other use/instance involving inputs x2 and outputs y2, constitute two different models.
If Applicant is arguing (1), then Applicant contradicts the interpretation set forth above, namely, the interpretation in which claim 9 covers both the case in which the first machine learning model and the second machine learning model are different machine learning models and the case in which they are the same machine learning model. In this case, the training step of claim 9 is confusing, since it recites "at least one machine learning model" whereas really "at least two machine learning models" are required. Further, in this case, claim 23 would appear to be redundant with claim 9 and non-compliant with 35 U.S.C. 112(d).
On the other hand, if Applicant is holding (2), then this should be made clear in the record, as it may contradict the ordinary understanding of what constitutes a single machine learning model and what constitutes multiple different machine learning models. That is to say, in ordinary parlance two uses or instances of a single machine learning model may be understood as involving only a single machine learning model, and it may be understood that changing an input or an output of a model does not necessarily mean that the model has become a different model. In short, if Applicant is holding (2), this may unduly limit the meaning of a 'single machine learning model' and unduly expand the meaning of 'two different machine learning models', beyond their ordinary meanings.
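For purposes of illustration only, the following Python sketch contrasts the two readings. The code and data are hypothetical and are not drawn from Applicant's disclosure or any cited reference: under reading (2), a single trained model is simply executed twice with different inputs, whereas under reading (1), two models are separately trained and executed.

```python
# Hypothetical sketch only; not Applicant's implementation. Feature vectors and
# labels are illustrative (label 1 = fake activity, 0 = non-fake activity).
from sklearn.linear_model import LogisticRegression

X_followers = [[0.9, 120], [0.1, 3], [0.8, 95], [0.2, 7]]   # follower-derived data
y_followers = [1, 0, 1, 0]
X_creator = [[0.7, 450], [0.1, 12], [0.6, 380], [0.2, 20]]  # creator-derived data
y_creator = [1, 0, 1, 0]

# Reading (2): one model, used twice with different inputs and outputs.
single_model = LogisticRegression().fit(X_followers, y_followers)
first_use = single_model.predict([[0.85, 110]])    # "first" use: bought followers
second_use = single_model.predict([[0.15, 25]])    # "second" use: same model object

# Reading (1): two distinct models, each separately trained for its own task.
first_model = LogisticRegression().fit(X_followers, y_followers)   # bought followers
second_model = LogisticRegression().fit(X_creator, y_creator)      # bought likes
```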
Applicant should clarify whether Applicant is arguing (1) or holding (2). Further, Applicant should amend claim 9, as warranted. Further, Applicant should amend or cancel claim 23, as warranted.
Appropriate correction is required.
Claim Rejections - 35 U.S.C. § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 9-14 and 16-24 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
Lack of Written Description/Not in Specification
Claim 9 recites:
"control access to platform resources based on the classification."
Support in the disclosure is not found for the above-indicated recitation.
As best understood, the portions of the disclosure most closely related to the above-indicated recitation are in the specification as filed at 0072, 0083, 0103-0104, 0106, 0112-0113, and 0114-0119.
0072 reads as follows:
The term "social media", "social media network", or "social media channels" refers to a plurality of online accessible datasets configured to be displayed in graphical, video, audio, or textual form, wherein each of the plurality of online accessible datasets is associated with a specific user, wherein the ability to access or interact with a given one of the plurality of online accessible datasets is at least partially controlled by the specific user associated with that one of the plurality of online accessible datasets, and wherein the ability to access or interact with any of the plurality of online accessible datasets is also at least partially controlled by a single administrative entity.
0083 reads as follows:
In embodiments, the at least one database system 105 implements a plurality of database engines to support storage, analysis, and reporting of different types of data collected through the platform 101. Some exemplary database engines used by embodiments of the present specification may include MariDB, Redis, ElasticSearch, and HBase. In embodiments, MariaDB may be used for data pertaining to user common data, teams, creator data, campaign data, campaign report data, talent lists, access control policies, and content workflow. In embodiments, Redis may be used for caching campaign reports, talent lists, access control policies, access token, and other forms of data. In embodiments, ElasticSearch may be used for data pertaining to creator profiles. In embodiments, HBase may be used for data pertaining to social networks, campaign posts, analytics, and creator trackers.
0103 reads as follows:
A table listing users who have access to the collection with the following columns:
- Email
- # of trackers
- Billing Plan
- Team name (if any)
- Actions - the administrator is allowed to:
o Share with users - share collection with one or more additional users, adding them to the list.
o Remove selected users from list - hide and make collection inaccessible for selected users.
0104 includes content analogous to 0103 but for teams instead of users.
0106 reads as follows:
In embodiments, team members can share access to their collections with any member of the team. Also, the administrator can give any user access to any collection.
0112-0113 reads as follows:
[0112] The administrator can click on a user (from the list of users) to view an individual user GUI with a more detailed display of data pertaining to the selected user along with a management interface to change/update the user's details, permissions/privileges and usage limitations, as follows:
- User's personal details (Name, Email etc.)
- Role
o Full-fledge User/Groups functionality with permission settings.
o Superuser feature.
o Superuser can create arbitrary number of groups, each group have a unique set of access permissions.
- User's connected accounts (Social network accounts)
o Channel
o Username
o Actions - The administrator can remove the user's connected accounts from the system
- User's teams (List of teams user belongs to, if any)
o Name (link to team's details)
o Number of members
o Team billing plan
o Actions - remove or add new
- User's collections - provides a table displaying a user's collections with the following columns:
o Name (link to collection's details)
o Volume
o Status
o Actions - remove or add new
- User's billing (stripe account if exists) and permissions (volume, allowed reports, allowed features, such as export, type of trackers, deletions)
- User's trackers
o Name
o Volume
o Status
o Actions - remove a tracker or add a new tracker
[0113] Following are additional actions or functionalities available to the administrator:
- Impersonate (sign in as this user)
- Edit email
- Edit permissions and trial period
o Edit volume
o Edit allowed reports number
o Edit max # of trackers per report
o Edit allowed deletions per month number
o Allow / disable historical data
o Edit trial period
o Allow / disable hashtag trackers
o Allow / disable location trackers
o Allow / disable comparison report
o Allow / disable export
- Edit trackers (single section)
- Edit associated teams (single section)
- Edit associated collections (single section)
- Remove account (cancel membership)
- Suspend account
- Edit connected social network accounts
0114-0119 includes content analogous to 0112-0113 but for teams instead of users.
As seen above, these portions of the specification do not teach or suggest controlling access to platform resources based on the classification. Nor is any other portion of the disclosure seen to teach or suggest this subject matter.
Accordingly, support is not found for the above-quoted recitation of claim 9.
Claims 10-14 and 16-24 are rejected by virtue of their dependency from a rejected claim.
Claim Rejections - 35 U.S.C. § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 9-14 and 16-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 9-14 and 16-24 are directed to a system, which is one of the statutory categories of invention. (Step 1: YES)
Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a system for detecting whether a creator's activity on social media is fake (as per the disclosure, to enable/facilitate influencer marketing, in other words, vetting/verifying social media influencers in order to weed out fakes, for the sake of brands seeking to use the social media influencers for marketing their products, i.e., as part of a project of providing an integrated platform to enable a private marketplace to empower brands and influencers to directly connect to each other, see Abstract, specification as filed at 0003, 0009).
For claim 9, the limitations of:
train at least one machine learning model using a first labeled dataset of creators characterized by fake activity and a second labeled dataset of creators characterized by non-fake activity;
identify a first plurality of features from profile data and a second plurality of features from post engagement data;
validate outputs of the at least one machine learning model by determining precision, recall, and an area under a receiver operating characteristic curve;
terminate the training when the precision, recall, and area under the receiver operating characteristic curve reach predefined threshold values;
execute a first one of the at least one machine learning model configured to receive, as inputs, data associated with followers of the creator and predict, as outputs, bought followers by estimating values of the first plurality of features; and
execute a second one of the at least one machine learning model configured to receive, as inputs, data associated with the creator and predict, as output, bought likes by estimating values of the second plurality of features, wherein activities of the creator are determined to be fake based on the estimated values of the first plurality of features and/or the estimated values of the second plurality of features;
classify creators into a risk category based on a percentage of posts identified as fake; and
control access to platform resources based on the classification.
as drafted, constitute a process that, under the broadest reasonable interpretation, covers "certain methods of organizing human activity," specifically, "fundamental economic practices or principles" and/or "commercial or legal interactions," but for recitation of generic computer components or generally linking the use of a judicial exception to a particular technological environment or field of use. The Examiner notes that "fundamental economic practices" or "fundamental economic principles" describe concepts relating to the economy and commerce, including hedging, insurance, and mitigating risks, and "commercial interactions" or "legal interactions" include agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations. MPEP 2106.04(a)(2)II.A.,B. If a claim limitation, under its broadest reasonable interpretation, covers "fundamental economic practices or principles" and/or "commercial or legal interactions," but for recitation of generic computer components or generally linking the use of a judicial exception to a particular technological environment or field of use, then it falls within the "certain methods of organizing human activity" grouping of abstract ideas. Accordingly, claim 9 recites an abstract idea. (Step 2A - Prong 1: YES. The claims recite an abstract idea.)
This judicial exception is not integrated into a practical application. Claim 9 recites the additional elements of at least one social media network, a plurality of programmatic instructions that, when executed by at least one processor, [perform operations], machine learning, and platform, that implement the abstract idea. These additional elements are not described by the applicant and they are recited at a high level of generality (i.e., one or more generic computer elements performing generic computer functions, or generally linking the use of a judicial exception to a particular technological environment or field of use), such that they amount to no more than mere instructions to apply the exception using generic computer elements (namely, at least one social media network, a plurality of programmatic instructions that, when executed by at least one processor, [perform operations], machine learning, and platform), and/or such that they amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (namely, at least one social media network, machine learning, and platform). Accordingly, even in combination these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. (Step 2A - Prong 2: NO. The additional elements do not integrate the abstract idea into a practical application.)
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception itself. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of at least one social media network, a plurality of programmatic instructions that, when executed by at least one processor, [perform operations], machine learning, and platform, to perform the noted steps amount to no more than mere instructions to apply the exception using generic computer elements or amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. Mere instructions to apply an exception using generic computer elements or generally linking the use of a judicial exception to a particular technological environment or field of use cannot provide an inventive concept ("significantly more"). Accordingly, even in combination, these additional elements do not provide significantly more. As such, claim 9 is not patent eligible. (Step 2B: NO. The claims do not provide significantly more.)
Dependent claims 10-14 and 16-24 are similarly rejected because they further define/narrow the abstract idea of independent claim 9 as discussed above, and/or do not integrate the abstract idea into a practical application or provide an inventive concept such as would render the claims eligible, whether each is considered individually or as an ordered combination.
As for further defining/narrowing the abstract idea:
Dependent claims 10-14 and 16-24 merely further describe the content of the features used by the models to generate output from input (claims 10-14, 24), describe validating the models by using metrics (claims 16, 18-19), describe the values of the aforementioned features (claims 17 and 20), describe the models (claims 21 and 22), and describe that “the first … model is different from the second … model” (claim 23).
As for additional elements:
Claims 13, 14 and 24 recite the “social media network.” This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element or such that it amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.
Claims 16, 18, 19 and 23 recite “machine learning.” This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element or such that it amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.
Claim 21 recites “wherein … machine learning … comprises at least one of a deep feed forward network, a perceptron network, a feed forward network, a radial basis network, a recurrent neural network, a long term memory network, a short term memory network, a gated recurrent unit network, an auto encoder network, a variational auto encoder network, a denoising auto encoder network, a sparse auto encoder network, a Markov chain network, a Hopfield network, a Boltzmann machine network, a restricted Boltzmann machine network, a deep belief network, a deep convolutional network, a deconvolutional network, a deep convolutional inverse graphics network, a generated adversarial network, a liquid state machine, an extreme learning machine, an echo state network, a deep residual network, a Kohonen network, a support vector machine network, a neural Turing machine network, a convolutional neural network with transfer learning network, deep learning feed-forward network, or convolutional neural network.” This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element or generally linking the use of a judicial exception to a particular technological environment or field of use. Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.
Claim 22 recites “wherein … machine learning … comprises at least one of a deep feed forward network, a perceptron network, a feed forward network, a radial basis network, a recurrent neural network, a long term memory network, a short term memory network, a gated recurrent unit network, an auto encoder network, a variational auto encoder network, a denoising auto encoder network, a sparse auto encoder network, a Markov chain network, a Hopfield network, a Boltzmann machine network, a restricted Boltzmann machine network, a deep belief network, a deep convolutional network, a deconvolutional network, a deep convolutional inverse graphics network, a generated adversarial network, a liquid state machine, an extreme learning machine, an echo state network, a deep residual network, a Kohonen network, a support vector machine network, a neural Turing machine network, a convolutional neural network with transfer learning network, deep learning feed-forward network, or convolutional neural network.” This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element or generally linking the use of a judicial exception to a particular technological environment or field of use. Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.
Claims 10-12, 17 and 20 do not recite any additional elements, and accordingly, for the reasons provided above with respect to the independent claims, are not patent eligible.
Therefore, dependent claims 10-14 and 16-24 are not patent eligible.
Claim Rejections - 35 U.S.C. § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 9-14, 18 and 21-24 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal et al. ("What They Do in Shadows: Twitter Underground Follower Market"), hereafter Aggarwal, in view of Satya et al. ("Uncovering Fake Likers in Online Social Networks"), hereafter Satya, further in view of Cormack et al. (U.S. Patent Application Publication No. 2014/0280238 A1), hereafter Cormack, and further in view of Sutton et al. (U.S. Patent Application Publication No. 2015/0052138 A1), hereafter Sutton.
Regarding Claim 9
Aggarwal teaches:
A system for determining whether an activity of a creator is fake on at least one social media network, (Abstract, §§ 1, 6.2-6.3, Table 8, Fig. 14, § 7 (¶¶ 1, 2), system identifies follower's behavior (activity of creator) as suspicious/phony/purchased/fake; Aggarwal's follower teaches the creator, and Aggarwal's follower's activity, such as following another user, constitutes activity, and where that following is purchased it is fake)
wherein the system comprises a plurality of programmatic instructions that, when executed by at least one processor: (under broadest reasonable interpretation, "supervised learning model/mechanism" (Abstract and § 1), "supervised predictive model" (§ 6), and "SVM" (§ 6.2, Table 7) teach program code (instructions) executed by a processor)
train at least one machine learning model using a first labeled dataset of creators characterized by fake activity and a second labeled dataset of creators characterized by non-fake activity; (§ 6.2, Table 7, training data for supervised learning includes dataset of purchased/suspicious followers (first labeled dataset of creators characterized by fake activity) and dataset of legitimate followers (a second labeled dataset of creators characterized by non-fake activity); "supervised learning" confirms that the datasets are labeled)
identify a first plurality of features from profile data and a second plurality of features from post engagement data; (§ 6.1, Table 6, A (User Profile, number of posts), C (Content), D (Behaviour))
…
execute a first one of the at least one machine learning model (Abstract and § 1 "supervised learning model/mechanism," § 6 "supervised predictive model," § 6.2, Table 7, "SVM") configured to receive, as inputs, data associated with followers of the creator and predict, as outputs, bought followers by estimating values of the first plurality of features; and (§ 1, p.1, col. 2, "(iii) key identifiers (features) to distinguish between phony follower accounts and legitimate users", § 1, last ¶ "Lastly, We present an anatomy of the purchased Twitter followers. We characterize profile attributes and behavioural features of purchased followers. We identify key indicators (features) to distinguish between suspicious following behaviour from that of legitimate Twitter users. We use these identifiers and build a supervised learning mechanism which detects suspicious following behaviour with an accuracy of 89.2%"; similar subject matter is taught in Abstract, §§ 1, 5.3 (1st ¶), 6, 6.1; regarding as inputs, data associated with followers of the creator: § 4.1 "Phony Follower Data Collection"; § 5.3 including §§ 5.3.1-5.3.3 shows how features are created from input data; note 5.3.1. and 5.3.2. teach input data re friends (i.e., followers) of the phony follower (followers of the creator) and, in any event, any input data re the phony follower is by the property of transitivity associated with anything the phony follower is associated with, such as followers of the phony followers, therefore any input data re the phony follower is data associated with followers of the creator; regarding features and by estimating values of a first plurality of features: § 6.1, Tables 6-7, § 6.4, features are also referred to as "identifiers/indicators" in Abstract, §§ 1, 5.3)
…
wherein activities of the creator are determined to be fake based on the estimated values of the first plurality of features …; (Abstract, §§ 1, 6-6.3, Table 8, Fig. 14, § 7 (¶¶ 1, 2), system identifies follower's behavior (activity of creator) as suspicious/phony/purchased/fake)
Aggarwal does not explicitly disclose but Satya teaches:
execute a second one of the at least one machine learning model (§ 6 "machine learning algorithms") configured to receive, as inputs, data associated with the creator and predict, as output, bought likes by estimating values of the second plurality of features, (Abstract, §§ 1, 3, 6, Tables 4-6, e.g., Abstract "To uncover fake Likes in online social networks (predict, as output, bought likes), we: (1) first collect a substantial number of profiles of both fake and legitimate Likers (receive, as inputs, data associated with the creator) using linkage and honeypot approaches, (2) analyze the characteristics of both types of Likers, (3) identify effective features (a second plurality of features) exploiting the learned characteristics and apply them in supervised learning models, and (4) thoroughly evaluate their performances against three baseline methods and under two attack models. Our experimental results show that our proposed methods with effective features significantly outperformed baseline methods, with accuracy = 0.871, false positive rate = 0.1, and false negative rate = 0.14."; regarding receive, as inputs, data associated with the creator and features: see also § 3 "DATA COLLECTION To build an accurate classification model, the first challenge is to obtain a good size of labeled training data."; §6 "we … collected temporal data of 1,400 (i.e., 700 fake and 700 legitimate Likers) … we extracted 30 temporal, 1,079 spatial/categorical and 30 spatio-temporal/category-temporal feature values from each Liker’s profile in both training and test sets … the entire dataset containing the profiles of 13,147 users …proposed 16 features …"; regarding predict, as output, bought likes: see also, e.g., §§ 1 (e.g., "Problem 1 (Fake Liker Detection Problem)"), 6, Tables 4-6)
wherein activities (activity of liking another user) of the creator are determined to be fake based on … the estimated values of the second plurality of features. (Abstract, §§ 1, 5, 6, Tables 3, 6 )
Note, alternatively to Aggarwal, Satya also teaches:
train at least one machine learning model using a first labeled dataset of creators characterized by fake activity and a second labeled dataset of creators characterized by non-fake activity; (§ 3, e.g., "labeled training data"; Table 1, "labeled datasets" include dataset of "fake likers," etc. (first labeled dataset of creators characterized by fake activity) and dataset of "legit likers," etc. (a second labeled dataset of creators characterized by non-fake activity))
identify a first plurality of features from profile data and a second plurality of features from post engagement data; (§ 5, "1. Profile Features," "2. Posting Activity Features")
It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified Aggarwal's systems and methods for detecting fake followers, by incorporating therein these teachings of Satya regarding detecting fake likes, because like fake followers, fake likes undermine the integrity of social media entities, which integrity is needed to maintain a good reputation and influence, e.g., for marketing and social purposes, see Aggarwal, Abstract, § 1, see Satya, Abstract, § 1.
Aggarwal teaches model evaluation using various standard metrics, including area under the curve (AUC) (§ 6.3), but provides only a brief account of that evaluation.
Thus, Aggarwal in view of Satya does not explicitly disclose but Cormack teaches:
validate outputs of the at least one machine learning model by determining precision, recall, and an area under a receiver operating characteristic curve; terminate the training when the precision, recall, and area under the receiver operating characteristic curve reach predefined threshold values; (0176-0179, 0181, 0183, e.g., "Therefore, it is important to determine (e.g., calculate) a metric that would indicate to user 210 when active learning or training may stop. Calculation of such a metric may include finding one more appropriate threshold values for classes 130 or subclass 140 that allows the classification system 100 to meet a predefined criterion. … any threshold meeting the acceptable level may be used." (0177), "When the estimation of precision and/or recall reach a pre-determined level (e.g., an acceptable minimum level of F1 for the iteration) classification system 100 may indicate to user 210 that further human review [training] is not required." (0178), "As a further alternative or supplement, the classification system may request any effectiveness measure. Classification system 100 may request an effectiveness measure similarly to how classification system may request a measure of precision or recall. … Such measures include recall, precision, …. In addition, graphs indicating the tradeoff between recall and precision, such as recall-precision curves, receiver operating characteristic (ROC) curves, and gain curves may be presented to the user, so as to track the effectiveness that has been achieved or could be achieved, for a given amount of review effort." (0181))
It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Aggarwal's systems and methods for detecting fake followers, as modified by Satya's teachings regarding detecting fake likes, by incorporating therein these teachings of Cormack regarding validating a machine learning model, because validating a machine learning model serves to ensure the quality/accuracy of the model and its output/results, thereby improving performance, and because, as explained above, Aggarwal teaches model evaluation, so Cormack's teachings align with Aggarwal's approach, but since model evaluation is not Aggarwal's main focus, Aggarwal does not provide a comprehensive account of it; thus, Cormack's teachings provide a more comprehensive and detailed account of validation, supplying implementation detail that improves upon the brief evaluation Aggarwal teaches.
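As an illustration of the validate-and-terminate behavior at issue, consider the following minimal sketch. It assumes the scikit-learn API and uses hypothetical data and threshold values; it is not asserted to be the method of any cited reference or of the claims.

```python
# Hypothetical sketch: validate a classifier with precision, recall, and ROC AUC,
# and stop training once all three metrics reach predefined thresholds.
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import precision_score, recall_score, roc_auc_score

X_train = [[0.9, 120], [0.1, 3], [0.8, 95], [0.2, 7], [0.7, 60], [0.3, 9]]
y_train = [1, 0, 1, 0, 1, 0]                      # 1 = fake activity, 0 = non-fake
X_val, y_val = [[0.85, 100], [0.15, 4]], [1, 0]   # held-out validation data

PRECISION_T = RECALL_T = AUC_T = 0.9              # predefined thresholds (hypothetical)

model = SGDClassifier(loss="log_loss", random_state=0)
for epoch in range(100):
    model.partial_fit(X_train, y_train, classes=[0, 1])   # one incremental step
    y_pred = model.predict(X_val)
    p = precision_score(y_val, y_pred, zero_division=0)
    r = recall_score(y_val, y_pred, zero_division=0)
    auc = roc_auc_score(y_val, model.decision_function(X_val))
    if p >= PRECISION_T and r >= RECALL_T and auc >= AUC_T:
        break                                             # terminate the training
```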
Aggarwal teaches a classifier that distinguishes between fake and legitimate creators (e.g., §§ 1, 6.2). Based on various features (e.g., related to profile, behavior, content and network) (e.g., § 6.1, Table 6), Aggarwal classifies some creators as suspicious (classify creators into a risk category) (e.g., §§ 1, 6.2) and teaches suspension or deletion of such accounts (control access to platform resources based on the classification) (§ 7). Thus, Aggarwal teaches the bulk of the following limitations (the last two steps of claim 9). However, Aggarwal does not explicitly disclose that the classification is based on a percentage of posts identified as fake.
While Aggarwal in view of Satya and Cormack does not explicitly disclose the following limitations in their entirety, Sutton teaches them:
classify creators into a risk category based on a percentage of posts identified as fake; and (0016-0017 "In some embodiments of the technology, the inference engine 112 builds a list of social accounts that are known to publish social data that are classified as “spam,” e.g., undesirable content. The inference engine 112 evaluates the percentage of overall posts and comments seen from a first social account that were classified as spam, and the number of additional social accounts (e.g., a second social account, a third social account, etc.) on which that spam content was posted. If the percentage of posts classified as spam and/or the number of social accounts the spam content was posted surpass specified (e.g., pre-determined) thresholds [based on a percentage of posts identified as fake], it is determined that there exists an increased likelihood that the next posting of social data published from the first social account will also be spam. [classify creators into a risk category] [0017] The inference engine 112 is configured to instruct the classification engine 114 to take one or more actions, and to apply new rules to all social data classified by the system 106 regardless of which social account published the data and/or regardless of which social account the data was published to. In several embodiments, the inference engine 112 can inform the classification engine 114 to (1) consider all social data subsequently posted by the first social account as inappropriate; (2) change sensitivity to spam classification on subsequent social data posted by the first social account; …" -- (1) and (2) amount to designating the first social account as risky and accordingly teach classify creators into a risk category.")
control access to platform resources based on the classification. (0025-0026, e.g., 0025 "If the system classifies a social data entry as a content incident (e.g., flags a comment as spam, abusive speech, etc.), [based on the classification] a user or social media account owner can interface with the system at user interface 400 (FIG. 4) to specify which action, if any, the user would like to assign to flagged social data entry. For example, the user can take actions with regard to the commentator or social media participant (e.g., add commenter to a watch list, add commenter to a block list [control access to platform resources]), or can take action with respect to a particular comment or social data entry (e.g., post response, remove response, ignore incident, ignore all similar future incidents, etc.)."; 0026, following the user's action as described in 0025, "inference engine 112 can [apply such rules when] subsequent social data classifications are made by the system" (control access to platform resources); Fig. 3 (block users on spam list; add commenter to blocked or watched commenters list) (control access to platform resources); Fig. 4 ("Add to Commenter Watch List", "Add to Commenter Block List") (control access to platform resources); see also claims 1-2, 4-8 (e.g., claims 1, 2, 6 block entity from posting based on classifying social data as spam) (control access to platform resources based on the classification))
It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Aggarwal's systems and methods for detecting fake followers, as modified by Satya's teachings regarding detecting fake likes, and as further modified by Cormack's teachings regarding validating a machine learning model, by incorporating therein these teachings of Sutton regarding classifying creators into a risk category based on a percentage of fake posts and controlling access to resources based on the classification, because it would prevent fake creators/accounts from influencing other users and would suppress spam/malicious behavior on social media, see Sutton, 0005-0006, Aggarwal, § 7, and because, as explained above, Aggarwal teaches the bulk of these teachings of Sutton, so Sutton's teachings align with Aggarwal's approach but merely substitute a different risk factor (namely, fake posts) for Aggarwal's features, as the grounds for making the classification; Sutton's risk factor is comparable to Aggarwal's features and serves the same purpose, so this substitution amounts to "(B) Simple substitution of one known element for another to obtain predictable results." MPEP 2143.I.
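By way of illustration only, the classify-then-gate logic of these two limitations can be expressed as follows. This is a hypothetical sketch with invented names and threshold values; it is not drawn from Sutton or from Applicant's disclosure.

```python
# Hypothetical sketch: risk category from the percentage of posts identified as
# fake, and access to platform resources gated on that classification.
def classify_risk(fake_posts: int, total_posts: int,
                  high_pct: float = 0.30, medium_pct: float = 0.10) -> str:
    """Assign a risk category based on the percentage of posts identified as fake."""
    pct_fake = fake_posts / total_posts if total_posts else 0.0
    if pct_fake >= high_pct:
        return "high"
    return "medium" if pct_fake >= medium_pct else "low"

def allowed_resources(risk_category: str) -> set:
    """Control access to platform resources based on the classification."""
    if risk_category == "high":
        return set()                                   # e.g., suspend/block account
    if risk_category == "medium":
        return {"view_analytics"}                      # restricted access only
    return {"view_analytics", "run_campaigns", "export_reports"}

# Example: 4 of 10 posts flagged as fake -> "high" risk -> no resource access.
resources = allowed_resources(classify_risk(fake_posts=4, total_posts=10))
```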
Regarding Claim 10
Aggarwal in view of Satya, Cormack and Sutton teaches the limitations of base claim 9 as set forth above. Aggarwal further teaches:
wherein the first plurality of features is based on profile information of the creator. (§ 6.1, Table 6, A (User Profile))
Regarding Claim 11
Aggarwal in view of Satya, Cormack and Sutton teaches the limitations of base claim 9 as set forth above. Aggarwal further teaches:
wherein the first plurality of features comprise at least one of a ratio of a number of individuals who are following the creator relative to a number of individuals that are being followed by the creator, a length of a description associated with the creator, a username of the creator, a full name of the creator, differences between the full name and the username, a number of digits in the username, a function of a number of posts to the number of individuals who are following the creator, and a function of the number of posts to the number of individuals who being following by the creator. (§§ 5.3.2, 6.1, Table 6, B Follower-Followee Ratio, aka follower-friends ratio since friend = followee (a ratio of a number of individuals who are following the creator relative to a number of individuals that are being followed by the creator))
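For illustration only, two of the claimed profile features can be computed as in the following sketch. The helper functions are hypothetical; the definitions track the claim language and Aggarwal's follower-followee ratio but are not taken from any cited reference.

```python
# Hypothetical sketch of two claimed profile features.
def follower_followee_ratio(num_followers: int, num_followees: int) -> float:
    """Ratio of accounts following the creator to accounts the creator follows."""
    return num_followers / num_followees if num_followees else 0.0

def digits_in_username(username: str) -> int:
    """Number of digit characters in the username, e.g., 'user48291' -> 5."""
    return sum(ch.isdigit() for ch in username)

# Example: a very low ratio and a digit-heavy username are plausible fake signals.
ratio = follower_followee_ratio(num_followers=15, num_followees=4200)   # ~0.0036
digits = digits_in_username("realperson48291")                          # 5
```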
Regarding Claim 12
Aggarwal in view of Satya, Cormack and Sutton teaches the limitations of base claim 9 as set forth above. Satya further teaches:
wherein the second plurality of features are derived from information indicative of posts of the creator. (§ 5 "2. Posting Activity Features")
It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Aggarwal's systems and methods for detecting fake followers, as modified by Satya's teachings regarding detecting fake likes, as further modified by Cormack's teachings regarding validating a machine learning model, and as further modified by Sutton's teachings regarding classifying creators into a risk category based on a percentage of fake posts and controlling access to resources based on the classification, by incorporating therein these further teachings of Satya regarding the content of features for detecting fake likes, because the content of features taught by Satya has proven to be successful at detecting fake likes, see Satya, Abstract, §§ 1, 5, 6, and because Aggarwal endorses the use of such content in the context of detecting fake followers, see Aggarwal, §§ 5, 6.
Regarding Claim 13
Aggarwal in view of Satya, Cormack and Sutton teaches the limitations of base claim 9 as set forth above. Satya further teaches:
wherein the second plurality of features comprise at least one of a mean of a number of likes or comments on posts of the creator to the at least one social media network, a median of the number of likes or comments on the posts of the creator to the at least one social media network, and deviations of a distribution of the number of likes or comments on the posts of the creator to the at least one social media network. (§ 5 "4. Social Attention Features: average [mean] # of Likes … per post; average [mean] # of comments … per post")
It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Aggarwal's systems and methods for detecting fake followers, as modified by Satya's teachings regarding detecting fake likes, as further modified by Cormack's teachings regarding validating a machine learning model, and as further modified by Sutton's teachings regarding classifying creators into a risk category based on a percentage of fake posts and controlling access to resources based on the classification, by incorporating therein these further teachings of Satya regarding the content of features for detecting fake likes, because the content of features taught by Satya has proven to be successful at detecting fake likes, see Satya, Abstract, §§ 1, 5, 6.
Regarding Claim 14
Aggarwal in view of Satya, Cormack and Sutton teaches the limitations of base claim 9 as set forth above. Satya further teaches:
wherein the second plurality of features comprise at least one of a mean of a frequency of posting by the creator to the at least one social media network, a median of a frequency of posting by the creator to the at least one social media network, and deviations of a distribution of frequencies of posting by the creator to the at least one social media network. (§ 5 "2. Posting Activity Features: … average # of posts per day; … maximum # of posts in a day")
It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Aggarwal's systems and methods for detecting fake followers, as modified by Satya's teachings regarding detecting fake likes, as further modified by Cormack's teachings regarding validating a machine learning model, and as further modified by Sutton's teachings regarding classifying creators into a risk category based on a percentage of fake posts and controlling access to resources based on the classification, by incorporating therein these further teachings of Satya regarding the content of features for detecting fake likes, because the content of features taught by Satya has proven to be successful at detecting fake likes, see Satya, Abstract, §§ 1, 5, 6.
Regarding Claim 18
Aggarwal in view of Satya, Cormack and Sutton teaches the limitations of base claim 9 as set forth above.
Satya further teaches:
… second …; (§ 6 "machine learning algorithms")
Aggarwal teaches model evaluation using various standard metrics including area under the curve (AUC) (§ 6.3), but does not explicitly disclose a precision or recall metric or that the area under the curve is the area under a receiver operating characteristic curve. However, Cormack further teaches:
wherein said … one of the at least one machine learning model is validated by computing areas under a receiver operating characteristic curve or by using a precision metric and/or a recall metric. (0181 "As a further alternative or supplement, the classification system may request any effectiveness measure. … Such measures include recall, precision, …. In addition, graphs indicating the tradeoff between recall and precision, such as recall-precision curves, receiver operating characteristic (ROC) curves, and gain curves may be presented to the user, so as to track the effectiveness that has been achieved or could be achieved, for a given amount of review effort.")
Regarding Claim 21
Aggarwal in view of Satya, Cormack and Sutton teaches the limitations of base claim 9 as set forth above. Aggarwal further teaches:
wherein the first one of the at least one machine learning model comprises at least one of a deep feed forward network, a perceptron network, a feed forward network, a radial basis network, a recurrent neural network, a long term memory network, a short term memory network, a gated recurrent unit network, an auto encoder network, a variational auto encoder network, a denoising auto encoder network, a sparse auto encoder network, a Markov chain network, a Hopfield network, a Boltzmann machine network, a restricted Boltzmann machine network, a deep belief network, a deep convolutional network, a deconvolutional network, a deep convolutional inverse graphics network, a generated adversarial network, a liquid state machine, an extreme learning machine, an echo state network, a deep residual network, a Kohonen network, a support vector machine network, a neural Turing machine network, a convolutional neural network with transfer learning network, deep learning feed-forward network, or convolutional neural network. (§ 6.2, Table 7 SVM (a support vector machine network))
Regarding Claim 22
Aggarwal in view of Satya, Cormack and Sutton teaches the limitations of base claim 9 and intervening claim 21 as set forth above. Satya further teaches:
wherein the second machine learning model comprises at least one of a deep feed forward network, a perceptron network, a feed forward network, a radial basis network, a recurrent neural network, a long term memory network, a short term memory network, a gated recurrent unit network, an auto encoder network, a variational auto encoder network, a denoising auto encoder network, a sparse auto encoder network, a Markov chain network, a Hopfield network, a Boltzmann machine network, a restricted Boltzmann machine network, a deep belief network, a deep convolutional network, a deconvolutional network, a deep convolutional inverse graphics network, a generated adversarial network, a liquid state machine, an extreme learning machine, an echo state network, a deep residual network, a Kohonen network, a support vector machine network, a neural Turing machine network, a convolutional neural network with transfer learning network, deep learning feed-forward network, or convolutional neural network. (§ 6 SVM (a support vector machine network))
It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Aggarwal's systems and methods for detecting fake followers, as modified by Satya's teachings regarding detecting fake likes, as further modified by Cormack's teachings regarding validating a machine learning model, and as further modified by Sutton's teachings regarding classifying creators into a risk category based on a percentage of fake posts and controlling access to resources based on the classification, by incorporating therein these further teachings of Satya regarding using an SVM as the type of machine learning model for detecting fake likes, because Aggarwal already uses an SVM for detecting fake followers (Aggarwal § 6.2), because both Aggarwal and Satya recognize that detecting fake followers is similar to detecting fake likes (Aggarwal § 1, Satya § 1), and because the SVM has proven to be successful at detecting fake likes, see Satya, Abstract, §§ 1, 6.
Regarding Claim 23
Aggarwal in view of Satya, Cormack and Sutton teaches the limitations of base claim 9 and intervening claims 21-22 as set forth above. Aggarwal and Satya collectively further teach:
wherein the first one of the at least one machine learning model is different from the second one of the at least one machine learning model. (Aggarwal teaches the first machine learning model (Abstract and § 1 "supervised learning model/mechanism," § 6 "supervised predictive model," § 6.2, Table 7, "SVM"); Satya teaches the second machine learning model (§ 6 "machine learning algorithms"); insofar as Satya's machine learning model performs a task (detecting fake likes/likers) different from Aggarwal's machine learning model (detecting fake followers), and is taught in a reference different from that in which Aggarwal's machine learning model is taught, Aggarwal's machine learning model (the first machine learning model) is different from Satya's machine learning model (the second machine learning model))
It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Aggarwal's systems and methods for detecting fake followers, as modified by Satya's teachings regarding detecting fake likes, as further modified by Cormack's teachings regarding validating a machine learning model, and as further modified by Sutton's teachings regarding classifying creators into a risk category based on a percentage of fake posts and controlling access to resources based on the classification, by incorporating therein these further teachings of Satya regarding using a machine learning model specifically for detecting fake likes, because logically speaking it would be simpler to use a separate machine learning model for each separate task, respectively (i.e., a separate machine learning model for Satya's task of detecting fake likes and a separate machine learning model for Aggarwal's task of detecting fake followers), rather than engineering a single machine learning model that could perform two different tasks (i.e., a single machine learning model for performing both Aggarwal's task of detecting fake followers and Satya's task of detecting fake likes).
Regarding Claim 24
Aggarwal in view of Satya, Cormack and Sutton teaches the limitations of base claim 9 as set forth above. Satya further teaches:
wherein the second plurality of features comprise a function of a number of likes or comments on the creator's posts to the at least one social media network and/or a function of a frequency of the creator's posting to the at least one social media network. (§ 5 "4. Social Attention Features: average [mean] # of Likes … per post; average [mean] # of comments … per post" (a function of a number of likes or comments on the creator's posts to the at least one social media network); § 5 "2. Posting Activity Features: … average # of posts per day; … maximum # of posts in a day" (a function of a frequency of the creator's posting to the at least one social media network))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Aggarwal's systems and methods for detecting fake followers, as modified by Satya's teachings regarding detecting fake likes, as further modified by Cormack's teachings regarding validating a machine learning model, and as further modified by Sutton's teachings regarding classifying creators into a risk category based on a percentage of fake posts and controlling access to resources based on the classification, by incorporating therein these further teachings of Satya regarding the content of features for detecting fake likes, because the content of features taught by Satya has proven successful at detecting fake likes, see Satya, Abstract, §§ 1, 5, 6.
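For illustration, the following minimal sketch (assuming the pandas library; the column names and values are hypothetical stand-ins for Satya's § 5 feature definitions) computes social-attention and posting-activity features of the kind recited:

```python
# Minimal, hypothetical sketch of Satya-style features: mean likes/comments per post
# (social attention) and average/maximum posts per day (posting activity).
import pandas as pd

posts = pd.DataFrame({
    "creator":  ["a", "a", "a", "b", "b"],
    "date":     pd.to_datetime(["2021-01-01", "2021-01-01", "2021-01-02",
                                "2021-01-01", "2021-01-03"]),
    "likes":    [10, 4, 7, 250, 300],
    "comments": [1, 0, 2, 40, 55],
})

per_creator = posts.groupby("creator")
features = pd.DataFrame({
    "mean_likes_per_post":    per_creator["likes"].mean(),      # social attention
    "mean_comments_per_post": per_creator["comments"].mean(),   # social attention
})
posts_per_day = posts.groupby(["creator", "date"]).size()
features["avg_posts_per_day"] = posts_per_day.groupby("creator").mean()  # posting activity
features["max_posts_per_day"] = posts_per_day.groupby("creator").max()   # posting activity
print(features)
```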
Claims 16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal et al. ("What They Do in Shadows: Twitter Underground Follower Market"), hereafter Aggarwal, in view of Satya et al. ("Uncovering Fake Likers in Online Social Networks"), hereafter Satya, further in view of Cormack et al. (U.S. Patent Application Publication No. 2014/0280238 A1), hereafter Cormack, further in view of Sutton et al. (U.S. Patent Application Publication No. 2015/0052138 A1), hereafter Sutton, and further in view of Yamamoto (U.S. Patent Application Publication No. 2017/0228651 A1).
Regarding Claim 16
Aggarwal in view of Satya, Cormack and Sutton teaches the limitations of base claim 9 as set forth above. Aggarwal further teaches:
... fake activity or non-fake activity. (Abstract, §§ 1, 6.2-6.3, Table 8, Fig. 14, § 7 (¶¶ 1, 2), system identifies follower's behavior (activity of creator) as suspicious/phony/purchased/fake)
Aggarwal in view of Satya, Cormack and Sutton does not explicitly disclose but Yamamoto teaches:
wherein said first one of the at least one machine learning model is further validated by modulating said precision to be higher with a lower amount of said recall or to be lower with a higher amount of said recall based on a desired level of accuracy for the first one of the at least one machine learning model's predictions of …. (0080; as per Abstract, 0002-0004, 0013-0014, 0078, the discussion of 0080 refers to a computerized, feature-driven predictive/classification model, which is trained/adjusted with feedback of results to improve/update the model, i.e., a machine learning model)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Aggarwal's systems and methods for detecting fake followers, as modified by Satya's teachings regarding detecting fake likes, as further modified by Cormack's teachings regarding validating a machine learning model, and as further modified by Sutton's teachings regarding classifying creators into a risk category based on a percentage of fake posts and controlling access to resources based on the classification, by incorporating therein these teachings of Yamamoto regarding evaluating a model based on precision and recall and adjusting either metric to optimize one at the expense of the other, because doing so affords the user the option to optimize the appropriate metric of success, depending on what is suitable for the context or problem at hand, see Yamamoto, 0080.
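For illustration, the following minimal sketch (assuming scikit-learn; the classifier and data are hypothetical stand-ins for Yamamoto's model) shows the modulation for which Yamamoto 0080 is cited: raising the decision threshold yields higher precision at the cost of recall, and lowering it does the opposite:

```python
# Minimal, hypothetical sketch of trading precision against recall by moving the
# decision threshold of a scored classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)  # 1 = fake activity (synthetic)

clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]

# Higher threshold -> fewer positive calls -> higher precision, lower recall.
for threshold in (0.3, 0.5, 0.7):
    pred = (scores >= threshold).astype(int)
    print(f"threshold={threshold}: "
          f"precision={precision_score(y, pred):.2f}, "
          f"recall={recall_score(y, pred):.2f}")
```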
Regarding Claim 19
Aggarwal in view of Satya, Cormack and Sutton teaches the limitations of base claim 9 and intervening claim 18 as set forth above. Aggarwal further teaches:
... fake activity or non-fake activity. (Abstract, §§ 1, 6.2-6.3, Table 8, Fig. 14, § 7 (¶¶ 1, 2), system identifies follower's behavior (activity of creator) as suspicious/phony/purchased/fake)
Aggarwal in view of Satya, Cormack and Sutton does not explicitly disclose but Yamamoto teaches:
wherein said second one of the at least one machine learning model is further validated by modulating the precision metric to be higher with a lower amount of the recall metric or to be lower with a higher amount of the recall metric based on a desired level of accuracy for the second one of the at least one machine learning model's predictions of …. (0080; as per Abstract, 0002-0004, 0013-0014, 0078, the discussion of 0080 refers to a computerized, feature-driven predictive/classification model, which is trained/adjusted with feedback of results to improve/update the model, i.e., a machine learning model)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Aggarwal's systems and methods for detecting fake followers, as modified by Satya's teachings regarding detecting fake likes, as further modified by Cormack's teachings regarding validating a machine learning model, and as further modified by Sutton's teachings regarding classifying creators into a risk category based on a percentage of fake posts and controlling access to resources based on the classification, by incorporating therein these teachings of Yamamoto regarding evaluating a model based on precision and recall and adjusting either metric to optimize one at the expense of the other, because doing so affords the user the option to optimize the appropriate metric of success, depending on what is suitable for the context or problem at hand, see Yamamoto, 0080.
Claims 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal et al. ("What They Do in Shadows: Twitter Underground Follower Market"), hereafter Aggarwal, in view of Satya et al. ("Uncovering Fake Likers in Online Social Networks"), hereafter Satya, further in view of Cormack et al. (U.S. Patent Application Publication No. 2014/0280238 A1), hereafter Cormack, further in view of Sutton et al. (U.S. Patent Application Publication No. 2015/0052138 A1), hereafter Sutton, and further in view of Merrill et al. (U.S. Patent Application Publication No. 2019/0043070 A1), hereafter Merrill.
Regarding Claim 17
Aggarwal in view of Satya, Cormack and Sutton teaches the limitations of base claim 9 as set forth above. Aggarwal in view of Satya, Cormack and Sutton does not explicitly disclose but Merrill teaches:
wherein estimated values of the first plurality of features comprise Shapley values. (0269, 0299, 0306, 0326)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Aggarwal's systems and methods for detecting fake followers, as modified by Satya's teachings regarding detecting fake likes, as further modified by Cormack's teachings regarding validating a machine learning model, and as further modified by Sutton's teachings regarding classifying creators into a risk category based on a percentage of fake posts and controlling access to resources based on the classification, by incorporating therein these teachings of Merrill regarding Shapley values, because Shapley values can determine the contribution of each variable (feature) to a machine learning model's predictions, can identify which explanatory variables (features) contribute the most to those predictions, and can therefore improve the efficacy of prediction tasks using machine learning, see Merrill, 0299-0300, 0306.
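For illustration, the following minimal sketch (assuming the open-source shap package as a stand-in; Merrill's own implementation is not reproduced) computes Shapley values that apportion each prediction among the input features:

```python
# Minimal, hypothetical sketch of Shapley-value feature attribution using the
# `shap` package; the model and data are synthetic stand-ins.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))            # 4 hypothetical features
y = (X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Shapley values distribute each prediction among the features, revealing which
# features contribute most to the model's output.
explainer = shap.Explainer(model)
explanation = explainer(X[:10])
print(np.abs(explanation.values).mean(axis=0))  # mean |contribution| per feature
```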
Regarding Claim 20
Aggarwal in view of Satya, Cormack and Sutton teaches the limitations of base claim 9 as set forth above. Aggarwal in view of Satya, Cormack and Sutton does not explicitly disclose but Merrill teaches:
wherein estimated values of the second plurality of features comprise Shapley values. (0269, 0299, 0306, 0326)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Aggarwal's systems and methods for detecting fake followers, as modified by Satya's teachings regarding detecting fake likes, as further modified by Cormack's teachings regarding validating a machine learning model, and as further modified by Sutton's teachings regarding classifying creators into a risk category based on a percentage of fake posts and controlling access to resources based on the classification, by incorporating therein these teachings of Merrill regarding Shapley values, because Shapley values can determine the contribution of each variable (feature) to a machine learning model's predictions, can identify which explanatory variables (features) contribute the most to those predictions, and can therefore improve the efficacy of prediction tasks using machine learning, see Merrill, 0299-0300, 0306.
Conclusion
The prior art made of record and not relied upon, as set forth in the accompanying Notice of References Cited (PTO-892), is considered pertinent to applicant's disclosure.
Farooqi ("Characterizing Key Stakeholders in an Online Black-Hat Marketplace") provides a micro-economic analysis of a particular black-hat marketplace, which, inter alia, sells followers and likes, analyzing the selling and buying behavior of the site's users, in particular its key users, in order to develop countermeasures.
Mehrotra ("Detection of Fake Twitter Followers using Graph Centrality Measures") teaches detection of fake social media followers using Decision Tree and Random Forest classifiers implemented on an ANN, using graph centrality features (features pertaining to the centrality of a node in a graph of nodes and edges) rather than platform-specific features such as followers, posts, etc. Mehrotra also provides a survey of the literature pertaining to detecting fake followers.
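For illustration, the following minimal sketch (assuming the networkx library; the toy follower graph is hypothetical) computes graph-centrality features of the kind Mehrotra describes:

```python
# Minimal, hypothetical sketch of graph-centrality features over a follower graph;
# an edge u -> v means u follows v.
import networkx as nx

G = nx.DiGraph([("a", "b"), ("c", "b"), ("d", "b"), ("b", "a"), ("d", "a")])

in_deg = nx.in_degree_centrality(G)   # how widely followed a node is
btw = nx.betweenness_centrality(G)    # how often a node bridges other nodes
pr = nx.pagerank(G)                   # influence under a random-walk model

features = {node: (in_deg[node], btw[node], pr[node]) for node in G}
print(features["b"])
```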
Cresci ("Fame for sale: Efficient detection of fake Twitter followers") teaches detecting fake Twitter followers, considering/comparing a variety of datasets, features, classifiers/models, and model evaluations, as well as providing a literature review pertaining to detecting fake followers.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOUGLAS W PINSKY, whose telephone number is (571) 272-4131. The examiner can normally be reached between 8:30 am and 5:30 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jessica Lemieux, can be reached at 571-270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DWP/
Examiner, Art Unit 3626
/JESSICA LEMIEUX/Supervisory Patent Examiner, Art Unit 3626
1 Note: the Examiner responds to Applicant's arguments even though, in some cases (as here), they fall outside the scope of the particular stage of analysis (e.g., Step 2A, Prong One) in which they are presented.
2 See footnote 1, in the "Step 2A, Prong One" section, above.
3 Note this heading does not appear in Applicant's Response. In the Response, the Step 2B arguments are included at the end of the "Step 2A, Prong Two" section.