DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1, 4-10, 13-19, and 21-25 as presented on 10/16/2025 are pending and are examined herein.
Claims 1, 4-10, 13-19, and 21-25 are rejected under 35 U.S.C. 101.
Claims 1, 4-10, 13-19, and 21-25 are rejected under 35 U.S.C. 103.
Claims 22, 24, and 25 are not rejected under 35 U.S.C. 103.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/16/2025 has been entered.
Response to Arguments
Applicant’s arguments filed 10/16/2025 regarding the rejection under 35 U.S.C. 101 have been fully considered, but are not persuasive. Applicant argues that the claims are eligible because they reflect an improvement to technology. Applicant argues that the claims specify a particular way of achieving a desired outcome as evidence of the improvement to technology. Examiner respectfully disagrees. To the extent that the claims reflect an improvement, it is an improvement to an abstract idea. While the claim recites a particular sequence of steps, many of the steps are part of the abstract idea and the additional elements are insufficient to integrate the abstract idea into a practical application or amount to significantly more than the abstract idea for the reasons given in the rejection.
Applicant further argues that the claim recites steps which are not practical to perform in the human mind. Examiner respectfully disagrees that the claims are eligible. The rejection did not identify the entire block of limitations identified by Applicant as being wholly practical to perform in the human mind. The rejection explains which portions were practical to perform in the human mind and why the portions which were not fail to make the claim eligible. Applicant would need to argue the rejection more particularly to be persuasive.
Applicant requests an indication of allowable subject matter. If any of claims 22, 24, or 25 were amended so as not to be properly rejectable under any statutory basis, those claims would be allowable.
Applicant’s arguments filed 10/16/2025 regarding the rejection under 35 U.S.C. 103 have been fully considered, but are moot in view of the new grounds of rejection presented herein.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 4-10, 13-19 and 21-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1
Each of the claims falls within one of the four statutory categories.
Step 2
The examiner has reviewed the claims and identified several abstract ideas which require Step 2 analysis. To perform the Step 2 analysis, the claims are shown with the abstract ideas indicated in bold and additional elements indicated as non-bolded.
Regarding claim 1, claim 1 recites a computer-implemented method of ranking entity objects, the method comprising:
receiving, at a cloud server over a network, a request from a client device associated with a source entity for ranking a plurality of target entities related to the source entity, wherein the plurality of target entities include two or more existing target entities and a potential target entity and each of the source entity and the target entities is associated with a user group;
in response to the request, accessing a task database system via a first application programming interface (API) to identify a plurality of target entity objects corresponding to the plurality of target entities;
for each of the target entity objects,
accessing a data source via a second API to retrieve a first set of metadata associated with the target entity object, the first set of metadata describing the target entity perceived from other entities and generated by the data source,
retrieving a second set of metadata from the task database system via the first API, the second set of metadata describing one or more tasks collaboratively performed between the source entity and the target entity,
extracting a first set of features from the first set of metadata and extracting a second set of features from the second set of metadata,
applying a first machine-learning (ML) model to the first set of features to determine a first score representing a degree of how valuable the target entity is perceived by the source entity;
applying a second ML model to the first set of features and the second set of features to determine a second score representing a likelihood the target entity will perform a task collaboratively with the source entity within a predetermined time period, wherein the second score for the potential target entity is based on similarity scores between technological profiles of the potential target entity and each of the two or more existing target entities, and
generating an entity score for the target entity based on the first score and the second score, wherein the entity score represents a degree of relevancy between the source entity and the target entity;
ranking the plurality of target entities based on their respective entity scores; and
transmitting ranking information of at least a portion of the ranked target entities to the client device over the network.
Step 2A Prong 1: Claim 1 recites an abstract idea comprising concepts of organizing or analyzing information in a way that can be performed mentally or is analogous to human mental work. Identifying a plurality of target entity objects, extracting features, determining a first score, determining a second score, generating an entity score, and ranking the plurality of target entities based on their respective entity scores are mental processes involving comparison, analysis, judgment, and opinion that may be performed by the human mind through observation and laid out with pen and paper. (MPEP § 2106.04(a)(2)).
Step 2A Prong 2: The additional elements of receiving … a request from a client device, retrieving a second set of metadata from the task database, and transmitting ranking information of at least a portion of the ranked target entities simply add the insignificant extra-solution activity of data gathering and outputting to the judicial exception, as discussed in MPEP § 2106.05(g), and therefore do not integrate the judicial exception into a practical application.
The additional elements of “accessing a task database system via a first application … accessing a data source via a second API” simply add the insignificant extra-solution activity of data gathering and outputting to the judicial exception, as discussed in MPEP § 2106.05(g), and therefore do not integrate the judicial exception into a practical application. The additional element of “applying a machine-learning (ML) model to the first set of features” is a mere instruction to ‘apply’ the abstract idea on a generic computer, using the computer as a tool to perform the abstract idea (MPEP § 2106.05(f)). It does not improve the function or operation of the computer, and accordingly, the additional elements do not integrate the judicial exception into a practical application.
Step 2B: The additional elements of receiving … a request from a client device, retrieving a second set of metadata from the task database, and transmitting ranking information of at least a portion of the ranked target entities encompass the well-understood, routine, and conventional activity of receiving and transmitting data over a network (MPEP § 2106.05(d)(i)), and do not amount to significantly more than the judicial exception.
The additional elements of accessing a task database system via a first API and accessing a data source via a second API are directed to the well-understood and routine activity of storing and retrieving information in memory, and do not amount to significantly more than the judicial exception. MPEP § 2106.05(d)(iv).
The additional element of “applying a machine-learning (ML) model to the first set of features” is a mere instruction to ‘apply’ the abstract idea on a generic computer, using the computer as a tool to perform the abstract idea (MPEP § 2106.05(f)). Accordingly, it does not amount to significantly more than the judicial exception.
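For illustration only, the score combination and ranking recited in claim 1 (and narrowed by claim 9 to a product of the two scores) can be sketched as follows. All names and values are hypothetical and do not reflect Applicant's disclosed implementation; the sketch merely shows why the examiner characterizes these steps as calculations that could be laid out with pen and paper.

```python
# Hypothetical sketch of the claimed scoring/ranking steps (illustrative only).

def entity_score(first_score: float, second_score: float) -> float:
    # Claim 9 narrows the combination to a product of the first and second scores.
    return first_score * second_score

def rank_targets(scores: dict[str, tuple[float, float]]) -> list[str]:
    # scores maps each target entity to its (first_score, second_score) pair;
    # targets are ranked by descending combined entity score.
    return sorted(scores, key=lambda t: entity_score(*scores[t]), reverse=True)

ranked = rank_targets({"A": (0.9, 0.5), "B": (0.4, 0.8), "C": (0.7, 0.7)})
# "C" (0.49) ranks above "A" (0.45) and "B" (0.32)
```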
Regarding claim 4, claim 4 recites the method of claim 1, further comprising:
selecting a predetermined number of top-ranked target entities based on their respective entity scores; and
transmitting the ranking information of the top-ranked entities to the client device to be displayed in a graphical user interface (GUI) of the client device.
Step 2A Prong 1: Claim 4 recites an abstract idea, “selecting a predetermined number of top-ranked target entities based on their respective entity scores,” which is a mental process involving comparison, analysis, judgment, and opinion that may be performed by the human mind through observation and laid out with pen and paper. (MPEP § 2106.04(a)(2)).
Step 2A Prong 2: The additional element of transmitting the ranking information simply adds the insignificant extra-solution activity of data outputting to the judicial exception, as discussed in MPEP § 2106.05(g), and therefore does not integrate the judicial exception into a practical application.
Step 2B: Transmitting the ranking information encompasses the well-understood, routine, and conventional activity of receiving and transmitting data over a network (MPEP § 2106.05(d)(i)), and does not amount to significantly more than the judicial exception.
Regarding claim 5, claim 5 recites the method of claim 1, wherein the data source includes at least one of a public firmographic database, a popularity ranking database, or a user satisfaction ranking database.
Step 2A Prong 1: Claim 5 depends from claim 1 and therefore incorporates the abstract ideas of claim 1.
Step 2A Prong 2: The additional limitation of ‘wherein the data source includes at least one of a public firmographic database, a popularity ranking database, or a user satisfaction ranking database’ merely generally links the use of the judicial exception to a particular technological environment, and therefore does not integrate the judicial exception into a practical application.
Step 2B: Carrying over the reasoning from above, limiting the data source to public firmographic and other data merely limits the field of use of the judicial exception, and does not improve the function of the computer, integrate the judicial exception into a practical application, or amount to significantly more than the judicial exception.
Regarding claim 6, claim 6 recites the method of claim 1, wherein the first set of metadata of a target entity includes at least one of a number of users within a corresponding user group of the target entity, resources used by the user group, or interactions with other entities.
Step 2A Prong 1: Claim 6 depends from claim 1 and therefore incorporates the abstract ideas of claim 1.
Step 2A Prong 2/ Step 2B: The additional element of “the first set of metadata of a target entity includes at least one of a number of users” merely limits the type of metadata retrieved during the extra-solution activity of data gathering, and does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.
Regarding claim 7, claim 7 recites the method of claim 1, wherein the second set of metadata of a target entity includes at least one of one or more prior tasks completed between the source entity and the target entity, types of the tasks completed, or subsequent activities of the prior completed tasks performed between the source entity and the target entity.
Step 2A Prong 1: Claim 7 depends from claim 1 and therefore incorporates the abstract ideas of claim 1.
Step 2A Prong 2/ Step 2B: The additional element of “the second set of metadata of a target entity includes at least one of one or more prior tasks” merely limits the type of metadata retrieved during the extra-solution activity of data gathering, and does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.
Regarding claim 8, claim 8 recites the method of claim 1, wherein the second ML model uses one or more ML algorithms, including a market basket analysis, a term frequency-inverse document frequency (TFIDF) representation, cosine similarity, decision tree, random forest, or a gradient boosting.
Step 2A Prong 1: Claim 8 recites a mathematical concept, the use of ML algorithms, analysis, and equations, and thus is directed to an abstract idea comprising mathematical concepts. MPEP § 2106.04(a).
Step 2A Prong 2/ Step 2B: Claim 8 includes no additional elements that would integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.
Regarding claim 9, claim 9 recites the method of claim 1, wherein the entity score is calculated based on a product of the first score and the second score.
Step 2A Prong 1: Claim 9 recites calculating a product of the first score and the second score, and thus is directed to the mathematical concept of multiplication. MPEP § 2106.04(a).
Step 2A Prong 2/ Step 2B: Claim 9 includes no additional elements that would integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.
Regarding claim 21, claim 21 recites the method of claim 1, wherein the similarity scores represent a similarity in technology usage between each of the at least one potential target entities and each of the at least one existing target entities.
Step 2A Prong 1: Claim 21 further defines the characteristics of the similarity scores, which are used as input to the mental process of ‘determining a second score’, and therefore forms part of the abstract idea of determining a second score which is a mental process including observation, evaluation, judgment and opinion, that can practically be performed in the human mind, or by a human using pen and paper as a physical aid. (MPEP 2106.04)
Step 2A Prong 2/ Step 2B: Claim 21 includes no additional elements that would integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.
Regarding claim 22, claim 22 recites the method of claim 21, wherein determination of the similarity scores between the respective one of the at least one potential target entity and each of the at least one existing target entities includes:
for the respective one of the at least one potential target entity and each of the at least one existing target entities,
generating a geometric vector representing a point in space with a plurality of dimensions, wherein each dimension corresponds to usage of a particular technology, a magnitude of the geometric vector indicates a number of technologies used, and a direction of the vector indicates which technologies are used;
determining a similarity score between the respective one of the at least one potential target entity and each of the at least one existing target entities based on an angle between corresponding geometric vectors.
Step 2A Prong 1: Claim 22 adds the additional steps of “generating a geometric vector … determining a similarity score between the respective one of the at least one potential target entity and each of the at least one existing target entities based on an angle between corresponding geometric vectors” which are mental processes including observation, evaluation, judgment and opinion, that can practically be performed in the human mind, or by a human using pen and paper as a physical aid. (MPEP 2106.04)
Step 2A Prong 2/ Step 2B: Claim 22 includes no additional elements that would integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.
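For illustration only, the technique recited in claim 22 is the standard cosine-similarity computation: each entity's technology usage is a vector, and similarity is derived from the angle between two vectors. A minimal sketch follows; the function name and example vectors are hypothetical and illustrate only that the computation is a short pen-and-paper calculation.

```python
import math

def cosine_similarity(u: list[float], v: list[float]) -> float:
    # Similarity from the angle between two vectors: cos(theta) = u·v / (|u| |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Each dimension corresponds to usage of a particular technology (1 = used, 0 = not).
potential = [1, 1, 0, 1]  # hypothetical potential target entity
existing = [1, 1, 1, 0]   # hypothetical existing target entity
sim = cosine_similarity(potential, existing)  # 2 / (sqrt(3) * sqrt(3)) = 2/3
```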
Claim 10 recites substantially similar subject matter to claim 1 including substantially the same abstract idea. Claim 10 further recites the following additional elements, which, considered individually and as an ordered combination with the additional elements addressed above with respect to claim 1, do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea:
A non-transitory machine-readable medium having instructions stored therein for identifying target accounts, the instructions, when executed by a processor, causing the processor to perform operations, the operations comprising: (This is a high level recitation of generic computer components for performing the abstract idea. This does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f).)
Regarding claims 13-18 and 23-24, the rejection of claim 10 is incorporated herein. Claims 13-18 and 23-24 recite substantially similar subject matter to claims 4-9 and 21-22 and are rejected with the same rationale.
Claim 19 recites substantially similar subject matter to claim 1 including substantially the same abstract idea. Claim 19 further recites the following additional elements, which, considered individually and as an ordered combination with the additional elements addressed above with respect to claim 1, do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea:
A data processing system, comprising: a processor; and a memory coupled to the processor to store instructions for identifying target accounts, the instructions, which when executed by the processor, causing the processor to perform operations, the operations comprising (This is a high level recitation of generic computer components for performing the abstract idea. This does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f).)
Regarding claim 25, the rejection of claim 19 is incorporated herein. Claim 25 recites substantially similar subject matter to claim 22 and is rejected with the same rationale.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-10, 13-19, 21 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over "Anderson" (US 2020/0019915 A1) in view of “Venkata” (US 2020/0097879 A1), further in view of “Kim” (Recommendation of startups as technology cooperation candidates from the perspectives of similarity and potential: A deep learning approach).
Regarding claim 1, Anderson describes a computer-implemented method of ranking entity objects (agents) (¶[0028] an example workflow for distributing leads by ranking agents and assigning lead scores), the method comprising:
receiving, at a cloud server (Fig. 2, leads processing server 130, leads processing engine 136) over a network (network 103), a request from a client device associated with a source entity (Fig. 1, marketing channel 102 or partner 106) (¶[0028] … the work flow may include leads 108 submitted as web requests 106 to a leads processing engine 136) for ranking a plurality of target entities (¶[0017] agents) related to the source entity, wherein the plurality of target entities include two or more existing target entities and a potential target entity and each of the source entity and the target entities is associated with a user group (¶[0034] users of lead distribution system 100);
in response to the request, accessing a task database system (leads distribution system 100, agent ranking component 162 ) via a first application programming interface (API) (¶[0039] leads processing engine 136 may communicate… using an application program interface (API) that provides a set of predefined protocols and other tools to enable the communication ) to identify a plurality of target entity objects corresponding to the plurality of target entities ; (¶[0064] some embodiments, agent ranking component 162 is configured to determine a ranking score or agency performance index (API) for all participating agents)
for each of the target entity objects (agents),
accessing a data source (database 132; ¶[0036] database 132 may store information related to leads, customers, agents and/or other information) via a second API (interface between agent ranking component and database 132) to retrieve a first set of metadata (¶[0078] lists 15 examples of metadata; ¶[0079] “By way of example, three different factors such as (i) closing ratio (ii) total sales (iii) total SQI”) associated with the target entity object (agent), the first set of metadata describing the target entity perceived from other entities (¶[0078] customer service, general customer feedback and from customer surveys) and generated (e.g., applies weights to the values) by the data source (¶[0079] … any or all of the above-identified factors may be weighted),
retrieving a second set of metadata (¶[0043] lead attributes or factors obtained from a variety of datasets) from the task database system (leads distribution system 100, Leads scoring component 160) via the first API, (API of ¶[0039] ) (the second set of metadata describing one or more tasks collaboratively performed between the source entity and the target entity,)
extracting a first set of features (¶[0064] agency performance index) from the first set of metadata
...ranking the plurality of target entities (agents) (based on their respective entity scores); and (¶[0080] after agent rankings have been determined, each agent is slotted in a "tier" level that is based on the agent's ranking in comparison to the other agent)
transmitting ranking information of at least a portion of the ranked target entities (to the client device over the network). (¶[0080] agent ranking component 162 may be configured to calculate tiers periodically and store them in database 132 (e.g., transmitting ranking information).)
Anderson fails to disclose the claimed elements of:
the second set of metadata describing one or more tasks collaboratively performed between the source entity and the target entity,
extracting a second set of features from the second set of metadata, and
applying a first machine-learning (ML) model to the first set of features to determine a first score representing a degree of how valuable the target entity is perceived by the source entity ,
applying a second ML model to the first set of features and the second set of features to determine a second score representing a likelihood the target entity will perform a task collaboratively with the source entity within a predetermined time period,
wherein the second score for the potential target entities is based on similarity scores between technological profiles of the potential target entity and each of the two or more existing target entities ;
generating an entity score for the target entity based on the first score and the second score, wherein the entity score represents a degree of relevancy between the source entity and the target entity;
ranking … based on their respective entity scores and
transmitting ranking information … to the client device over the network.
US Patent Application 2020/0097879 to Venkata, in the same field of identifying leads and opportunities, discloses a computer-implemented method of ranking (Venkata: ¶[0024] scoring via opportunity score) entity objects (Venkata: ¶[0024] opportunities), related to target entities (Venkata: ¶[0024], Fig. 4, Company X, Y, Z, A, B), in response to a request from a client (user of service, e.g., a sales representative). Opportunity scores may be returned to the client (e.g., user, sales representative) and displayed on their user interface (Venkata: ¶[0024] “recommendations for and scoring of sales opportunities is incorporated into the sidebar of a user interface”).
More specifically, Venkata discloses elements (i)-(iii) and (vii)-(viii) as explained below:
retrieving a second set of metadata (Venkata: ¶[0021] tools 152, structure 154) from the task database system (Fig. 1, activity manager 160) … the second set of metadata describing one or more tasks (Venkata: ¶[0021] planned activities) collaboratively performed (Venkata: ¶[0025] “collaborative relationships” of activities; ¶[0021] structure 154 describes which activities are related (collaborative relationships) in the form of a model) between the source entity and the target entity,
¶[0021] Tools 152 can include system resources and tools available for use in identifying and classifying activities of the sales representative with respect to an opportunity. Structure 154 can include activity structure information that can be used to classify activities of the sales representative with respect to an opportunity, including which activities are related to others in the form of a deterministic finite automata model
¶[0025] “Working memory 175 receives information… about… user's past, present, and planned activities, their collaborative relationships, and their role and responsibilities with respect to a given opportunity”
extracting a first set of features (Venkata: ¶[0021] e.g. information… about… user's past, present, and planned activities, their collaborative relationships, and their role and responsibilities with respect to a given opportunity) from the first set of metadata (Venkata: ¶[0021] Relationships 156) (¶[0021]: “Relationships 156 can include important entities and relationships that can be used to classify activities of the sales representative with respect to an opportunity.”)
and
extracting a second set of features (Venkata: ¶[0022] data from the generic productivity tools 197 and domain specific tools 199) from the second set of metadata (Venkata: ¶[0022] tools 152, structure 154) ;
¶[0022] “Native systems 195 may obtain data from the generic productivity tools 197 and domain specific tools 199 for each sales representative, while retaining the structure of activities, and provide the data and information to the workspace/context manager 180
applying a first machine-learning (ML) model (Venkata ¶[0030]: classification model generated using convolutional neural networks chained with recurrent neural networks) to the first set of features (Venkata ¶[0030]: past, present activities) and the second set of features (Venkata ¶[0030]: activities) to determine a first score (Venkata ¶[0030]: size of the business) representing a degree of how valuable (Venkata ¶[0030]: size of business) the target entity (Venkata ¶[0030]: customer opportunity) is perceived by the source entity (Venkata ¶[0030]: sales representative)
¶[0030] The classification model is generated using convolutional neural networks chained with recurrent neural networks … an opportunity can be evaluated based on its activities (second set of features) and known previous activities (first set of features) to determine a score (first score indicating size of business) and guidance for the sales representative.
… The opportunity may be scored in two ways: as a probability of winning or as a size of the business. In some cases, the opportunity may be scored as an expected value of business based on a combination of the expected probability of winning and the size of business expected. The probability of winning is modeled as a classification problem, and the size of the business is modeled as a regression problem.
applying a second ML model to the first set of features and the second set of features to determine a second score (Venkata ¶[0030]: probability of winning) representing a likelihood the target entity will perform a task collaboratively with the source entity within a predetermined time period (Venkata ¶[0025]: Working memory 175 receives information from activity manager 160 about the existing known structure of activity streams, episodic memory 170 about the present ongoing activity pathways which may sometimes diverge, in part, from canonical activity pathways, and workspace/ context manager 180 about the specific context of the user in terms of opportunities they are working on, including the user's past, present, and planned activities, their collaborative relationships, and their role and responsibilities with respect to a given opportunity. Working memory 175 can be a sequential acyclic graph of canonical atomic activities and can store all working memory information for open opportunities for which activities are being performed.)
generating an entity score (Venkata ¶[0030 opportunity score) for the target entity based on the first score and the second score, (Venkata ¶[0030]: The opportunity may be scored in two ways: as a probability of winning or as a size of the business. In some cases, the opportunity may be scored as an expected value of business based on a combination of the expected probability of winning and the size of business expected. The probability of winning is modeled as a classification problem, and the size of the business is modeled as a regression problem.)
wherein the entity score represents a degree of relevancy (Venkata ¶[0024]: statistically most relevant) between the source entity and the target entity (¶[0024] The presentation manager 190 is used to present information to the user through the user interface of the user's system, showing, for example, the statistically most relevant information, and nothing else, based on the opportunity score determined from the bidirectional LSTM model working with the deterministic finite automata models-based predictions.)
(vii) ranking the plurality of target entities (Venkata ¶[0021]: Company X, Y, Z, A, B having associated opportunities) based on their respective entity scores (Venkata ¶[0021]: opportunity scores); and
¶[0021]: “Together, the tools 152, structure 154, and relationships 156 are used within activity knowledge base 150 by activity manager 160 to accurately classify, rank, and calculate the probability of winning the opportunity through activities and actions of sales representatives on open opportunities (e.g., non-completed deals).”
(viii) transmitting ranking information (Venkata ¶[0024]: opportunity score) of at least a portion of the ranked target entities to the client device over the network.
(Venkata ¶[0024] “recommendations for and scoring of sales opportunities is incorporated into the sidebar of a user interface,” “showing, for example, the statistically most relevant information, and nothing else”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the metadata feature extraction of Venkata with the agent ranking of Anderson because it is simply combining prior art elements according to known methods to yield predictable results. One would be motivated to incorporate the metadata feature extraction of Venkata into the agent ranking system of Anderson because, as taught by ¶[0003] of Venkata, using historical information as well as machine learning algorithms, failing opportunities may be improved.
The combination of Anderson and Venkata fails to disclose:
…a second score representing a likelihood the target entity will perform a task collaboratively with the source entity within a predetermined time period, wherein the second score for the potential target entity is based on similarity scores between technological profiles of the potential target entity and each of the two or more existing target entities;
However, Kim, directed to analogous art, teaches:
…a second score representing a likelihood the target entity will perform a task collaboratively with the source entity within a predetermined time period, wherein the second score for the potential target entity is based on similarity scores between technological profiles of the potential target entity and each of the two or more existing target entities; (Kim, Abstract describes determining feature vectors representing technological meanings for companies for the purposes of determining cooperation between the companies. The technological similarity score is described in section 3.1. Section 3.2.1, first two paragraphs indicate that the framework represents a likelihood of successful collaboration. In the combination with Anderson, Anderson at ¶[0045-0046] indicates that the lead opportunities (analogous to cooperation opportunities) are considered for a particular time period.)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Anderson and Venkata in view of Kim because “We believe that our framework can reduce the efforts and resources needed to find strategically suitable startups, because the candidate lists are presented based on bibliographic information. Finding appropriate partners based on their technical positions in the market can help navigate the early stages of cooperation strategies for technological innovation. Furthermore, candidate lists generated according to the strategies and interests of the acquirer can support decision making regarding cooperation. In addition, reducing information asymmetry between startups seeking growth and companies looking to evolve through acquisition will not only benefit the direct participants but also bolster the entire industry.” See Kim, Conclusion.
Regarding claim 4, the rejection of claim 1 is incorporated herein. Furthermore, Venkata further discloses the method further comprising:
selecting a predetermined number (Venkata: ¶[0024] statistically most relevant) of top-ranked target entities (Venkata: ¶[0024] opportunities) based on their respective entity scores (Venkata: ¶[0024] opportunity score); and
transmitting the ranking information of the top-ranked entities (Venkata: ¶[0024] “the statistically most relevant information, and nothing else”) to the client device to be displayed in a graphical user interface (GUI) (Venkata: ¶[0024] “user interface”) of the client device (Venkata: ¶[0024] “user’s system”).
Venkata: ¶[0024] The presentation manager 190 is used to present information to the user through the user interface of the user's system, showing, for example, the statistically most relevant information, and nothing else
Regarding claim 5, the rejection of claim 1 is incorporated herein. Furthermore, Venkata teaches wherein the data source includes at least one of a public firmographic database, a popularity ranking database, or a user satisfaction ranking database. (Venkata: e.g. key performance indicator data)
Venkata: ¶[0032]: Further, quantitative data for each existing customer account may include metric indicators (e.g., key performance indicators) that include information regarding New End User Accounts and Existing Contracts, Provision of Services to End Users, Number of End Users, Not In Good Order (NIGO) Cases and Errors, Service Quality, Response Time, Number of Outstanding Cases, Cases Closed Without Resolution, Service Recovery, Surveys-Survey scores, and so forth.
Regarding claim 6, the rejection of claim 1 is incorporated herein. Furthermore, Kim teaches
wherein the first set of metadata of a target entity includes at least one of a number of users within a corresponding user group of the target entity (Kim, section 3.2.1, the data includes at least a number of employees, who are users of company resources.), resources used by the user group, or interactions with other entities.
Regarding claim 7, the rejection of claim 1 is incorporated herein. Furthermore, Venkata teaches wherein the second set of metadata of a target entity includes at least one of one or more prior tasks completed between the source entity and the target entity, types of the tasks completed, or subsequent activities of the prior completed tasks performed between the source entity and the target entity (Venkata: e.g., service quality and history). (Venkata ¶[0016]: “Determining whether the opportunity is likely to close or whether it may be at risk may be based on … health of the existing relationship with the customer such as service quality and history…”)
Regarding claim 8, the rejection of claim 1 is incorporated herein. Furthermore, Venkata teaches wherein the second ML model uses one or more ML algorithms, including a market basket analysis, a term frequency-inverse document frequency (TFIDF) representation, cosine similarity, decision tree, random forest, or a gradient boosting. (Venkata ¶[0030]: The classification model is generated using convolutional neural networks chained with recurrent neural networks, and the regression model is generated with boosting based decision tree ensembles)
Regarding claim 9, the rejection of claim 1 is incorporated herein. Furthermore, Venkata teaches wherein the entity score is calculated based on a product of the first score (value) and the second score (probability of winning). (Venkata: ¶[0030] In some cases, the opportunity may be scored as an expected value of business based on a combination of the expected probability of winning and the size of business expected.)
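For illustration only, the expected-value scoring that Venkata ¶[0030] describes (entity score as the product of a win-probability score and a business-size score, with targets then ranked by that score) can be sketched as follows; the company names, probabilities, and deal sizes below are hypothetical and are not from the record:

```python
# Illustrative sketch (hypothetical values): an entity score computed as
# the product of a win-probability score and a business-size score, i.e.
# an expected value of business, followed by ranking targets by that score.

def entity_score(win_probability: float, business_size: float) -> float:
    """Expected value of business: probability of winning times size."""
    return win_probability * business_size

def rank_targets(targets: dict[str, tuple[float, float]]) -> list[str]:
    """Rank target entities by descending entity score."""
    scored = {name: entity_score(p, s) for name, (p, s) in targets.items()}
    return sorted(scored, key=scored.get, reverse=True)

targets = {
    "Company X": (0.8, 100_000.0),  # expected value 80,000
    "Company Y": (0.5, 250_000.0),  # expected value 125,000
    "Company Z": (0.9, 50_000.0),   # expected value 45,000
}
ranking = rank_targets(targets)
# ranking == ["Company Y", "Company X", "Company Z"]
```

Note that the expected-value formulation can reorder targets relative to either factor alone: Company Z has the highest win probability but the lowest expected value.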
Regarding claim 10, Anderson describes a non-transitory machine-readable medium having instructions stored therein for identifying target accounts, the instructions, when executed by a processor, causing the processor to perform operations (Anderson, ¶[0160]: computer-readable medium storing instructions executed by a processor), the operations comprising:
The remainder of claim 10 is substantially similar to claim 1. Claim 10 is rejected with the same rationale.
Regarding claims 13-18, the rejection of claim 10 is incorporated herein. Claims 13-18 recite substantially similar subject matter to claims 4-9, respectively, and are rejected with the same rationale.
Regarding claim 19, Anderson discloses a data processing system, comprising:
a processor (Anderson, Fig. 4, Processor(s) 404); and
a memory coupled to the processor to store instructions for identifying target accounts, (Fig. 4, storage 410) the instructions, which when executed by the processor, causing the processor to perform operations, (Anderson, ¶[0160]: computer-readable medium storing instructions executed by a processor), the operations comprising:
The remainder of claim 19 is substantially similar to claim 1. Claim 19 is rejected with the same rationale.
Regarding claim 21, the rejection of claim 1 is incorporated herein. Furthermore, Kim teaches wherein the similarity scores comprise cosine similarity scores and represent a similarity in technology usage between the potential target entity and each of the two or more existing target entities. (Kim, Abstract describes determining feature vectors representing technological meanings for companies for the purposes of determining cooperation between the companies. The technological similarity score is described in section 3.1. Section 3, first paragraph indicates that the similarities may be cosine similarities.)
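For illustration only, cosine similarity between technological profile vectors, as recited in claim 21, can be sketched as below; the profile vectors are hypothetical placeholders and do not reflect Kim's actual feature extraction:

```python
import math

# Illustrative sketch (hypothetical vectors): cosine similarity between a
# potential target's technological profile and each existing target's
# profile, where profiles are weight vectors over a shared vocabulary.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

potential_target = [1.0, 0.0, 2.0]
existing_targets = [[1.0, 0.0, 2.0], [0.0, 1.0, 0.0]]
similarity_scores = [cosine_similarity(potential_target, e)
                     for e in existing_targets]
# An identical profile scores 1.0; an orthogonal one scores 0.0.
```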
Regarding claim 23, the rejection of claim 10 is incorporated herein. Claim 23 recites substantially similar subject matter to claim 21 and is rejected with the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Park (Exploring Potential R&D Collaboration Partners through Patent Analysis based on Bibliographic Coupling and Latent Semantic Analysis) – Abstract describes measuring technological similarity to identify potential R&D collaborators.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Markus A Vasquez whose telephone number is (303)297-4432. The examiner can normally be reached Monday to Friday 10AM to 2PM PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARKUS A. VASQUEZ/ Primary Examiner, Art Unit 2121