Prosecution Insights
Last updated: April 19, 2026
Application No. 17/681,418

MULTI-LEVEL GRAPH EMBEDDING

Final Rejection: §101, §103
Filed: Feb 25, 2022
Examiner: WHITAKER, ANDREW B
Art Unit: 3629
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Microsoft Technology Licensing, LLC
OA Round: 6 (Final)

Grant Probability: 19% (At Risk)
Projected OA Rounds: 7-8
Projected Time to Grant: 4y 9m
Grant Probability With Interview: 38%
Examiner Intelligence

Career Allow Rate: 19% (103 granted / 553 resolved; -33.4% vs TC avg)
Interview Lift: +19.2% on resolved cases with interview
Typical Timeline: 4y 9m avg prosecution; 57 applications currently pending
Career History: 610 total applications across all art units

Statute-Specific Performance

§101: 34.1% (-5.9% vs TC avg)
§103: 38.5% (-1.5% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)

Based on career data from 553 resolved cases; Tech Center averages are estimates.
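The dashboard figures above are internally consistent; a quick sanity check (the tool's exact rounding rules are an assumption):

```python
# Sanity-check of the examiner statistics shown above (sketch; the
# dashboard's exact rounding conventions are assumed).
granted, resolved = 103, 553

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # ~18.6%, displayed as 19%

# The -33.4% delta implies the Tech Center 3600 average estimate:
tc_avg = allow_rate + 0.334
print(f"Implied TC average: {tc_avg:.1%}")

# Interview lift of +19.2 points on resolved cases with an interview:
with_interview = allow_rate + 0.192
print(f"Allow rate with interview: {with_interview:.1%}")  # ~37.8%, displayed as 38%
```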

Office Action

§101 §103
DETAILED ACTION

Status of the Claims

The following is a Final Office Action in response to amendments and remarks filed 10 September 2025. Claims 1, 14, and 18 have been amended. Claims 1-8, 11-12, and 14-20 are pending and have been examined. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant again argues that the 35 U.S.C. 101 rejection under Alice Corp. v. CLS Bank Int’l should be withdrawn; however, the Examiner respectfully disagrees. Again, the arguments are not compliant under 37 CFR 1.111(b), as they amount to a mere allegation of patent eligibility based upon a bare assertion of improvement. The Examiner respectfully does not find the assertion persuasive because a bare assertion of an improvement, without the detail necessary for the improvement to be apparent, is not sufficient to show an improvement (MPEP 2106.04(d)(1), discussing MPEP 2106.05(a)). That is, the Examiner finds no evidence that the claimed aspects are any improvement over conventional systems. This argument, again, reduces to whether the use of a computer or computing components for increased speed and efficiency is an improvement; however, the Examiner respectfully disagrees. Nor, in addressing the second step of Alice, does claiming the improved speed or efficiency inherent in applying the abstract idea on a computer provide a sufficient inventive concept. See Bancorp Servs., LLC v. Sun Life Assurance Co. of Can., 687 F.3d 1266, 1278 (Fed. Cir. 2012) (“[T]he fact that the required calculations could be performed more efficiently via a computer does not materially alter the patent eligibility of the claimed subject matter.”); CLS Bank, Int’l v. Alice Corp., 717 F.3d 1269, 1286 (Fed. Cir. 2013) (en banc), aff’d, 134 S. Ct.
2347 (2014) (“[S]imply appending generic computer functionality to lend speed or efficiency to the performance of an otherwise abstract concept does not meaningfully limit claim scope for purposes of patent eligibility.” (citations omitted)). As such, the argument is not persuasive, and the rejection is not withdrawn.

Applicant’s arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant’s arguments also do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections.

Applicant argues that Ramanath does not teach the proposed amendments; however, the Examiner respectfully disagrees for several reasons. Again, and as noted in the interview, the claim amendments simply describe how a query would operate, only providing selected results corresponding to a particular level of granularity: “To help the user performing the search, the search system 216 may provide advanced targeting criteria called facets (e.g., skills, schools, companies, titles, etc.). The query can be entered by the user performing the search as free text, a facet selection (e.g., selectable user interface elements corresponding to the facets) or the combination of the two. As a result, semantic interpretation and segmentation in such queries is important. For example, in the query “java” or “finance,” the user performing the search could be searching for a candidate whose title contains the word or someone who knows a skill represented by the word. Relying on exact term or attribute match in faceted search for ranking is sub-optimal.
The search system 216 provides a solution to the matching and ranking problem rather than just focusing on the query formulation (Ramanath ¶69),” where Ramanath is able to search for a particular candidate that would also fall inside that title with a skill. For example (as discussed in the interview), a query for a candidate with “java” or “finance” would not present results for candidates whose titles merely relate to such a skill or industry, such as a CEO of a tech company or finance firm, because the title pertains to the skill or industry, which is due to the granularity searched. Again, as previously cited, Ramanath teaches that “the search system 216 is configured to perform one or more of the following functions: constructing a deep semantic structured model architecture for a candidate search application setting, learning the supervised embeddings by using training data obtained from candidates recommended to the recruiters (e.g., with the inMail, or other message, accept events as the positive labels), performing training and optimization to obtain these embeddings with the desired level of accuracy via a DSSM architecture, and tuning the DSSM architecture and determining the network structure (e.g., number of layers, the dimension for each layer, etc.) for candidate search applications (Ramanath ¶107)” and that “FIG. 4 illustrates a graph data structure 400, in accordance with an example embodiment. The graph data structure 400 comprises an illustrative sub-network of the graph used to construct entity embeddings, such as embeddings for companies. In some example embodiments, each vertex or node 410 in the graph data structure 400 represents an entity, such as a company, and the edge weight (denoted by the edge thickness) represents the number of members of the social networking service that share the entity, such as the number of members that have worked at both companies.
Similar graph data structures can be constructed for other entity types, such as skills and schools, as well. In the example where the entities in the graph data structure 400 are companies, the search system 216 may embed each company (e.g., each node 410 in the graph data structure 400) into a fixed dimensional latent space. In some example embodiments, the search system 216 is configured to learn first order and second order embeddings from the graph data structure 400 (Ramanath ¶84),” which, when broadly interpreted as one of ordinary skill in the art would do, reads upon the newly amended ability to generate different levels of granularity for embeddings. To put it another way, Ramanath is able to generate as many embeddings, at whatever level of accuracy and with whatever number of layers or dimensions, as requested by the user.

Next, the Edge reference also teaches this same concept; as previously cited, embeddings are able to be generated for a user, group, colony, swarm, or whole hive (Edge ¶83-¶84, ¶89, and ¶92-¶93). The Edge reference is relied upon to more explicitly teach the concept of having and specifying different levels of granularity (i.e., embeddings for entities wherein the entity can be an entire organization or the entity can be a singular person) and applying it to different “communities.”

The Examiner also notes that duplication is obvious; see MPEP 2144.04.VI.B. The duplication of parts (or steps) has no patentable significance unless a new and unexpected result is produced. The Examiner finds no evidence that performing the processes in the claims for a second or different level of granularity would produce new and unexpected results as compared to performing the processes in the claims for only a first level of granularity. As such, this argument is not persuasive, and the rejection is not withdrawn.
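The dispute above centers on generating embeddings for the same node at user, group, and enterprise levels of granularity. A minimal sketch of that concept, with hypothetical data and helper names (this code appears in neither the application nor the cited references; aggregation by averaging is just one simple choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# User-level embeddings: one vector per user node (random stand-ins
# for learned embeddings).
user_emb = {f"user{i}": rng.normal(size=8) for i in range(6)}

# Hypothetical group membership: groups partition the users.
groups = {"eng": ["user0", "user1", "user2"],
          "sales": ["user3", "user4", "user5"]}

# Group-level embedding: mean of member embeddings.
group_emb = {g: np.mean([user_emb[u] for u in members], axis=0)
             for g, members in groups.items()}

# Enterprise-level embedding: mean over all groups.
enterprise_emb = np.mean(list(group_emb.values()), axis=0)

def embedding_for(node, level):
    """One node resolves to a different vector depending on granularity."""
    if level == "user":
        return user_emb[node]
    if level == "group":
        g = next(g for g, members in groups.items() if node in members)
        return group_emb[g]
    return enterprise_emb  # "enterprise" level

print(embedding_for("user1", "user").shape)  # (8,)
```

A single node ("user1") thus carries three corresponding embeddings, one per level, matching the structure the amended claims recite.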
In response to arguments in reference to any dependent claims that have not been individually addressed, all rejections made towards these dependent claims are maintained due to a lack of reply by the Applicant distinctly and specifically pointing out the supposed errors in the Examiner’s prior Office Action (37 CFR 1.111). The Examiner notes that the Applicant only argues that the dependent claims should be allowable because the independent claims are unobvious and patentable over the prior art.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8, 11-12, and 14-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims are directed to a process (an act, or series of acts or steps), a machine (a concrete thing, consisting of parts, or of certain devices and combination of devices), and a manufacture (an article produced from raw or prepared materials by giving these materials new forms, qualities, properties, or combinations, whether by hand labor or by machinery). Thus, each of the claims falls within one of the four statutory categories (Step 1). However, the claims recite providing graph data in response to identified embeddings, based upon generated search embeddings, which is an abstract idea of organizing human activities.
The limitations in claim 1 “generating a plurality of sets of embeddings having different levels of granularity of a data graph, the data graph having i) nodes representing entities associated with an enterprise organization, and ii) edges between nodes representing relationships among the entities, the plurality of sets of embeddings including a first set of embedding generated for the data graph corresponding to a user level of granularity, a second set of embeddings generated for the data graph corresponding to a group level of granularity, and a third set of embeddings generate for the data graph corresponding to an enterprise level of granularity and wherein at least one node of the data graph has: a first corresponding embedding at the user level of granularity in the first set of embeddings; a second corresponding embedding at the group level of granularity; and a third corresponding embedding at the enterprise level of granularity; receiving a request for graph data based on the data graph; generating a search embedding corresponding to the request; selecting, based on a level of granularity from the different levels of granularity, a set of embeddings from the plurality of sets of embeddings wherein the selected set of embeddings comprises one of the first corresponding embedding,
the second corresponding embedding, or the third corresponding embedding for the at least one node of the data graph depending on the associated level of granularity; processing the selected set of embeddings to generate a subset of adjacent embeddings that are each adjacent to the search embedding thereby identifying relevant data according to the level of granularity; and providing the graph data corresponding to the generated subset of adjacent embeddings in response to the request;” and the limitations of “generate a first sub-graph of a data graph, the data graph having i) nodes representing entities associated with an enterprise organization, and ii) edges between nodes representing relationships among the entities; generate a first set of embeddings having different levels of granularity for a first group of users within the enterprise using the first sub-graph, wherein embeddings of the first set of embedding correspond to respective nodes of the first sub-graph; generate a second sub-graph of the data graph having at least some different nodes from the first sub-graph; generate a second set of embeddings for a first group of users within the enterprise organization using the second sub-graph, wherein embeddings of the second set of embeddings correspond to respective nodes of the second sub-graph and wherein at least one node of the data graph has: a first corresponding embedding at a first level of the different levels of granularity; and a second corresponding embedding at a second level of the different levels of granularity; and respond to requests for graph data based on a data graph using one of the first set of embeddings and the second set of embeddings to identify adjacent nodes of the data graph as the graph data; receive a request for graph data; select, based on a level of granularity from the different levels of granularity, a set of embeddings from a plurality of sets of embeddings, wherein the selected set of embeddings comprises one of the 
first corresponding embedding, the second corresponding embedding, or the third corresponding embedding for the at least one node of the data graph depending on the associated level of granularity; process the selected set of embeddings according to a semantic similarity to identify a subset of adjacent embeddings that are each adjacent to the search embedding; and providing the graph data corresponding to the generated subset of adjacent embeddings in response to the request; wherein to generate the plurality of sets of embeddings comprises to generate multiple embeddings for a node by temporarily pruning at least some nodes or edges from the data graph” in claims 14 and 18, as drafted, is a process that, under its broadest reasonable interpretation, covers organizing human activities--fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) but for the recitation of generic computer components (Step 2A Prong 1). That is, other than reciting “a computer-implemented method,” (or “A system for providing graph data, the system comprising: a node processor configured to receive requests for graph data; wherein the node processor is configured to:” in claim 14) nothing in the claim element precludes the step from the methods of organizing human interactions grouping.
For example, but for the “a computer-implemented method,” (or “A system for providing graph data, the system comprising: a node processor configured to receive requests for graph data; wherein the node processor is configured to:” in claim 14) language, “generating,” “receiving,” “generating,” “process,” “providing,” “generate,” and “respond” in the context of these claims encompass a user manually creating graphs of nodes and edges representing entities and searching those graphs, which is a business relation, such as maintaining an organizational chart for a business or organization, by managing personal behavior. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as one of the certain methods of organizing human activities (even where some of the limitations may be based on mathematical concepts), but for the recitation of generic computer components, then it falls within the “Certain Methods of Organizing Human Activities” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

This judicial exception is not integrated into a practical application (Step 2A Prong Two). In particular, the claims recite only one additional element: using a computer-implemented method or a node processor to perform the steps. The computer-implemented method or node processor is recited at a high level of generality (i.e., as a generic processor performing a generic computer function of graphing data) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Specifically, the claims amount to nothing more than an instruction to apply the abstract idea using a generic computer, invoking computers as tools by adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.04(d)(I), discussing MPEP 2106.05(f).
The recitation of “using a neural network model” in the limitations also merely indicates a field of use or technological environment in which the judicial exception is performed. Although the additional element “using a neural network model” limits the identified judicial exceptions, this type of limitation merely confines the use of the abstract idea to a particular technological environment (neural networks) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h). Accordingly, the combination of these additional elements does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea, even when considered as a whole (Step 2A Prong Two: NO).

The claims do not include a combination of additional elements that is sufficient to amount to significantly more than the judicial exception (Step 2B). As discussed above with respect to integration of the abstract idea into a practical application (Step 2A Prong 2), the combination of additional elements of using a computer-implemented method or a node processor to perform the steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Therefore, when considering the additional elements alone and in combination, there is no inventive concept in the claims. As such, the claims are not patent eligible, even when considered as a whole.
Claims 2-8, 11-12, 15-16, and 19-20 are dependent on claims 1, 14, and 18 and include all the limitations of those claims. Therefore, claims 2-8, 11-12, 15-16, and 19-20 recite the same abstract idea of “providing graph data in response to identified embeddings, based upon generated search embeddings.” These claims recite additional limitations further limiting the data (nodes, entities, graphs), which are still directed towards the abstract idea previously identified and are not an inventive concept that meaningfully limits the abstract idea. Again, as discussed with respect to claims 1, 14, and 18, these limitations are no more than mere instructions to apply the exception using a computer or computing components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Even when considered as a whole, the claims do not integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

Claim 17 is dependent on claim 14 and includes all the limitations of claim 14. Therefore, claim 17 recites the same abstract idea of “providing graph data in response to identified embeddings, based upon generated search embeddings.” Claim 17 recites an additional limitation which includes an application programming interface, which is not an inventive concept that meaningfully limits the abstract idea and simply generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.04(d)(I), discussing MPEP 2106.05(h). Again, as discussed with respect to claims 1, 14, and 18, these limitations are no more than mere instructions to apply the exception using a computer or computing components.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Even when considered as a whole, the claims do not integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Claims 1-8, 11-12, and 14-20 are therefore not eligible subject matter, even when considered as a whole.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8, 11-12, and 14-20 are rejected under 35 U.S.C. 103 as being obvious over Ramanath et al. (US PG Pub. 2020/0005153) in view of Edge et al. (US PG Pub. 2021/0019325).

As per claim 1, Ramanath discloses a computer-implemented method of providing graph data, the method comprising (method, Ramanath ¶8 and Fig. 5):

generating a plurality of sets of embeddings having different levels of granularity of a data graph, the data graph having i) nodes representing entities associated with an enterprise organization (generate initial embeddings, Ramanath ¶125-¶126; generating, by the computer system, a graph data structure based on the accessed profile data, the generated graph data structure comprising a plurality of nodes and a plurality of edges, each one of the plurality of nodes corresponding to a different entity indicated by the accessed profile data, and each one of the plurality of edges directly connecting a different pair of the plurality of nodes and indicating a number of the plurality of users whose profile data indicates both entities of the pair of nodes that are directly connected by the edge; generating, by the computer system, a corresponding embedding vector for each one of the entities indicated by the accessed profile data using an unsupervised machine learning algorithm; and performing, by the computer system, a function of the online service using the generated embedding vectors of the entities. In some example embodiments, the online service comprises a social networking service, ¶20; wherein profile data is for users and various organizations, ¶63; As a result, the search system 216 reduces the size of the problem by a few orders of magnitude by constructing a smaller and denser graph, ¶83; FIG. 4 illustrates a graph data structure 400, in accordance with an example embodiment. The graph data structure 400 comprises an illustrative sub-network of the graph used to construct entity embeddings, such as embeddings for companies.
In some example embodiments, each vertex or node 410 in the graph data structure 400 represents an entity, such as a company, and the edge weight (denoted by the edge thickness) represents the number of members of the social networking service that share the entity, such as the number of members that have worked at both companies. Similar graph data structures can be constructed for other entity types, such as skills and schools, as well. In the example where the entities in the graph data structure 400 are companies, the search system 216 may embed each company (e.g., each node 410 in the graph data structure 400) into a fixed dimensional latent space. In some example embodiments, the search system 216 is configured to learn first order and second order embeddings from the graph data structure 400, ¶84; wherein learning second order embeddings from the graph, and performing training and optimization to obtain these embeddings with a desired level of accuracy, ¶86-¶87; see also ¶94 and ¶107 discussing different levels of accuracy, layers, and dimensions) (Examiner notes the different orders with different levels of accuracy as the ability to generate graphs at different levels of granularity);

receiving a request for graph data based on the data graph (receives requests, Ramanath ¶61; the search system 216 uses a variety of graph embedding algorithms. In some example embodiments, the search system 216 employs a Large-Scale Information Network Embeddings (LINE) approach. One LINE approach comprises constructing the graph of a social networking service by defining the members of the social networking service as vertices, and use some form of interaction (e.g., clicks, connections, or social actions) between members to compute the weight of the edge between any two members. However, for candidate search, this would create a large sparse graph resulting in intractable training and a noisy model.
Instead, in some example embodiments, the search system 216 defines a weighted graph, G=(V, E, w), over the entities whose representations need to be learned (e.g., skill, title, company), and use the number of members sharing the same entity on their profile to induce an edge weight (w) between the vertices. As a result, the search system 216 reduces the size of the problem by a few orders of magnitude by constructing a smaller and denser graph, ¶83; the generated graph data structure comprises a plurality of nodes and a plurality of edges, with each one of the plurality of nodes corresponding to a different entity indicated by the accessed profile data, and each one of the plurality of edges directly connecting a different pair of the plurality of nodes and indicating a number of the plurality of users whose profile data indicates both entities of the pair of nodes that are directly connected by the edge, ¶90);

generating a search embedding corresponding to the request using a neural network model (At operation 540, the search system 216 performs a function of the social networking service using the generated embedding vectors of the entities.
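The weighted entity graph described in the cited passage (an edge weight induced by the number of members sharing both entities on their profiles) can be sketched as follows; the profile data is a hypothetical illustration, not from either reference:

```python
from collections import Counter
from itertools import combinations

# Hypothetical member profiles, each a set of entities (skills, companies).
profiles = [
    {"java", "python", "Acme"},
    {"java", "Acme"},
    {"finance", "python"},
    {"java", "python"},
]

# Edge weight between two entities = number of members listing both,
# mirroring the G = (V, E, w) construction Ramanath Para 83 describes.
edge_weight = Counter()
for entities in profiles:
    for a, b in combinations(sorted(entities), 2):
        edge_weight[(a, b)] += 1

print(edge_weight[("java", "python")])  # 2 members list both entities
```

Counting co-occurrences over entities rather than over individual members is what yields the "smaller and denser graph" the passage refers to.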
In some example embodiments, the function comprises receiving, from a client computing device, a search query indicating an entity of the first facet type, generating one or more search results for the search query using the generated embedding vectors of the entities, with the one or more search results comprising at least one of the plurality of users, and causing the one or more search results to be displayed on the client computing device, ¶92; In some example embodiments, the initial embedding vectors for the plurality of entities are generated using a neural network, ¶139);

selecting, based on a level of granularity from the different levels of granularity, a set of embeddings from the plurality of sets of embeddings wherein the selected set of embeddings comprises one of the first corresponding embedding, the second corresponding embedding, or the third corresponding embedding for the at least one node of the data graph depending on the associated level of granularity (At operation 540, the search system 216 performs a function of the social networking service using the generated embedding vectors of the entities. In some example embodiments, the function comprises receiving, from a client computing device, a search query indicating an entity of the first facet type, generating one or more search results for the search query using the generated embedding vectors of the entities, with the one or more search results comprising at least one of the plurality of users, and causing the one or more search results to be displayed on the client computing device, Ramanath ¶92; To help the user performing the search, the search system 216 may provide advanced targeting criteria called facets (e.g., skills, schools, companies, titles, etc.). The query can be entered by the user performing the search as free text, a facet selection (e.g., selectable user interface elements corresponding to the facets) or the combination of the two.
As a result, semantic interpretation and segmentation in such queries is important. For example, in the query “java” or “finance,” the user performing the search could be searching for a candidate whose title contains the word or someone who knows a skill represented by the word. Relying on exact term or attribute match in faceted search for ranking is sub-optimal. The search system 216 provides a solution to the matching and ranking problem rather than just focusing on the query formulation, ¶69;

FIG. 6 illustrates a visualization of a deep neural network architecture 600, in accordance with an example embodiment. In FIG. 6, training data is fed into the deep neural network architecture 600. The tracking data can be broken down into the query Q and a bunch of documents D. In the candidate search case, the query can be a faceted query (e.g., query text plus entities, such as title, company, skills, etc.). Each facet has a corresponding vector representation, and a query can be represented by a concatenation of all of the facets that it identifies. So, the x.sub.Q layer is a concatenation of all of the facets of a query, and the document is a member that the recruiter is trying to retrieve. A member's profile has a lot of facets that can each be represented as a vector and can be represented as a concatenation of all of these facets, which is the layer x.sub.D. Once the search system 216 has these two vector representations, it can use the similarity between these two to say that if the query and the document are similar, then they should be ranked higher. If not, then they can be moved down the list. However, the problem is that initially these vectors are randomly initialized. The purpose of the supervised representation is to have labels at the end of training.
In some example embodiments, the search system 216 performs backpropagation so that the members that are similar to the query have a vector representation that is similar in vector space, and a member that is not similar to the query is dissimilar in vector space, ¶97; wherein learning second order embeddings from the graph, and performing training and optimization to obtain these embeddings with a desired level of accuracy, ¶86-¶87; see also ¶94, ¶97, and ¶107 discussing different levels of accuracy);

processing the selected set of embeddings to generate a subset of adjacent embeddings that are each adjacent to the search embedding thereby identifying relevant data according to the level of granularity; and (Second order embeddings are generated based on the observation that vertices with shared neighbors are similar. In this case, each vertex plays two roles: the vertex itself, and a specific context of other vertices. Let u.sub.i and u.sub.i′ be two vectors, where u.sub.i is the representation of v.sub.i when it is treated as a vertex, while u.sub.i′ is the representation of v.sub.i when it is used as a specific context, Ramanath ¶86; At operation 530, the search system 216 generates a corresponding initial embedding vector for each one of the entities indicated by the accessed profile data using an unsupervised machine learning algorithm. In some example embodiments, the unsupervised machine learning algorithm is configured to optimize the corresponding embedding vector of each one of the entities to result in a level of similarity between the corresponding embedding vectors of two entities increasing as the number of the plurality of users whose profile data indicates the two entities increases.
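The ranking step the cited ¶97 passage describes (facet vectors concatenated into the x.sub.Q and x.sub.D layers, then ordered by similarity) might be sketched as follows; random vectors stand in for learned embeddings, and the facet names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
facet_dim = 4

def concat_facets(facets):
    """Build the x_Q / x_D layer: concatenate per-facet embeddings."""
    return np.concatenate([facets[name] for name in sorted(facets)])

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Query: concatenation of its facet vectors (e.g., title + skill).
query = concat_facets({"title": rng.normal(size=facet_dim),
                       "skill": rng.normal(size=facet_dim)})

# Documents: candidate members, each also a concatenation of facets.
docs = {f"member{i}": concat_facets({"title": rng.normal(size=facet_dim),
                                     "skill": rng.normal(size=facet_dim)})
        for i in range(3)}

# Rank members by similarity to the query; more similar ranks higher.
ranking = sorted(docs, key=lambda m: cosine(query, docs[m]), reverse=True)
print(ranking)
```

In the reference, the vectors would be learned via backpropagation rather than drawn at random; the sketch shows only the similarity-driven ranking step.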
In some example embodiments, the unsupervised machine learning algorithm is further configured to optimize the corresponding embedding vector of each one of the entities to result in a level of similarity between the corresponding embedding vectors of two entities increasing as the number of neighbor nodes shared by the two entities increases. In some example embodiments, the initial embedding vectors for the plurality of entities are generated using a neural network, ¶139); and providing the graph data corresponding to the generated subset of adjacent embeddings in response to the request (the generated graph data structure comprises a plurality of nodes and a plurality of edges, with each one of the plurality of nodes corresponding to a different entity indicated by the accessed profile data, and each one of the plurality of edges directly connecting a different pair of the plurality of nodes and indicating a number of the plurality of users whose profile data indicates both entities of the pair of nodes that are directly connected by the edge, Ramanath ¶90; generating one or more search results for the search query using the generated embedding vectors of the entities, with the one or more search results comprising at least one of the plurality of users, and causing the one or more search results to be displayed on the client computing device, ¶92); wherein generating the plurality of sets of embeddings comprises generating multiple embeddings for a node by temporarily pruning at least some nodes or edges from the data graph (pruning, Ramanath ¶146). 
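The second-order intuition quoted above (vertices with shared neighbors are similar, and similarity grows as two entities share more neighbors) can be illustrated with a minimal sketch. The graph and entity names are hypothetical, and Jaccard overlap stands in for the learned objective; the systems described actually learn vertex and context vectors rather than computing overlap directly.

```python
# Toy entity graph: entity -> set of neighboring entities (hypothetical data).
graph = {
    "java":    {"spring", "jvm", "kotlin"},
    "kotlin":  {"spring", "jvm", "java"},
    "finance": {"excel", "accounting"},
}

def shared_neighbor_similarity(g, a, b):
    """Jaccard overlap of the two neighborhoods: grows as the entities
    share more neighbors, mirroring the second-order intuition."""
    na, nb = g[a], g[b]
    return len(na & nb) / len(na | nb)

sim_java_kotlin = shared_neighbor_similarity(graph, "java", "kotlin")    # 0.5
sim_java_finance = shared_neighbor_similarity(graph, "java", "finance")  # 0.0
```

Here "java" and "kotlin" share two neighbors and so score higher than "java" and "finance", which share none.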
Ramanath does not expressly disclose ii) edges between nodes representing relationships among the entities, the plurality of sets of embeddings including a first set of embeddings generated for the data graph corresponding to a user level of granularity, a second set of embeddings generated for the data graph corresponding to a group level of granularity, and a third set of embeddings generated for the data graph corresponding to an enterprise level of granularity; wherein at least one node of the data graph has: a first corresponding embedding at the user level of granularity in the first set of embeddings; a second corresponding embedding at the group level of granularity; and a third corresponding embedding at the enterprise level of granularity. However, Edge teaches ii) edges between nodes representing relationships among the entities, the plurality of sets of embeddings including a first set of embeddings generated for the data graph corresponding to a user level of granularity, a second set of embeddings generated for the data graph corresponding to a group level of granularity, and a third set of embeddings generated for the data graph corresponding to an enterprise level of granularity; wherein at least one node of the data graph has: a first corresponding embedding at the user level of granularity in the first set of embeddings; a second corresponding embedding at the group level of granularity; and a third corresponding embedding at the enterprise level of granularity (Additional views, such as view 1520 of graph 1510, show a community structure at a higher level of granularity. Each of the nodes in view 1520 may represent a community of users that is inferred by analyzing the data corresponding to graph 1510 by using graph embedding techniques, Edge ¶89; The already-collected but not yet connected data may include email telemetry data, which may be processed to gather insights into the workplace. As an example, FIG.
19 shows a conventional organization design 1910 and the organization's functional reality 1920 based on a graph induced from email telemetry data, which may be a subset of the already-collected but not yet connected data. Functional reality 1920 may include nodes representing people (e.g., employees) that are connected to each other via links that represent the email interactions among the people. The dark nodes in functional reality 1920 may represent people (e.g., employees) who are connected outside of their management hierarchy. Light gray nodes may represent people that are operating within the silos of their reporting structure. Communities of the various types of nodes may be inferred using machine learning and the techniques described earlier. Siloed working conditions (e.g., represented by the communities or networks that mirror the hierarchy) may stifle innovation by fragmenting company knowledge and creating echo chambers. This analysis thus may provide insights to managers, including underscoring those communities that are bureaucratic and limit the autonomy and the effectiveness of the informal networks. Additional insights into the workplace may be obtained by additional application of graph-theoretic techniques to the already-collected but not yet connected data. As an example, FIG. 20 shows a framework 2000 that illustrates certain attributes of a workplace. Framework 2000 includes a horizontal axis representative of the fluidity of the collaborative links and a vertical axis representative of the proportion of the links that are external to a group. The fluidity of the collaborative links is based on the Omni score, which is based on the embedding of multiple graphs into the same dimensions. The degree of the changes in the collaborative links may represent focus shifting. The proportion of the links that are external to the group may indicate boundary crossing by employees.
The proportion of the links that are external to the group may be referred to as “freedom to collaborate across the organization” and is based on an analysis of the alignment between the community members and the organizational hierarchy. The calculation involves computing the minimum spanning tree (MST) of all community members, adding all MST nodes to a peer set, adding all peers of all MST nodes to the peer set (except for peers of the MST root node), and calculating alignment as the ratio (community size)/(peer set size), and freedom to collaborate across the organization as 1 - alignment. Framework 2000 includes four quadrants that classify the workplace into a colony 2010, hive 2020, nest 2030, or swarm 2040. Colony 2010 may relate to a workplace community that has stable relationships within the group. Hive 2020 may represent a workplace community that has agile reorganization within a group boundary. Nest 2030 may relate to stable relationships spanning group boundaries. Swarm 2040 may relate to agile reorganization spanning group boundaries. Attributes, such as individual learning, group learning, and group execution, for these groups are also identified in FIG. 20. FIG. 21 shows another framework 2100 for investigating similar workplace analytics. Framework 2100 also shows the degree of the changes in the collaborative links represented as focus shifting and the proportion of the links that are external to the group represented as boundary crossing by employees. Framework 2100 includes the same four quadrants as in FIG. 20 that classify the workplace into a colony 2110, hive 2120, nest 2130, or swarm 2140. Groups 2112 and 2114 are shown as having the attributes associated with a colony; group 2122 is shown as having the attributes associated with a hive; group 2132 is shown as a group having most of the employees having attributes associated with a nest and having a small number of employees having attributes associated with a swarm, ¶92-¶93).
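The alignment calculation described above can be sketched as follows. The reporting hierarchy and communities are hypothetical, the MST is assumed to be precomputed (only its node set and root are used here), and "peers" is assumed to mean siblings under the same manager, which the quoted passage does not spell out.

```python
# Hypothetical reporting hierarchy: employee -> manager.
manager = {"b": "a", "c": "a",
           "d": "b", "e": "b", "f": "b",
           "g": "c", "h": "c"}

def peers(emp):
    """Employees sharing emp's manager (assumption: 'peers' = siblings
    in the reporting hierarchy)."""
    m = manager.get(emp)
    return {e for e, mm in manager.items() if mm == m and e != emp}

def freedom_to_collaborate(mst_nodes, mst_root):
    """alignment = (community size) / (peer set size); freedom = 1 - alignment.
    mst_nodes is the node set of a precomputed MST over the community."""
    peer_set = set(mst_nodes)
    for node in mst_nodes:
        if node != mst_root:          # peers of the MST root are excluded
            peer_set |= peers(node)
    return 1.0 - len(mst_nodes) / len(peer_set)

# A community matching its reporting group vs. one crossing group lines:
freedom_within = freedom_to_collaborate({"d", "e", "f"}, "d")  # -> 0.0
freedom_across = freedom_to_collaborate({"d", "g"}, "d")
```

A community that mirrors its reporting group scores 0 (fully aligned), while one spanning group boundaries pulls in outside peers, shrinking alignment and raising the freedom score.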
Both the Ramanath and Edge references are analogous in that both are directed towards and concerned with data graphs and embeddings of entities. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Edge’s ability to ascertain different organizational communities in Ramanath’s system to improve the system and method, with a reasonable expectation that this would result in an entity management system that is able to infer different connections throughout an organization. The motivation being that there is a need to connect previously collected data, as predictive graph techniques are able to identify and prioritize relations of interest (Edge ¶4-¶8 and ¶43). The Examiner notes that the intended use, such as for use within an enterprise organization, does not patentably distinguish the claimed invention from the prior art. The Examiner also notes that any differences relating merely to the meaning and information conveyed through labels (i.e., labels of entities and relationships), which do not explicitly alter or impact the functionality of the claimed invention, do not patentably distinguish the claimed invention from the prior art (MPEP 2144.04). The Examiner also notes that duplication is obvious, MPEP 2144.04.VI.B. The duplication of parts (or steps) has no patentable significance unless a new and unexpected result is produced. The Examiner finds no evidence that performing the processes in the claims for a second or different level of granularity would produce new and unexpected results as compared to performing the processes in the claims for only a first level of granularity. As per claim 2, Ramanath and Edge disclose as shown above with respect to claim 1.
Ramanath further discloses wherein the entities include users, documents, emails, meetings, and conversations associated with the enterprise organization (The graph data structure 400 comprises an illustrative sub-network of the graph used to construct entity embeddings, such as embeddings for companies. In some example embodiments, each vertex or node 410 in the graph data structure 400 represents an entity, such as a company, and the edge weight (denoted by the edge thickness) represents the number of members of the social networking service that share the entity, such as the number of members that have worked at both companies. Similar graph data structures can be constructed for other entity types, such as skills and schools, as well. In the example where the entities in the graph data structure 400 are companies, the search system 216 may embed each company (e.g., each node 410 in the graph data structure 400) into a fixed dimensional latent space. In some example embodiments, the search system 216 is configured to learn first order and second order embeddings from the graph data structure 400, Ramanath ¶84; of user profiles, ¶82; In some example embodiments, the at least one entity comprises one of a job title, a company, a skill, a school, a degree, and an educational major. However, other types of entities are also within the scope of the present disclosure, ¶102) (The Examiner notes the ability to use other types of entities as equivalent to the entities including users, documents, emails, meetings, and conversations associated with the enterprise organization). As per claim 3, Ramanath and Edge disclose as shown above with respect to claim 1. Ramanath further discloses wherein the relationships include document authorship, document modification, document sharing, meeting invites, linked data between documents, email sending, and email replying (involving user actions, Ramanath ¶26-¶27; some form of interaction, ¶83).
As per claim 4, Ramanath and Edge disclose as shown above with respect to claim 1. Ramanath further discloses wherein the request for graph data is a request for nodes of the data graph that are related to a search query (receives requests, Ramanath ¶61; the search system 216 uses a variety of graph embedding algorithms. In some example embodiments, the search system 216 employs a Large-Scale Information Network Embeddings (LINE) approach. One LINE approach comprises constructing the graph of a social networking service by defining the members of the social networking service as vertices, and using some form of interaction (e.g., clicks, connections, or social actions) between members to compute the weight of the edge between any two members. However, for candidate search, this would create a large sparse graph resulting in intractable training and a noisy model. Instead, in some example embodiments, the search system 216 defines a weighted graph, G=(V, E, w . . . ), over the entities whose representations need to be learned (e.g., skill, title, company), and uses the number of members sharing the same entity on their profile to induce an edge weight (w . . . ) between the vertices. As a result, the search system 216 reduces the size of the problem by a few orders of magnitude by constructing a smaller and denser graph, ¶83; the generated graph data structure comprises a plurality of nodes and a plurality of edges, with each one of the plurality of nodes corresponding to a different entity indicated by the accessed profile data, and each one of the plurality of edges directly connecting a different pair of the plurality of nodes and indicating a number of the plurality of users whose profile data indicates both entities of the pair of nodes that are directly connected by the edge, ¶90). As per claim 5, Ramanath and Edge disclose as shown above with respect to claim 1.
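The denser entity graph described above, where the edge weight between two entities is the number of members whose profiles share both, can be sketched as a simple co-occurrence count. The profiles and entity names are hypothetical.

```python
from collections import Counter
from itertools import combinations

# Hypothetical member profiles, each listing the entities it indicates.
profiles = [
    {"java", "spring", "sql"},
    {"java", "spring"},
    {"python", "sql"},
]

# Edge weight w(u, v) = number of members whose profiles contain both
# entities, inducing the weighted graph G = (V, E, w) described above.
edge_weight = Counter()
for entities in profiles:
    for u, v in combinations(sorted(entities), 2):
        edge_weight[(u, v)] += 1

# edge_weight[("java", "spring")] == 2: two members list both entities.
```

Because the vertices are entities rather than individual members, the resulting graph is orders of magnitude smaller and denser than a member-interaction graph, which is the point the quoted passage makes.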
Ramanath further discloses wherein the request for graph data is a request for edges between selected nodes of the data graph and the graph data corresponds to predicted relationships between the selected nodes (digital representation of the relationships between these entities, Ramanath ¶82). As per claim 6, Ramanath and Edge disclose as shown above with respect to claim 1. Ramanath further discloses wherein each embedding of the search embedding and the set of embeddings is a vector having an integer n dimensions (Instead, it is more desirable to utilize semantic representations of entities, for example, in the form of low dimensional embeddings. Such representations allow for the sparse and numerous entities to be better incorporated as part of a machine learning model. Therefore, in some example embodiments, the search system 216 employs the application of representational learning for entities in the candidate search domain, and, in some example embodiments, leverages a graph data structure to learn such representations using an unsupervised approach, Ramanath ¶80; in a fixed dimensional latent space, ¶84-¶86). As per claim 7, Ramanath and Edge disclose as shown above with respect to claim 6. Ramanath further discloses wherein each embedding of the set of embeddings corresponds to a node of the data graph (At operation 540, the search system 216 performs a function of the social networking service using the generated embedding vectors of the entities. In some example embodiments, the function comprises receiving, from a client computing device, a search query indicating an entity of the first facet type, generating one or more search results for the search query using the generated embedding vectors of the entities, with the one or more search results comprising at least one of the plurality of users, and causing the one or more search results to be displayed on the client computing device, Ramanath ¶92). 
As per claim 8, Ramanath and Edge disclose as shown above with respect to claim 7. Ramanath further discloses wherein embeddings of the set of embeddings correspond to different types of entities within the enterprise organization (At operation 540, the search system 216 performs a function of the social networking service using the generated embedding vectors of the entities. In some example embodiments, the function comprises receiving, from a client computing device, a search query indicating an entity of the first facet type, generating one or more search results for the search query using the generated embedding vectors of the entities, with the one or more search results comprising at least one of the plurality of users, and causing the one or more search results to be displayed on the client computing device, Ramanath ¶92). As per claim 11, Ramanath discloses as shown above with respect to claim 1. Ramanath further discloses the method further comprising pre-computing the plurality of sets of embeddings before receiving the request; and wherein at least one set of embeddings is pre-computed for selection in response to different request types (This information is stored, for example, in the database 218. Similarly, when a representative of an organization initially registers the organization with the social networking service, the representative may be prompted to provide certain information about the organization. This information may be stored, for example, in the database 218, or another database (not shown). In some example embodiments, the profile data may be processed (e.g., in the background or offline) to generate various derived profile data. 
For example, if a member has provided information about various job titles the member has held with the same company or different companies, and for how long, this information can be used to infer or derive a member profile attribute indicating the member's overall seniority level, or seniority level within a particular company. In some example embodiments, importing or otherwise accessing data from one or more externally hosted data sources may enhance profile data for both members and organizations. For instance, with companies in particular, financial data may be imported from one or more external data sources, and made part of a company's profile.

Prosecution Timeline

Feb 25, 2022
Application Filed
Oct 11, 2023
Non-Final Rejection — §101, §103
Dec 21, 2023
Interview Requested
Jan 04, 2024
Examiner Interview Summary
Jan 04, 2024
Applicant Interview (Telephonic)
Jan 16, 2024
Response Filed
Feb 28, 2024
Final Rejection — §101, §103
May 06, 2024
Response after Non-Final Action
May 21, 2024
Response after Non-Final Action
May 30, 2024
Request for Continued Examination
Jun 01, 2024
Response after Non-Final Action
Jul 29, 2024
Non-Final Rejection — §101, §103
Oct 29, 2024
Applicant Interview (Telephonic)
Oct 29, 2024
Examiner Interview Summary
Nov 01, 2024
Response Filed
Dec 16, 2024
Final Rejection — §101, §103
May 20, 2025
Request for Continued Examination
May 22, 2025
Response after Non-Final Action
Jun 06, 2025
Non-Final Rejection — §101, §103
Sep 04, 2025
Applicant Interview (Telephonic)
Sep 04, 2025
Examiner Interview Summary
Sep 10, 2025
Response Filed
Sep 25, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600221
REAL ESTATE NAVIGATION SYSTEM FOR REAL ESTATE TRANSACTIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12530700
SYSTEM AND METHOD FOR DETERMINING BLOCKCHAIN-BASED CRYPTOCURRENCY CORRESPONDING TO SCAM COIN
2y 5m to grant Granted Jan 20, 2026
Patent 12443963
License Compliance Failure Risk Management
2y 5m to grant Granted Oct 14, 2025
Patent 12299696
METHODS AND SYSTEMS FOR PROCESSING SMART GAS REGULATORY INFORMATION BASED ON REGULATORY INTERNET OF THINGS
2y 5m to grant Granted May 13, 2025
Patent 12282962
DISTRIBUTED LEDGER FOR RETIREMENT PLAN INTRA-PLAN PARTICIPANT TRANSACTIONS
2y 5m to grant Granted Apr 22, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
19%
Grant Probability
38%
With Interview (+19.2%)
4y 9m
Median Time to Grant
High
PTA Risk
Based on 553 resolved cases by this examiner. Grant probability derived from career allow rate.
