Prosecution Insights
Last updated: April 19, 2026
Application No. 17/989,537

TIME AND EVENT DELAYING SHARING OF CONTENT WITH SPECIFIC RULE SETS

Status: Final Rejection (§103)
Filed: Nov 17, 2022
Examiner: LE, MIRANDA
Art Unit: 2153
Tech Center: 2100 — Computer Architecture & Software
Assignee: Life Record Holdings Inc.
OA Round: 2 (Final)

Predictions
Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% — above average (368 granted / 492 resolved; +19.8% vs Tech Center average)
Interview Lift: +77.1% higher allowance for resolved cases with an interview
Typical Timeline: 3y 11m average prosecution; 19 applications currently pending
Career History: 511 total applications across all art units

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§103: 69.2% (+29.2% vs TC avg)
§102: 4.4% (-35.6% vs TC avg)
§112: 3.8% (-36.2% vs TC avg)
Deltas are measured against Tech Center average estimates • Based on career data from 492 resolved cases
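The reported figures are internally consistent and can be recomputed directly. The sketch below is a quick sanity check of the arithmetic only; the variable names are ours, not from any analytics API, and the "implied baseline" is simply what each reported delta arithmetically presupposes.

```python
# Sanity-check the examiner statistics reported above.
granted, resolved = 368, 492

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")  # 74.8%, shown rounded as 75%

# The +19.8% delta implies a Tech Center average allow rate of about 55%.
tc_avg_allow = allow_rate - 19.8
print(f"Implied TC average allow rate: {tc_avg_allow:.1f}%")

# Each statute-specific rate and its delta imply the same 40% baseline,
# suggesting all four deltas are computed against one TC-wide estimate.
statutes = {"101": (16.5, -23.5), "103": (69.2, 29.2),
            "102": (4.4, -35.6), "112": (3.8, -36.2)}
for statute, (rate, delta) in statutes.items():
    print(f"§{statute}: implied TC baseline = {rate - delta:.1f}%")
```

Note the interview lift (+77.1%) cannot be checked the same way, since the with/without interview case counts are not reported.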

Office Action — §103 Final Rejection

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This communication is responsive to the Amendment filed 08/13/2025. Claims 1-21 are pending in this application. This action is made Final.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1-21 are rejected under 35 U.S.C. 103 as being unpatentable over Salkola et al. (US Pub No. 2020/0409936), in view of Moon et al. (US Pub No. 2020/0410012).

As to claims 1, 20, Salkola teaches a computer-implemented method for sharing content, the method comprising: obtaining, with a computer system, content from an online feed, webpage, or other corpus of content (i.e. The proactive inference layer 280 may infer user interests and preferences based on user profile that is retrieved from the user context engine 225 ... a proactive agent 285 may retrieve user profile from the user context engine 225 when executing the proactive task, [0057]); receiving, with the computer system, a selection of a portion of content within a content path based on the obtained content (i.e. a proactive agent 285 may retrieve user profile from the user context engine 225 when executing the proactive task. Therefore, the proactive agent 285 may execute the proactive task in a personalized and context-aware manner.
As an example and not by way of limitation, the proactive inference layer may infer that the user likes the band Maroon 5 and the proactive agent 285 may generate a recommendation of Maroon 5's new song/album to the user, [0057]; The knowledge graph may comprise vertices representing entities and edges representing relationships between entities, [0007]); generating, with the computer system, rule criteria for an event based on an event type for the selected portion of content (i.e. each proactive task may be associated with an agenda item. The agenda item may comprise a recurring item such as a daily digest ... a proactive agent 285 may retrieve user profile from the user context engine 225 when executing the proactive task, [0057]); determining, with the computer system, indications of users to apply rule criteria of the event based on the event type (i.e. a proactive agent 285 may retrieve user profile from the user context engine 225 when executing the proactive task. Therefore, the proactive agent 285 may execute the proactive task in a personalized and context-aware manner. As an example and not by way of limitation, the proactive inference layer may infer that the user likes the band Maroon 5 and the proactive agent 285 may generate a recommendation of Maroon 5's new song/album to the user, [0057]; The assistant system may additionally assist the user to manage different tasks such as keeping track of events, [0006]); analyzing, with the computer system, the obtained content to generate descriptors for the obtained content, the descriptors being generated with a decoder-based transformer with multiheaded attention (i.e. training data may comprise vectors each representing a training object and an expected label for each training object, [0118]; the proactive agent 285 may generate candidate entities associated with the proactive task based on user profile. 
The generation may be based on a straightforward backend query using deterministic filters to retrieve the candidate entities from a structured data store. The generation may be alternatively based on a machine-learning model that is trained based on user profile, entity attributes, and relevance between users and entities, the machine-learning model may be based on support vector machines (SVM), the machine-learning model may be based on a regression model, the machine-learning model may be based on a deep convolutional neural network (DCNN), [0058]); determining, with the computer system, based on a set of descriptors corresponding to a portion of the obtained content, that the set of descriptors are indicative of the event with an event model, the event model being configured to cluster vector representations of the generated descriptors according to a similarity metric (i.e. the proactive agent 285 may also rank the generated candidate entities based on user profile and the content associated with the candidate entities. The ranking may be based on the similarities between a user's interests and the candidate entities. As an example and not by way of limitation, the assistant system 140 may generate a feature vector representing a user's interest and feature vectors representing the candidate entities. The assistant system 140 may then calculate similarity scores (e.g., based on cosine similarity) between the feature vector representing the user's interest and the feature vectors representing the candidate entities. The ranking may be alternatively based on a ranking model that is trained based on user feedback data, [0058]); determining, with the computer system, for at least one of the users, that one or more of the descriptors in the set of descriptors satisfy the rule criteria of the event for the user (i.e. the proactive task may comprise recommending the candidate entities to a user. 
The proactive agent 285 may schedule the recommendation, thereby associating a recommendation time with the recommended candidate entities. The recommended candidate entities may be also associated with a priority and an expiration time. In particular embodiments, the recommended candidate entities may be sent to a proactive scheduler. The proactive scheduler may determine an actual time to send the recommended candidate entities to the user based on the priority associated with the task and other relevant factors ... the dialog engine 235 may identify the dialog intent, state, and history associated with the user. Based on the dialog intent, the dialog engine 235 may select some candidate entities among the recommended candidate entities to send to the client system 130, [0059]); and updating, with the computer system, in response to the determination, permissions of the user to permit the user access to the portion of the content within the content path (i.e. to generate a personalized and context-aware communication content comprising the selected candidate entities, subject to the user's privacy settings, [0059]; the assistant xbot 215 may communicate with a proactive agent 285 in response to a user input .. may ask the assistant xbot 215 to set up a reminder. The assistant xbot 215 may request a proactive agent 285 to set up such reminder and the proactive agent 285 may proactively execute the task of reminding the user at a later time, [0060]; the assistant system 140 may comprise a summarizer 290. The summarizer 290 may provide customized news feed summaries to a user. In particular embodiments, the summarizer 290 may comprise a plurality of meta-agents. 
The plurality of meta-agents may use the first-party agents 250, third-party agents 255, or proactive agents 285 to generate news feed summaries, [0061]; the assistant system may check privacy settings to ensure that accessing a user's profile or other user information and executing different tasks are permitted subject to the user's privacy settings, [0006]; The assistant system may require access to knowledge described by entities and stored in a knowledge graph. The knowledge graph may comprise vertices representing entities and edges representing relationships between entities, [0007]).

Although Salkola implicitly teaches the term "path" (i.e. The knowledge graph may comprise vertices representing entities and edges representing relationships between entities, [0007]), Salkola does not clearly state this term. Moon specifically teaches this term (i.e. the one or more candidate nodes may be selected from the one or more nodes corresponding to the one or more episodic memories of the user. The main QA model takes these graph traversal paths and expanded memory slots as input and predicts correct answers via multiple module networks (e.g. COUNT, CHOOSE, etc.). Examples of such queries include “Where did we go after we had brunch with Jon?”, “How many times did I go to jazz concerts last year?”, etc. For episodic memory QA, a machine has to understand the contexts of a question and navigate multiple MG episode nodes as well as KG nodes to gather comprehensive information to match the query requirement, [0073]).

It would have been obvious to one of ordinary skill in the art having the teaching of Salkola, Moon before the effective filing date of the claimed invention to modify the system of Salkola to include the limitations as taught by Moon.
One of ordinary skill in the art would be motivated to make this combination in order to take the graph traversal paths and expanded memory slots as input and predict correct answers via multiple module networks, in view of Moon ([0073]), as doing so would give the added benefit of having the query be associated with a context, and selecting the one or more candidate nodes may be further based on the context associated with the query, as taught by Moon ([0073]).

As per claim 2, Salkola, as combined, teaches the computer-implemented method of claim 1, wherein: the content path comprises a plurality of nodes, each corresponding to one or more content items within the content path (i.e. The assistant system may require access to knowledge described by entities and stored in a knowledge graph. The knowledge graph may comprise vertices representing entities and edges representing relationships between entities, [0007]); each of the plurality of nodes is associated with descriptions indicative of the one or more content items corresponding to the node (i.e. Each attribute value may be also associated with a semantic weight. A semantic weight for an attribute value may represent how semantically appropriate the value is for the given entity considering all the available information. For example, the knowledge graph may comprise an entity of a movie “The Martian” (2015), which includes information that has been extracted from multiple content sources (e.g., an online social network, an online encyclopedia, movie review sources, media databases, and entertainment content sources), and then deduped, resolved, and fused to generate the single unique record for the knowledge graph, [0050]); the similarity metric comprises a Euclidean distance or a cosine similarity distance (i.e.
The assistant system 140 may then calculate similarity scores (e.g., based on cosine similarity) between the feature vector representing the user's interest and the feature vectors representing the candidate entities, [0058]); and the vectors are clustered with density-based spatial clustering (i.e. FIG. 15 illustrates an example view of a vector space 1500. In particular embodiments, an object or an n-gram may be represented in a d-dimensional vector space, where d denotes any suitable number of dimensions. Although the vector space 1500 is illustrated as a three-dimensional space, this is for illustrative purposes only, as the vector space 1500 may be of any suitable dimension. In particular embodiments, an n-gram may be represented in the vector space 1500 as a vector referred to as a term embedding. Each vector may comprise coordinates corresponding to a particular point in the vector space 1500 (i.e., the terminal point of the vector); vectors 1510, 1520, and 1530 may be represented as points in the vector space 1500, as illustrated in FIG. 15. An n-gram may be mapped to a respective vector representation, [0111]; A similarity metric of vectors, [0113]).

As per claim 3, Moon, as combined, teaches the computer-implemented method of claim 2, wherein receiving a selection of a portion of content within a content path comprises: identifying a subset of nodes within the content path based on edge parameters of edges formed between nodes within the subset of nodes (i.e. In particular embodiments, the assistant system 140 may receive, from a client system 130 associated with a user, a query from the user. The assistant system 140 may then determine, based on the query, one or more initial memory slots. In particular embodiments, the assistant system 140 may access a memory graph associated with the user. The memory graph may comprise a plurality of nodes and a plurality of edges connecting the nodes. In particular embodiments, one or more of the nodes may correspond to one or more episodic memories of the user, respectively. Each edge may correspond to a relationship between the connected nodes. In particular embodiments, the assistant system 140 may select, by one or more machine-learning models based on the initial memory slots, one or more candidate nodes from the memory graph. The assistant system 140 may then generate a response based on the initial memory slots and episodic memories corresponding to the selected candidate nodes. In particular embodiments, the assistant system 140 may further send, to the client system 130 in response to the query, instructions for presenting the response, [0070]).

As per claim 4, Moon, as combined, teaches the computer-implemented method of claim 3, further comprising: forming the edges between the nodes within the subset of nodes based on the descriptors corresponding to the respective nodes (i.e. The memory graph may comprise a plurality of nodes and a plurality of edges connecting the nodes. In particular embodiments, one or more of the nodes may correspond to one or more episodic memories of the user, respectively. Each edge may correspond to a relationship between the connected nodes, [0008]).

As per claim 5, Moon, as combined, teaches the computer-implemented method of claim 3, further comprising: forming the edges between the nodes within the subset of nodes based on a shared context of the respective nodes (i.e. The system can insert conversational recommendations for exploring related memories based on the system's model of which memories are naturally interesting for users to consume in a particular context. In FIG. 6B, the system suggests the user to look for other memory instances that share the same activity and set of people. The system can make the suggestions more personalized by learning the sequences in which users like to explore memories, from the users' past sessions, [0088]).
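The cited Salkola passages ([0058]) describe ranking candidate entities by cosine similarity between a feature vector for the user's interests and feature vectors for the candidates. A minimal sketch of that ranking step follows; the vectors and the names `user_vec` and `candidates` are illustrative assumptions, not taken from either reference.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Hypothetical feature vectors: one for the user's interests,
# one per candidate entity that might be recommended.
user_vec = [0.9, 0.1, 0.3]
candidates = {
    "new_album": [0.8, 0.2, 0.4],
    "cooking_video": [0.1, 0.9, 0.2],
}

# Rank candidates by descending similarity to the user's interest vector.
ranked = sorted(candidates,
                key=lambda name: cosine_similarity(user_vec, candidates[name]),
                reverse=True)
print(ranked)  # "new_album" ranks above "cooking_video"
```

Because cosine similarity ignores vector magnitude, the same comparison underlies the claim-2 alternative of clustering descriptor vectors by a cosine (rather than Euclidean) distance.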
As per claim 6, Moon, as combined, teaches the computer-implemented method of claim 3, further comprising: forming the edges between the nodes within the subset of nodes based on a temporal ordering of the respective nodes (i.e. FIG. 4 illustrates example episodic memory question answering 400 with user queries and memory graphs with knowledge graph entities. Relevant memory nodes are provided as initial memory slots via graph search lookup. The memory graph network walks from the initial nodes to attend to relevant contexts and expands the memory slots when necessary. In particular embodiments, the one or more candidate nodes may be selected from the one or more nodes corresponding to the one or more episodic memories of the user. The main QA model takes these graph traversal paths and expanded memory slots as input and predicts correct answers via multiple module networks (e.g. COUNT, CHOOSE, etc.). Examples of such queries include “Where did we go after we had brunch with Jon?”, “How many times did I go to jazz concerts last year?”, etc. For episodic memory QA, a machine has to understand the contexts of a question and navigate multiple MG episode nodes as well as KG nodes to gather comprehensive information to match the query requirement, [0073]; catalog-based browsing systems that allow scrolling through memories across a single dimension, most commonly, time of creation, [0075]).

As per claim 7, Moon, as combined, teaches the computer-implemented method of claim 2, wherein receiving a selection of a portion of content within the content path comprises: receiving a selection of the content path (i.e. Storing graph nodes as memory slots and allowing the network to dynamically expand memory slots through graph traversals may be effective solutions for addressing the technical challenge of disambiguating ambiguous and incomplete descriptions of reference memory without extensive candidate memory generation as graph traversals may only identify the most relevant memory slots stored in a readily available memory graph to accurately determine reference memory, [0077]); and receiving a selection of a subset of nodes within the content path (i.e. building a synthetic memory graph generator to create multiple episodic memory graph nodes connected with real entities may be an effective solution for addressing the technical challenge of target memory being only indirectly linked to reference memory or entities as episodic memory graph nodes are connected with real entities in the memory graph, [0078]).

As per claim 8, Salkola, as combined, teaches the computer-implemented method of claim 1, wherein generating rule criteria for an event based on an event type for the selected portion of content comprises: determining one or more candidate event types for the selected portion of content based on descriptors of the selected content (i.e. the proactive task may comprise recommending the candidate entities to a user. The proactive agent 285 may schedule the recommendation, thereby associating a recommendation time with the recommended candidate entities. The recommended candidate entities may be also associated with a priority and an expiration time. In particular embodiments, the recommended candidate entities may be sent to a proactive scheduler. The proactive scheduler may determine an actual time to send the recommended candidate entities to the user based on the priority associated with the task and other relevant factors ... the dialog engine 235 may identify the dialog intent, state, and history associated with the user.
Based on the dialog intent, the dialog engine 235 may select some candidate entities among the recommended candidate entities to send to the client system 130, [0059]).

As per claim 9, Salkola, as combined, teaches the computer-implemented method of claim 8, further comprising determining one or more descriptors for a content item, the determining comprising one or more of: classifying one or more objects or a scene depicted in image data to obtain text descriptors; classifying optical characters depicted in image data to obtain text descriptors (i.e. If the user input is based on an image or video modality, the assistant system 140 may process it using optical character recognition techniques within the messaging platform 205 to convert the user input into text, [0048]; The entity may be classified into one of a plurality of domains. The domain is associated with a pre-determined list of required attributes corresponding to the domain, [0011]); classifying audio to obtain text descriptors (i.e. If the user input is based on an audio modality (e.g., the user may speak to the assistant application 136 or send a video including speech to the assistant application 136), the assistant system 140 may process it using an audio speech recognition (ASR) module 210 to convert the user input into text, [0048]; The entity may be classified into one of a plurality of domains. The domain is associated with a pre-determined list of required attributes corresponding to the domain, [0011]); or processing natural language text to obtain text descriptors (i.e. The assistant system 140 may use natural-language understanding to analyze the user request based on user profile and other relevant information, [0038]; The entity may be classified into one of a plurality of domains. The domain is associated with a pre-determined list of required attributes corresponding to the domain, [0011]).
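Claim 9's limitation enumerates modality-specific routes to text descriptors (object/scene classification, OCR, speech recognition, NLP). A toy sketch of that dispatch pattern follows; the stub extractors merely return placeholder labels where a real system would invoke vision, OCR, ASR, and NLP models, and every function name here is illustrative rather than taken from the application or the cited references.

```python
from typing import Callable, Dict, List

# Stub extractors; a real system would call vision, OCR, ASR, and NLP models.
def classify_image(data: bytes) -> List[str]:
    return ["scene:outdoor"]          # placeholder scene/object labels

def ocr_image(data: bytes) -> List[str]:
    return ["text:menu"]              # placeholder recognized characters

def transcribe_audio(data: bytes) -> List[str]:
    return ["speech:happy birthday"]  # placeholder transcript terms

def parse_text(data: bytes) -> List[str]:
    return ["topic:concert"]          # placeholder NLP-derived terms

# Route each content item to the extractors registered for its modality;
# an image goes through both scene classification and OCR.
EXTRACTORS: Dict[str, List[Callable[[bytes], List[str]]]] = {
    "image": [classify_image, ocr_image],
    "audio": [transcribe_audio],
    "text": [parse_text],
}

def descriptors_for(modality: str, data: bytes) -> List[str]:
    """Collect text descriptors from every extractor for the given modality."""
    return [d for extract in EXTRACTORS.get(modality, []) for d in extract(data)]

print(descriptors_for("image", b""))  # ['scene:outdoor', 'text:menu']
```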
As per claim 10, Salkola, as combined, teaches the computer-implemented method of claim 9, wherein receiving a selection of a portion of content within a content path further comprises: determining candidate selections of different portions of content within the content path based on the descriptors (i.e. the proactive agent 285 may generate candidate entities associated with the proactive task based on user profile. The generation may be based on a straightforward backend query using deterministic filters to retrieve the candidate entities from a structured data store, [0058]); receiving a selection of a candidate selection corresponding to the portion of the content within the content path (i.e. The generation may be alternatively based on a machine-learning model that is trained based on user profile, entity attributes, and relevance between users and entities. As an example and not by way of limitation, the machine-learning model may be based on support vector machines (SVM). As another example and not by way of limitation, the machine-learning model may be based on a regression model. As another example and not by way of limitation, the machine-learning model may be based on a deep convolutional neural network (DCNN). In particular embodiments, the proactive agent 285 may also rank the generated candidate entities based on user profile and the content associated with the candidate entities, [0058]); displaying the one or more candidate event types corresponding to the portion of the content responsive to the candidate selection (i.e. The ranking may be based on the similarities between a user's interests and the candidate entities. As an example and not by way of limitation, the assistant system 140 may generate a feature vector representing a user's interest and feature vectors representing the candidate entities. The assistant system 140 may then calculate similarity scores (e.g., based on cosine similarity) between the feature vector representing the user's interest and the feature vectors representing the candidate entities. The ranking may be alternatively based on a ranking model that is trained based on user feedback data, [0058]); and receiving a selection of a candidate event type (i.e. the assistant xbot 215 may communicate with a proactive agent 285 in response to a user input. As an example and not by way of limitation, the user may ask the assistant xbot 215 to set up a reminder. The assistant xbot 215 may request a proactive agent 285 to set up such reminder and the proactive agent 285 may proactively execute the task of reminding the user at a later time, [0060]; The ranking may be based on the similarities between a user's interests and the candidate entities. As an example and not by way of limitation, the assistant system 140 may generate a feature vector representing a user's interest and feature vectors representing the candidate entities. The assistant system 140 may then calculate similarity scores (e.g., based on cosine similarity) between the feature vector representing the user's interest and the feature vectors representing the candidate entities. The ranking may be alternatively based on a ranking model that is trained based on user feedback data, [0058]).

As per claim 11, Salkola, as combined, teaches the computer-implemented method of claim 10, further comprising: generating the rule criteria based on the selected candidate event type (i.e. a proactive agent 285 may retrieve user profile from the user context engine 225 when executing the proactive task. Therefore, the proactive agent 285 may execute the proactive task in a personalized and context-aware manner.
As an example and not by way of limitation, the proactive inference layer may infer that the user likes the band Maroon 5 and the proactive agent 285 may generate a recommendation of Maroon 5's new song/album to the user, [0057]; The assistant system may additionally assist the user to manage different tasks such as keeping track of events, [0006]).

As per claim 12, Moon, as combined, teaches the computer-implemented method of claim 1, wherein determining indications of users to apply rule criteria of the event based on the event type comprises: identifying, based on the event type and relationships between a user creator of the content path and other users, one or more other users or types of users to which the event is applicable (i.e. The first user may specify privacy settings that apply to a particular edge 1006 connecting to the concept node 1004 of the object, or may specify privacy settings that apply to all edges 1006 connecting to the concept node 1004. As another example and not by way of limitation, the first user may share a set of objects of a particular object-type (e.g., a set of images). The first user may specify privacy settings with respect to all objects associated with the first user of that particular object-type as having a particular privacy setting (e.g., specifying that all images posted by the first user are visible only to friends of the first user and/or users tagged in the images), [0214]).

As per claim 13, Moon, as combined, teaches the computer-implemented method of claim 12, further comprising: determining that a new user corresponds to a type of user to which the event is applicable based on relationships formed between the new user and the user creator or the one or more other users (i.e. The user's privacy settings may specify that such information may be used only for particular processes, such as authentication, and further specify that such information may not be shared with any third-party system 170 or used for other processes or applications associated with the social-networking system 160 or assistant system 140, [0224]); and determining to apply the rule criteria of the event to the new user (i.e. A privacy setting of a user determines how particular information associated with a user can be shared. The authorization server may allow users to opt in to or opt out of having their actions logged by the social-networking system 160 or shared with other systems (e.g., a third-party system 170), such as, for example, by setting appropriate privacy settings, [0043]).

As per claim 14, Salkola, as combined, teaches the computer-implemented method of claim 1, wherein analyzing obtained content to generate descriptors for the obtained content comprises obtaining one or more of content items of other content paths, or content ingested from a feed or web-crawler, the analyzing further comprising: classifying one or more objects or a scene depicted in image data to obtain text descriptors; classifying optical characters depicted in image data to obtain text descriptors (i.e. If the user input is based on an image or video modality, the assistant system 140 may process it using optical character recognition techniques within the messaging platform 205 to convert the user input into text, [0048]; The entity may be classified into one of a plurality of domains. The domain is associated with a pre-determined list of required attributes corresponding to the domain, [0011]); classifying audio to obtain text descriptors (i.e.
If the user input is based on an audio modality (e.g., the user may speak to the assistant application 136 or send a video including speech to the assistant application 136), the assistant system 140 may process it using an audio speech recognition (ASR) module 210 to convert the user input into text, [0048]; The entity may be classified into one of a plurality of domains. The domain is associated with a pre-determined list of required attributes corresponding to the domain, [0011]); or processing natural language text to obtain text descriptors (i.e. The assist system 140 may use natural-language understanding to analyze the user request based on user profile and other relevant information, [0038]; The entity may be classified into one of a plurality of domains. The domain is associated with a pre-determined list of required attributes corresponding to the domain, [0011]). As to claim 15, Salkola, as combined, teaches the computer-implemented of claim 14, wherein determining, based on a set of descriptors corresponding to a portion of the obtained content, that the set of descriptors are indicative of the event comprises: determining the set of descriptors correspond to the event type (i.e. a proactive agent 285 may retrieve user profile from the user context engine 225 when executing the proactive task. Therefore, the proactive agent 285 may execute the proactive task in a personalized and context-aware manner. As an example and not by way of limitation, the proactive inference layer may infer that the user likes the band Maroon 5 and the proactive agent 285 may generate a recommendation of Maroon 5's new song/album to the user, [0057]; The assistant system may additionally assist the user to manage different tasks such as keeping track of events, [0006]); and determining that one or more of the descriptors in the set of descriptors correspond to the at least one of the users (i.e. 
a proactive agent 285 may retrieve user profile from the user context engine 225 when executing the proactive task. Therefore, the proactive agent 285 may execute the proactive task in a personalized and context-aware manner. As an example and not by way of limitation, the proactive inference layer may infer that the user likes the band Maroon 5 and the proactive agent 285 may generate a recommendation of Maroon 5's new song/album to the user, [0057]).

As per claim 16, Salkola, as combined, teaches the computer-implemented method of claim 15, wherein determining that the one or more of the descriptors in the set of descriptors satisfy the rule criteria of the event for the user comprises: comparing properties of the descriptors in the set of descriptors that correspond to the user with the rule criteria (i.e. the proactive task may comprise recommending the candidate entities to a user. The proactive agent 285 may schedule the recommendation, thereby associating a recommendation time with the recommended candidate entities. The recommended candidate entities may be also associated with a priority and an expiration time. In particular embodiments, the recommended candidate entities may be sent to a proactive scheduler. The proactive scheduler may determine an actual time to send the recommended candidate entities to the user based on the priority associated with the task and other relevant factors ... the dialog engine 235 may identify the dialog intent, state, and history associated with the user. Based on the dialog intent, the dialog engine 235 may select some candidate entities among the recommended candidate entities to send to the client system 130, [0059]); and determining whether the comparison indicates satisfaction of the rule criteria (i.e. the proactive task may comprise recommending the candidate entities to a user. The proactive agent 285 may schedule the recommendation, thereby associating a recommendation time with the recommended candidate entities. 
The recommended candidate entities may be also associated with a priority and an expiration time. In particular embodiments, the recommended candidate entities may be sent to a proactive scheduler. The proactive scheduler may determine an actual time to send the recommended candidate entities to the user based on the priority associated with the task and other relevant factors ... the dialog engine 235 may identify the dialog intent, state, and history associated with the user. Based on the dialog intent, the dialog engine 235 may select some candidate entities among the recommended candidate entities to send to the client system 130, [0059]).

As per claim 17, Salkola, as combined, teaches the computer-implemented method of claim 1, wherein determining that the one or more of the descriptors in the set of descriptors satisfy the rule criteria of the event for the user comprises: updating a record of the user based on the one or more of the descriptors in the set of descriptors corresponding to the user (i.e. the proactive task may comprise recommending the candidate entities to a user. The proactive agent 285 may schedule the recommendation, thereby associating a recommendation time with the recommended candidate entities, [0059]); storing a record of the event (i.e. The recommended candidate entities may be also associated with a priority and an expiration time. In particular embodiments, the recommended candidate entities may be sent to a proactive scheduler. The proactive scheduler may determine an actual time to send the recommended candidate entities to the user based on the priority associated with the task and other relevant factors ... the dialog engine 235 may identify the dialog intent, state, and history associated with the user. 
Based on the dialog intent, the dialog engine 235 may select some candidate entities among the recommended candidate entities to send to the client system 130, [0059]); and determining whether properties of the record of the user and the record of the event satisfy rule criteria governing access of the user to one or more content paths (i.e. the recommended candidate entities may be sent to a proactive scheduler. The proactive scheduler may determine an actual time to send the recommended candidate entities to the user based on the priority associated with the task and other relevant factors ... the dialog engine 235 may identify the dialog intent, state, and history associated with the user. Based on the dialog intent, the dialog engine 235 may select some candidate entities among the recommended candidate entities to send to the client system 130, [0059]).

As per claim 18, Salkola, as combined, teaches the computer-implemented method of claim 1, wherein updating, in response to the determination, permissions of the user to permit the user access to the portion of the content within the content path comprises: notifying the user of permission to access the portion of the content within the content path (i.e. the assistant xbot 215 may communicate with a proactive agent 285 in response to a user input. As an example and not by way of limitation, the user may ask the assistant xbot 215 to set up a reminder. The assistant xbot 215 may request a proactive agent 285 to set up such reminder and the proactive agent 285 may proactively execute the task of reminding the user at a later time, [0060]; the assistant system may check privacy settings to ensure that accessing a user's profile or other user information and executing different tasks are permitted subject to the user's privacy settings, [0006]). 
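The limitations mapped for claims 14 through 16 describe a concrete pipeline: multi-modal content (images, audio, text) is reduced to a set of text descriptors, which are then compared against an event's rule criteria. A minimal sketch of that flow, with hypothetical stub classifiers standing in for the object-recognition, OCR, and speech-recognition models (none of these function names come from the application or from Salkola):

```python
# Hypothetical stubs for the recognition models named in claim 14.
def classify_image(data):   # object/scene classification
    return ["birthday", "cake"]

def ocr(data):              # optical character recognition
    return ["happy", "18th"]

def transcribe(data):       # speech recognition on audio content
    return ["congratulations"]

def generate_descriptors(content):
    """Claim 14: map each content item to text descriptors by modality."""
    descriptors = set()
    for item in content:
        if item["modality"] == "image":
            descriptors.update(classify_image(item["data"]))
            descriptors.update(ocr(item["data"]))
        elif item["modality"] == "audio":
            descriptors.update(transcribe(item["data"]))
        elif item["modality"] == "text":
            descriptors.update(item["data"].lower().split())
    return descriptors

def satisfies_rule(descriptors, rule_criteria):
    """Claims 15-16: the event fires when every descriptor required by
    the rule criteria appears in the generated set."""
    return rule_criteria <= descriptors

content = [
    {"modality": "image", "data": b"..."},
    {"modality": "text", "data": "Happy 18th birthday"},
]
found = generate_descriptors(content)
print(satisfies_rule(found, {"birthday", "18th"}))  # True
```

The subset test is only one plausible reading of "comparison indicates satisfaction"; the claims as quoted leave the matching semantics open.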
As per claim 19, Salkola, as combined, teaches the computer-implemented method of claim 1, wherein updating, in response to the determination, permissions of the user to permit the user access to the portion of the content within the content path comprises: updating an interface accessible under an account of the user to indicate permission to access the portion of the content within the content path (i.e. the assistant xbot 215 may communicate with a proactive agent 285 in response to a user input. As an example and not by way of limitation, the user may ask the assistant xbot 215 to set up a reminder. The assistant xbot 215 may request a proactive agent 285 to set up such reminder and the proactive agent 285 may proactively execute the task of reminding the user at a later time, [0060]; the assistant system may check privacy settings to ensure that accessing a user's profile or other user information and executing different tasks are permitted subject to the user's privacy settings, [0006]).

As per claim 21, Salkola, as combined, teaches the method of claim 1, comprising: creating the content with steps for content path creation (i.e. the assistant system may resolve entity records from multiple data sources such that records describing an entity are identified and are associated with a globally unique identifier. The assistant system may require access to knowledge described by entities and stored in a knowledge graph, [0007]).

Response to Arguments

Applicant's arguments with respect to claims 1-21 have been considered but are moot in view of the new ground(s) of rejection.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIRANDA LE whose telephone number is (571)272-4112. The examiner can normally be reached M-F 7AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kavita Stanley, can be reached on 571-272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/MIRANDA LE/ Primary Examiner, Art Unit 2153
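Claims 17 through 19, as mapped in the action above, describe the downstream step of the claimed invention: update the user's record with matching descriptors, store the event, and, once the rule criteria are satisfied, grant and announce access to the time- or event-delayed content path. A hedged sketch of that permission-update logic, with all names illustrative rather than drawn from the application:

```python
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    user_id: str
    descriptors: set = field(default_factory=set)
    permitted_paths: set = field(default_factory=set)
    notifications: list = field(default_factory=list)

def apply_event(user, event_descriptors, rule_criteria, content_path):
    """Claims 17-19 as mapped: fold matching descriptors into the user
    record, then grant and surface access to the content path once the
    rule criteria governing that path are satisfied."""
    user.descriptors |= event_descriptors           # claim 17: update record
    if rule_criteria <= user.descriptors:           # rule criteria satisfied
        user.permitted_paths.add(content_path)      # update permissions
        user.notifications.append(                  # claims 18-19: notify,
            f"You now have access to {content_path}")  # update interface
        return True
    return False

u = UserRecord("alice")
apply_event(u, {"graduation"}, {"graduation", "age18"}, "/vault/letters")
granted = apply_event(u, {"age18"}, {"graduation", "age18"}, "/vault/letters")
print(granted, u.permitted_paths)  # True {'/vault/letters'}
```

Access is withheld until both criteria have been observed across events, which is the "delaying sharing of content with specific rule sets" behavior the application title describes; how the real system persists and re-evaluates records is not specified here.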

Prosecution Timeline

Nov 17, 2022
Application Filed
Feb 08, 2025
Non-Final Rejection — §103
Aug 13, 2025
Response Filed
Nov 29, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591565
PREDICTING PURGE EFFECTS IN HIERARCHICAL DATA ENVIRONMENTS
2y 5m to grant • Granted Mar 31, 2026
Patent 12547635
METHOD AND APPARATUS FOR SPATIAL DATA PROCESSING
2y 5m to grant • Granted Feb 10, 2026
Patent 12517907
GRAPH-BASED QUERY ENGINE FOR AN EXTENSIBILITY PLATFORM
2y 5m to grant • Granted Jan 06, 2026
Patent 12517929
MAPPING DISPARATE DATASETS
2y 5m to grant • Granted Jan 06, 2026
Patent 12488015
SYSTEMS AND METHODS FOR INTERACTIVE ANALYSIS
2y 5m to grant • Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+77.1%)
3y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 492 resolved cases by this examiner. Grant probability derived from career allow rate.
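The 99% with-interview figure appears to follow from applying the +77.1% interview lift to the 75% career allow rate and capping the result below certainty; that formula is an inference from the displayed numbers, not something the page documents:

```python
def with_interview(base_rate, lift, cap=0.99):
    """Apply a relative interview lift to a base grant probability,
    capping the result (assumed formula, inferred from the page)."""
    return min(base_rate * (1 + lift), cap)

# 0.75 * 1.771 = 1.328..., so the cap produces the displayed 99%.
print(with_interview(0.75, 0.771))  # 0.99
```

A lower base rate would pass through uncapped, e.g. a 40% baseline yields about 70.8% with the same lift.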
