DETAILED ACTION
This communication is in response to the Application filed on 12/22/2025. Claims 1-20 are pending and have been examined. This Action has been made FINAL.
Any previous objection/rejection not mentioned in this Office Action has been withdrawn by the examiner.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 13, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 12/23/2025 have been fully considered but they are not persuasive.
With respect to the 35 U.S.C. 112 rejections, the applicant amended the claims to address the concerns raised in the non-final Office Action. The amendments clarify the indefinite claim language, and the rejections have been withdrawn.
With respect to the 35 U.S.C. 101 rejections, the applicant amended the claims to address the concerns raised in the non-final Office Action. The amended claim language distinguishes the claims from an abstract idea by incorporating the additional component into the limitations. The rejections under 35 U.S.C. 101 have been withdrawn.
With respect to the 35 U.S.C. 103 rejections, regarding the amendments to claims 1, 7, and 14, the applicants’ arguments are considered moot in view of an updated prior art search necessitated by the amendments. This is also the case for dependent claims 3-5, 10-12, and 17-19.
For claims 2, 6, 8, 9, 13, 15, 16, and 20, the applicant asserts that the additional references cited for these claims also do not teach the amended independent claim limitations. These arguments are considered moot for the same reason stated above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-5, 7, 10-12, 14, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication US 20250103636 (Chan et al.) in view of US Patent Application Publication US 20250028576 A1 (Zhang et al.) and US Patent Application Publication US 20240205038 (Dotan-Cohen et al.).
Regarding Claims 1, 7, and 14, Chan et al. teaches A system comprising: one or more processors;
(Alternatively, a non-transitory computer readable medium can comprise instructions, that when executed by one or more processors, cause a computing device to perform the acts of FIG. 7.) (Paragraph 85).
and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations comprising:
(Alternatively, a non-transitory computer readable medium can comprise instructions, that when executed by one or more processors, cause a computing device to perform the acts of FIG. 7.) (Paragraph 85).
receiving, from a virtual space associated with a communication platform, first data indicative of an instruction to generate a summary for a user profile;
(To elaborate, the aggregated interface system can generate an aggregated feed interface that condenses and consolidates digital communications and contextual activity data across various computer applications into a single, efficient interface. In many scenarios, either professionally or personally, user accounts are linked to many different computer applications, including applications dedicated to digital communication of one form or another as well as applications designed for tracking project collaboration, social media posting, and/or updating debugging tickets.) (Paragraph 16).
(As used herein, the term “aggregated feed interface” refers to a graphical user interface presented or displayed via a client device that includes one or more aggregate summaries.) (Paragraph 24).
(Relatedly, the term “aggregate summary” refers to a specific type of content item that is generated from, and condenses or summarizes, thread data across multiple data feeds of a user account (or multiple user accounts).) (Paragraph 25).
The method of Chan et al. has a virtual space consisting of a user account with a plurality of applications linked to it. The interface of the system (Fig. 5) presents summaries to the user; thus, by opening the interface/application, the user is requesting that a summary be generated. In this sense, the user signing in or opening the interface is data indicative of an instruction to generate a summary.
receiving, in response to receiving the first data, second data associated with the user profile;
(As illustrated in FIG. 2, the aggregated interface system 102 accesses, receives, determines, or detects thread data from multiple data feeds. More specifically, the aggregated interface system 102 identifies data feeds of a user account by detecting digital communications involving the user account and/or by detecting contextual activity data performed by a client device and/or a server to execute a computer function for the user account.) (Paragraph 44).
Detecting contextual activity can include detecting the user requesting a summary, in which case the system accesses the thread data from multiple feeds, which represents the second data.
determining, based at least in part on the second data (that represents a temperature of a user device satisfying a threshold temperature) (addressed by Zhang et al.), to generate the summary using a machine-learning model associated with an operating system of a user device associated with the user profile;
(the aggregated interface system can generate an aggregate summary from thread data extracted from multiple data feeds of a user account. For instance, the aggregated interface system can extract digital communications and/or contextual activity data from data feeds generated by, or otherwise associated with, various computer applications linked to a user account.) (Paragraph 16).
(In particular, the aggregated interface system 102 can utilize the summary generation model 118 that is integrated with (e.g., trained by data from) the content management system 106 and/or the knowledge graph 120. For example, the knowledge graph 120 can store or encode relationship information to define relationships between user accounts and thread data within the content management system 106 (and/or housed at other server locations). From the knowledge graph 120, the summary generation model 118 can generate account-specific aggregate summaries from thread data to provide to the client device 108. For instance, the summary generation model 118 can generate an aggregate summary that summarizes thread data from multiple data feeds of the client device 108 and that includes and omits thread data as determined via the knowledge graph 120.) (Paragraph 39).
(For example, a summary generation model can refer to a large language model or another type of neural network that generates an aggregate summary from input thread data, including digital communications and/or contextual activity data.) (Paragraph 28).
(the aggregated interface system 102 may be implemented by the client device 108, and/or a third-party device. For example, the client device 108 can download all or part of the aggregated interface system 102 for implementation independent of, or together with, the server(s) 104.) (Paragraph 41).
Summaries are generated using both extracted data from the user account and contextual data from the user account. The knowledge graph utilizes this contextual information in summary generation and can include or omit data, thus making a determination on the summary generation. Furthermore, the thread data is used to generate a summary using a LLM or neural network. Also, as can be seen in Fig. 1, the system of Chan et al. can be implemented entirely on the client’s device.
wherein the machine-learning model is native to the operating system of the user device and configured to receive instructions via an API provided by the operating system;
(As mentioned above, the aggregated interface system can generate or identify content items using a large language model. As used herein, the term “large language model” refers to a machine learning model trained to perform computer tasks to generate or identify content items in response to trigger events (e.g., user interactions, such as text queries and button selections).) (Paragraph 29).
(For example, the aggregated interface system 102 may be implemented by the client device 108, and/or a third-party device.) (Paragraph 41).
(For example, the aggregated interface system 102 integrates or links data feeds using application programming interfaces (APIs) or custom-coded integrations to access or collect thread data from the various data feeds. In some cases, the aggregated interface system 102 utilizes APIs made available by source applications that host the various data feeds associated with the user account.) (Paragraph 44).
The system presented by Chan et al. can be implemented on the client’s device. The system also interfaces with APIs to receive data.
identifying third data associated with the virtual space and the user profile;
(user accounts are linked to many different computer applications, including applications dedicated to digital communication of one form or another as well as applications designed for tracking project collaboration, social media posting, and/or updating debugging tickets.) (Paragraph 16).
(In some embodiments, the ranking algorithm 408 is a personalized ranking algorithm specific to a user account, where the algorithm applies weights to thread data (or aggregate summaries) according to an activity history of the user account (and/or according to other data encoded in a knowledge graph).) (Paragraph 63).
Chan et al. uses an activity history of the user account. This activity history can be considered the third data associated with the user profile. Once again, the user account represents a virtual space due to being linked to a plurality of applications.
inputting the third data (and the level of attention) (addressed by Dotan-Cohen et al.) into the machine-learning model associated with the user device;
(a summary generation model can refer to a large language model or another type of neural network that generates an aggregate summary from input thread data, including digital communications and/or contextual activity data. … For example, a large language model can include parameters trained to generate or identify content items based on various contextual data, including graph information from a knowledge graph and/or historical user account behavior.) (Paragraphs 28-29).
(In some embodiments, the ranking algorithm 408 is a personalized ranking algorithm specific to a user account, where the algorithm applies weights to thread data (or aggregate summaries) according to an activity history of the user account (and/or according to other data encoded in a knowledge graph).) (Paragraph 63).
(the client device 108 can download all or part of the aggregated interface system 102 for implementation independent of, or together with, the server(s) 104.) (Paragraph 41).
The activity history can be included in the contextual data used to generate aggregate summaries by an LLM or neural network. The activity history can also be used as an ending step of the summary generation where multiple aggregate summaries are ranked and reordered according to it before being presented as a final summary. Furthermore, the summary generation model and knowledge graph can be located on the user’s device.
receiving, from the machine-learning model, fourth data indicative of a summary;
(As indicated above, the aggregated interface system can generate an aggregate summary of thread data using a summary generation model. As used herein, the term “summary generation model” refers to a machine learning model that generates an aggregate summary from thread data. For example, a summary generation model can refer to a large language model or another type of neural network that generates an aggregate summary from input thread data, including digital communications and/or contextual activity data.) (Paragraph 28).
The summary generation model generates a summary using thread data and contextual activity data.
and causing, in response to receiving the summary from the machine-learning model, the summary to be displayed via a user interface of the user device associated with the user profile.
(In addition, the server(s) 104 can transmit data to the client device 108 in the form of an aggregated feed interface that includes an aggregate summary generated from the received thread data.) (Paragraph 37).
The client's device contains a user interface that receives and displays the summary.
Chan et al. does not explicitly teach: determining, based at least in part on the second data that represents a temperature of a user device satisfying a threshold temperature, to generate the summary using a machine-learning model associated with an operating system of a user device associated with the user profile; determining a level of attention associated with the user profile; inputting the third data and the level of attention into the machine-learning model associated with the user device;
However, Zhang et al. teaches a determining, based at least in part on the second data that represents a temperature of a user device satisfying a threshold temperature, to (generate the summary) (addressed by Chan et al.) using a machine-learning model associated with an operating system of a user device associated with the user profile;
(To allow for machine learning operations (e.g., via a neural network) to be adaptively executed on computing device 100, temperature measurements, known distances between different processing units on computing device 100, and other operational parameters can be used to schedule execution of machine learning operations. … Effectively, thus, a scheduler 180 may move execution to processing units that are cooler than the current processing unit that is currently executing the machine learning operations while maintaining the processing capabilities needed in order to execute the machine learning operations with a desired level of performance) (Paragraph 24).
Zhang et al. checks the temperature of the components of a device. If the temperature of a given processor is too high, the system decides not to execute the neural network operations on that processor and instead executes them on another.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the user account summarization method as taught by Chan et al. to consider the temperature of the user's device as taught by Zhang et al. This would have been an obvious improvement, as the system of Chan et al. can run on a user's device using a machine-learning model. These models draw a significant amount of power, which can raise the temperature and damage the processor; thus, it is beneficial to avoid executing them on a processor that is already at a high temperature. (Zhang et al. Paragraph 16).
Chan et al. in view of Zhang et al. does not explicitly teach: determining a level of attention associated with the user profile; inputting the third data and the level of attention into the machine-learning model associated with the user device;
However, Dotan-Cohen et al. teaches determining a level of attention associated with the user profile;
(For example, the level of detail of the text summary may relate to the level of engagement of a user. In one example, the text summary 314 is more detailed (and has more alphanumeric characters) when the user attended and actively participated in the meeting than when a user did not attend the meeting.) (Paragraph 94).
Dotan-Cohen et al. determines a user's level of attention when generating summaries by identifying whether the user attended a meeting that took place and adjusting the summary of the meeting accordingly.
inputting the third data and the level of attention into the machine-learning model associated with the user device;
(meeting feature determination logic 230 comprises computer instructions including rules, conditions, associations, predictive models, classification models, or other criteria for, among other operations, determining a meeting feature (determined by the meeting section determiner 270), … For example, meeting feature determination logic 230 comprises any suitable rules, such as Boolean logic, various decision trees (for example, random forest, gradient boosted trees, or similar decision algorithms), conditions or other logic, fuzzy logic, neural network, finite state machine, support vector machine, machine-learning techniques, or combinations of these to determine (or facilitate determining) the meeting feature according to embodiments described herein.) (Paragraph 69).
(In one embodiment, the meeting section determiner 270 determines the section by accessing user data and/or meeting data; determining a user feature based on the user data and/or a meeting feature based on the meeting data; classifying the first section into a personal classification specific to a user based on the meeting feature and/or the user feature; and generating an image and a text summary associated with the personal classification.) (Paragraph 93).
(the meeting section determiner 270 determines whether the user is a presenter (or presented), whether the user attended and actively participate (for example, by talking, engaging with the chat, and so forth) in the meeting, whether the user attended and did not actively participate in the meeting, or whether the user did not attend the meeting. For example, the level of detail of the text summary may relate to the level of engagement of a user.) (Paragraph 94).
The meeting feature determination logic component 230 comprises the instructions used to operate the meeting section determiner 270. The description of component 230 states that the logic can be implemented using any suitable rules, such as machine-learning techniques or a neural network. Because component 270 performs text summarization, a machine-learning model would be suitable for performing the task. The meeting section determiner 270 generates a text summary whose size and contents depend on level-of-attention data, where the level of attention is represented by the user's level of engagement. The third data in this reference can be represented by the user/meeting data. Thus, the level of attention and the third data are input to a machine-learning model generating a text summary in Dotan-Cohen et al.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the user account summarization method as taught by Chan et al. in view of Zhang et al. to consider the user's level of attention as taught by Dotan-Cohen et al. This would have been an obvious improvement, as it allows summaries to be customized based on how much information a specific user needs (Dotan-Cohen et al. Paragraph 94).
Regarding Claims 3, 10, and 17, Chan et al. in view of Zhang et al. and Dotan-Cohen et al. teaches the system of claims 1, 7, and 14.
Furthermore, Dotan-Cohen et al. teaches wherein determining the level of attention is based at least in part on at least one of: determining, based at least in part on sensor data of the user device, a location of the user profile, or determining, based at least in part on the sensor data, a predicted activity being performed by a user associated with the user profile.
(For example, the level of detail of the text summary may relate to the level of engagement of a user. In one example, the text summary 314 is more detailed (and has more alphanumeric characters) when the user attended and actively participated in the meeting than when a user did not attend the meeting.) (Paragraph 94).
Dotan-Cohen et al. teaches determining a level of attention based on the user's attendance at a specific meeting, which is a predicted activity being performed by the user.
Regarding Claims 4, 11, and 18, Chan et al. in view of Zhang et al. and Dotan-Cohen et al. teaches the system of claims 1, 7, and 14.
Furthermore, Chan et al. teaches wherein identifying the third data comprises: identifying fifth data representing information previously downloaded to the user device;
(Specifically, the aggregated interface system can apply ranking weights to aggregate summaries and other content based on factors such as account activity history in relation to summary-related topics (and/or with user accounts from which digital communications originate) and others) (Paragraph 19).
(In some embodiments, the ranking algorithm 408 is a personalized ranking algorithm specific to a user account, where the algorithm applies weights to thread data (or aggregate summaries) according to an activity history of the user account (and/or according to other data encoded in a knowledge graph).) (Paragraph 63).
The system can combine multiple aggregate summaries and additional information into the final displayed summary, as seen in Fig. 4. Also seen in Fig. 4 is one element being an email, which is a piece of information that would have been previously downloaded on the user's device.
identifying a subset of the fifth data that is relevant to the user profile;
(In some cases, the aggregated interface system can rank the aggregate summaries and/or other non-summary messages or content included in the aggregated feed interface. For example, the aggregated interface system can implement a ranking algorithm that is customized on a per-account basis to tailor the aggregated feed interface to a user account. Specifically, the aggregated interface system can apply ranking weights to aggregate summaries and other content based on factors such as account activity history in relation to summary-related topics (and/or with user accounts from which digital communications originate) and others.) (Paragraph 19).
The system identifies subsets of data relevant to a user, where a subset could be represented by a single aggregate summary among the plurality of summaries.
determining a ranking of the subset of the fifth data;
(In some cases, the aggregated interface system can rank the aggregate summaries and/or other non-summary messages or content included in the aggregated feed interface. For example, the aggregated interface system can implement a ranking algorithm that is customized on a per-account basis to tailor the aggregated feed interface to a user account. Specifically, the aggregated interface system can apply ranking weights to aggregate summaries and other content based on factors such as account activity history in relation to summary-related topics (and/or with user accounts from which digital communications originate) and others.) (Paragraph 19).
The subsets (aggregate summaries) are ranked relative to one another. This can be seen in Fig. 4, although that figure only shows the ranking and not multiple summaries as the above quote describes.
and generating, based at least in part on the ranking and the subset of the fifth data, the summary.
(Specifically, the aggregated interface system can apply ranking weights to aggregate summaries and other content based on factors such as account activity history in relation to summary-related topics (and/or with user accounts from which digital communications originate) and others. The aggregated interface system can further present aggregate summaries and other content within the aggregated feed interface in an order based on the ranking (e.g., highest-ranked first).) (Paragraph 19).
The aggregate summaries are generated, then ranking weights are applied, and then the summaries are presented in a specific order. In this sense, the ranking is the last step of the summary generation, where the individual aggregate summaries are subsections of the final summary.
Regarding Claims 5, 12, and 19, Chan et al. in view of Zhang et al. and Dotan-Cohen et al. teaches the system of claims 1, 7, and 14.
Furthermore, Chan et al. teaches causing a notification to be displayed via the user interface of the user profile, the notification requesting user input;
(In addition, the aggregate summary 506 includes a reply option 508.) (Paragraph 71).
In Chan et al. Fig. 5 it can be seen that the summary possesses a reply option for the user. The reply option represents a notification requesting user input.
receiving, in response to displaying the notification, user input data representing an intent to generate a second summary utilizing a backend server of the communication platform;
(In particular, based on receiving an indication of a user interaction selecting the reply option 508, the aggregated interface system 102 determines or selects a data feed for replying (as described above) and provides an option (e.g., a pop-up window or an expanded view of the aggregate summary 506 to include a text field for typing the reply) for generating a reply message to send in relation to the aggregate summary 506. The aggregated interface system 102 can thus generate a new digital communication to provide to co-user accounts associated with thread data summarized by the aggregate summary 506. In some cases, the aggregated interface system 102 can further update the aggregate summary 506 based on the reply (and/or in response to replies by co-user accounts).) (Paragraph 71).
(the aggregated interface system 102 may be implemented by the client device 108, and/or a third-party device. For example, the client device 108 can download all or part of the aggregated interface system 102 for implementation independent of, or together with, the server(s) 104.) (Paragraph 41).
The user selecting the reply option can result in an updated summary being generated, which is equivalent to a second summary. Furthermore, as can be seen in Fig. 1, the generation of the aggregate summaries can be performed by a backend server.
causing, in response to the user input data, a request to be sent to the backend server to generate the second summary; receiving the second summary from the backend server;
(In particular, based on receiving an indication of a user interaction selecting the reply option 508, … In some cases, the aggregated interface system 102 can further update the aggregate summary 506 based on the reply (and/or in response to replies by co-user accounts).) (Paragraph 71).
(the aggregated interface system 102 may be implemented by the client device 108, and/or a third-party device. For example, the client device 108 can download all or part of the aggregated interface system 102 for implementation independent of, or together with, the server(s) 104.) (Paragraph 41).
Once more, the user selecting the reply option can result in an updated summary being generated, which is equivalent to a second summary. Furthermore, as can be seen in Fig. 1, the generation of the aggregate summaries can be performed by a backend server. As the new aggregate summary is used for updating the current one, the system must receive it in order to perform the update.
and causing, in response to receiving the second summary from the backend server, the second summary to be displayed via the user interface of the user device of the user profile.
(In particular, based on receiving an indication of a user interaction selecting the reply option 508, … In some cases, the aggregated interface system 102 can further update the aggregate summary 506 based on the reply (and/or in response to replies by co-user accounts).) (Paragraph 71).
Aggregate summaries are displayed via the user interface, as can be seen in element 506 of Fig. 5. Updating this summary would display it in the same designated position on the UI for the user.
Claims 2, 8-9, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication US 20250103636 (Chan et al.) in view of US Patent Application Publication US 20250028576 A1 (Zhang et al.), US Patent Application Publication US 20240205038 (Dotan-Cohen et al.), and further in view of US Patent Application Publication US 20240119932 (Khorshid et al.).
Regarding Claim 2, Chan et al. in view of Zhang et al. and Dotan-Cohen et al. teaches the system of claim 1.
Chan et al. in view of Zhang et al. and Dotan-Cohen et al. does not explicitly teach: wherein determining to generate the summary using the machine-learning model associated with the operating system of the user device is based at least in part on at least one of: a level of connectivity between the communication platform and a backend server being below a threshold level of connectivity, a battery level of the user device satisfying a threshold level, a type of user device currently used by the user profile is a mobile device, or a characteristic of the user device.
However, Khorshid et al. teaches wherein determining to generate the summary using the machine-learning model associated with the operating system of the user device is based at least in part on at least one of: a level of connectivity between the communication platform and a backend server being below a threshold level of connectivity, a battery level of the user device satisfying a threshold level, a type of user device currently used by the user profile is a mobile device, or a characteristic of the user device.
(In particular embodiments, the assistant system 140 may further assist the user to effectively and efficiently digest the obtained information by summarizing the information.) (Paragraph 54).
(if the client system 130 is not connected to a network 110 (i.e., when client system 130 is offline), the assistant system 140 may handle a user input in the first operational mode (i.e., on-device mode). … if there is a need for client system 130 to conserve battery power (e.g., when client system 130 has minimal available battery power or the user has indicated a desire to conserve the battery power of the client system 130), the assistant system 140 may handle a user input in the second operational mode (i.e., cloud mode) or the third operational mode (i.e., blended mode) … another factor may be a measure of latency for the connection between client system 130 and a remote server (e.g., the server associated with assistant system 140). For example, if a task associated with a user input may significantly benefit from and/or require prompt or immediate execution (e.g., photo capturing tasks), the assistant system 140 may handle the user input in the first operational mode (i.e., on-device mode)) (Paragraph 56).
Khorshid et al. teaches an assistant system capable of summarizing information. It is shown that information such as the client's network connectivity or battery level is used to determine whether and how the assistant system should be used.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the user account summarization method as taught by Chan et al. in view of Zhang et al. and Dotan-Cohen et al. to determine whether and how summaries are generated based on device-specific information such as connectivity and battery life as taught by Khorshid et al. This would have been an obvious improvement in order to adapt the operations of the model to changing conditions on a user's device (Khorshid et al., Paragraph 56).
Regarding Claims 8 and 15, Chan et al. in view of Zhang et al. and Dotan-Cohen et al. teaches the system of claims 7 and 14.
Chan et al. in view of Zhang et al. and Dotan-Cohen et al. does not explicitly teach: wherein the second data comprises at least one of: a level of connectivity between a communication platform and a server, a battery level of the user device, a type of user device currently used by the user profile, or a characteristic of the user device.
However, Khorshid et al. teaches wherein the second data comprises at least one of: a level of connectivity between a communication platform and a server, a battery level of the user device, a type of user device currently used by the user profile, or a characteristic of the user device.
(In particular embodiments, the assistant system 140 may further assist the user to effectively and efficiently digest the obtained information by summarizing the information.) (Paragraph 54).
(if the client system 130 is not connected to a network 110 (i.e., when client system 130 is offline), the assistant system 140 may handle a user input in the first operational mode (i.e., on-device mode). … if there is a need for client system 130 to conserve battery power (e.g., when client system 130 has minimal available battery power or the user has indicated a desire to conserve the battery power of the client system 130), the assistant system 140 may handle a user input in the second operational mode (i.e., cloud mode) or the third operational mode (i.e., blended mode) … another factor may be a measure of latency for the connection between client system 130 and a remote server (e.g., the server associated with assistant system 140). For example, if a task associated with a user input may significantly benefit from and/or require prompt or immediate execution (e.g., photo capturing tasks), the assistant system 140 may handle the user input in the first operational mode (i.e., on-device mode)) (Paragraph 56).
Khorshid et al. teaches an assistant system capable of summarizing information. It is shown that information such as the client's network connectivity or battery level is used to determine whether and how the assistant system should be used.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the user account summarization method as taught by Chan et al. in view of Zhang et al. and Dotan-Cohen et al. to determine whether and how summaries are generated based on device-specific information such as connectivity and battery life as taught by Khorshid et al. This would have been an obvious improvement in order to adapt the operations of the model to changing conditions on a user's device (Khorshid et al., Paragraph 56).
Regarding Claims 9 and 16, Chan et al. in view of Zhang et al., Dotan-Cohen et al., and Khorshid et al. teaches the system of claims 8 and 15.
Furthermore, Khorshid et al. teaches wherein determining to generate the summary using the machine-learning model associated with the operating system of the user device is based at least in part on at least one of: determining that the level of connectivity is below a threshold level of connectivity, determining that the battery level of the user device meets or exceeds a threshold level, determining that the type of user device is a mobile device, determining that the user device includes the machine-learning model native within the operating system, or determining that the user device includes a GPU capable of summarizing data.
(In particular embodiments, the assistant system 140 may further assist the user to effectively and efficiently digest the obtained information by summarizing the information.) (Paragraph 54).
(if the client system 130 is not connected to a network 110 (i.e., when client system 130 is offline), the assistant system 140 may handle a user input in the first operational mode (i.e., on-device mode). … if there is a need for client system 130 to conserve battery power (e.g., when client system 130 has minimal available battery power or the user has indicated a desire to conserve the battery power of the client system 130), the assistant system 140 may handle a user input in the second operational mode (i.e., cloud mode) or the third operational mode (i.e., blended mode) … another factor may be a measure of latency for the connection between client system 130 and a remote server (e.g., the server associated with assistant system 140). For example, if a task associated with a user input may significantly benefit from and/or require prompt or immediate execution (e.g., photo capturing tasks), the assistant system 140 may handle the user input in the first operational mode (i.e., on-device mode)) (Paragraph 56).
Khorshid et al. teaches an assistant system capable of summarizing information. It is shown that information such as the client's network connectivity or battery level is used to determine whether and how the assistant system should be used.
Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication US 20250103636 (Chan et al.) in view of US Patent Application Publication US 20250028576 A1 (Zhang et al.), US Patent Application Publication US 20240205038 (Dotan-Cohen et al.), and further in view of US Patent Application Publication US 20240086461 (Varakin).
Regarding Claims 6, 13, and 20, Chan et al. in view of Zhang et al. and Dotan-Cohen et al. teaches the system of claims 1, 7, and 14.
Chan et al. in view of Zhang et al. and Dotan-Cohen et al. does not explicitly teach: identifying a second summary generated at a previous time, the second summary being associated with the user profile; identifying a list of one or more content items associated with the second summary; and causing the list of one or more content items and the summary to be input to the machine-learning model.
However, Varakin teaches identifying a second summary generated at a previous time, the second summary being associated with the user profile;
(however, it should be appreciated that the system and methods described herein can also summarize written interactions (e.g., chat messages, emails, etc.).) (Paragraph 15).
(process 700 may further include receiving user inputs to modify the presented candidate summaries. For example, a user may change, replace, remove, or otherwise manipulate words in one or more summaries to fix mistakes or adjust the summary based on their preferences.) (Paragraph 59).
Varakin’s method generates summaries of audio interactions as well as chat messages and emails, which would be considered associated with a user profile. After generation, the summaries are presented to the user before the summary generation process is altered. In this sense, they are previously generated summaries, as new summaries will be generated based on the user’s feedback.
identifying a list of one or more content items associated with the second summary;
(process 700 may further include receiving user inputs to modify the presented candidate summaries. For example, a user may change, replace, remove, or otherwise manipulate words in one or more summaries to fix mistakes or adjust the summary based on their preferences.) (Paragraph 59).
In this instance, a user’s requested changes to the summary are a list of content items associated with a previously generated summary.
and causing the list of one or more content items and the summary to be input to the machine-learning model.
(In such implementations, the user feedback may be used to fine-tune the trained transformer model(s) (e.g., summary model 316). For example, parameters of the trained transformer model(s) may be adjusted to improve their respective outputs and/or the trained transformer model(s) may be completely retrained based on the user inputs.) (Paragraph 59).
(one or more candidate summaries generated at block 704 and/or selected at block 706 may be feed back into the trained transformer model(s) to generate additional candidate summaries. In other words, process 700 may be repeated, however, the transcription obtained at block 702 may be replaced with one or more of the generated candidate summaries.) (Paragraph 60).
The summary generation process shown in Fig. 7 is described as both receiving user feedback to alter the model and re-entering generated summaries to generate additional summaries. In this sense, both the content items and the summaries are input into the summary model during the process.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the user account summarization method as taught by Chan et al. in view of Zhang et al. and Dotan-Cohen et al. to enter information related to created summaries, and the summaries themselves, back into the machine-learning model as taught by Varakin. This would have been an obvious improvement in order to add a way to continuously and iteratively train the summarization model (Varakin, Paragraph 59).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS DANIEL LOWEN whose telephone number is (571)272-5828. The examiner can normally be reached Mon-Fri 8:00am - 4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D Shah can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICHOLAS D LOWEN/Examiner, Art Unit 2653
/Paras D Shah/Supervisory Patent Examiner, Art Unit 2653
03/09/2026