DETAILED ACTION
Status of the Claims
The following is a Non-final Office Action in response to amendments and remarks filed 26 November 2025.
Claims 1, 8, and 16 have been amended.
Claims 1-20 are pending and have been examined.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant argues that the 35 U.S.C. 101 rejection under Alice Corp. v. CLS Bank Int’l should be withdrawn; however, the Examiner respectfully disagrees. As an initial note, the arguments are not compliant with 37 CFR 1.111(b), as they amount to a mere allegation of patent eligibility based upon a bare assertion of improvement. The Examiner respectfully does not find the assertion persuasive because a bare assertion of an improvement, without the detail necessary for the improvement to be apparent, is not sufficient to show an improvement (MPEP 2106.04(d)(1), discussing MPEP 2106.05(a)). That is, the Examiner does not find any evidence that the claimed aspects are an improvement over conventional systems. Contrary to Applicant’s assertions, and by Applicant’s own admission in the specification at [0038] (“While specific types of target encoders have been described herein, one of skill in the art will appreciate that these types of encoders are provided for illustrative purposes and other types of target encoders may be employed as part of the model 306 without departing from the scope of this disclosure”), the use of the encoders is clearly a concept one of ordinary skill in the art would recognize and thus is not an improvement. This argument, again, appears to turn on whether the use of a computer or computing components for increased speed and efficiency integrates the claims into a practical application and amounts to significantly more; however, the Examiner respectfully disagrees. Nor, in addressing the second step of Alice, does claiming the improved speed or efficiency inherent in applying the abstract idea on a computer provide a sufficient inventive concept. See Bancorp Servs., LLC v. Sun Life Assurance Co. of Can., 687 F.3d 1266, 1278 (Fed. Cir. 2012) (“[T]he fact that the required calculations could be performed more efficiently via a computer does not materially alter the patent eligibility of the claimed subject matter.”); CLS Bank, Int’l v. Alice Corp., 717 F.3d 1269, 1286 (Fed. Cir. 2013) (en banc), aff’d, 134 S. Ct. 2347 (2014) (“[S]imply appending generic computer functionality to lend speed or efficiency to the performance of an otherwise abstract concept does not meaningfully limit claim scope for purposes of patent eligibility.” (citations omitted)). As such, the arguments are not persuasive, and the rejection is not withdrawn.
Applicant next argues that the claims are eligible due to the use of machine learning; however, the Examiner respectfully disagrees for several reasons. First, the general theory behind machine learning is to model a computer around the human brain, such that the computer learns as our brains do. Second, while the specification may discuss sophisticated techniques and advanced functions such as machine learning, it is the claims that are deemed eligible or ineligible under §101. Here, the claims are more of a generalized guideline for how to arrange a software model to implement the overarching abstract idea. Third, the claims’ recitation of “using a machine learning model trained to…” only generally links the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). Here, Applicant specifically relies upon the “machine learning model(s)”; however, this element does not integrate the abstract idea into a practical application because merely reciting the use of a trained machine learning model is simply “apply it,” i.e., applying a known technology to perform the abstract idea. Using a trained machine learning model to process data and present a result is a well-understood, routine, and conventional activity, and is not directed to an improvement to a technology or technical field, as the claims are directed to merely using these known technologies in a known manner to present information. Similar activities the courts have identified as being well-understood, routine, and conventional include: mere automation of manual processes, such as using a generic computer to process an application for financing a purchase, Credit Acceptance Corp. v. Westlake Services, 859 F.3d 1044, 1055, 123 USPQ2d 1100, 1108-09 (Fed. Cir. 2017), or speeding up a loan application process by enabling borrowers to avoid physically going to or calling each lender and filling out a loan application, LendingTree, LLC v. Zillow, Inc., 656 Fed. App'x 991, 996-97 (Fed. Cir. 2016) (nonprecedential); recording, transmitting, and archiving digital images by use of conventional or generic technology in a nascent but well-known environment, without any assertion that the invention reflects an inventive solution to any problem presented by combining a camera and a cellular telephone, TLI Communications, 823 F.3d at 611-12, 118 USPQ2d at 1747; instructions to display two sets of information on a computer display in a non-interfering manner, without any limitations specifying how to achieve the desired result, Interval Licensing LLC v. AOL, Inc., 896 F.3d 1335, 1344-45, 127 USPQ2d 1553, 1559-60 (Fed. Cir. 2018); and receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result--a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink."). The claims are not patent eligible. As such, the arguments are not persuasive, and the rejection is not withdrawn.
Applicant further argues that the claims are similar to the recent Example 42 from the USPTO; however, the Examiner respectfully disagrees. Claim 1 of Example 42 recites a combination of additional elements including storing information, providing remote access over a network, converting updated information that was input by a user in a non-standardized form to a standardized format, automatically generating a message whenever updated information is stored, and transmitting the message to all of the users. That claim as a whole integrates the method of organizing human activity into a practical application. Specifically, the additional elements recite a specific improvement over prior art systems by allowing remote users to share information in real time in a standardized format regardless of the format in which the information was input by the user. Thus, that claim is eligible because it is not directed to the recited judicial exception (abstract idea). The only correlation between the instant claims and Claim 1 of Example 42 is the ability to collect and present information. However, the instant claims are more akin to Claim 2 of Example 42 because they, as a whole, merely describe how to generally “apply” the concept of collecting comments and presenting notifications in a computer environment. The claimed computer components are recited at a high level of generality and are merely invoked as tools to perform an existing process of reviewing comments and acting upon them. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. Hence, the claims are not similar to Claim 1 of Example 42, and thus the rejection is not withdrawn. The Examiner also notes that the Examples provided on the USPTO website are hypothetical, provided for demonstration purposes only, and do not serve as a benchmark for patent eligibility.
Applicant next argues that the claims provide a technical solution to a technical problem (citing the August 2025 101 Memo); however, the Examiner respectfully disagrees. An example of such a computer-technology problem is one concerning processor efficiency, where a possible claimed solution optimizes placement and routing procedures. The instant claims provide no such solution to a technical problem, but rather a business solution to a business problem, automating tasks such as project management to obtain the benefits that computing devices provide, i.e., greater speed and efficiency. As such, the arguments are not persuasive, and the rejection is not withdrawn.
Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections.
In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). More specifically, Applicant argues that “[n]owhere does Volkovs disclose the specific encoders for a comment and the associated modifiable portion of the collaborative content, as recited in claim 1”; however, the Examiner notes that the Volkovs reference is used to teach the ability to use encoders with user reviews, i.e., user-generated content such as comments, and to provide recommendations thereof. This argument also appears to be directed to the intended use of Volkovs; however, a recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim. Here, the Examiner has clearly articulated the White and Arnold references as the base system to which one of ordinary skill in the art would simply apply the concept of using encoders, as taught by the Volkovs reference. As such, the arguments are not persuasive, and the rejection is not overcome.
In response to arguments regarding any dependent claims that have not been individually addressed, all rejections made against those dependent claims are maintained because Applicant has not distinctly and specifically pointed out the supposed errors in the Examiner's prior Office action (37 CFR 1.111). The Examiner notes that Applicant argues only that the dependent claims should be allowable because the independent claims are nonobvious and patentable over the prior art.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims are directed to a process (an act, or series of acts or steps), a machine (a concrete thing, consisting of parts, or of certain devices and combination of devices), and a manufacture (an article produced from raw or prepared materials by giving these materials new forms, qualities, properties, or combinations, whether by hand labor or by machinery). Thus, each of the claims falls within one of the four statutory categories (Step 1). However, the claims recite presenting smart notifications based upon received user comments (communications), which is an abstract idea of organizing human activity as well as a mental process.
The limitations of “identifying the at least one comment included in the collaborative content, wherein the at least one comment is made in the collaborative content by at least one collaborative user of the plurality of [users]; analyzing the at least one comment associated with the collaborative content to determine a content intent category for the at least one comment, wherein the content intent category is determined using a machine learning model trained to classify comment intent by applying a comment encoder to extract and encode features of the at least one comment, and applying a context encoder to encode the modifiable portion of the collaborative content associated with the at least one comment, wherein at least one first hidden state is determined for the at least one comment, at least one second hidden state is determined for the modifiable portion of the collaborative content, and an attention operation is applied to the at least one first hidden state and the at least one second hidden state to determine at least one keyword indicating the comment intent; based upon the content intent category, identifying the comment as an actionable comment requiring an action from the one or more of [users]; determining that the actionable comment includes a modification request to modify the modifiable portion of the collaborative content that includes the actionable comment; in response to determining that the actionable comment includes the modification request, automatically modifying the modifiable portion of the collaborative content; in response to determining that the actionable comment does not include the modification request, determining a notification for the actionable comment” in claims 1, 8, and 16, and of “automatically modifying the collaborative content based upon the determined action” in claim 16, as drafted, recite a process that, under its broadest reasonable interpretation, covers certain methods of organizing human activity, i.e., fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); or managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), and/or a mental process, i.e., concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), but for the recitation of generic computer components (Step 2A Prong One). That is, other than reciting “A system comprising: at least one processor; and memory storing instructions that, when executed by the at least one processor, causes the system to perform a set of operations, the set of operations comprising:” (and similar language in claims 8 and 16), nothing in the claim elements precludes the steps from falling within the methods of organizing human activity grouping or from practically being performed in the mind.
For example, but for the “A system comprising” language, the “identifying,” “analyzing,” “identifying,” “determining,” “automatically modifying,” and “automatically modifying” steps, in the context of this claim, encompass a user manually organizing comments (human activities) in order to find some sort of actionable comment and send some sort of notification or alert to another user or to themselves. Similarly, the limitations above, as drafted, describe a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of a computer or computing components. For example, but for the “A system comprising” language, the “identifying,” “analyzing,” “identifying,” “determining,” “automatically modifying,” “generating,” and “automatically modifying” steps, in the context of this claim, encompass a user thinking that some comments read by the user may require some sort of action or follow-up. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a method of organizing human activity (or in the mind) but for the recitation of generic computer components, then it falls within the “Certain Methods of Organizing Human Activity” and/or “Mental Processes” groupings of abstract ideas. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application (Step 2A Prong Two). The recitations of the “at least one computing device,” “a first computing device,” and “a second computing device” are elements that simply provide insignificant extra-solution data gathering and output. Next, the claims recite “obtaining…,” which is simply extra-solution data gathering. The “presenting the smart notifications” step and the “displaying…” step in claim 16 are simply post-solution data output, which is an insignificant activity. Next, claims 1 and 16 recite only the additional elements of using at least one processor to perform the steps and a machine learning model to classify intent. The at least one processor and the machine learning model are recited at a high level of generality (i.e., as a generic processor performing the generic computer functions of electronic data storage, query, and retrieval) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Specifically, the claims amount to nothing more than an instruction to apply the abstract idea using a generic computer or invoking computers as tools by adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea (see MPEP 2106.04(d), discussing MPEP 2106.05(f)). The claims’ recitation of the “system comprising: at least one processor; and memory storing instructions that, when executed by the at least one processor, causes the system to perform a set of operations, the set of operations comprising:,” the “at least one computing device,” “a first computing device,” “a second computing device,” and “machine learning model” only generally links the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.04(d), discussing MPEP 2106.05(h)). The recitation of “using a machine learning model trained…” in the limitations likewise merely indicates a field of use or technological environment in which the judicial exception is performed. Although the additional element of “using a machine learning model trained to classify comment intent by applying a comment encoder...and applying a context encoder…” limits the identified judicial exceptions, this type of limitation merely confines the use of the abstract idea to a particular technological environment (neural networks) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea, even when considered as a whole.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception (Step 2B). As discussed above with respect to integration of the abstract idea into a practical application (Step 2A Prong Two), the additional elements of using at least one processor to perform the steps and a machine learning model to classify intent amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Reevaluating here at Step 2B, the “obtaining,” “presenting,” and “displaying” steps, which are insignificant extra-solution activities, and the “at least one computing device,” “a first computing device,” and “a second computing device,” which are the elements that perform the insignificant extra-solution activities, are also determined to be well-understood, routine, and conventional activity in the field. The Symantec, TLI, and OIP Techs. court decisions discussed in MPEP 2106.05(d)(II) indicate that the mere receipt or transmission of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as is the case here). Therefore, when considering the additional elements alone and in combination, there is no inventive concept in the claims. As such, the claims are not patent eligible, even when considered as a whole.
Claims 2-6, 9-10, 12-15, and 17-20 recite additional limitations that further limit the content categories, comment input, intent, and contextual information, all of which are still directed to the abstract idea previously identified and do not add an inventive concept that meaningfully limits the abstract idea. Again, as discussed with respect to claims 1, 8, and 16, these limitations are no more than mere instructions to apply the exception using a computer or with computing components. Accordingly, these additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Even when considered as a whole, the claims do not integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Claim 7 recites additional limitations that further limit the machine learning model, which do not add an inventive concept that meaningfully limits the abstract idea. Again, as discussed with respect to claims 1, 8, and 16, these limitations are no more than mere instructions to apply the exception using a computer or with computing components. Accordingly, these additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Even when considered as a whole, the claim does not integrate the judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Claims 1-20 are therefore not eligible subject matter, even when considered as a whole.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over White et al. (US PG Pub. 2019/0129749) in view of Arnold et al. (US PG Pub. 2021/0382950) and further in view of Volkovs et al. (US PG Pub. 2022/0058489).
As per claims 1 and 8, White discloses a system and method comprising: at least one processor; and memory storing instructions that, when executed by the at least one processor, causes the system to perform a set of operations, the set of operations comprising (computing systems, program modules, multiprocessor system, microprocessors, White ¶82-¶83; memory, data storage, ¶84):
identifying the at least one comment included in the collaborative content, wherein the at least one comment is made in the collaborative content by a first computing device of the plurality of computing devices (receives a content item 202 comprised of one or more natural language phrases, wherein the content item can be an electronic communication item (e.g., email, text message, instant message, meeting request, a voice message transcript), a calendar item, a task item, a document, a meeting transcript, or other content item in which a task can be explicitly encoded or expressed, White ¶26);
analyzing the at least one comment associated with the collaborative content to determine a content intent category for the at least one comment, wherein the content intent category is determined using a machine learning model trained to classify comment intent (perform natural language processing (NLP) on the content item 202 to semantically and/or contextually understand likely intents of a user, such as one or more intents to perform a task action, White ¶30; the condition classifier 212 is a machine learning model trained on a set of tasks and non-tasks, such that the machine learned model can determine whether a text string includes a commitment to perform a task action or a request to perform a task action, ¶31; conditioned tasks, ¶22; condition classifier to classify intents, ¶36-¶37) (Examiner notes the tasks, conditioned tasks, and non-tasks, and the classification thereof, to be equivalent to the intent category types. Examiner also notes that within tasks and non-tasks, there can also be categories);
based upon the content intent category, identifying the comment as an actionable comment requiring an action from a second computing device of the plurality of computing devices (perform task identification, White ¶28; conditioned tasks, White ¶22; see also example of commitment stated by the user, ¶28; The method 400 proceeds to DECISION OPERATION 410, where a determination is made as to whether the task is a conditional task or a non-conditional task. For example, the determination can be made based on whether the task includes a task action portion and a condition portion, wherein a task action portion identifies a task action and a condition portion identifies one or more conditions that are to be satisfied prior to executing the task action. As another example, the determination can be made when the task is a conditional based on an implicitly-identified trigger condition according to past user interactions, ¶77; when and how to engage a user, ¶79) (Examiner interprets the trigger for a conditional action as some sort of identification of an actionable comment requiring action from another collaborative user);
determining that the actionable comment includes a modification request to modify the modifiable portion of the collaborative content that includes the actionable comment (The example operating environment 100 can be used to implement one or more of the components of an example conditional task system 200 described in FIG. 2, including components for providing automatic extraction and application of conditional tasks from content. According to an aspect, the terms “conditioned task” or “conditional task” as used herein describe a task action that is conditioned on a set of attributes, such as one or a combination of the occurrence of an event, an action, a person, time, or location. “I will take care of the assignments if Susan sends me the documents” is an example conditional task comprising a task action (take care of the assignments) and a condition (if Susan sends [the user] the documents) on which the task action depends, White ¶22; FIG. 7 illustrates one example of the architecture of a system for providing automatic extraction and application of conditional tasks from content as described above. Content developed, interacted with, or edited in association with the one or more components of the conditional task system 200 are enabled to be stored in different communication channels or other storage types. For example, various documents may be stored using a directory service 722, a web portal 724, a mailbox service 726, an instant messaging store 728, or a social networking site 730. One or more components of the conditional task system 200 are operative or configured to use any of these types of systems or the like for providing computing device state or activity based task reminders and automatic tracking of statuses of task-related activities, ¶99);
in response to determining that the actionable comment includes the modification request, automatically modifying the modifiable portion of the collaborative content (The method 400 proceeds to OPERATION 422, where engagement occurs based on the task action semantic frame. In some examples, engagement comprises engaging a task action actor 226 to perform the task action or initiate the task action on behalf of the user, such as when the task action is a computer-implemented task (e.g., set an alarm, send a message, perform a transaction). In other examples, engagement comprises engaging a task action actor 226 embodied as a notification engine to provide a notification 306 to the user reminding the user of the task action, White ¶81);
in response to determining that the actionable comment does not include the modification request, generating a notification for the actionable comment (Aspects of the example conditional task system 200 provide for automated detection of a conditional task, extraction of attributes that characterize a condition associated with a task action, use of information about the condition and context data to determine how to monitor for satisfaction of the condition and to determine when and how to engage the user about the task action, and notification of the user at an appropriate time and in an appropriate way when the condition is satisfied, White ¶23); and
While White discloses how to automate the sending of notifications with alerts and reminders within a collaborative environment, White does not expressly disclose obtaining collaborative content shared among a plurality of computing devices, wherein the collaborative content is a document, a spreadsheet, a slide presentation, an image, a video, or a webpage; or presenting the notifications as part of a collaborative user interface to highlight the actionable comment using a graphical indicator when at least one computing device of the plurality of computing devices associated with the actionable content accesses the collaborative content that includes the actionable comment.
However, Arnold teaches
obtaining collaborative content shared among a plurality of computing devices, wherein the collaborative content is a document, a spreadsheet, a slide presentation, an image, a video, or a webpage, wherein the collaborative content includes a modifiable portion and at least one comment (detect interaction, third party content, Arnold ¶54; a third-party source is associated with or provides third-party content, examples of which are web pages, emails, other electronic messages, calendar events, digital content such as photos, videos, documents, and other digital multimedia, and other digital data associated with a particular third-party source, ¶63 and ¶190; web accessible content, ¶324- ¶325; The contextual hub system 116 can manage permissions granted to collaborating users of a contextual hub. For instance, the contextual hub system 116 can determine that particular users may view, annotate, and/or edit content within a contextual hub. The contextual hub system 116 may designate a managing user who has authority to assign and manage permissions for collaborating users. The contextual hub system 116 can limit a user to viewing rights with which the user may view but not annotate or edit content within the contextual hub. The contextual hub system 116 may also limit a user to annotation rights which enables the user to view and annotate content within the contextual hub; however, the user is unable to move, add, or delete content within the contextual hub. In at least one embodiment, the contextual hub system 116 may grant editing rights to one or more collaborating users with which the user may add, remove, or otherwise edit content within the contextual hub, ¶289);
presenting the notifications as part of a collaborative user interface to highlight the actionable comment using a graphical indicator when at least one computing device of the plurality of computing devices associated with the actionable content accesses the collaborative content that includes the actionable comment (The contextual hub system can detect a user interaction with a third-party source from the set of third-party sources by a first user from the group of users. For instance, the contextual hub system can detect that the first user has added an annotation (e.g., a comment or highlight) to a source within the contextual hub. The contextual hub system can provide a notification for display within the contextual hub that indicates the user interaction, Arnold ¶54; As further illustrated in FIG. 17B, the contextual hub system 116 enables a user to customize a message including the content. For instance, the communication element 1740 includes the message content element 1742. The contextual hub system 116 may automatically populate the message content element 1742 with a message. The user may change the automatically generated message within the message content element 1742. Furthermore, although FIG. 17B illustrates the inclusion of a link to the content within the message content element 1742, the contextual hub system 116 may utilize various other methods for communicating the content. For example, the contextual hub system 116 can generate a link to the content within the contextual hub. Thus, the selection of the link brings the selecting user to the contextual hub management graphical user interface wherein the selected content is highlighted or otherwise emphasized, ¶307; For instance, a client device can receive an annotation event and send a notification of the annotation event to the contextual hub system 116. In at least one embodiment, the client device presents options to add an annotation (e.g., a highlight or comment) to web-accessible content based on detecting that the user has right clicked the content. The client device may send, together with the notification of the annotation event, data relating to the annotation event including information identifying the annotating user, the location of the annotation, content associated with the annotation (e.g., text within a comment), and other data, ¶324).
Both the White and Arnold references are analogous in that both are directed towards/concerned with collaboration management. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Arnold’s ability to utilize different types of collaborative formats in White’s system to improve the system and method, with a reasonable expectation that this would result in a collaboration management system that is able to accommodate different content formats/types.
The motivation being that conventional systems suffer from several technological shortcomings that result in inaccurate, inefficient, and inflexible operation. For example, conventional systems often haphazardly and inaccurately organize web browsing windows and tabs. In particular, although conventional systems often offer tabbed browsing functionalities, many conventional systems rely on the user to manage web browsing tabs and windows and require users to manually open new tabs, close tabs no longer needed, or group tabs within different browser windows. However, with the nature of web browsing and the large number of web sources that a user may typically access, the result of a user managing their own tabs is often a single browser window that includes a large number of randomly ordered tabs that are difficult to navigate and make it difficult for a user to locate specific web sources and content. Thus, conventional systems of managing a user's interaction with web sources are often inaccurate because they obscure meaningful organization (Arnold ¶3).
While White does disclose the use of software to encode for classification purposes (White ¶26), the combination of White and Arnold does not expressly disclose applying a comment encoder to extract and encode features of the at least one comment, and applying a context encoder to encode the modifiable portion of the collaborative content associated with the at least one comment, wherein at least one first hidden state is determined for the at least one comment, at least one second hidden state is determined for the modifiable portion of the collaborative content, and an attention operation is applied to the at least one first hidden state and the at least one second hidden state to determine at least one keyword indicating the comment intent.
However, Volkovs teaches applying a comment encoder to extract and encode features of the at least one comment, and applying a context encoder to encode the modifiable portion of the collaborative content associated with the at least one comment, wherein at least one first hidden state is determined for the at least one comment, at least one second hidden state is determined for the modifiable portion of the collaborative content, and an attention operation is applied to the at least one first hidden state and the at least one second hidden state to determine at least one keyword indicating the comment intent (autoencoder comprises encoders, Volkovs ¶29; data encoded into abstract representations, ¶32; contextual encoding BiLSTM, ¶39; hidden layers are able to be connected, ¶38; wherein the context is from user review data, Fig. 4) (Examiner interprets the user reviews to be comments).
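For illustration only, the following is a minimal, hypothetical sketch (in Python/PyTorch) of a dual-encoder arrangement with an attention operation of the general kind recited in the claim language; all module and variable names are hypothetical, and the sketch is not asserted to be Applicant's model or Volkovs' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DualEncoderIntentSketch(nn.Module):
        """Hypothetical illustration: comment encoder + context encoder + attention."""
        def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=64, num_intents=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # Comment encoder: produces the "first hidden states" (one per comment token).
            self.comment_encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
            # Context encoder: produces the "second hidden states" for the modifiable portion.
            self.context_encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
            self.classifier = nn.Linear(4 * hidden_dim, num_intents)

        def forward(self, comment_ids, context_ids):
            comment_h, _ = self.comment_encoder(self.embed(comment_ids))   # (B, Tc, 2H)
            context_h, _ = self.context_encoder(self.embed(context_ids))   # (B, Tx, 2H)
            # Attention operation applied across the two sets of hidden states.
            scores = torch.bmm(comment_h, context_h.transpose(1, 2))       # (B, Tc, Tx)
            weights = F.softmax(scores.mean(dim=2), dim=1)                 # weight per comment token
            keyword_index = weights.argmax(dim=1)                          # highest-weighted token ~ keyword
            comment_vec = torch.bmm(weights.unsqueeze(1), comment_h).squeeze(1)
            context_vec = context_h.mean(dim=1)
            logits = self.classifier(torch.cat([comment_vec, context_vec], dim=1))
            return logits, keyword_index                                   # intent scores, keyword position

    # Example usage with random token ids for two comment/context pairs:
    model = DualEncoderIntentSketch()
    logits, keyword_index = model(torch.randint(0, 10000, (2, 12)), torch.randint(0, 10000, (2, 40)))

In such a sketch, the classifier output would correspond to the content intent category, and the highest-weighted comment token would correspond to the claimed keyword indicating the comment intent.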
The White, Arnold, and Volkovs references are analogous in that all are directed towards/concerned with content recommendation and collaboration management. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Volkovs’ ability to utilize encoders in Arnold’s and White’s system to improve the system and method, with a reasonable expectation that this would result in a collaboration management system that is able to accommodate different content formats/types.
The motivation being that this could increase the quality of interactions between users and help users identify more relevant content which could need attention (Volkovs ¶18).
While White, Arnold, and Volkovs disclose how to automate the sending of notifications with alerts and reminders, they do not expressly disclose that the notification is a “smart notification.”
However, the Examiner asserts that the data identifying the notification is simply a label for the components and adds little, if anything, to the claimed acts or steps and thus does not serve to distinguish over the prior art. Any differences related merely to the meaning and information conveyed through labels (i.e., the type of notification), which do not explicitly alter or impact the steps of the method, do not patentably distinguish the claimed invention from the prior art (MPEP 2144.04).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the notification to be a smart notification, since the specific type of notification does not functionally alter or relate to the steps of the method, and merely labeling the information differently from that in the prior art does not patentably distinguish the claimed invention.
As per claim 2, White, Arnold, and Volkovs disclose as shown above with respect to claim 1. White further discloses wherein the content intent category includes: a modification intent, an information exchange intent, and a social communication intent (conditioned tasks, White ¶22; extraction of attributes that characterize a condition associated with a task action, ¶23; see also example of commitment stated by the user, ¶28; condition classifier to classify intents, ¶36-¶37).
The Examiner notes, under MPEP 2144.04, that any differences related merely to the meaning and information conveyed through labels, which do not explicitly alter or impact the functionality of the claimed invention, do not patentably distinguish the claimed invention from the prior art. As such, the intent category being “a modification intent, an information exchange intent, and a social communication intent” is simply a label and adds little, if anything, to the claimed acts or steps and thus does not serve to distinguish over the prior art.
As per claim 3, White, Arnold, and Volkovs disclose as shown above with respect to claim 1. White further discloses wherein determining the content intent category comprises providing the comment and contextual information associated with the comment as input to the machine learning model (in identifying a task included in a content item, the condition classifier 212 is operative or configured to perform natural language processing (NLP) on the content item 202 to semantically and/or contextually understand likely intents of a user, such as one or more intents to perform a task action. In some examples, context information is used to resolve task action intents. In some examples, the condition classifier 212 applies natural language processing and machine learning techniques to identify entities, entity properties, and entity relationships with other entities. Further, in some examples, the condition classifier 212 makes a call to another data source 104, such as a search engine or a knowledge graph to resolve entities in a task. For example, a knowledge graph represents entities and properties as nodes, and attributes and relationships between entities as edges, thus providing a structured schematic of entities and their properties and how they relate to the user, White ¶30; contextual information, ¶39).
As per claim 4, White, Arnold, and Volkovs disclose as shown above with respect to claim 3. White further discloses wherein providing the comment as input comprises one of: providing text of the comment as a whole; or providing a subset of the text of the comment (the condition classifier 112 is operative or configured to access or request context information and other relevant information for resolving intents or entities (e.g., access contact information for identifying a person or nickname, access calendar information for identifying “free time,” access GPS coordinates for identifying “home” or “work” locations), ¶39).
As per claim 5, White, Arnold, and Volkovs disclose as shown above with respect to claim 3. White further discloses wherein providing contextual information associated with the comment as input comprises at least one of: providing a portion of the collaborative content; providing information about an author of the comment; providing information about one or more collaborative users; or providing information related to other comments associated with the collaborative content (context information is used to resolve task action intents, White ¶30; person based, ¶36; the condition classifier 112 is operative or configured to access or request context information and other relevant information for resolving intents or entities (e.g., access contact information for identifying a person or nickname, access calendar information for identifying “free time,” access GPS coordinates for identifying “home” or “work” locations), ¶39; In some examples, the condition classifier 112 extracts relevant action-related information and trigger condition-related information. For example, relevant trigger condition-related information can be used to determine one or more trigger conditions or condition arguments and one or more condition actors 224 to monitor for listening for updates or events on the one or more trigger conditions, ¶40; combines information derived, ¶41).
As per claim 6, White, Arnold, and Volkovs disclose as shown above with respect to claim 5. White further discloses wherein the collaborative content is a document, and wherein the portion of the collaborative content comprises one or more of: a sentence of the document; a paragraph of the document; an image in the document; a table in the document; a page of the document; or a section of the document (automatic extraction and application of conditional tasks from content items, such as electronic communications, documents, and the like, where conditional tasks can be expressed as natural language where the meaning of the conditional task may be readily understandable by a person, but may not be readily understood by a computer. As used herein, a conditional task is a natural language phrase or expression that includes a task action and a condition that is to be satisfied prior to the action being taken. Generally, aspects disclosed herein are directed to analyzing a natural language phrase (extracted from a content item), detecting a task comprising a task action that a user intends to take or has been requested to take, determining whether the task action is conditional (i.e., a conditional task) or non-conditional, when the task action is conditional, identifying conditional triggers of the conditional task, monitoring identified conditional triggers for determining when the condition(s) have been satisfied, and determining when and how to engage the user about the task action, White ¶15; According to an aspect, as used herein, the term “context information” describes any information characterizing a situation related to an entity or to an interaction between users, applications, or the surrounding environment, ¶20).
As per claim 7, White, Arnold, and Volkovs disclose as shown above with respect to claim 1. White further discloses wherein the machine learning model further comprises a feature fusion layer to generate a feature vector based on outputs of the comment encoder and the context encoder (a model training engine 210, a condition classifier 212, a trigger monitoring engine 218, an engagement engine 220, and a user feedback engine 222, White ¶24; train the condition classifier, ¶25; task is explicitly encoded, ¶26; In some examples, the condition classifier 112 extracts relevant action-related information and trigger condition-related information. For example, relevant trigger condition-related information can be used to determine one or more trigger conditions or condition arguments and one or more condition actors 224 to monitor for listening for updates or events on the one or more trigger conditions, ¶40; combines information derived, ¶41; see also ¶78) (Examiner notes the combining of the derived information to be equivalent to the feature fusion layer).
In addition, the Examiner asserts that claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure. However, examples of claim language, although not exhaustive, that may raise a question as to the limiting effect of the language in a claim are: (A) "adapted to" or "adapted for" clauses; (B) "wherein" clauses; and (C) "whereby" clauses (see MPEP 2111.04). In the instant case, the recited aspect of “to generate a feature vector based on outputs of the comment encoder and the context encoder" is not a positive system element, since it does not structurally limit the system and merely describes the intended use of the system and/or the intended result of the use of the system.
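For context only, a feature fusion layer of the kind recited in claim 7 is commonly implemented as a learned projection over the concatenation of the two encoder outputs; the following hypothetical Python/PyTorch sketch (names illustrative only, not asserted to be Applicant's implementation or White's combination of derived information) shows one such arrangement.

    import torch
    import torch.nn as nn

    class FeatureFusionSketch(nn.Module):
        # Hypothetical fusion layer: concatenates the comment-encoder output and the
        # context-encoder output and projects them into a single fused feature vector.
        def __init__(self, comment_dim=128, context_dim=128, fused_dim=64):
            super().__init__()
            self.proj = nn.Linear(comment_dim + context_dim, fused_dim)

        def forward(self, comment_vec, context_vec):
            return torch.relu(self.proj(torch.cat([comment_vec, context_vec], dim=-1)))

    # Example usage: fuse two 128-dimensional encoder outputs into a 64-dimensional feature vector.
    fusion = FeatureFusionSketch()
    feature_vector = fusion(torch.randn(2, 128), torch.randn(2, 128))   # shape (2, 64)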
As per claim 9, White, Arnold, and Volkovs disclose as shown above with respect to claim 8. White further discloses wherein identifying the subset of comments comprises filtering the set of comments based upon comment intent categories (the condition classifier 212 is operative or configured to generate constituency parse trees for commitment phrases extracted from content items 202. For example, the constituency trees provide additional layers of information about the grammatical structure of commitment phrases. For each commitment phrase, the condition classifier 212 can traverse the constituency tree and the tags associated with current internal or external nodes can induce transitions over a carefully designed state machine. For example, transitions to specific states can cause associated tokens to be captured as part of a task action or the object of the task action. An action-object pair can define a user's intent, White ¶32; conditional keywords for a task, ¶32-¶33).
As per claim 10, White, Arnold, and Volkovs disclose as shown above with respect to claim 1. White further discloses wherein the subset of comments identified as actionable comments are filtered by a modification intent (the condition classifier 212 is operative or configured to generate constituency parse trees for commitment phrases extracted from content items 202. For example, the constituency trees provide additional layers of information about the grammatical structure of commitment phrases. For each commitment phrase, the condition classifier 212 can traverse the constituency tree and the tags associated with current internal or external nodes can induce transitions over a carefully designed state machine. For example, transitions to specific states can cause associated tokens to be captured as part of a task action or the object of the task action. An action-object pair can define a user's intent, White ¶32; conditional keywords for a task, ¶32-¶33).
As per claim 11, White, Arnold, and Volkovs disclose as shown above with respect to claim 8. White further discloses wherein generating the at least one smart notification comprises generating an indication in a collaborative user interface identifying one or more comments from the subset of comments (notification of the user at an appropriate time and in an appropriate way when the condition is satisfied, White ¶23; notification includes reminder of the task action, ¶63; see also ¶70; instruct a notification engine to present a button in a graphical user interface (GUI), which when selected, makes a phone call to Mark, ¶72).
As per claim 12, White, Arnold, and Volkovs disclose as shown above with respect to claim 11. White further discloses wherein the indication comprises at least one of: highlighting the one or more comments; causing the collaborative content to open on the one or more comments; causing display of a user interface element operable to navigate a user to the one or more comments; or causing an audible indication of the collaborative content (Consider as an example that a user receives or sends an email (content item 202) via an email application or takes meeting notes (content item 202) via a note-taking application. In some examples, the application 204 can make a call to the condition classifier 212 to parse a content item 202 (e.g., email, email string, meeting notes) and other data (e.g., metadata, context information) to identify tasks expressed in the item, such as a commitment stated by the user (e.g., “I will write the report,” “Bring in the cushions if it's supposed to rain later”) or a request expressly or implicitly agreed upon by the user (e.g., in a received text message: “If you get home before me, fire up the grill” or in a received email: “Can you pick up Ann?” and in a subsequent reply email: “Yes, unless Bob needs me to run the staff meeting.”). In other examples, the application 204 is operative or configured to parse a content item 202 and identify tasks or evoke a third-party application to perform task-identification. In such cases, the application 204 can pass identified tasks and other extracted data (e.g., context information) to the condition classifier 212, White ¶28; As an example, consider the conditional task “call Mark when the scores are posted.” Also consider that the trigger condition of “the scores being posted” is satisfied while the user is in a meeting or while the user is on a conference call. Using knowledge of the user's current status (e.g., in a meeting, on a conference call), the engagement engine 220 may make a determination that although the condition associated with the conditional task has been satisfied, it would be more appropriate or relevant to the user to be engaged, notified, or reminded of the task action of “calling Mark” after the user's meeting or conference call, ¶71 and ¶81).
As per claim 13, White, Arnold, and Volkovs disclose as shown above with respect to claim 8. White further discloses wherein generating the at least one smart notification comprises generating a task for one or more users based upon an action associated with an actionable comment from the subset of comments (the application 204 is operative or configured to parse a content item 202 and identify tasks or evoke a third-party application to perform task-identification. In such cases, the application 204 can pass identified tasks and other extracted data (e.g., context information) to the condition classifier 212, White ¶28).
As per claim 14, White, Arnold, and Volkovs disclose as shown above with respect to claim 8. White further discloses wherein generating the at least one smart notification comprises generating a calendar event to address an actionable comment from the subset of comments (condition can be time based, White ¶36; calendar application, ¶45-¶48).
As per claim 15, White, Arnold, and Volkovs disclose as shown above with respect to claim 8. White further discloses wherein generating the at least one smart notification comprises at least one of: sending an email; sending a text message; sending an instant message (Consider as an example that a user receives or sends an email (content item 202) via an email application or takes meeting notes (content item 202) via a note-taking application. In some examples, the application 204 can make a call to the condition classifier 212 to parse a content item 202 (e.g., email, email string, meeting notes) and other data (e.g., metadata, context information) to identify tasks expressed in the item, such as a commitment stated by the user (e.g., “I will write the report,” “Bring in the cushions if it's supposed to rain later”) or a request expressly or implicitly agreed upon by the user (e.g., in a received text message: “If you get home before me, fire up the grill” or in a received email: “Can you pick up Ann?” and in a subsequent reply email: “Yes, unless Bob needs me to run the staff meeting.”). In other examples, the application 204 is operative or configured to parse a content item 202 and identify tasks or evoke a third-party application to perform task-identification. In such cases, the application 204 can pass identified tasks and other extracted data (e.g., context information) to the condition classifier 212, White ¶28).
As per claim 16, White discloses a system comprising: at least one processor; and memory storing instructions that, when executed by the at least one processor, causes the system to perform a set of operations, the set of operations comprising (computing systems, program modules, multiprocessor system, microprocessors, White ¶82-¶83; memory, data storage, ¶84):
identifying the at least one comment included in the collaborative content, wherein the at least one comment is made in the collaborative content by a first computing device of the plurality of computing devices (receives a content item 202 comprised of one or more natural language phrases, wherein the content item can be an electronic communication item (e.g., email, text message, instant message, meeting request, a voice message transcript), a calendar item, a task item, a document, a meeting transcript, or other content item in which a task can be explicitly encoded or expressed, White ¶26);
analyzing the at least one comment associated with the collaborative content to determine a content intent category for the at least one comment, wherein the content intent category is determined using a machine learning model trained to classify comment intent (perform natural language processing (NLP) on the content item 202 to semantically and/or contextually understand likely intents of a user, such as one or more intents to perform a task action, White ¶30; the condition classifier 212 is a machine learning model trained on a set of tasks and non-tasks, such that the machined learned model can determine whether a text string includes a commitment to perform a task action or a request to perform a task action, ¶31; conditioned tasks, ¶22; condition classifier to classify intents, ¶36-¶37) (Examiner notes that the tasks, conditioned tasks, and non-tasks, and the classification thereof, are equivalent to the claimed intent category types. Examiner also notes that within tasks and non-tasks, there can also be categories);
based upon the content intent category, identifying the comment as an actionable comment requiring an action from a second computing device of the plurality of computing devices (perform task identification, White ¶28; conditioned tasks, White ¶22; see also example of commitment stated by the user, ¶28; The method 400 proceeds to DECISION OPERATION 410, where a determination is made as to whether the task is a conditional task or a non-conditional task. For example, the determination can be made based on whether the task includes a task action portion and a condition portion, wherein a task action portion identifies a task action and a condition portion identifies one or more conditions that are to be satisfied prior to executing the task action. As another example, the determination can be made when the task is a conditional based on an implicitly-identified trigger condition according to past user interactions, ¶77; when and how to engage a user, ¶79) (Examiner interprets the trigger for a conditional action as an identification of an actionable comment requiring action from another collaborative user);
determining that the actionable comment includes a modification request to modify the modifiable portion of the collaborative content that includes the actionable comment (The example operating environment 100 can be used to implement one or more of the components of an example conditional task system 200 described in FIG. 2, including components for providing automatic extraction and application of conditional tasks from content. According to an aspect, the terms “conditioned task” or “conditional task” as used herein describe a task action that is conditioned on a set of attributes, such as one or a combination of the occurrence of an event, an action, a person, time, or location. “I will take care of the assignments if Susan sends me the documents” is an example conditional task comprising a task action (take care of the assignments) and a condition (if Susan sends [the user] the documents) on which the task action depends, White ¶22; FIG. 7 illustrates one example of the architecture of a system for providing automatic extraction and application of conditional tasks from content as described above. Content developed, interacted with, or edited in association with the one or more components of the conditional task system 200 are enabled to be stored in different communication channels or other storage types. For example, various documents may be stored using a directory service 722, a web portal 724, a mailbox service 726, an instant messaging store 728, or a social networking site 730. One or more components of the conditional task system 200 are operative or configured to use any of these types of systems or the like for providing computing device state or activity based task reminders and automatic tracking of statuses of task-related activities, ¶99);
in response to determining that the actionable comment includes the modification request, automatically modifying the modifiable portion of the collaborative content (The method 400 proceeds to OPERATION 422, where engagement occurs based on the task action semantic frame. In some examples, engagement comprises engaging an task action actor 226 to perform the task action or initiate the task action on behalf of the user, such as when the task action is a computer-implemented task (e.g., set an alarm, send a message, perform a transaction). In other examples, engagement comprises engaging an task action actor 226 embodied as a notification engine to provide a notification 306 to the user reminding the user of the task action, White ¶81);
in response to determining that the actionable comment does not include the modification request, determining an action associated with the actionable comment (Aspects of the example conditional task system 200 provide for automated detection of a conditional task, extraction of attributes that characterize a condition associated with a task action, use of information about the condition and context data to determine how to monitor for satisfaction of the condition and to determine when and how to engage the user about the task action, and notification of the user at an appropriate time and in an appropriate way when the condition is satisfied, White ¶23); and
automatically modifying the collaborative content based upon the determined action (the application 204 is operative or configured to parse a content item 202 and identify tasks or evoke a third-party application to perform task-identification. In such cases, the application 204 can pass identified tasks and other extracted data (e.g., context information) to the condition classifier 212, White ¶28; see also ¶64 and automated action functionalities, ¶72 and ¶81);
generating the at least one notification for at least one computing device of the plurality of computing devices of the plurality of collaborative users associated with the actionable comment (notification of the user at an appropriate time and in an appropriate way when the condition is satisfied, White ¶23; notification includes reminder of the task action, ¶63; see also ¶70) (Examiner notes that the generated notifications are equivalent to the claimed smart notifications); and
While White discloses how to automate the sending of notifications with alerts and reminders within a collaborative environment, White does not expressly disclose obtaining collaborative content shared among a plurality of computing devices, wherein the collaborative content is a document, a spreadsheet, a slide presentation, an image, a video, or a webpage; presenting the at least one smart notification on a collaborative user interface to highlight the actionable comment using a graphical indicator when the at least one computing device of the plurality of computing devices associated with the actionable comment accesses the collaborative content that includes the actionable comment.
However, Arnold teaches
obtaining collaborative content shared among a plurality of computing devices, wherein the collaborative content is a document, a spreadsheet, a slide presentation, an image, a video, or a webpage, wherein the collaborative content includes a modifiable portion and at least one comment (detect interaction, third party content, Arnold ¶54; a third-party source is associated with or provides third-party content, examples of which are web pages, emails, other electronic messages, calendar events, digital content such as photos, videos, documents, and other digital multimedia, and other digital data associated with a particular third-party source, ¶63 and ¶190; web accessible content, ¶324- ¶325; The contextual hub system 116 can manage permissions granted to collaborating users of a contextual hub. For instance, the contextual hub system 116 can determine that particular users may view, annotate, and/or edit content within a contextual hub. The contextual hub system 116 may designate a managing user who has authority to assign and manage permissions for collaborating users. The contextual hub system 116 can limit a user to viewing rights with which the user may view but not annotate or edit content within the contextual hub. The contextual hub system 116 may also limit a user to annotation rights which enables the user to view and annotate content within the contextual hub; however, the user is unable to move, add, or delete content within the contextual hub. In at least one embodiment, the contextual hub system 116 may grant editing rights to one or more collaborating users with which the user may add, remove, or otherwise edit content within the contextual hub, ¶289);
presenting the at least one smart notification on a collaborative user interface to highlight the actionable comment using a graphical indicator when the at least one computing device of the plurality of computing devices associated with the actionable comment accesses the collaborative content that includes the actionable comment (The contextual hub system can detect a user interaction with a third-party source from the set of third-party sources by a first user from the group of users. For instance, the contextual hub system can detect that the first user has added an annotation (e.g., a comment or highlight) to a source within the contextual hub. The contextual hub system can provide a notification for display within the contextual hub that indicates the user interaction, Arnold ¶54; As further illustrated in FIG. 17B, the contextual hub system 116 enables a user to customize a message including the content. For instance, the communication element 1740 includes the message content element 1742. The contextual hub system 116 may automatically populate the message content element 1742 with a message. The user may change the automatically generated message within the message content element 1742. Furthermore, although FIG. 17B illustrates the inclusion of a link to the content within the message content element 1742, the contextual hub system 116 may utilize various other methods for communicating the content. For example, the contextual hub system 116 can generate a link to the content within the contextual hub. Thus, the selection of the link brings the selecting user to the contextual hub management graphical user interface wherein the selected content is highlighted or otherwise emphasized, ¶307; For instance, a client device can receive an annotation event and send a notification of the annotation event to the contextual hub system 116. In at least one embodiment, the client device presents options to add an annotation (e.g., a highlight or comment) to web-accessible content based on detecting that the user has right clicked the content. The client device may send, together with the notification of the annotation event, data relating to the annotation event including information identifying the annotating user, the location of the annotation, content associated with the annotation (e.g., text within a comment), and other data, ¶324).
Both the White and the Arnold references are analogous in that both are directed towards/concerned with collaboration management. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Arnold’s ability to utilize different types of collaborative formats in White’s system to improve the system and method, with a reasonable expectation that this would result in a collaboration management system that is able to accommodate different content formats/types.
The motivation being that conventional systems suffer from several technological shortcomings that result in inaccurate, inefficient, and inflexible operation. For example, conventional systems often haphazardly and inaccurately organize web browsing windows and tabs. In particular, although conventional systems often offer tabbed browsing functionalities, many conventional systems rely on users to manage web browsing tabs and windows and require users to manually open new tabs, close tabs no longer needed, or group tabs within different browser windows. However, with the nature of web browsing and the large number of web sources that a user may typically access, the result of a user managing their own tabs is often a single browser window that includes a large number of randomly ordered tabs that are difficult to navigate and make it difficult for a user to locate specific web sources and content. Thus, conventional systems of managing a user's interaction with web sources are often inaccurate because they obscure meaningful organization (Arnold ¶3).
While White does disclose the use of software to encode for classification purposes (White ¶26), the combination of White and Arnold does not expressly disclose applying a comment encoder to extract and encode features of the at least one comment, and applying a context encoder to encode the modifiable portion of the collaborative content associated with the at least one comment, wherein at least one first hidden state is determined for the at least one comment, at least one second hidden state is determined for the modifiable portion of the collaborative content, and an attention operation is applied to the at least one first hidden state and the at least one second hidden state to determine at least one keyword indicating the comment intent.
However, Volkovs teaches applying a comment encoder to extract and encode features of the at least one comment, and applying a context encoder to encode the modifiable portion of the collaborative content associated with the at least one comment, wherein at least one first hidden state is determined for the at least one comment, at least one second hidden state is determined for the modifiable portion of the collaborative content, and an attention operation is applied to the at least one first hidden state and the at least one second hidden state to determine at least one keyword indicating the comment intent (autoencoder comprises encoders, Volkovs ¶29; data encoded into abstract representations, ¶32; contextual encoding BiLSTM, ¶39; hidden layers are able to be connected, ¶38; wherein the context is from user review data, Fig. 4) (Examiner interprets the user reviews to be comments).
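For illustration only, the general dual-encoder-with-attention arrangement recited in the claim (a comment encoder, a context encoder, a hidden state for each, and an attention operation over those hidden states) might be sketched as follows. The dimensions, the dot-product attention, the random token ids, and the keyword heuristic are assumptions made for the sketch; this is not a reproduction of Volkovs’ architecture.

```python
# Illustrative sketch only; a generic dual BiLSTM encoder with dot-product
# attention, not Volkovs' model. Dimensions, token ids, and the keyword
# heuristic are hypothetical.
import torch
import torch.nn as nn

EMB, HID, VOCAB = 32, 64, 1000

embed = nn.Embedding(VOCAB, EMB)
comment_encoder = nn.LSTM(EMB, HID, bidirectional=True, batch_first=True)
context_encoder = nn.LSTM(EMB, HID, bidirectional=True, batch_first=True)

# Hypothetical token ids for a comment and for the modifiable portion it refers to.
comment_ids = torch.randint(0, VOCAB, (1, 6))
context_ids = torch.randint(0, VOCAB, (1, 20))

comment_hidden, _ = comment_encoder(embed(comment_ids))  # first hidden states, (1, 6, 2*HID)
context_hidden, _ = context_encoder(embed(context_ids))  # second hidden states, (1, 20, 2*HID)

# Attention operation over the two sets of hidden states: score every comment
# token against every context token, normalize over comment tokens, then rank
# comment tokens by the total attention mass they attract.
scores = torch.bmm(comment_hidden, context_hidden.transpose(1, 2))  # (1, 6, 20)
weights = torch.softmax(scores, dim=1)            # attention over comment tokens
token_salience = weights.sum(dim=-1).squeeze(0)   # (6,) total mass per comment token

keyword_position = int(token_salience.argmax())   # most attended comment token
print("most attended comment token position:", keyword_position)
```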
The White, Arnold, and Volkovs references are analogous in that all are directed towards/concerned with content recommendation/collaboration management. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use Volkovs’ ability to utilize encoders in Arnold’s and White’s system to improve the system and method, with a reasonable expectation that this would result in a collaboration management system that is able to accommodate different content formats/types.
The motivation being that this could increase the quality of interactions between users and help users identify more relevant content which could need attention (Volkovs ¶18).
While White, Arnold, and Volkovs disclose how to automate the sending of notifications with alerts and reminders, White and Arnold do not expressly disclose that the notifications are a “smart notification.”
However, the Examiner asserts that the data identifying the notification is simply a label for the components and adds little, if anything, to the claimed acts or steps, and thus does not serve to distinguish over the prior art. Any difference related merely to the meaning and information conveyed through a label (i.e., the type of notification) that does not explicitly alter or impact the steps of the method does not patentably distinguish the claimed invention from the prior art (MPEP 2144.04).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the notification to be a smart notification, since the specific type of notification does not functionally alter or relate to the steps of the method, and merely labeling the information differently from that in the prior art does not patentably distinguish the claimed invention.
As per claim 17, White, Arnold, and Volkovs disclose as shown above with respect to claim 16. White further discloses wherein the content intent category includes: a modification intent, an information exchange intent, and a social communication intent (conditioned tasks, White ¶22; extraction of attributes that characterize a condition associated with a task action, ¶23; see also example of commitment stated by the user, ¶28; condition classifier to classify intents, ¶36-¶37).
The Examiner notes that, under MPEP 2144.04, any difference related merely to the meaning and information conveyed through a label that does not explicitly alter or impact the functionality of the claimed invention does not patentably distinguish the claimed invention from the prior art. As such, the intent category being “a modification intent, an information exchange intent, and a social communication intent” is simply a label for the category and adds little, if anything, to the claimed acts or steps, and thus does not serve to distinguish over the prior art.
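For illustration only, a supervised comment-intent classifier of the general kind the claims recite (classifying comments into, e.g., modification, information exchange, or social communication intents) could be sketched as below. The labels, training examples, and scikit-learn pipeline are hypothetical and are not drawn from White, Arnold, or Volkovs.

```python
# Illustrative sketch only; hypothetical labels and examples, with a generic
# bag-of-words model standing in for a trained comment-intent classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

comments = [
    "Please update the revenue figures in section 2",   # modification intent
    "Can you change this heading to past tense?",       # modification intent
    "What source did you use for these numbers?",       # information exchange intent
    "Here is the link to last year's report",           # information exchange intent
    "Great work on this draft!",                         # social communication intent
    "Thanks, see you at the meeting",                    # social communication intent
]
labels = [
    "modification", "modification",
    "information_exchange", "information_exchange",
    "social", "social",
]

classifier = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression(max_iter=1000)),
])
classifier.fit(comments, labels)

print(classifier.predict(["Please fix the typo in the title"]))  # likely "modification"
```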
As per claim 18, White, Arnold, and Volkovs disclose as shown above with respect to claim 16. White further discloses wherein automatically modifying the collaborative content comprises suggesting a specific modification to the content and applying the specific modification upon receiving approval for the modification from a user (The method 400 proceeds to OPERATION 422, where engagement occurs based on the task action semantic frame. In some examples, engagement comprises engaging an task action actor 226 to perform the task action or initiate the task action on behalf of the user, such as when the task action is a computer-implemented task (e.g., set an alarm, send a message, perform a transaction). In other examples, engagement comprises engaging an task action actor 226 embodied as a notification engine to provide a notification 306 to the user reminding the user of the task action, White ¶81).
As per claim 19, White, Arnold, and Volkovs disclose as shown above with respect to claim 16. White further discloses wherein automatically modifying the collaborative content comprises saving the collaborative content prior to performing the modification (FIG. 7 illustrates one example of the architecture of a system for providing automatic extraction and application of conditional tasks from content as described above. Content developed, interacted with, or edited in association with the one or more components of the conditional task system 200 are enabled to be stored in different communication channels or other storage types. For example, various documents may be stored using a directory service 722, a web portal 724, a mailbox service 726, an instant messaging store 728, or a social networking site 730. One or more components of the conditional task system 200 are operative or configured to use any of these types of systems or the like for providing computing device state or activity based task reminders and automatic tracking of statuses of task-related activities, White ¶99).
As per claim 20, White, Arnold, and Volkovs disclose as shown above with respect to claim 19. White further discloses wherein the set of operations further comprises: receiving an indication to undo the modification, and loading the saved collaborative content (FIG. 7 illustrates one example of the architecture of a system for providing automatic extraction and application of conditional tasks from content as described above. Content developed, interacted with, or edited in association with the one or more components of the conditional task system 200 are enabled to be stored in different communication channels or other storage types. For example, various documents may be stored using a directory service 722, a web portal 724, a mailbox service 726, an instant messaging store 728, or a social networking site 730. One or more components of the conditional task system 200 are operative or configured to use any of these types of systems or the like for providing computing device state or activity based task reminders and automatic tracking of statuses of task-related activities, White ¶99).
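For illustration only, the save-before-modify and undo-by-restore behavior addressed in claims 19 and 20 could be sketched as a simple snapshot pattern; the class and method names below are hypothetical.

```python
# Illustrative sketch only; hypothetical snapshot/undo pattern for saving
# collaborative content before an automatic modification and restoring it
# when an indication to undo the modification is received.
class CollaborativeContent:
    def __init__(self, text):
        self.text = text
        self._saved = None

    def apply_modification(self, new_text):
        self._saved = self.text      # save prior to performing the modification
        self.text = new_text

    def undo_modification(self):
        if self._saved is not None:  # load the saved collaborative content
            self.text = self._saved
            self._saved = None

doc = CollaborativeContent("Q3 revenue: TBD")
doc.apply_modification("Q3 revenue: $1.2M")
doc.undo_modification()
print(doc.text)  # "Q3 revenue: TBD"
```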
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW B WHITAKER whose telephone number is (571)270-7563. The examiner can normally be reached on M-F, 8am-5pm, EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda Jasmin, can be reached on (571) 272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW B WHITAKER/Primary Examiner, Art Unit 3629