Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This Final Office action is in reply to the Applicant amendment filed on 20 August 2025.
2. Claims 1, 9, 16 have been amended.
3. Claims 1, 3-20 are currently pending and have been examined. The Information Disclosure Statement filed 12 August 2025 has been considered by the Examiner. A signed copy is enclosed with this Office Action.
Response to Amendment
In the previous Office action, Claims 1, 3-20 were rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter (an abstract idea). Applicants have not amended the claims to provide statutory support, and the rejection is maintained.
Due to Applicant’s submission of prior art in the Information Disclosure Statement filed 12 August 2025, a new ground of rejection has been necessitated in this Final Office action.
Response to Arguments
Applicant’s arguments filed 20 August 2025 have been fully considered but they are not persuasive. In the remarks regarding the 35 U.S.C. § 101 rejection of Claims 1, 3-20, Applicant argues that (1) the claims are not directed to an abstract idea, and that even if they were, they would amount to significantly more than the abstract idea. The Examiner respectfully disagrees. Consistent with the two-part subject matter eligibility framework of Alice Corp. Pty. Ltd. v. CLS Bank International et al. (Alice), the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG), the October 2019 Update: Subject Matter Eligibility (“October 2019 Update”), and the July 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence, the Examiner details the maintained rejection under 35 U.S.C. 101 below with further explanation. Applicant states, in general, that “The claim recites a specific and practical application of machine learning techniques to improve the technological field of meeting data management,” with comparisons to Examples 48, 40, and 41 of the 2019 Guidance (see Remarks/Arguments pages 1-7). However, the Examiner respectfully disagrees. Example claims 1 and 3 of Example 48 recite far more technical and statutory detail for improving speech separation/recognition than the instant claims. Under the broadest reasonable interpretation, the terms of the claim are presumed to have their plain meaning consistent with the specification as it would be interpreted by one of ordinary skill in the art. See Manual of Patent Examining Procedure (MPEP) 2111. Applicants’ broadly recited representative system Claim 1 instead tracks the ineligibility analysis applied to Claim 1 of Example 48.
The new limitation “detecting, using a neural network trained to recognize a meeting intent using a loss function to adjust one or more weights in the neural network in response to a prediction error made by the neural network during training on a dataset that includes meeting-related content…” does not place any limits on how the “detecting…trained to recognize…” operations are specifically defined. When determining whether a claim simply recites a judicial exception with the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer, Examiners may consider: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. See MPEP 2106.05(f). Here, there are no details about a particular neural network or how the neural network “detects… trained to recognize a meeting intent using a loss function to adjust one or more weights in the neural network in response to a prediction error made by the neural network during training on a dataset that includes meeting-related content, a meeting intent for a second meeting from the content”. The claim omits any details as to how the neural network solves a technical problem, and instead recites only the idea of a solution or outcome. Therefore, the limitation(s) represent no more than mere instructions to apply the judicial exception on a computer. The limitation can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of computers. Applicants also assert, “The claims are also similar to those patent eligible claims set forth in Examples 40 and 41.”
The Examiner respectfully disagrees that the amended claims “recite a technological means for solving a technological problem, namely inefficient storage and retrieval of meeting relationships,” because there are no recitations of additional elements to support these statements. In addition, “method” Claims 9-15 still do not recite any computer architecture components to support the step limitations as a proper statutory category of invention, meaning a person/user could mentally perform the steps with pencil/pen and paper; these claims therefore do not pass Step 1 of the analysis. “Computer storage media” Claims 16-20 do not recite that the “media” is non-transitory and likewise do not pass Step 1 of the analysis. In at least paragraph 128 of the instant specification, “Computer-readable media can be any available media that can be accessed by computing device 1100 and includes both volatile and nonvolatile media, removable and non-removable media”. Transitory or volatile media is not a statutory category of invention. Although the specification may describe both types of media, the media or medium claims themselves must be recited as non-transitory. Claims 16-20 should be amended to recite “One or more non-transitory computer storage media…”. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept. For at least these reasons, the rejection is maintained.
With regard to (2), the rejection of Claims 1, 3-20 under 35 U.S.C. 103 as being unpatentable over Bhattacharya et al. (Bhattacharya) (US 2021/0081494) in view of Moon et al. (Moon) (US 11,442,992), Applicants argue that the cited art does not teach or suggest, in amended representative and broadly recited Claim 1: “determining that the first meeting is related to the second meeting; …generating a relationship indication between the first meeting and the second meeting in a meeting-oriented knowledge graph, wherein the first meeting is represented as a first node in the knowledge graph, the second meeting is represented as a second node in the knowledge graph, and the relationship indication is an edge between the first node and the second node” [see Remarks pages 7-12]. The Examiner respectfully disagrees. Due to Applicant’s submission of prior art disclosed in the Information Disclosure Statement filed 12 August 2025, a new ground of rejection has been necessitated in this Final Office action. For further clarification and as seen below, additional citations from the maintained rejection under Bhattacharya in view of Moon and in further view of Herrin et al. (Herrin) (US 2019/0327362) are cited by the Examiner. It is noted that any citations to specific pages, columns, paragraphs, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123. The Examiner has a duty and responsibility to the public and to Applicant to interpret the claims as broadly as reasonably possible during prosecution. In re Prater, 415 F.2d 1393, 1404-05, 162 USPQ 541, 550-51 (CCPA 1969).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-20 are rejected under 35 U.S.C. §101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, natural phenomenon, or an abstract idea) without significantly more. The claims as a whole recite certain groupings of abstract ideas and are analyzed under the following step process:
Step 1: Claims 1, 3-8 are each directed to a statutory category of invention, namely a “system” set. However, “method” Claims 9-15 still do not recite any computer architecture components to support the step limitations as a proper statutory category of invention, meaning a person/user could mentally perform the steps with pencil/pen and paper. “Computer storage media” Claims 16-20 do not recite that the “media” is non-transitory. In at least paragraph 128 of the instant specification, “Computer-readable media can be any available media that can be accessed by computing device 1100 and includes both volatile and nonvolatile media, removable and non-removable media”. Transitory or volatile media is not a statutory category of invention. Although the specification may describe both types of media, the media or medium claims themselves must be recited as non-transitory. Claims 16-20 should be amended to recite “One or more non-transitory computer storage media…”. Despite this failure to pass Step 1 of the analysis, the Examiner proceeds to the next steps.
Step 2A, Prong One: Claims 1, 3-20 recite limitations that set forth abstract ideas; the claims as a whole are directed to an abstract idea without significantly more. The claims recite steps for:
“receiving a content related to a first meeting, the content comprising natural language utterances by multiple attendees of the first meeting;
detecting, using a neural network trained to recognize a meeting intent using a loss function to adjust one or more weights in the neural network in response to a prediction error made by the neural network during training on a dataset that includes meeting-related content, a meeting intent for a second meeting from the content;
determining, through analysis of calendar data associated with an attendee of the first meeting, that the second meeting is scheduled;
in response to detecting the meeting intent for the second meeting from the content of the first meeting, determining that the first meeting is related to the second meeting; and
generating a relationship indication between the first meeting and the second meeting in a meeting-oriented knowledge graph, wherein the first meeting is represented as a first node in the knowledge graph, the second meeting is represented as a second node in the knowledge graph, and the relationship indication is an edge between the first node and the second node”
As seen in the limitations identified above, the claims fall under the categories: (a) Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations (a loss function to adjust one or more weights); (b) Certain methods of organizing human activity – managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and (c) Mental processes – concepts performed in the human mind (including an observation, evaluation, judgment, opinion). See MPEP § 2106.04(a) II C. Hence, the claims are ineligible under Step 2A, Prong One. Furthermore, the dependent claims are merely directed to the particulars of the abstract idea and likewise do not add significantly more to the above-identified judicial exception.
Prong Two: Claims 1, 3-20: With regard to this step of the analysis (as explained in MPEP § 2106.04(d)), the judicial exception is not integrated into a practical application. Independent Claims 1 and 16 recite additional elements directed to “at least one computer processor; and one or more computer storage media storing computer-useable instructions that, when used by the at least one computer processor”. Therefore, the claims contain computer components that are cited at a high level of generality and are merely invoked as a tool to perform the abstract idea. Simply implementing an abstract idea on a computer is not a practical application of the abstract idea. It is notable that mere physicality or tangibility of an additional element or elements is not a relevant consideration in Step 2A Prong Two. As the Supreme Court explained in Alice Corp., mere physical or tangible implementation of an exception does not guarantee eligibility. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 224, 110 USPQ2d 1976, 1983-84 (2014) (“The fact that a computer ‘necessarily exist[s] in the physical, rather than purely conceptual, realm,’ is beside the point”). See also Genetic Technologies Ltd. v. Merial LLC, 818 F.3d 1369, 1377, 118 USPQ2d 1541, 1547 (Fed. Cir. 2016) (steps of DNA amplification and analysis are not “sufficient” to render claim 1 patent eligible merely because they are physical steps). Conversely, the presence of a non-physical or intangible additional element does not doom the claims, because tangibility is not necessary for eligibility under the Alice/Mayo test. Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 118 USPQ2d 1684 (Fed. Cir. 2016) (“that the improvement is not defined by reference to ‘physical’ components does not doom the claims”). See also McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1315, 120 USPQ2d 1091, 1102 (Fed. Cir. 
2016) (holding that a process producing an intangible result (a sequence of synchronized, animated characters) was eligible because it improved an existing technological process). Furthermore, the dependent claims are merely directed to the particulars of the abstract idea and likewise do not add significantly more to the above-identified judicial exception. The limitations of the claims do not transform the abstract idea that they recite into patent-eligible subject matter because the claims simply instruct the practitioner to implement the abstract idea using generally-recited computer components, and furthermore do not amount to an improvement to a computer or any other technology, and thus are ineligible. “Method” Claims 9-15 do not recite any computer architecture components to support the step limitations and are thus not directed to a statutory category of invention.
Step 2B: As explained in MPEP § 2106.05, Claims 1, 3-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea nor recite additional elements that integrate the judicial exception into a practical application. The additional elements of “at least one computer processor; and one or more computer storage media storing computer-useable instructions that, when used by the at least one computer processor”, etc. are generically-recited computer-related elements that amount to a mere instruction to “apply it” (the abstract idea) on the computer-related elements (see MPEP § 2106.05(f) – Mere Instructions to Apply an Exception). These additional elements are recited at a high level of generality and merely limit the field of use of the judicial exception (see MPEP § 2106.05(h) – Field of Use and Technological Environment). There is no indication that the combination of elements improves the function of a computer or improves any other technology. Furthermore, the dependent claims are merely directed to the particulars of the abstract idea and likewise do not add significantly more to the above-identified judicial exception. The limitations of the claims do not transform the abstract idea that they recite into patent-eligible subject matter because the claims simply instruct the practitioner to implement the abstract idea using generally-recited computer components, and furthermore do not amount to an improvement to a computer or any other technology, and thus are ineligible. “Method” Claims 9-15 do not recite any computer architecture components to support the step limitations and are thus not directed to a statutory category of invention.
The Examiner interprets that the steps of the claimed invention, both individually and as an ordered combination, result in Mere Instructions to Apply a Judicial Exception (see MPEP § 2106.05(f)). These claims recite only the idea of a solution or outcome with no restriction on how the result is accomplished and no description of the mechanism used for accomplishing the result. Here, the claims utilize a computer or other machinery (e.g., see Applicants’ published Specification ¶’s 3-6, 35-41, 128, regarding using existing computer processors as well as program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon) in its ordinary capacity for performing tasks (e.g., to receive, analyze, transmit and display data) and/or use computer components after the fact with an abstract idea (e.g., a fundamental economic practice and certain methods of organizing human activity), and do not provide significantly more. See Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016). Software implementations are accomplished with standard programming techniques with logic to perform connection steps, processing steps, comparison steps and decision steps. These claims are directed to a commonplace business method being applied on a general-purpose computer (see Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347, 2357, 110 USPQ2d 1976, 1983 (2014); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015)) and require the use of software, such as via a server, to tailor information and provide it to the user on a generic computer.
Based on all of the above, the Examiner finds that, when viewed either individually or in combination, these additional claim element(s) do not provide meaningful limitation(s) that rise to the high standards of eligibility to transform the abstract idea(s) into a patent-eligible application of the abstract idea(s) such that the claim(s) amount to significantly more than the abstract idea(s) itself. Accordingly, Claims 1, 3-20 are rejected under 35 U.S.C. §101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea exception) without significantly more.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Broadly recited Claims 1, 3-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bhattacharya et al. (Bhattacharya) (US 2021/0081494) in view of Moon et al. (Moon) (US 11,442,992) and in further view of Herrin et al. (Herrin) (US 2019/0327362).
With regard to Claims 1, 9, 16, Bhattacharya teaches a system/computer-implemented method/one or more computer storage media comprising: at least one computer processor; and one or more computer storage media storing computer-useable instructions that, when used by the at least one computer processor (see at least paragraphs 90-105), cause the at least one computer processor to perform operations comprising:
receiving a content/transcript of natural language utterances (input; temporal component; message; utterances), made by a first attendee (“Barun”; sender) related to/during a first virtual meeting (The digital assistant service may receive the natural language input and apply one or more natural language processing models to it that have been trained to identify whether there is a meeting intent associated with the message. If a meeting intent is determined to be present in the natural language input, the digital assistant service may make a determination as to whether the natural language input includes a temporal component. The digital assistant service may also determine whether the natural language input includes a processing initiation conjunction that separates a first temporal component meeting block alternative and a second temporal meeting block alternative), the content comprising natural language utterances (input; temporal component; message; utterances) by multiple attendees (“Barun”; “Pamela”; “will figure out a time for us to meet”) of the first meeting (see at least paragraphs 5, 19-27, 30);
detecting/identifying, using a neural network (Scoring sub-environment 404 includes supervised machine learning model 406 and neural network 408. Neural network 408 is one example of a supervised machine learning model that may be applied to the generated syntax trees. Other machine learning models may additionally or alternatively be utilized (e.g., statistical machine learning models, clustering models, etc.). The unsupervised machine learning models in scoring sub-environment 404 illustrate that the syntax trees that are generated for a given temporal component with a temporal ambiguity may be processed by one or more machine learning models that have been trained to identify a most relevant permutation for a given natural language input.), a meeting intent for a second meeting with a second person (“Barun”; “Pamela”; receiver) from the content/transcript (The digital assistant service may receive the natural language input and apply one or more natural language processing models to it that have been trained to identify whether there is a meeting intent associated with the message. If a meeting intent is determined to be present in the natural language input, the digital assistant service may make a determination as to whether the natural language input includes a temporal component. The digital assistant service may also determine whether the natural language input includes a processing initiation conjunction that separates a first temporal component meeting block alternative and a second temporal meeting block alternative) (see at least paragraphs 19-27, 30, 61);
determining, through analysis of calendar data associated with an attendee of the first meeting (the digital assistant service may analyze received email 105 and determine whether there is a specific command in that message that it should respond to (e.g., “schedule meeting”, “add to my calendar”, etc.). In this example, there is no direct command. However, the digital assistant service may process the text with a natural language processing engine (e.g., via application of one or more semantic parsing models) and determine that the text “My assistant@[DIGITAL ASSISTANT], will figure out a time to meet” is a command to schedule a meeting between the sender and receiver of email 105 based on information in email 105), that the second meeting is scheduled (temporal ambiguity; temporal meeting block alternatives) (see at least paragraphs 19-27, 28-31);
identifying one or more parameters (utilizing artificial intelligence in association with digital assistants to process natural language inputs associated with events to identify temporal intent from language ambiguities) for the second meeting from content associated with the virtual meeting (see at least paragraphs 19-25, 28-31);
in response to identifying the intent, causing presentation of a meeting suggestion to the first attendee (“Barun”), the meeting suggestion including the first attendee and the second person as participants with a meeting characteristic based on the one or more parameters (The digital assistant service may receive the natural language input and apply one or more natural language processing models to it that have been trained to identify whether there is a meeting intent associated with the message. If a meeting intent is determined to be present in the natural language input, the digital assistant service may make a determination as to whether the natural language input includes a temporal component. The digital assistant service may also determine whether the natural language input includes a processing initiation conjunction that separates a first temporal component meeting block alternative and a second temporal meeting block alternative) (see at least paragraphs 19-31);
receiving an affirmation of the meeting suggestion (the digital assistant service may process the text with a natural language processing engine (e.g., via application of one or more semantic parsing models) and determine that the text “My assistant@[DIGITAL ASSISTANT], will figure out a time to meet” is a command to schedule a meeting between the sender and receiver of email 105 based on information in email 105) (see at least paragraphs 19-31);
in response to detecting/identifying/the meeting/an intent for the second meeting from the content/transcript of the first meeting/natural language utterance made by the first attendee during a virtual meeting (input; temporal component; message; utterances; The digital assistant service may also determine whether the natural language input includes a processing initiation conjunction that separates a first temporal component meeting block alternative and a second temporal meeting block alternative. The digital assistant service may determine whether there is a temporal ambiguity in the temporal component of the natural language input. If a temporal ambiguity exists, the digital assistant service may tag each word in the temporal component. The digital assistant service may tag words as temporal expressions, temporal ranges, operators, and conjunctions; In addition to identifying the temporal component “this week or next,” the digital assistant service may identify that the temporal component includes a processing initiation conjunction that separates a first temporal component meeting block alternative and a second temporal meeting block alternative. The processing initiation conjunction may comprise one of: “and”, “&”, “+”, “or”, a comma, and a semi-colon. Thus, in this example, the digital assistant service identifies that the temporal component “this week or next” includes the processing initiation conjunction “or” that separates a first temporal component meeting block alternative “this week” from a second temporal component meeting block alternative “next”.
Although in this example there are only two temporal component meeting block alternatives in the temporal component, it should be understood that the mechanisms described herein may be applied to natural language inputs that include more than two temporal component meeting block alternatives in a temporal component (e.g., “Monday, Tuesday, or Wednesday”; “Wednesday, or next Tuesday or Thursday”); From operation 706 flow continues to operation 708 where a temporal ambiguity in the first temporal meeting block alternative is identified. The temporal ambiguity may relate to an operator needed to ground at least one of: a temporal range in the first temporal meeting block alternative, and a temporal expression in the first temporal meeting block alternative) (see at least paragraphs 19-25, 28-31, 85);
generating a/first relationship indication (visual indicator) between the first meeting and the second meeting in a meeting-oriented knowledge graph (syntax trees; syntax tree permutations), wherein the meeting-oriented knowledge graph relates attendees of the first meeting with the first meeting and attendees of the second meeting with the second meeting//includes a second relationship indication between a transcript of the virtual meeting (Meeting request sub-environment 102 includes computing device 104, which displays an email application user interface. Specifically, the email application user interface displays a composed email 105 from “Pamela” to “Barun”, with a Cc to “[DIGITAL ASSISTANT]”. The subject of the email is “Productive meeting” and the body of email 105 states: “Hi Barun—Thanks for meeting with me. I thought our meeting was very productive. Let's meet again this week or next. My assistant@[DIGIT ASSISTANT], will figure out a time for us to meet.” Email 105 was sent on May 20, 2019), (see at least paragraphs 19-25, 28-31, 95; FIG. 4);
Bhattacharya does not specifically teach as a first/second node; and the relationship indication is an edge between the first node and the second node wherein the first meeting is represented in a/the knowledge graph, the second meeting is represented in a/the knowledge graph. Moon teaches as a first/second node (multiple nodes); and the relationship indication is an edge (multiple edges) between the first node and the second node (multiple concept nodes; artificial neural network)/wherein the first meeting is represented in a/the knowledge graph (knowledge graph), the second meeting (the user input may comprise “direct me to my next meeting.” The assistant system 140 may use a calendar agent to retrieve the location of the next meeting. The assistant system 140 may then use a navigation agent to direct the user to the next meeting) is represented in the knowledge graph (knowledge graph), in analogous art of social-networking for the purposes of: “the social-networking system 160 may store one or more social graphs in one or more data stores 164. In particular embodiments, a social graph may include multiple nodes—which may include multiple user nodes (each corresponding to a particular user) or multiple concept nodes (each corresponding to a particular concept)—and multiple edges connecting the nodes. The social-networking system 160 may provide users of the online social network the ability to communicate and interact with other users. In particular embodiments, users may join the online social network via the social-networking system 160 and then add connections (e.g., relationships) to a number of other users of the social-networking system 160 whom they want to be connected to. 
Herein, the term “friend” may refer to any other user of the social-networking system 160 with whom a user has formed a connection, association, or relationship via the social-networking system 160; each of the first-party agents 250 or third-party agents 255 may be designated for a particular domain. As an example and not by way of limitation, the domain may comprise weather, transportation, music, etc. In particular embodiments, the assistant system 140 may use a plurality of agents collaboratively to respond to a user input. As an example and not by way of limitation, the user input may comprise “direct me to my next meeting.” The assistant system 140 may use a calendar agent to retrieve the location of the next meeting. The assistant system 140 may then use a navigation agent to direct the user to the next meeting” (see at least col. 8, lines 8-24; col. 13, line 7-col. 14, line 67; FIG. 10).
Bhattacharya does not specifically teach trained to recognize a meeting intent using a loss function to adjust one or more weights in the neural network in response to a prediction error made by the neural network during training on a dataset that includes meeting-related content. Moon teaches trained to recognize a meeting intent using a loss function (loss function; a supervised loss for generating the correct entity at the next turn) to adjust one or more weights (the knowledge graph may comprise a plurality of entities. Each entity may comprise a single record associated with one or more attribute values. The particular record may be associated with a unique entity identifier. Each record may have diverse values for an attribute of the entity. Each attribute value may be associated with a confidence probability. A confidence probability for an attribute value represents a probability that the value is accurate for the given attribute. Each attribute value may be also associated with a semantic weight. A semantic weight for an attribute value may represent how semantically appropriate the value is for the given attribute considering all the available information) in the neural network in response to a prediction error made by the neural network during training on a dataset that includes meeting-related content (Error analysis; an ANN may be trained using training data. As an example and not by way of limitation, training data may comprise inputs to the ANN 1000 and an expected output. As another example and not by way of limitation, training data may comprise vectors each representing a training object and an expected label for each training object. In particular embodiments, training an ANN may comprise modifying the weights associated with the connections between nodes of the ANN by optimizing an objective function. 
As an example and not by way of limitation, a training method may be used (e.g., the conjugate gradient method, the gradient descent method, the stochastic gradient descent) to backpropagate the sum-of-squares error measured as a distances between each vector representing a training object (e.g., using a cost function that minimizes the sum-of-squares error). In particular embodiments, an ANN may be trained using a dropout technique. As an example and not by way of limitation, one or more nodes may be temporarily omitted (e.g., receive no input and generate no output) while training. For each training object, one or more nodes of the ANN may have some probability of being omitted. The nodes that are omitted for a particular training object may be different than the nodes omitted for other training objects (e.g., the nodes may be temporarily omitted on an object-by-object basis). Although this disclosure describes training an ANN in a particular manner, this disclosure contemplates training an ANN in any suitable manner) in analogous art of social-networking for the purposes of: “an ANN may be a feedforward ANN (e.g., an ANN with no cycles or loops where communication between nodes flows in one direction beginning with the input layer and proceeding to successive layers). As an example and not by way of limitation, the input to each node of the hidden layer 1020 may comprise the output of one or more nodes of the input layer 1010. As another example and not by way of limitation, the input to each node of the output layer 1050 may comprise the output of one or more nodes of the hidden layer 1040” (see at least col. 8, lines 8-24; col. 12, line 45-col. 13, line 67; col 27, lines 7-67; Col. 45, lines 14-Col. 46, line 44; Claims 1, 14; TABLE 4).
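For illustration only (this sketch is not part of the record or of the cited Moon disclosure), the training procedure the quoted passage describes, adjusting weights by gradient descent in response to a prediction error under a sum-of-squares objective, can be shown in one dimension. The data, learning rate, and epoch count are hypothetical.

```python
# Minimal sketch: train a one-weight linear model w*x + b by stochastic
# gradient descent on a squared prediction error, in the manner the
# quoted passage describes for ANNs generally.
def train(data, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = w * x + b
            err = pred - target  # prediction error for this example
            # Gradient of the squared error 0.5 * err**2 with respect to
            # w and b; the weights are adjusted against the gradient.
            w -= lr * err * x
            b -= lr * err
    return w, b

# Fit y = 2x + 1 from four sample points.
w, b = train([(0, 1), (1, 3), (2, 5), (3, 7)])
```

After training, `w` and `b` approach 2 and 1; the same adjust-weights-on-error loop, generalized to many layers via backpropagation, is what the reference's training description covers.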
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include conversational reasoning with knowledge graph paths for assistant systems as taught by Moon in the system of Bhattacharya, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Bhattacharya in view of Moon does not specifically teach determining that the first meeting is related to the second meeting. Herrin teaches determining that the first meeting is related to the second meeting (on a network server, the system will determine that a participant is attending a first meeting and has a second meeting scheduled at the same time by (1) determining that a participant is attending a first meeting when the participant is dialed into the meeting conference call from a phone associated with the participant (e.g., office phone, cell phone, VoIP, Skype); or by physical detection of a participant's presence, such as location info from a participant device (pc, phone, etc.), RFID badge detection, etc.; and (2) determining that the participant has a second meeting scheduled at the same time; whether there are overlapping meetings scheduled in a calendar, back-to-back meetings in a calendar, and the participant is still attending earlier meeting, or the participant is detected attending one ad hoc meeting with a second meeting scheduled in calendar at same time) in analogous art of managing multiple meetings for the purposes of: “whether there are overlapping meetings scheduled in a calendar, back-to-back meetings in a calendar, and the participant is still attending earlier meeting, or the participant is detected attending one ad hoc meeting with a second meeting scheduled in calendar at same time” (see at least paragraphs 15, 16, 58, Abstract).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include managing, monitoring and transcribing concurrent meetings and/or conference calls as taught by Herrin in the system of Bhattacharya in view of Moon, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
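For illustration only (this sketch is not part of the record or of the cited Herrin disclosure), the calendar-overlap determination the quoted passage describes, finding that a participant attending a first meeting has a second meeting scheduled at the same time, reduces to an interval-overlap check. The time representation (minutes since midnight) is hypothetical.

```python
# Minimal sketch of the overlapping-meetings determination: two
# (start, end) intervals overlap when each starts before the other ends.
def overlaps(first, second):
    return first[0] < second[1] and second[0] < first[1]

def concurrent_meetings(attending, calendar):
    # Meetings on the calendar scheduled at the same time as the meeting
    # the participant is currently attending.
    return [m for m in calendar if m != attending and overlaps(attending, m)]

# 9:00-10:00, 9:30-10:30, and 11:00-12:00, as minutes since midnight.
meetings = [(540, 600), (570, 630), (660, 720)]
clashes = concurrent_meetings((540, 600), meetings)  # [(570, 630)]
```

The same comparison also captures the back-to-back case the reference mentions, by widening the attended interval while the participant is detected as still present.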
With regard to Claim 3, Bhattacharya teaches wherein the operations further comprise transcribing the first natural language utterance from the first meeting to a textual transcript and performing natural language processing of the textual transcript to detect the meeting intent (see at least paragraphs 5, 27, 82, 99).
With regard to Claim 4, Bhattacharya teaches wherein the operations further comprise associating the first meeting and the second meeting with a single meeting thread identification (see at least paragraphs 51-56, 64).
With regard to Claims 5, 12, 17, Bhattacharya teaches identifying an intent for a third meeting from the content related to the first meeting (see at least paragraphs 51-56);
generating a third relationship indication between the third meeting and the first meeting in the meeting-oriented knowledge graph (see at least paragraphs 51-56, 95);
generating a fourth relationship indication between the second meeting and third meeting in the meeting-oriented knowledge graph (see at least paragraphs 51-56, 95).
With regard to Claim 6, Bhattacharya teaches wherein the first meeting, the second meeting, and each attendee of the attendees of the first meeting are nodes in the meeting-oriented knowledge graph, and wherein relationships between the nodes are indicated by edges (see at least paragraphs 51-56; FIG. 4).
With regard to Claim 7, Bhattacharya teaches wherein the meeting-oriented knowledge graph relates a decision taken in the first meeting to the first meeting in the meeting-oriented knowledge graph (see at least paragraphs 51-56).
With regard to Claims 8, 13, Bhattacharya teaches wherein the operations further comprise generating a meeting analytic by traversing the meeting-oriented knowledge graph and outputting the meeting analytic through a graphical user interface (see at least paragraphs 51-56, 90; FIGS. 1, 2, 4).
With regard to Claim 10, Bhattacharya teaches generating a second relationship indication between the transcript and the first meeting in the meeting-oriented knowledge graph (see at least paragraphs 51-56, 90; FIGS. 1, 2, 4).
With regard to Claims 11, 20, Bhattacharya teaches assigning a common meeting thread identification to the first meeting and the second meeting (see at least paragraphs 20, 51-59, 90; FIGS. 1, 2, 4).
With regard to Claims 14, 18, Bhattacharya teaches wherein the meeting analytic is a number of related meetings occurring before attendees in the related meetings made a decision (see at least paragraph 59).
With regard to Claims 15, 19, Bhattacharya teaches generating, from the meeting-oriented knowledge graph, a meeting tree that visually illustrates a relationship between the first meeting and the second meeting; and causing the meeting tree to be output for display (see at least paragraphs 20, 51-59, 90; FIGS. 1, 2, 4).
With regard to Claim 18, Bhattacharya teaches generating a meeting analytic from the meeting-oriented knowledge graph and outputting the meeting analytic through a graphical user interface, wherein the meeting analytic is an amount of decisions made per meeting in a group of related meetings (see at least paragraphs 19-25, 28-31, 51-59).
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure:
Talieh et al. (US 11,315,569)
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS L MANSFIELD whose telephone number is (571)270-1904. The examiner can normally be reached M-Thurs, alt. Fri. (9-6).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Munson can be reached at (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
THOMAS L. MANSFIELD
Examiner
Art Unit 3623
/THOMAS L MANSFIELD/Primary Examiner, Art Unit 3624