Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending.
Claims 1, 2, 10, 11, and 19 have been amended.
Claims 21-23 have been cancelled.
Response to Arguments
Applicant’s arguments and amendments with respect to the Section 112 rejections have been fully considered. The Section 112 rejections of the independent claims have been withdrawn in view of the amendments.
Applicant's arguments and amendments with respect to the Section 101 rejections have been fully considered, but they are not persuasive. The rejection has been updated to address the amended claims. It is noted that the project and issue embedding does not provide a technological solution to a problem, but rather translates words into computer-readable syntax that allows a general-purpose, commercially available machine learning model to implement the recited abstract idea in the recited technological environment. Applicant argues that certain claimed steps cannot practicably be performed in the human mind and are not steps “used by human managers in project management.” Those arguments do not address the rejection as written. While the claims cited by Applicant include additional elements, those additional elements do not change the fact that the claims are directed to an abstract idea.
Applicant’s arguments with respect to Section 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Representative claim 1 recites “receiving, …, a first set of parameters of a first type, the first set of parameters describing an enterprise project; receiving a second set of parameters of a second type, the second set of parameters describing a set of issues associated with the enterprise project; generating the schema for the enterprise project based on the first set of parameters and the second set of parameters; … identify a set of suggested issues for the enterprise project, wherein the plurality of past issues are candidate issues for future recommendations; generating a project embedding vector based on the first set of parameters describing the enterprise project and the second set of parameters describing the set of issues associated with the enterprise project, applying the … model to the project … and to … each candidate issue in the first training set to generate an issue recommendation score for each candidate issue, ranking the candidate issues based on the issue recommendation scores, and identifying, based on the ranked candidate issues, a set of suggested issues for the enterprise project; receiving a first indication accepting one or more suggested issues of the set of suggested issues; modifying automatically the schema for the enterprise project to add parameters of the second type corresponding to the accepted one or more suggested issues; receiving a third set of parameters of a third type, the third set of parameters describing one or more questions associated with each issue associated with the enterprise project; applying a … model to the first set of parameters, the second set of parameters, and the third set of parameters associated with the enterprise project to identify a set of suggested questions for one or more issues associated with the enterprise project; performing a competency recommendation 
operation on the schema for the enterprise project modified automatically to add the parameters of the third type, wherein the competency recommendation operation … to the modified schema to generate a set of recommended competencies for one or more workflows, issues, or questions of the enterprise project.” Therefore, the claim as a whole is directed to “Project Management Analysis Processes”, which is an abstract idea because it is a method of organizing human activity, including commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). “Project Management Analysis Processes” is considered to be a method of organizing human activity because the claimed process of identifying issues or risks and questions associated with those risks and standardizing and formatting that type of data is a process used by human managers in project management. See, for example, “11 items project managers should include in a Problem Identification and Tracking document” by Moira Alexander. Such human organized activities are regularly performed as part of project management functions in brainstorming for potential issues while planning a project. As such, claim 1 is directed to an abstract idea.
“Project Management Analysis Processes” is also considered to be a mental process because the claimed process of identifying issues or risks and questions associated with those risks and standardizing and formatting that type of data is a process performed in the minds of human managers in project management, including during brainstorming sessions when organizing and defining the scope of new projects.
This judicial exception is not integrated into a practical application. Claim 1 recites additional elements including a computing device, training a first machine-learned model using a first training set that includes for each of a plurality of past enterprise projects: (i) an indication of whether the past enterprise project was successfully completed, (ii) for each of a plurality of past issues associated with the past enterprise project, an indication of whether the past issue was successfully resolved, and (iii) for each of the plurality of past issues associated with the past enterprise project, an embedding vector generated based on (enterprise project description) parameters of the first type describing the past enterprise project and (issue) parameters of the second type describing the past issue associated with the past enterprise project, applying the trained first machine-learned model to the project embedding vector and to the embedding vector of each candidate issue in the first training set to generate an issue recommendation score for each candidate issue, ranking the candidate issues based on the issue recommendation scores, and identifying, based on the ranked candidate issues, a set of suggested issues for the enterprise project, and applying a second machine-learned model to the first set of parameters, the second set of parameters, and the third set of parameters associated with the enterprise project to identify a set of suggested questions, and applying a trained competency recommendation model to the modified schema. Claim 10 recites a non-transitory computer readable storage medium, and claim 19 recites a computing device, a processor, and a non-transitory computer readable storage medium. These additional elements, individually or in combination, do not integrate the exception into a practical application.
Rather, the additional elements amount to merely reciting the words “apply it” (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Even the recitation of training a model using an embedding vector (1) does not improve the functioning of a computer or other technology; (2) is not applied with any particular machine; (3) does not effect a transformation of a particular article to a different state; and (4) is not applied in any meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. See MPEP § 2106.05(a)–(c), (g), (h). In particular, the application of a first and second “machine-learned model” amounts to merely indicating a field of use or technological environment in which the judicial exception is performed. Although the additional element “machine-learned model” limits the identified judicial exceptions “identify a set of suggested issues for the enterprise project” and “identify a set of suggested questions for one or more issues associated with the enterprise project,” this type of limitation merely confines the use of the abstract idea to a particular technological environment (machine-learned models) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h), and the practical application analysis of Example 47, claim 2. Similarly, the Board has found that such recitations represent use of generic computing components (i.e., a computing device and a trained model) operating to perform the generic computer functions of “inputting data, processing data, and outputting data.” See, for example, Appeal 2022-002111, Application 16/217,148, Decision, page 12. The use of vector embeddings is not an improvement to a technological solution.
Rather, the focus of claim 1, as a whole, is not on improving the computer elements themselves; the claim merely uses the generic elements as a tool to perform the abstract idea of organizing human activity based on fundamental economic practices. See 2019 Guidance, 84 Fed. Reg. at 55. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Claim 1 is directed to an abstract idea.
Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, any additional elements are merely being used to apply the abstract idea to a technological environment. That is, the additional elements do not solve a technical problem or provide a technical solution. Rather, the recitations of trained models amount to an attempt to insert a human organized activity into a technological environment. The Board has found that technical improvements and advantages like those asserted do not concern an improvement to computer capabilities but instead relate to an alleged improvement in a computer-based process of creating a schema for a project; that is, a process in which a computer is used as a tool in its ordinary capacity to receive, process, and output data. See, for example, Appeal 2022-002111, Application 16/217,148, Decision, page 14. Accordingly, claim 1 is ineligible.
Claims 10 and 19 recite substantially similar features to those recited in representative claim 1 and are rejected based on substantially the same reasons.
Dependent claims 2-9, 11-18, and 20 merely further limit the abstract idea and are therefore also considered ineligible.
Dependent claims 2, 11, and 20 further limit the abstract idea of “Project Management Analysis Processes” by introducing the element of using the first training set including the plurality of past enterprise projects, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claims 2, 11, and 20 are also non-statutory subject matter.
Dependent claims 3 and 12 further limit the abstract idea of “Project Management Analysis Processes” by introducing the element of training based on embedding vectors for each past enterprise project of the plurality of past enterprise projects, and the indication of whether the past enterprise project was successfully completed, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claims 3 and 12 are also non-statutory subject matter.
Dependent claims 4 and 13 further limit the abstract idea of “Project Management Analysis Processes” by introducing the element of the embedding vector for each past enterprise project is determined by determining a parameter embedding vector for each parameter of the first type included in the schema for the enterprise project, each parameter of the second type included in the enterprise project, and each parameter of the third type included in the enterprise project, and combining the parameter embedding vectors, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claims 4 and 13 are also non-statutory subject matter.
Dependent claims 5 and 14 further limit the abstract idea of “Project Management Analysis Processes” by introducing the element of the second set of parameters of the second type are received from a plurality of users associated with the enterprise project, and wherein the method further comprises: presenting the second set of parameters of the second type to a project administrator; and for each parameter of the second set of parameters of the second type: receiving an indication whether to accept or reject the parameter, and responsive to receiving an indication to accept the parameter, modifying the schema for the project to add the parameter of the second type; and wherein the first trained model is applied based at least on the first set of parameters of the first type and the accepted parameters from the second set of parameters of the second type, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claims 5 and 14 are also non-statutory subject matter.
Dependent claims 6 and 15 further limit the abstract idea of “Project Management Analysis Processes” by introducing the element of determining a relevancy score for each parameter of the second set of parameters, the relevancy score determined at least based on one of an output of a trained relevancy model applied based on information associated with the parameter, and a number of users that provided parameters corresponding to a same issue as the parameter, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claims 6 and 15 are also non-statutory subject matter.
Dependent claims 7 and 16 further limit the abstract idea of “Project Management Analysis Processes” by introducing the element of the third set of parameters of the third type are received from a plurality of users associated with the enterprise project, and presenting the third set of parameters of the third type to a project administrator; and for each parameter of the third set of parameters of the third type: receiving an indication whether to accept or reject the parameter, and responsive to receiving an indication to accept the parameter, modifying the schema for the enterprise project to add the parameter of the third type; and wherein the second trained model is applied based at least on the first set of parameters of the first type, a subset of the second set of parameters of the second type, and the accepted parameters from the third set of parameters of the third type, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claims 7 and 16 are also non-statutory subject matter.
Dependent claims 8 and 17 further limit the abstract idea of “Project Management Analysis Processes” by introducing the element of generating a project embedding vector for the schema based at least in part on the first set of parameters of the first type and a subset of the second set of parameters of the second type; and applying the first trained model based on the project embedding vector, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claims 8 and 17 are also non-statutory subject matter.
Dependent claims 9 and 18 further limit the abstract idea of “Project Management Analysis Processes” by introducing the element of generating a project embedding vector for the schema based at least in part on the first set of parameters of the first type, a subset of the second set of parameters of the second type, and a subset of the third set of parameters of the third type; and applying the second trained model based on the project embedding vector, which does not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. Therefore, dependent claims 9 and 18 are also non-statutory subject matter.
Claims 21-23 have been cancelled and are therefore not treated on the merits.
Dependent claims 2-9, 11-18, and 20 also do not integrate the judicial exception into a practical application. The dependent claims recite no new additional elements. As such, any additional elements individually or in combination do not integrate the exception into a practical application; rather, the recitation of any additional element amounts to merely reciting the words “apply it” (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). The dependent claims also do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of a computing system is merely being used to apply the abstract idea to a technological environment. That is, the claims provide no practical limits on, or improvements to, any technology. Accordingly, dependent claims 2-9, 11-18, and 20 are also ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2019/0057145 to Huang et al. in view of U.S. Patent Application Publication No. 2013/0325763 to Cantor et al. and U.S. Patent Application Publication No. 2021/0397625 to Wan et al.
With regard to claims 1, 10, and 19, Huang et al. teaches:
a processor; and a non-transitory computer readable storage medium storing instructions for recommending parameters for a target schema, the instructions, when executed by the processor, cause the processor to (paragraphs [0050]-[0054]):
receiving, at a computing device, a first set of parameters of a first type, the first set of parameters describing an enterprise project (Fig. paragraph [0019], “Embodiments of the present disclosure are based on the recognition that a query used to produce a result set can be used to generate custom knowledge graphs that fit a user's intention and models the implicit relationship among elements of the result set and important factors or concepts (hereinafter, “a set of factors”) extracted from the query. These custom knowledge graphs may be simpler and easier to analyze than other predefined knowledge graphs. Additionally, the ontology of these custom knowledge graphs, selected to fit the intent of a user and the relationships between a set of factors in the query and the elements of a result set generated from the query, enables information systems to algorithmically generate questions that can reduce or otherwise refine the scope of a query (e.g., reduce the size of the result set generated in response to executing the query) to provide more accurate results to users. Further, information systems can use the questions to incrementally learn and adapt to user preferences (e.g., the information system can, for example, learn the best way present a result set to a user).”);
receiving a second set of parameters of a second type, the second set of parameters describing a set of issues associated with the enterprise project (paragraph [0030], “At operation 130, the information system can determine whether the result set retrieved in operation 110 is reducible. A result set may be reducible when, for example, the result set includes more elements than the number of elements required to accurately respond to, or satisfy, a query. When a result set is reducible, the information system may not have sufficient information (e.g., specified in the query or in the metadata associated with the elements of the result set) to accurately respond to the query.”);
generating a new schema (e.g. template) for the enterprise project (knowledge graph) based on the first set of parameters (paragraph [0027], “A knowledge display template can be a data structure that indicates the display format or structure for a custom knowledge graph (hereinafter, “knowledge graph”) based on a determined intent and set of factors for a query.”);
applying a first trained model based on the schema for the project to identify a set of suggested issues for the project (paragraph [0028], “The information system can use training sentences and associated data or metadata to generate classification model using machine learning techniques. The classification model can then be used to select a knowledge display template for rendering a result set for a new natural language query.”);
receiving a first indication accepting one or more suggested issues of the set of suggested issues (paragraph [0040], “The incremental reasoning engine 250 can be configured to receive a knowledge display template selected by the KTC 255 and determine a set of questions, based on the knowledge display template, for reducing a result set.”; paragraph [0041], “In some embodiments, when a natural language query includes more than two factors, the incremental reasoning engine 250 (or the NLP 245) can generate a set of questions for identifying the two most important factors in the query. These two factors can be sued to selected a knowledge display template, as described herein.”);
modifying automatically the schema for the project to add parameters of the second type corresponding to the accepted one or more suggested issues (paragraph [0028], “In certain embodiments, each sentence can also include, or be annotated with, data or metadata indicating the intent and set of factors associated with a query corresponding to the sentence. In other embodiments, each sentence can include, or be annotated with, data or metadata indicating a particular knowledge graph or knowledge display template that should be used (e.g., based on the user's preference) to render a result set generated for a query corresponding to the sentence.”);
receiving a third set of parameters of a third type, the third set of parameters describing one or more questions associated with each issue associated with the project (paragraph [0040], “The incremental reasoning engine 250 can be configured to receive a knowledge display template selected by the KTC 255 and determine a set of questions, based on the knowledge display template, for reducing a result set. The set of factor used to select a knowledge display template can be used to guide the selection or determination of the set of questions.”; claim 14, “generating, based on the relationship, a question associated with the first factor to reduce a number of element in the set of candidate information; reducing the number of elements in the set of candidate information based on a response to the question to generate a reduced set of candidate information;”);
applying a second trained model based on the modified schema for the project to identify a set of suggested questions for one or more issues associated with the project (paragraph [0044], “A model for the knowledge template classifier can be trained by providing a set of questions (e.g., queries or sentences, as indicated by template request 320) to the word embedding component 325B. Each question can be a potential formulation of a natural language query that a user can issue to an information system. The word embedding component 325B can transform the each question into a vector, where an element of the vector includes one or more words from the sentence.”; paragraph [0045], “[T]he word embedding component 325A, hidden layer 330A, CNN 335A and max pooling component 340A are substantially similar to, respectively, the embedding component 325B, hidden layer 330B, CNN 335B and max pooling component 340B. The received natural language query can therefore be processed in substantially the same way the training sentences to produce a set of features at max pooling component 340A.”);
receiving a second indication accepting one or more suggested questions of the set of suggested questions (paragraph [0045], “A knowledge display template 350 can be selected based on the determined similarity. In some embodiments, for example, the KTC 305 can select, as the knowledge display template 350, a knowledge display template associated with one of the sets of features produced by max pooling 340B that, for example, is most similar to the set of features produced by max pooling component 350A. Other selection criteria based on the determined similarity can be used.”); and
modifying automatically the schema for the project to add parameters of the third type corresponding to the accepted one or more suggested questions (paragraph [0044], “A knowledge display template 350 can be selected based on the determined similarity. In some embodiments, for example, the KTC 305 can select, as the knowledge display template 350, a knowledge display template associated with one of the sets of features produced by max pooling 340B that, for example, is most similar to the set of features produced by max pooling component 350A. Other selection criteria based on the determined similarity can be used.”; paragraph [0048], “The information system can analyze the metadata associated with each file to determine the relationships between the files, the meeting rooms where the files were discussed or accessed, and the topics associated with the files. These relationships are displayed on the knowledge graph 600. Although metadata information related to “user” is available, it is not displayed in the knowledge graph due to the selected knowledge display template. Since there are no files that directly satisfy the natural language query, the information system can generate questions, based on the selected knowledge display template, to refine the natural language query to reduce the result set. The information system can, for example, ask the user: are you sure the file was discussed in room D, or is the file related to Internet of Things (IoT)? The information system can select the questions based on their likelihood of reducing the result set.”),
wherein the first trained model is trained using a training set that includes for each of a plurality of past projects (paragraph [0028], “In some embodiments, selecting the knowledge display template can include training a classification model (e.g., a knowledge template classifier) using a set of training sentences. Each sentence can, for example, indicate a different way of querying the information system for particular data item. In certain embodiments, each sentence can also include, or be annotated with, data or metadata indicating the intent and set of factors associated with a query corresponding to the sentence. In other embodiments, each sentence can include, or be annotated with, data or metadata indicating a particular knowledge graph or knowledge display template that should be used (e.g., based on the user's preference) to render a result set generated for a query corresponding to the sentence. The information system can use training sentences and associated data or metadata to generate classification model using machine learning techniques. The classification model can then be used to select a knowledge display template for rendering a result set for a new natural language query.”), … and an embedding vector generated based on parameters of the first type describing the past project and parameters of the second type describing issues associated with the past project (paragraph [0044], “A model for the knowledge template classifier can be trained by providing a set of questions (e.g., queries or sentences, as indicated by template request 320) to the word embedding component 325B. Each question can be a potential formulation of a natural language query that a user can issue to an information system. The word embedding component 325B can transform the each question into a vector, where an element of the vector includes one or more words from the sentence. 
The hidden layer 330B can the receive the vectors from the word embedding component 325B and, using a neural network having one or more layers, derive weights for elements in the vectors. In some embodiments weights can be derived for the vectors themselves. The convolution neural network (CNN) 335B can receive the weighted vectors and execute one or more logical operations (e.g., filtering operations) to extract a set of features for each sentence and for the set of sentences. In some embodiments, the CNN 335B can extract or determine the most important sentence from the set of sentences or determine the most important words of a given sentence”) but fails to explicitly teach an indication of whether the past project was successfully completed. However, Cantor et al. teaches
receiving, at a computing device, a first set of (enterprise project description) parameters of a first type, the first set of parameters describing an enterprise project (paragraph [0034], “Referring to FIG. 1, an input for project estimation prediction may include a representation of a set of completed tasks 102, if any, belonging to the project to be estimated and a set of attributes associated to those tasks.”);
receiving a second set of (issue) parameters of a second type, the second set of parameters describing a set of issues associated with the enterprise project (paragraph [0034], “Another input may include a set of as-yet incomplete tasks (also referred to as unfinished tasks) 104 belonging to the project to be estimated and a set of attributes associated to those tasks”);
generating the schema for the enterprise project based on the first set of parameters and the second set of parameters (paragraph [0034], “An unfinished task 104, for example, may have attributes (shown at 106) such as the type, the owner, an initial estimate for completion, a due date, a priority status, and other such characteristics.”);
training a first machine-learned model using a first training set that includes for each of a plurality of past enterprise projects: (paragraph [0088], “As the project proceeds and progresses, the methodology of the present disclosure in one embodiment gains information about tasks and can begin to overcome the problems with user estimates using machine learning techniques. Machine learning can be deployed to predict task effort from the evidence that is obtained from already-completed similar tasks. An aspect of learning is determining what similar tasks are. The machine learner uses a training set of examples of completed tasks with their attributes including their actual completion times to build a prediction model.”)
(i) an indication of whether the past enterprise project was successfully completed (paragraph [0041], “The task estimator 108 in considering the one or more completed tasks, if any, for the project to be estimated, may specifically consider the date and time of work performed on the completed tasks.”),
(ii) for each of a plurality of past issues associated with the past enterprise project, an indication of whether the past issue was successfully resolved (paragraph [0075], “The information input to the method and considered in learning may also include information about the history of the task, including the history of changes to the values of task attributes, further including the absolute and relative sequence, timing, frequency, and other measures, properties, and/or patterns of the history of changes to these attribute values. The information about the history of changes to the values of task attributes may include information such as the number of times that a task has been rescheduled, the number of times that a task has been considered completed and then considered not completed, the number of times that the owner of a task has changed, and/or the number of times that any other specific attribute or relationship has changed, among others.”), and
(iii) for each of the plurality of past issues associated with the past enterprise project, an embedding vector generated based on (enterprise project description) parameters of the first type describing the past enterprise project and (issue) parameters of the second type describing the past issue associated with the past enterprise project (paragraphs [0049]-[0074]);
wherein the plurality of past issues are candidate issues for future recommendations (paragraph [0075], “The task estimator 108 of the present disclosure may consider other kinds of information in addition to task attributes. Such other information may be received as input. For instance, the information input to the methodology of the present disclosure and considered in the learning may include information derived or computed from one or more task attributes. The information input to the method and considered in learning may also include information about the history of the task, including the history of changes to the values of task attributes, further including the absolute and relative sequence, timing, frequency, and other measures, properties, and/or patterns of the history of changes to these attribute values.”);
…
applying the trained first machine-learned model to … identify a set of suggested issues for the enterprise project (paragraph [0136], “A GUI may show a dashboard (e.g., FIG. 9) with an area that contains the issue diagnosis section. In one embodiment of the present disclosure, a problem or issue may be diagnosed by identifying patterns. The patterns represent common situations that arise in a project, e.g., in development. The diagnosing of the present disclosure may identify them from the data. The patterns in the data reflect common development issues that can arise in either the product or in the process by which it is developed.”);
receiving a first indication accepting one or more suggested issues of the set of suggested issues (paragraph [0153], “Continuing with the above example scenario, the next thing to do, now that they know what is gone wrong, is to try to find a workable solution. To do that, they may use the “what-if” analysis, e.g., shown in the dashboard (shown in FIG. 9).”);
modifying automatically the schema for the enterprise project to add (issue) parameters of the second type corresponding to the accepted one or more suggested issues (paragraph [0154], “What-if analysis in one embodiment of the present disclosure may simulate the effects of changing a predefined set of parameters on the predictions of delivery date distributions. A set of parameters that can be manipulated for What-If analysis may include, but is not limited to: delivery date by providing a specific data, team velocity by providing a change amount to the current team velocity, scope by deleting, adding or modifying scope commitments in the form of plan items, and other parameters.”);
receiving a third set of (questions) parameters of a third type, the third set of parameters describing one or more questions associated with each issue associated with the enterprise project (paragraph [0155], “The method may also include surfacing the new results of re-computations (the what-if exploration) to a user, for example, using the GUI. The method may further include offering the user a choice to (i) save the what-if exploration, (ii) discard it, (iii) continue with another what-if exploration, or (iv) globally commit as an expert assessment. If the user chooses (i) to continue with another what-if exploration, the steps may be repeated.”);
applying a second trained machine-learned model to the first set of parameters, the second set of parameters, and the third set of (questions) parameters associated with the enterprise project to identify a set of suggested questions for one or more issues associated with the enterprise project (paragraph [0155], “The method may further include recomputing the predictions of the likelihood of delivery based on the changed parameters. The method may also include re-running the pattern diagnosis method based on the changed parameters. The method may also include surfacing the new results of re-computations (the what-if exploration) to a user, for example, using the GUI. The method may further include offering the user a choice to (i) save the what-if exploration, (ii) discard it, (iii) continue with another what-if exploration, or (iv) globally commit as an expert assessment. If the user chooses (i) to continue with another what-if exploration, the steps may be repeated.”);
performing …one or more actions based on the schema for the enterprise project modified automatically to add the (questions) parameters of the third type (paragraph [0026], “It is intended to enable a project to adapt dynamically to changes in status and operating conditions, thereby allowing outdated plans and expectations to be replaced by new ones that are more attuned to changed circumstances.”; paragraph [0158], “If the user chooses (iv) to globally commit the what-if exploration as an expert assessment then all parameter changes may be committed globally (e.g., visible to all users) to the globally shared state such that any changes in the predictions, newly detected patterns and notifications will surface in the GUI view of all users.”).
This part of Cantor et al. is applicable to the system of Huang et al. as they both share characteristics and capabilities, namely, they are directed to modeling information systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Huang et al. to include the modeling based on past project success and parameterization as taught by Cantor et al. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Huang et al. in order to adapt dynamically to changes in status and operating conditions, thereby allowing outdated plans and expectations to be replaced by new ones that are more attuned to changed circumstances (see paragraph [0026] of Cantor et al.).
Huang et al. in view of Cantor et al. fails to explicitly teach the use of particular embedded vectors, but Wan et al. teaches
generating a project embedding vector based on the first set of parameters describing the enterprise project and the second set of parameters describing the set of issues associated with the enterprise project (paragraph [0006], “The collaborative filtering in relation to issues can use an issues similarity matrix. The issues similarity matrix can be generated by identifying, based on issues data, a predefined number of top keywords associated with the user using a term frequency-inverse document frequency, generating, based on issues data, a plurality of word vectors using a word embedding model, generating a plurality of issue vectors based on the identification of the predefined number of top keywords and the generated plurality of word vectors, and calculating a similarity between each pair of issue vectors to result in the issues similarity matrix.”; paragraph [0031], “For use by the application 210, the transactional schema 112 can be used to store application related tables such as users data, issues data and other data which is highly related with issues management business. As the application is running, there can be update/delete/add actions to the transactional data. A regular task can be scheduled to replicate the users data and issues data to another schema which is recommendation schema 114. In some cases, the two schemas 112, 114 can be hosted on different database nodes (in this case of a distributed database system) so as to ensure that performance of the application 210 is not impacted by the issue recommendation computing”);
applying the trained first machine-learned model to the project embedding vector and to the embedding vector of each candidate issue in the first training set to generate an issue recommendation score for each candidate issue (paragraph [0007], “The collaborative filtering in relation to users can use a users similarity matrix. The users similarity matrix can be generated by generating a plurality of user profile vectors and calculating a similarity between each pair of user profile vectors to result in the users similarity matrix. Each user profile vector can be generated by concatenating a user keywords vector with a user basic vector.”), ranking the candidate issues based on the issue recommendation scores (paragraph [0008], “The user keywords vector can be derived from a user actions log which comprises information comprising a user identification (ID), issue ID, a type of action associated with the issue, a timestamp for the action, and a duration of the action. Each issue can be scored based on the user actions.”), and identifying, based on the ranked candidate issues, a set of suggested issues for the enterprise project (paragraph [0037], “FIG. 3 is a diagram 300 illustrating information used to generate an issues profile 140. Issues data 310 can be replicated from the transaction schema 112 and can, include information/attributes such as Subject which is the subject of the issue, Category which is the category associated with the issue, and Content which provides additional information about the issue. Other attributes can be included, for example, submitter, date, priority etc.”; paragraph [0040], “For each issue, there is TF-IDF score for each word. These scores can be used to obtain, for example, the top 10 keywords 320 which are weighted as top 10, so each issue can have an associated top 10 keywords. It will be appreciated that different number of top keywords can be identified based on user preference.”),
performing a competency recommendation operation on the schema for the enterprise project modified automatically to add the parameters of the third type, wherein the competency recommendation operation comprises applying a trained competency recommendation model to the modified schema to generate a set of recommended competencies for one or more workflows, issues, or questions of the enterprise project (paragraph [0035], “The recommendation schema 114 can also include hot issues which comprise analysis results from user actions data; which can characterize or otherwise reflect the top hot issues in the last few days. These hot issues can comprise, for example, ALS/CF retrieved issues and LR ranked issues. All of such issues can have their own table in the recommendation schema 114.”; paragraph [0036], “Finally with DeepFM model, there are recommended issues; and with SPARK Streaming, there are similar issues based on user clicks. These recommended and similar issues can be loaded into the memory of the in-memory database 110 for rapid access. The issues can be read and displayed in application UI. These were the final results of recommendation and included in the recommendation schema 114.”).
This part of Wan et al. is applicable to the system of modified Huang et al. as they both share characteristics and capabilities, namely, they are directed to modeling information systems for issues and predictions. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of modified Huang et al. to include the modeling based on embedded project vectors such as user and issue vectors as taught by Wan et al. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Huang et al. in order to accelerate the identification of related, closed issues, an important component in issue management systems, as doing so allows for the leveraging of existing knowledge when addressing open issues which, in turn, allows for more rapid issue resolution (see paragraph [0002] of Wan et al.).
With regards to claims 2, 11, and 20, Huang et al. teaches: the first trained model and the second trained model are trained using a training set including a plurality of past enterprise projects (paragraph [0020], “A core objective of machine learning techniques is to enable a machine to generalize, or make accurate predictions, from past experiences. The learning varies by the particular machine learning algorithm or technique used, but a common thread of the techniques is to ingest or analyze a large corpus of data (e.g., images, sentences, phrases, books, and other text or data) and build models or adapt algorithms to enable a machine to make statistically reliable or accurate predictions on new data.”; paragraph [0028], “In some embodiments, selecting the knowledge display template can include training a classification model (e.g., a knowledge template classifier) using a set of training sentences.”).
With regards to claims 3 and 12, Huang et al. teaches: the first trained model and the second trained model are trained based on embedding vectors for each past enterprise project of the plurality of past enterprise projects, and an indication of whether the past enterprise project was successfully completed (paragraph [0044], “Each question can be a potential formulation of a natural language query that a user can issue to an information system. The word embedding component 325B can transform the each question into a vector, where an element of the vector includes one or more words from the sentence. The hidden layer 330B can the receive the vectors from the word embedding component 325B and, using a neural network having one or more layers, derive weights for elements in the vectors.”).
With regards to claims 4 and 13, Huang et al. teaches: the embedding vector for each enterprise project is determined by determining a parameter embedding vector for each parameter of the first type included in the schema for the enterprise project, each parameter of the second type included in the project, and each parameter of the third type included in the enterprise project, and combining the parameter embedding vectors (paragraph [0044], “A model for the knowledge template classifier can be trained by providing a set of questions (e.g., queries or sentences, as indicated by template request 320) to the word embedding component 325B. Each question can be a potential formulation of a natural language query that a user can issue to an information system. The word embedding component 325B can transform the each question into a vector, where an element of the vector includes one or more words from the sentence. The hidden layer 330B can the receive the vectors from the word embedding component 325B and, using a neural network having one or more layers, derive weights for elements in the vectors. In some embodiments weights can be derived for the vectors themselves. The convolution neural network (CNN) 335B can receive the weighted vectors and execute one or more logical operations (e.g., filtering operations) to extract a set of features for each sentence and for the set of sentences.”).
With regards to claims 5 and 14, Huang et al. teaches: the second set of parameters of the second type are received from a plurality of users associated with the enterprise project (paragraph [0016], “Information systems rely on users to provide queries having sufficient logical and semantic elements to enable the systems to identify or locate requested information.”), and
wherein the method further comprises: presenting the second set of parameters of the second type to a project administrator (paragraph [0022], “The information system can receive the natural language query through a verbal input device (e.g., a microphone associated with the information system can receive a verbal request from a user to “please retrieve and display a file that I discussed with Bob last week”). The information system can also receive the natural language query through an interactive graphical user interface, a text input source, or any other method or device for interfacing with a user.”);
for each parameter of the second set of parameters of the second type: receiving an indication whether to accept or reject the parameter, and responsive to receiving an indication to accept the parameter, modifying the schema for the project to add the parameter of the second type (paragraph [0022], “The information system can receive the natural language query through a verbal input device (e.g., a microphone associated with the information system can receive a verbal request from a user to “please retrieve and display a file that I discussed with Bob last week”. The information system can also receive the natural language query through an interactive graphical user interface, a text input source, or any other method or device for interfacing with a user.”, see also paragraphs [0018], [0019], [0024]); and
wherein the first trained model is applied based at least on the first set of parameters of the first type and the accepted parameters from the second set of parameters of the second type (paragraph [0044], “A model for the knowledge template classifier can be trained by providing a set of questions (e.g., queries or sentences, as indicated by template request 320) to the word embedding component 325B. Each question can be a potential formulation of a natural language query that a user can issue to an information system. The word embedding component 325B can transform the each question into a vector, where an element of the vector includes one or more words from the sentence. The hidden layer 330B can the receive the vectors from the word embedding component 325B and, using a neural network having one or more layers, derive weights for elements in the vectors.”).
With regards to claims 6 and 15, Huang et al. teaches: presenting the second set of parameters to the project administrator comprises:
determining a relevancy score for each parameter of the second set of parameters, the relevancy score determined at least based on one of an output of a trained relevancy model applied based on information associated with the parameter, and a number of users that provided parameters corresponding to a same issue as the parameter (paragraph [0044], “Each question can be a potential formulation of a natural language query that a user can issue to an information system. The word embedding component 325B can transform the each question into a vector, where an element of the vector includes one or more words from the sentence. The hidden layer 330B can the receive the vectors from the word embedding component 325B and, using a neural network having one or more layers, derive weights for elements in the vectors. In some embodiments weights can be derived for the vectors themselves. The convolution neural network (CNN) 335B can receive the weighted vectors and execute one or more logical operations (e.g., filtering operations) to extract a set of features for each sentence and for the set of sentences.”).
With regards to claims 7 and 16, Huang et al. teaches: the third set of parameters of the third type are received from a plurality of users associated with the enterprise project (paragraph [0016], “Information systems rely on users to provide queries having sufficient logical and semantic elements to enable the systems to identify or locate requested information.”), and
wherein the method further comprises: presenting the third set of parameters of the third type to a project administrator (paragraph [0022], “The information system can receive the natural language query through a verbal input device (e.g., a microphone associated with the information system can receive a verbal request from a user to “please retrieve and display a file that I discussed with Bob last week”. The information system can also receive the natural language query through an interactive graphical user interface, a text input source, or any other method or device for interfacing with a user.”, see also paragraphs [0018], [0019], [0024]); for each parameter of the third set of parameters of the third type:
receiving an indication whether to accept or reject the parameter, and responsive to receiving an indication to accept the parameter, modifying the schema for the enterprise project to add the parameter of the third type (paragraph [0022], “The information system can receive the natural language query through a verbal input device (e.g., a microphone associated with the information system can receive a verbal request from a user to “please retrieve and display a file that I discussed with Bob last week”. The information system can also receive the natural language query through an interactive graphical user interface, a text input source, or any other method or device for interfacing with a user.”, see also paragraphs [0018], [0019], [0024]); and
wherein the second trained model is applied based at least on the first set of parameters of the first type, a subset of the second set of parameters of the second type, and the accepted parameters from the third set of parameters of the third type (paragraph [0044], “A model for the knowledge template classifier can be trained by providing a set of questions (e.g., queries or sentences, as indicated by template request 320) to the word embedding component 325B. Each question can be a potential formulation of a natural language query that a user can issue to an information system. The word embedding component 325B can transform the each question into a vector, where an element of the vector includes one or more words from the sentence.”; paragraph [0045], “[T]he word embedding component 325A, hidden layer 330A, CNN 335A and max pooling component 340A are substantially similar to, respectively, the embedding component 325B, hidden layer 330B, CNN 335B and max pooling component 340B. The received natural language query can therefore be processed in substantially the same way the training sentences to produce a set of features at max pooling component 340A.”).
With regards to claims 8 and 17, Huang et al. teaches: applying the first trained model comprises: generating a project embedding vector for the schema based at least in part on the first set of parameters of the first type and a subset of the second set of parameters of the second type (paragraph [0043], “The knowledge template classifier 305 can be substantially similar to the knowledge template classifier 255 (FIG. 2). The knowledge template classifier 305 can include word embedding component 325A and 325B, hidden layer 330A and 330B, convolutional neural network 335A and 335B, max pooling component 340A and 340B, and similarity component 345.”; paragraph [0044], “The word embedding component 325B can transform the each question into a vector, where an element of the vector includes one or more words from the sentence”); and
applying the first trained model based on the determined project embedding vector (paragraph [0044], “A model for the knowledge template classifier can be trained by providing a set of questions (e.g., queries or sentences, as indicated by template request 320) to the word embedding component 325B. Each question can be a potential formulation of a natural language query that a user can issue to an information system. The word embedding component 325B can transform the each question into a vector, where an element of the vector includes one or more words from the sentence. The hidden layer 330B can the receive the vectors from the word embedding component 325B and, using a neural network having one or more layers, derive weights for elements in the vectors.”).
With regards to claims 9 and 18, Huang et al. teaches: applying the second trained model comprises: generating a project embedding vector for the schema based at least in part on the first set of parameters of the first type, a subset of the second set of parameters of the second type, and a subset of the third set of parameters of the third type (paragraph [0044], “A model for the knowledge template classifier can be trained by providing a set of questions (e.g., queries or sentences, as indicated by template request 320) to the word embedding component 325B. Each question can be a potential formulation of a natural language query that a user can issue to an information system. The word embedding component 325B can transform the each question into a vector, where an element of the vector includes one or more words from the sentence. The hidden layer 330B can the receive the vectors from the word embedding component 325B and, using a neural network having one or more layers, derive weights for elements in the vectors.”); and
applying the second trained model based on the determined project embedding vector (paragraph [0044], “The hidden layer 330B can the receive the vectors from the word embedding component 325B and, using a neural network having one or more layers, derive weights for elements in the vectors. In some embodiments weights can be derived for the vectors themselves. The convolution neural network (CNN) 335B can receive the weighted vectors and execute one or more logical operations (e.g., filtering operations) to extract a set of features for each sentence and for the set of sentences. In some embodiments, the CNN 335B can extract or determine the most important sentence from the set of sentences or determine the most important words of a given sentence. The max pooling component 340B can traverse the set of features extracted by the CNN 335B and extract (e.g., by executing a down-sampling or other filtering operations) the most important features from the set of features.”).
Claims 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2019/0057145 to Huang et al. in view of U.S. Patent Application Publication No. 2013/0325763 to Cantor et al. and U.S. Patent Application Publication No. 2021/0397625 to Wan et al. as applied to claims 1-20 above, and further in view of U.S. Patent Application Publication No. 2009/0024598 to Xie et al.
With regards to claims 21-23, modified Huang et al. fails to explicitly teach generating issue recommendation scores. However, Xie et al. teaches:
generating an embedding vector based on the first set of parameters describing the enterprise project and the second set of parameters describing the set of issues associated with the enterprise project (paragraph [0031], “program product embodiments of the present invention provide an integrated information retrieval (IR) framework where documents can be systematically and dynamically represented as vectors based on the document statistics 302, collection statistics 303 and relevance statistics 301 (see FIG. 3, for example and Equation (5)), which are captured by utilizing the language modeling technique.”);
generating issue recommendation scores for each candidate issue of a set of candidate issues in the training set by comparing an embedding vector of the candidate issue with the embedding vector generated for the enterprise project (paragraph [0045], “Thus according to various embodiments of the present invention, the initial ranking step (element 110, see FIG. 7) comprises estimating the query language model and each of the document language models. In some embodiments, the document language models can be estimated, indexed, and stored offline (such as in a memory device 22 or a data cache 23 thereof) in advance. Then, the language modeling kernel (Equation (5), for example) maps both the query language model and each of the document language models to a vector space, wherein the similarity value of each document language model to the query language model can be used as the retrieval status value (RSVi or iRSV) of the corresponding document.”); and
ranking the set of candidate issues based on the generated issue recommendation scores (paragraph [0048], “Thus according to various embodiments of the present invention, the learning stage (element 120, see FIG. 7) comprises: refining a query language model for the language-modeling kernel (Equation (5), for example) based on relevant documents. The refined (and/or newly calculated) language-modeling kernel then determines a revised vector space. Then, in the new vector space, the learning stage further comprises applying a language-model kernel-based machine learning algorithm (such as SVM, for example) over the feedback documents to find the optimal decision boundary (see element 503, FIG. 5); and finally, using the decision boundary 503 combined with the initial ranking (step 110, for example) to re-rank the documents (see generally, step 124, FIG. 7).”).
This part of Xie et al. is applicable to the system of modified Huang et al. as they both share characteristics and capabilities, namely, they are directed to modeling information systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of modified Huang et al. to include the generation and ranking of issue recommendation scores based on embedding vector comparisons as taught by Xie et al. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify modified Huang et al. in order to provide decision boundaries based on positive and negative feedback (see paragraph [0012] of Xie et al.).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Joshua D Schneider whose telephone number is (571)270-7120. The examiner can normally be reached on Monday - Friday, 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jessica Lemieux can be reached on (571)272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.D.S./Examiner, Art Unit 3626
/JESSICA LEMIEUX/Supervisory Patent Examiner, Art Unit 3626