DETAILED ACTION
This Final Office Action is in response to Applicant's communication filed on
11/07/2025, in which claims 1, 11, and 22 were amended.
Claims 1, 3-4, and 6-22 are currently pending and have been rejected as follows.
Response to Amendments
Applicant's amendments necessitated the new grounds of rejection under 35 U.S.C. 103 set forth below.
Response to Arguments
Applicant's prior art arguments have been fully considered but are moot in view of the newly cited portions of the Kim reference provided below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-4, and 6-22 are rejected under 35 U.S.C. 103 as being unpatentable over the teachings of
Bhattacharyya et al., US 20210065091 A1, hereinafter Bhattacharyya, in view of
Bron et al., WO 2021051031 A1, hereinafter Bron, in further view of
Kim et al., US 20150120782 A1, hereinafter Kim, in further view of
Chakravarthy et al., US 20230367992 A1, hereinafter Chakravarthy, and in further view of
Sehra et al., US 20240020292 A1, hereinafter Sehra.
Regarding Claims 1 and 11:
Bhattacharyya teaches
A system for varying optimization solutions using constraints, the system comprising: at least a processor; a memory communicatively connected to the at least a processor, the memory containing instructions configuring the processor to: /
A method for generating a market analysis plan, the method comprising: (Bhattacharyya [0419]-[0421])
[…];
[…];
[…];
[…];
receive at least a constraint wherein the at least a constraint is categorized using data multiplier wherein the data multipliers are configured to indicate relative importance of the at least a constraint; (Bhattacharyya [0119]; “While the fact-gathering questions can be used to determine specific facts and attributes of the target business, the domain-weighing questions can be used to determine the importance of various domains and variables” noting the data multipliers mapped to the weighing of variables; [0384] “The system may identify one or more challenges based on the retrieved problem-related data and user inputs. For example, the one or more challenges may include budget constraints”)
[…];
[…];
[…];
[…];
[…];
[…];
[…];
[…];
[…];
[…];
determine a visual element data structure as a function of the outlier process, (Bhattacharyya [0286] “The functions 3106 may be function areas corresponding to the processes 3104. The functions may be FPM functions described with respect to process decomposition 2. … an AI or machine learning algorithm is used to determine a function corresponding to a process”)
wherein determining the visual element data structure further comprises: generating a visual element describing the outlier process; and (Bhattacharyya [0093] “Once the system 100 has determined the scores (e.g., initial performance score of the target business, target business domain scores, benchmark performance score of the benchmark competitor, and benchmark competitor domain scores), the system 100 can provide a graphical representation of the results to the user via the website 201. The graphical representation may be in the form of a spider graph, comparative bar graph, or other suitable graphical representation” note the graphical representation generation and types)
displaying the visual element to a user. (Bhattacharyya [0093] “The graphical representation may allow the user (e.g., a business leader at the target business) to view and understand their business' performance and be able to compare the performance of the target business against the benchmark competitor” note the user viewing the visual element)
Bhattacharyya does not explicitly teach the following; however, Bron, in the analogous art of data analysis, teaches
generate an interface query data structure, wherein: the interface query data structure configures a remote device to display an input field to a user; (Bron [0010] “provide an intuitive interface to allow the data scientist to generate a machine learning application without considerable programming experience. A chatbot is able to translate natural language into a structured representation of a machine learning solution using a conversational interface.” Note the intuitive conversational interface corresponding to an interface query data structure and suggesting an input field displayed to a user. See fig. 1. Note the User(s) 116 remote from the model composition engine 132.)
the interface query data structure configures the remote device to receive at least a user-input datum from the input field; (Bron [0062] “A model composition engine 132 can be executed on one or more computing systems (e.g., infrastructure 128). The model composition engine 132 can receive inputs from a user 116 through an interface 104.” Note the inputs from a user)
generate an interface query data structure recommendation as a function of the at least a user-input datum, wherein the interface query data structure recommendation comprises at least a modification of data from a previously presented interface query data structure; (Bron [0101] “the second user input can specify a type of problem that the user would like to implement machine learning to solve. … the problems can be entered as native language speech or text … The technique can decipher the native language to understand the goals of the machine-learning model … The techniques will recognize one or more keywords in the native language speech to recommend or select a particular machine-learning algorithm.” Note the employed techniques to process native language speech user input to recommend a ML algorithm for the user’s objective corresponding to a modification from a previously presented interface query)
identify a plurality of nodes, using the interface query data structure, wherein the plurality of nodes comprises the user-input datum; (Bron fig. 4 noting the interactive bot system; [0127] “an analytic system may be integrated with a bot system. The analytic system can gather conversation logs and history, and determine information related to individual and/or aggregated end user conversations with a bot system as paths that include different nodes representing different stages or states of the conversations.” Noting the conversations/user inputs between the user and bot system as a plurality of nodes)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Bhattacharyya’s solution optimizer to include interface queries and modification of user responses in view of Bron in an effort to provide an intuitive conversational interface to generate a machine learning application (see Bron ¶ [0010] & MPEP 2143(I)(G)).
Bhattacharyya / Bron do not explicitly teach the following; however, Kim, in the analogous art of data analysis, teaches
locate in the plurality of nodes an outlier cluster; (Kim fig. 14; [0114] “As shown in FIG. 14, an outlier cluster 1401 is identified relative to a main cluster 1402 in the topic network 1301.”)
wherein locating in the plurality of nodes an outlier cluster comprises: identifying a target process; (Kim [0112] “To accomplish noise reduction, in an example embodiment, the server uses a network community detection algorithm called Modularity to identify and filter these types of outlier clusters in the topic queries”)
inputting the target process into an impact metric machine learning model; (Kim [0121] “At block 1501, the server 100 applies a community-finding algorithm to the topic network to decompose the network into communities. Non-limiting examples of algorithms for finding communities include the Minimum-cut method, Hierarchical clustering, the Girvan-Newman algorithm, the Modularity algorithm referenced above, and Clique-based methods;” [0192])
inputting the plurality of nodes into the impact metric machine learning model; (Kim [0155] “The topic network visually illustrates relationships among the nodes a set of users (U.sub.T) each represented as a node in the topic network graph and connected by edges to indicate a relationship (e.g. friend or follower-followee, or other social media interconnectivity) between two users within the topic network graph. At block 1602, the server obtains a pre-defined degree or measure of internal and/or external interconnectedness (e.g. resolution) for use in defining the boundary between communities;” [0192])
determining an impact metric as a function of the training data from the impact metric machine learning model; (Kim [0156] “At block 1603, the server is configured to calculate scoring for each of the nodes (e.g. influencers) and edges according to the pre-defined degree of interconnectedness (e.g. resolution);” [0192])
identifying an external plurality of nodes, inputting the external plurality of nodes into the impact metric machine learning model, receiving an external impact metric from the impact metric machine learning model; and (Kim [0044] “the proposed system and methods can be used to determine that influencers in Topic A are also influencers in one or more other topics (e.g. Topic B, Topic C, etc.)” noting the external plurality of nodes indicated by Topic B and Topic C and the use of the system and methods for those topics as well; [0109] “their influencer score is often high enough to rank in the critical top-ten list” note the external impact metric)
determining an outlier cluster as a function of both the external metric and the impact metric, wherein the impact metric indicates higher aptitude in the attribute cluster than the external impact metric; (Kim [0064] “The server identifies and filters out outlier nodes within the topic network (block 306). The outlier nodes are outlier users that are considered to be separate from a larger population or clusters of users in the topic network. The set of outlier users or nodes within the topic network is represented by U.sub.O, where U.sub.O is a subset of U.sub.T;” fig. 13; [0109] “data from the topic network can be improved by removing problematic outliers;” [0110] “The nodes represent the set of users U.sub.T related to the topic McCafe. Some of the nodes 1302 or users are from the Philippines who are fans of a karaoke bar/cafe of the same name McCafe;” [0111] “thus this sub-network 1302 is considered noise;” [0112] “the server uses a network community detection algorithm called Modularity to identify and filter these types of outlier clusters in the topic queries;” fig. 14; [0114] “As shown in FIG. 14, an outlier cluster 1401 is identified relative to a main cluster 1402 in the topic network 1301” note the cluster’s influencer score/impact metric high enough to rank but deemed noise for the target topic based on the result comparisons)
determine an outlier process as a function of the outlier cluster; (Kim [0113] “It will be appreciated that other types of clustering and community detection algorithms can be used to determine outliers in the topic network. The filtering helps to remove results that are unintended or sought after by a user looking for influencers associated with a topic”)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Bhattacharyya’s solution optimizer and Bron’s interface to include an outlier process in view of Kim in an effort to improve the quality of the results (see Kim ¶ [0072] & MPEP 2143(I)(G)).
Bhattacharyya / Bron / Kim do not explicitly teach the following; however, Chakravarthy, in the analogous art of data analysis, teaches
training the metric machine learning model as a function of training data, wherein the training data comprises historical attribute clusters; (Chakravarthy [0027] “a system (e.g., an electronic system for determining repairs for resource transfers using neural network deep embedded clustering and/or the like) may be configured to train, based on historical data associated with historical resource transfers, a first machine learning model to determine, based on historical attributes of the historical resource transfers, clusters of the historical resource transfers”)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Bhattacharyya’s solution optimizer, Bron’s interface, and Kim’s outlier process to include training data comprising historical attribute clusters in view of Chakravarthy in an effort to reduce consumption of computing resources in erroneous data processing (see Chakravarthy ¶ [0026] & MPEP 2143(I)(G)).
Bhattacharyya / Bron / Kim / Chakravarthy do not explicitly teach the following; however, Sehra, in the analogous art of data analysis, teaches
sanitize, via the processor, the training data, wherein sanitizing the training data comprises removing redundant historical attribute clusters from the training data; (Sehra [0031] “align the pre-processed data by enabling supervised learning to unify or de-duplicate records that have similar naming convention for a given attribute and standardized text”)
retraining the metric machine learning model as a function of the sanitized training data; (Sehra [0032] “enhance accuracy through a user feedback loop by learning from previous prediction and reducing errors in subsequent interactions”)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Bhattacharyya’s solution optimizer, Bron’s interface, Kim’s outlier process, and Chakravarthy’s historical clusters to include sanitizing the training data and retraining the model in view of Sehra in an effort to effectively harmonize data with a target level of quality (see Sehra ¶ [0095] & MPEP 2143(I)(G)).
Claim 3
Bhattacharyya teaches
wherein the constraint comprises at least a user-input. (Bhattacharyya [0384] “The system may identify one or more challenges based on the retrieved problem-related data and user inputs”)
Claim 4
Bhattacharyya teaches
wherein receiving the constraint comprises an interface query data structure wherein the interface query data structure is at least partially based on data describing attributes of a user that is retrieved from a database including categorical information correlated to a historical range of data. (Bhattacharyya [0079] “the system 100 can provide a self-assessment questionnaire to the user. The questionnaire may include questions about the target business and the target business' performance. In some embodiments, the system 100 can dynamically select the questions in the questionnaire as the user completes it. For example, the system 100 can select questions based on the completed answers to the introductory questions (provided in step 311) or to the questionnaire (provided in step 313). The questionnaire may include a series of questions related to the culture, technology, knowledge curation, data strategy, compliance, partner enablement, performance measurement, business processes, and other areas of business strategy;” [0163] “the assessment is based on performance drivers, without limitation, such as cost, quality, and time”)
Claim 6
Bhattacharyya teaches
wherein the impact metric indicates higher aptitude in the plurality of nodes than a population average. (Bhattacharyya [0097] “the system 100 can pre-process (e.g., reformat and clean) the first data 215. In some instances, the first data 215 may not be standardized and may include gaps in the data and/or outliers that can impact the accuracy of processing the first data 215. Embodiments of the disclosure include the system 100 pre-processing the first data 215 to manage outliners, handle missing data, and standardize the first data 215 to be on the same scale (e.g., 1-100, 1-10, etc.), fix structural errors, or a combination thereof”)
Claim 7
Bhattacharyya / Bron / Chakravarthy / Sehra do not explicitly teach the following; however, Kim, in the analogous art of data analysis, teaches
wherein determining the outlier process as a function of the outlier cluster comprises: inputting an outlier cluster in an outlier process machine learning model; (Kim [0112] “To accomplish noise reduction, in an example embodiment, the server uses a network community detection algorithm called Modularity to identify and filter these types of outlier clusters in the topic queries;” [0192])
receiving an outlier process from the outlier machine learning model. (Kim fig. 14; [0114] “As shown in FIG. 14, an outlier cluster 1401 is identified relative to a main cluster 1402 in the topic network 1301;” [0192])
The rationales to modify/combine the teachings of Bhattacharyya / Bron / Chakravarthy / Sehra with the teachings of Kim are presented in the examination of claim 1 above and are incorporated herein.
Claim 8
Bhattacharyya teaches
wherein the memory contains instructions configuring the at least a processor to: determine a visual element as a function of the visual element data structure; and configure a user device to display the visual element to the user. (Bhattacharyya [0417] “The device may be an electronic device, such as a cellular phone, a tablet computer, a laptop computer, or a desktop computer. The device can include a software (e.g., a web browser to access website 201), a display, a touch screen, a transceiver, and storage. The display may be used to present a UI to the user, and the touch screen may be used to receive input from the user. The transceiver may be configured to communicate with the network. Storage may store and access data from the server computer, the database(s), or both”)
Claim 9
Bhattacharyya teaches
wherein the visual element comprises a remote display device is configured to display an input field to the user by a Graphical User Interface (GUI) defined as a point of interaction between the user and the remote display device. (Bhattacharyya [0417] “The device may be an electronic device, such as a cellular phone, a tablet computer, a laptop computer, or a desktop computer. The device can include a software (e.g., a web browser to access website 201), a display, a touch screen, a transceiver, and storage. The display may be used to present a UI to the user, and the touch screen may be used to receive input from the user. The transceiver may be configured to communicate with the network. Storage may store and access data from the server computer, the database(s), or both”)
Claim 10
Bhattacharyya teaches
wherein the visual element data structure categorizes the constraint. (Bhattacharyya [0080] “In step 313, the questions may be categorized into different types of questions, such as fact-gathering questions and domain-weighing questions. In some examples, a user may not be able to distinguish between a fact-gathering question and a domain-weighing question. For example, the fact-gathering questions and domain-weighing questions may be phrased in a similar manner and formatted to receive answers in a similar manner. The system 100 may associate each question with the appropriate category and save this categorization into the second database 207”)
Claim 12
Bhattacharyya teaches
wherein the user data comprises competitor data. (Bhattacharyya [0087] “the system 100 may determine information (e.g., including data) about one or more competitors in the same industry as the target business. In some embodiments, the industry may be one identified by the target company based on answers received from introductory questions (in step 311) and/or from a self-assessment questionnaire (step 313)”)
Claim 13
Bhattacharyya teaches
wherein the competitor data comprises data related to at least an action related to an associated market. (Bhattacharyya [0087] “the system 100 may determine a business is a competitor based on one or more attributes (e.g., size, demography, location, etc.) similar to the target business”)
Claim 14
Bhattacharyya teaches
wherein an interface query data structure is at least partially based on data describing attributes of a user that are retrieved from a database including categorical information correlated to a historical range of data. (Bhattacharyya [0079] “the system 100 can provide a self-assessment questionnaire to the user. The questionnaire may include questions about the target business and the target business' performance. In some embodiments, the system 100 can dynamically select the questions in the questionnaire as the user completes it. For example, the system 100 can select questions based on the completed answers to the introductory questions (provided in step 311) or to the questionnaire (provided in step 313). The questionnaire may include a series of questions related to the culture, technology, knowledge curation, data strategy, compliance, partner enablement, performance measurement, business processes, and other areas of business strategy;” [0163] “the assessment is based on performance drivers, without limitation, such as cost, quality, and time”)
Claim 15
Bhattacharyya teaches
wherein a remote display device is configured to display an input field to a user by a Graphical User Interface (GUI) defined as a point of interaction between the user and the remote display device. (Bhattacharyya [0417] “The device may be an electronic device, such as a cellular phone, a tablet computer, a laptop computer, or a desktop computer. The device can include a software (e.g., a web browser to access website 201), a display, a touch screen, a transceiver, and storage. The display may be used to present a UI to the user, and the touch screen may be used to receive input from the user. The transceiver may be configured to communicate with the network. Storage may store and access data from the server computer, the database(s), or both”)
Claim 16
Bhattacharyya teaches
wherein an achievement plan is iteratively updated as a function of an achievement machine learning model. (Bhattacharyya [0290] “The pain-point and solution data may be stored in a database and may be configured to be updated. For example, the pain-point and solution data includes knowledge and inputs from SME and the knowledge and inputs are updated over time, as more information is being provided to the pain-point and solution data;” [0291] “The SME's knowledge and input may be processed by AI or machine learning algorithms to advantageously increase the value of the knowledge and the input. For example, AI or machine learning algorithms may expand the applicability of the knowledge and input”)
Claim 17
Bhattacharyya teaches
wherein generating an achievement plan comprises generating at least an action item. (Bhattacharyya [0065] “Embodiments of this disclosure relate to a system for improving multiple areas, such as strategy, operations, risk management, and regulation compliance, of a target business … The system 100 may provide different functionalities to achieve these improvements.”)
Claim 18
Bhattacharyya teaches
wherein generating a goal report comprises a goal report machine learning model. (Bhattacharyya [0349] “FIG. 39 illustrates an exemplary problem statement UI 3900. As illustrated, the problem statement details 3910 may include an adopted problem statements 3912, goal 3914, description 3916, and network of influence 3918 (described in more detail below). The problem statement 3912 may reflect the adopted problem statement. In some embodiments, the user may provide the system 100 with the goal(s) and description(s). In some embodiments, the system may generate the goal(s) and description(s). In some embodiments, the system may update user-provided goal(s) and description(s);” [0317] “In some examples, an AI or machine learning algorithm is used to determine a solution KPI corresponding to an identified solution;” fig. 39 noting the KPIs 3926)
Claim 19
Bhattacharyya teaches
wherein identifying at least an improvement datum comprises comparing the at least a user-input data to a pre-defined threshold. (Bhattacharyya [0407] “[I]f the determined desirability score disagrees with the received desirability score, then the system 100 may ask at step 5811 … is less than or equal to an expected desirability score.”)
Claim 20
Bhattacharyya teaches
wherein the pre-defined threshold comprises data associated with an achievement plan. (Bhattacharyya [0407] “The expected desirability score may be based on the received desirability score (e.g., from a user) or a predetermined desirability index”)
Claim 21
Bhattacharyya teaches
wherein the visual element data structure categorizes the constraint. (Bhattacharyya [0080] “In step 313, the questions may be categorized into different types of questions, such as fact-gathering questions and domain-weighing questions. In some examples, a user may not be able to distinguish between a fact-gathering question and a domain-weighing question. For example, the fact-gathering questions and domain-weighing questions may be phrased in a similar manner and formatted to receive answers in a similar manner. The system 100 may associate each question with the appropriate category and save this categorization into the second database 207”)
Claim 22
Bhattacharyya teaches
wherein displaying the visual element to the user further comprises displaying a comparison of the outlier process to the target process. (Bhattacharyya [0093] “Once the system 100 has determined the scores (e.g., initial performance score of the target business, target business domain scores, benchmark performance score of the benchmark competitor, and benchmark competitor domain scores), the system 100 can provide a graphical representation of the results to the user via the website 201. The graphical representation may be in the form of a spider graph, comparative bar graph, or other suitable graphical representation” note the graphical representation generation including comparison to competitors)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20230026782 A1; WO 2022056529 A1; Glen et al., Interactive Architectural Design with Diverse Solution Exploration, 2019.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED EL-BATHY whose telephone number is (571)270-5847. The examiner can normally be reached on M-F 8AM-4:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PATRICIA MUNSON can be reached on (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMED N EL-BATHY/Primary Examiner, Art Unit 3624