DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. The amendment filed on 01/21/2026 has been received and fully considered.
3. Claims 1-15 are presented for examination.
Response to Arguments
4. Applicant's arguments filed 01/21/2026 have been fully considered but they are moot in view of the new grounds of rejection.
Claim Rejections - 35 USC § 103
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claim(s) 1-5, 8-13 are rejected under 35 U.S.C. 103 as being unpatentable over Neema et al. (Architecture Exploration in the META Toolchain, 40 pages (2015)), in view of Hall et al. (USPG_PUB No. 2020/0302019 A1), further in view of Lecue et al. (USPG_PUB No. 2019/0325868 A1).
6.1 In considering claims 1 and 9, Neema et al. teaches a system for conversational dialog in engineering systems design, comprising:
a processor; and a memory having stored thereon modules executed by the processor (see fig.1, design space tool which includes the processor and memory), the modules comprising: a design bot configured to generate a design dashboard on a graphical user interface that presents a textual representation of system design view information with a rendering of system design view components (see the “Design space and manipulation tool” for the exploration and visualization of design space, which allows the user to model optionality and composition of multiple components and component assemblies. See the upper portion of page 2: The OpenMETA tool chain provides unique capabilities in this respect and incorporates a comprehensive suite of methods for design space exploration such as discrete combinatorial design space exploration, parametric design analysis, simulation-based design metric evaluation, and dashboard for design space metric visualization. At page 8, CyPhyML captures the concepts of design models in various CPS domains, specifies how these concepts are organized and related, and specifies the rules governing their composition. OpenMETA consists of a number of model interpreters and analysis tools, which can be used to generate system and analysis artifacts from system designs, and perform various structural and dynamic analyses), the dashboard ….
configured to receive a plain text string conveying a user request for a system design view, the system design view comprising a view of system elements and properties of the system elements (see fig.9 at page 14, section 4.1: The DESERT tool allows the user to manage constraints and see the corresponding configurations - the edit button makes it clear that plain strings may be received to request corresponding configurations; moreover there is a "view/select" option to choose components in figure 9 as seen in section 4.1, list element "View/Select"; this, however, does not require a plain text input. Additionally, the DSRefactorer shown in section 4.3, paragraph 1, allows the user to, e.g., replace components with alternative design containers that have multiple choices; as seen in paragraph 3, the user selects components and then requests the generation of new design elements. Additionally, according to section 4.4, the DSRefiner allows generating a refined design space that can be reasoned with in the same way as the original design space, as seen in figure 12. Additionally, as seen in section 4.5, the DSCriticalityMeter allows for further refinement of components depending, e.g., on the number of configurations for a component; here the user may choose to refine any element, e.g., according to said configurations. Additionally, in section 4.6 a "Component Library Manager" is mentioned which "helps to discover and insert different instances of the same component types into an alternative design container"; as seen in section 5.0, second paragraph, this can be used); retrieve system design view information from a design repository (see page 4, The generated configurations are subjected to dynamic analyses for evaluation against the secondary requirements. The result of these detailed system analyses in terms of valid design selections and reformulations must be incorporated into the original design space, which must be re-explored to generate a new set of valid design configurations.
fig.9, design space results from applied constraints.); and generate a plain text string response to the user request conveying system design information relevant to the system design, the plain text response displayed in the dialog box (see fig.9, page 4, The generated configurations are subjected to dynamic analyses for evaluation against the secondary requirements. The result of these detailed system analyses in terms of valid design selections and reformulations must be incorporated into the original design space, which must be re-explored to generate a new set of valid design configurations.). While Neema et al. does not specifically state that a plain text dialog box feature is used, or that the design bot is further configured to: translate the plain text of the user request to a vectorized contextual user request using context defined for design activity goals with respect to elements of the system design, wherein the vectorized contextual user request extracts relevant context based on machine learning of previous user requests, it is noted by the Examiner that using plain text to communicate is well-known, as Neema et al. further provides for using text in the dialogs shown in fig. 9.
Nonetheless, Hall et al. teaches the use of text input (see para [0075], The AI-based interactive dialog bot training and communications system 100 may also provide speech-to-text or text-to-speech techniques, as well as other multimodal ways to create and interact with users—internal, external, or otherwise), including a design bot is configured to: translate the plain text of the user request to a vectorized contextual user request using context defined for design activity goals with respect to elements of the system design, wherein the vectorized contextual user request extracts relevant context based on machine learning of previous user requests (see para [0027] For example, the context preprocessor 106 may examine the text of the query (e.g., request 102) from a user or editor interacting with a dialog bot to extract useful features of the request 102. [0030] Once the communication or query has been assigned an embedding or numerical vector, the model-based action generator 112 and/or the memory-based action generator 114 may work in parallel to prepare a reply to the request 102. [0031] The model-based action generator 112 may take the embedded user query (e.g., numerical value/vector) and compare it to embedded values of action content. With the help of the content manager 110, the model-based action generator 112, for example, may find similarities with action content that editors have previously written. [0037] With regard to the contextual embedding model, user conversational content may be converted to a numeral vector or value, which may be used in a similarity comparison, as described above.). Hall further teaches the processor and memory of the claim (see fig.1, abstract, para [0023], [0025], the system may comprise a memory storing machine readable instructions. The system may also comprise a processor to execute the machine- readable instructions to receive a request via an artificial conversational entity).
Neema et al. and Hall et al. are analogous art because they are from the same field of endeavor and the model analysis of Hall et al. is similar to that of Neema et al. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Hall et al. with that of Neema et al. because Hall et al. teaches reducing development effort, improving functionality, enabling cost and time effectiveness, and increasing customer retention and engagement (see para [0022]).
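For illustration only (this sketch is not part of the claimed invention or the cited references' disclosures), the contextual embedding mapping discussed above - assigning a numerical vector to a preprocessed plain-text request, cf. Hall et al. paras [0029]-[0031], [0037] - can be sketched as follows. The hashed bag-of-words scheme, the function name `embed_request`, and the dimensionality `DIM` are all hypothetical stand-ins for a learned embedding model:

```python
import hashlib
import math

DIM = 8  # hypothetical embedding dimensionality

def embed_request(text: str) -> list[float]:
    """Map a plain-text request to a unit-length numerical vector
    via a simple hashed bag-of-words (a stand-in for a learned model)."""
    vec = [0.0] * DIM
    for token in text.lower().split():
        # hash each token to a bucket and count occurrences
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    # normalize so downstream similarity comparisons are scale-invariant
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

request_vec = embed_request("show me the design view for the powertrain")
```

The resulting vector can then be compared against embedded action content, as Hall et al. describes for the model-based action generator.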
However, the combination does not specifically state that the system design information is configured as a knowledge graph. Lecue et al. teaches the system design information being configured as a knowledge graph with vectorized nodes (see para [0006], The conversation knowledge graph includes a plurality of first nodes that each correspond to a concept discussed in the conversation. A portion of the conversation knowledge graph is selected and merged with a domain knowledge graph into a merged knowledge graph based on the identification of the state change in the conversation. The domain knowledge graph includes a plurality of second nodes that each correspond to at least one of the features for each of the robotic agents. The merged knowledge graph includes a percentage matching value between a portion of the first nodes and a portion of the second nodes. Further para [0047], In some implementations, knowledge-graph embedding involves embedding the components (e.g., the nodes and the edges between them) of a knowledge graph into continuous vector spaces, to simplify the manipulation while preserving the inherent structure of the knowledge graph).
Neema et al., Hall et al., and Lecue et al. are analogous art because they are from the same field of endeavor and the model analysis of Lecue et al. is similar to that of Neema et al. and Hall et al. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Lecue et al. with that of Neema et al. and Hall et al. because Lecue et al. teaches allowing communications with ease (see para [0046]).
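For illustration only (the graph contents, helper names, and threshold below are hypothetical and not drawn from the record), knowledge-graph embedding in the sense of Lecue et al. para [0047] - nodes embedded into a continuous vector space, with a percentage matching value between node sets - can be sketched as:

```python
# Hypothetical design knowledge graph: each node carries a vector embedding.
nodes = {
    "battery":  [0.9, 0.1, 0.0],
    "motor":    [0.1, 0.9, 0.2],
    "inverter": [0.2, 0.8, 0.3],
}
edges = [("battery", "inverter"), ("inverter", "motor")]

def cosine(a, b):
    """Cosine similarity between two node embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def matching_percentage(g1, g2, threshold=0.9):
    """Percentage of cross-graph node pairs whose embeddings are similar,
    loosely analogous to Lecue's percentage matching value."""
    pairs = [(a, b) for a in g1 for b in g2]
    matches = sum(1 for a, b in pairs if cosine(g1[a], g2[b]) >= threshold)
    return 100.0 * matches / len(pairs)
```

Embedding the nodes this way preserves the graph structure while allowing the similarity arithmetic that the combined references rely on for retrieval and merging.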
6.2 As per claims 2 and 10, the combined teachings of Neema et al., Hall et al., and Lecue et al. teach wherein information stored in the design repository is formatted as vectorized objects, wherein the design bot is further configured to retrieve the system design information by comparing the vectorized user request with vectorized objects and retrieving objects with the shortest distance to the vectorized request (see Hall et al. para [0029] The embedding model subsystem 108 may receive the communication or query from the context preprocessor 106 and associate a numerical vector to the preprocessed request context. [0031] The model-based action generator 112 may take the embedded user query (e.g., numerical value/vector) and compare it to embedded values of action content. Similarities may be determined using a similarity comparison technique. [0033] The memory-based action generator 114 may also take the embedded user query (e.g., numerical value/vector) and compare it to “lessons” stored in memory, including the content manager 110 or memory management subsystem 116. [0037] With regard to the contextual embedding model, user conversational content may be converted to a numeral vector or value, which may be used in a similarity comparison, as described above. [0044] If a close key cannot be found with the exact same action in the memory, a similarity between the running context with the closest key or shortest distance is above a threshold (such as 0.9) may be determined). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Lecue et al. with that of Neema et al. and Hall et al. because Lecue et al. teaches allowing communications with ease (see para [0046]).
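For illustration only (the repository contents and vectors below are hypothetical), retrieval by "shortest distance to the vectorized request" as cited above (cf. Hall et al. paras [0031], [0044]) can be sketched as:

```python
import math

# Hypothetical design repository: each object stored as a vectorized object.
repository = {
    "chassis view":    [1.0, 0.0, 0.0],
    "powertrain view": [0.0, 1.0, 0.0],
    "thermal view":    [0.0, 0.0, 1.0],
}

def euclidean(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(request_vec, repo):
    """Return the repository object whose vector lies at the shortest
    distance from the vectorized user request."""
    return min(repo, key=lambda name: euclidean(request_vec, repo[name]))

best = retrieve([0.1, 0.9, 0.1], repository)  # nearest stored object wins
```

A distance threshold (Hall et al.'s example value of 0.9, expressed as a similarity) could be added before returning, so that requests with no close match fall through to other handling.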
6.3 With regards to claims 3 and 11, the combined teachings of Neema et al., Hall et al., and Lecue et al. teach wherein the dialog box feature is configured to receive a voice command conveying a user request for a system design view (see Hall et al. para [0075], The AI-based interactive dialog bot training and communications system 100 may also provide multilingual support, which allows creation of and interaction with dialog bots in a global platform. The AI-based interactive dialog bot training and communications system 100 may also provide speech-to-text or text-to-speech techniques, as well as other multimodal ways to create and interact with users—internal, external, or otherwise. Smart integration may also give dialog bot ability to provide informed responses based on a wealth of various data sources, such as existing customer website, documents, various databases, 3.sup.rd party ticketing systems, social media, etc.), the system further comprising: an automatic speech recognition component configured to convert the voice command to digital text data (see Hall et al. para [0075], id.); and a natural language understanding component configured to extract linguistic meaning of the user request from the digital text data (see Hall et al.
para [0075], In some examples, natural language processing (NLP) may provide human-like conversations and understanding. The AI-based interactive dialog bot training and communications system 100 may also provide dialog bots with interactive user interfaces that provide a seamless user experience. [0028] In some examples, the context preprocessor 106 may extract context of the query using a variety of data processing techniques. One technique may include caching, which allows the context preprocessor 106 to “look” at request 102 and extract key components of the query. It should be appreciated that natural language processing (NLP), or other data processing techniques, may also be used to parse the query. For example, NLP may be used to analyze, understand, and derive meaning from human language from the query. In other words, NLP may leverage AI to enable the context preprocessor 106 to provide or extract context from the request 102. It should be appreciated that NLP may involve defining entities and a variety of NLP and non-NLP techniques may be employed at the AI-based interactive dialog bot training and communications system 100.); wherein the design bot is further configured to retrieve the system design view data based on the linguistic meaning of the user request (see Hall et al. para [0028] In some examples, the context preprocessor 106 may extract context of the query using a variety of data processing techniques. One technique may include caching, which allows the context preprocessor 106 to “look” at request 102 and extract key components of the query. It should be appreciated that natural language processing (NLP), or other data processing techniques, may also be used to parse the query. For example, NLP may be used to analyze, understand, and derive meaning from human language from the query. In other words, NLP may leverage AI to enable the context preprocessor 106 to provide or extract context from the request 102.
It should be appreciated that NLP may involve defining entities and a variety of NLP and non-NLP techniques may be employed at the AI-based interactive dialog bot training and communications system 100.). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Lecue et al. with that of Neema et al. and Hall et al. because Lecue et al. teaches allowing communications with ease (see para [0046]).
6.4 Regarding claims 4 and 12, the combined teachings of Neema et al., Hall et al., and Lecue et al. teach the multimodal dialog manager configured to construct a dialog structure in a logical container as elements for mapping contextualization using a machine learning process that records received data requests and predicts which design activity context relates to the respective data request according to a probability distribution (see Neema et al. section 3.1; further Hall et al. para [0075], The AI-based interactive dialog bot training and communications system 100 may also provide speech-to-text or text-to-speech techniques, as well as other multimodal ways to create and interact with users—internal, external, or otherwise. Smart integration may also give dialog bot ability to provide informed responses based on a wealth of various data sources, such as existing customer website, documents, various databases, 3.sup.rd party ticketing systems, social media, etc. see further [0100]-[0101]). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Lecue et al. with that of Neema et al. and Hall et al. because Lecue et al. teaches allowing communications with ease (see para [0046]).
6.5 As per claims 5 and 13, the combined teachings of Neema et al., Hall et al., and Lecue et al. teach wherein the dialog structure comprises: a set of contexts, each context representing a design activity context (see Neema et al. fig.6, 9 and section 3.3-3.3.1, Contextual Non-linear OCL Constraints Contextual Non-linear constraints are written textually in OCL format and are associated with the container it contains in the context with which it must be satisfied. Figure 3 depicts an example of the Context constraint.), wherein each context groups a set of subgoals, each subgoal being an element in a context and reflecting a single step of a use case (see Neema et al. fig.6, 9 and section 3.3), and each context comprising a set of slot values as candidate values for each subgoal, the slot values being global for the context for sharing among the subgoals of the same context (see Neema et al. fig.6, 9, section 4.6, Supporting META Tools Several other interpreter components exist in OpenMETA that are associated with DSE. The Component Authoring Tool provides importing capability from various domains (e.g. CAD, Modelica) into OpenMETA tools. After the component library is populated the Component Library Manager helps to discover and insert different instances of the same component types into an alternative design container. Once components and subsystems are composed in a design space and design configurations are exported the Master Interpreter automates the translation of all designs into executable domain specific models.). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Lecue et al. with that of Neema et al. and Hall et al. because Lecue et al. teaches allowing communications with ease (see para [0046]).
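For illustration only (the contexts, subgoals, and slot values below are hypothetical and not taken from the claims or references), the claimed dialog structure - contexts grouping subgoals, with slot values global to a context and shared among its subgoals - can be sketched as a plain data structure:

```python
# Hypothetical dialog structure: contexts group subgoals; slot values are
# defined once per context and shared by all subgoals of that context.
dialog_structure = {
    "contexts": {
        "component_selection": {                            # a design activity context
            "subgoals": ["choose_motor", "choose_battery"],  # single use-case steps
            "slots": {"voltage": ["48V", "400V"]},           # global to the context
        },
        "constraint_editing": {
            "subgoals": ["add_constraint"],
            "slots": {"metric": ["mass", "cost"]},
        },
    }
}

def slots_for(structure, context, subgoal):
    """Slot values are shared among subgoals of the same context,
    so any subgoal of a context sees the same candidate values."""
    ctx = structure["contexts"][context]
    assert subgoal in ctx["subgoals"]
    return ctx["slots"]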
6.6 With regards to claim 8, the combined teachings of Neema et al., Hall et al., and Lecue et al. teach the multimodal dialog manager configured to construct a dialog structure in a logical container as elements for mapping contextualization using a rule-based learning process that records received data requests and applies defined rules based on recognized user intent or system entity (see Neema et al. fig.2, page 6, CyPhyML captures the concepts of design models in various CPS domains, specifies how these concepts are organized and related, and specifies the rules governing their composition. OpenMETA consists of a number of model interpreters and analysis tools, which can be used to generate system and analysis artifacts from system designs, and perform various structural and dynamic analyses. Any DSML requires precise specification of the language’s syntax and semantics. Figure 2 provides a simplified view of the design space part of the CyPhyML metamodel. As shown, the central modeling element in the language is called a DesignContainer). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Lecue et al. with that of Neema et al. and Hall et al. because Lecue et al. teaches allowing communications with ease (see para [0046]).
7. Claim(s) 6-7 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Neema et al. (Architecture Exploration in the META Toolchain, 40 pages (2015)), in view of Hall et al. (USPG_PUB No. 2020/0302019 A1), further in view of Lecue et al. (USPG_PUB No. 2019/0325868 A1), and further in view of Sequeira et al. (USPG_PUB No. 2020/0320435).
7.1 As per claims 6-7 and 14-15, the combined teachings of Neema et al., Hall et al., and Lecue et al. teach wherein the dialog structure further comprises: for each context, a subgoal probability distribution specifying how likely each subgoal in the context is to be selected (see Hall et al. para [0035] The response selector 118 may select a response from the above rankers according to their confidence score. Typically, the response 120 with the highest weighted score will be selected. See further Neema et al. table 2, page 19-20, fig.14-15, page 24, Next the detailed analysis is performed for these fully-specified component assemblies. Let’s assume that after analysis, configurations #2 and #3 were selected. Next, we select the cfg2 and cfg3 CWC configuration models in GME and invoke the Design Space Refinement Tool to generate a new refined design space that includes only these two design configurations), wherein the dialog structure further comprises: a context probability distribution for the entire dialog structure specifying how likely any one context is to be selected (Neema et al. table 2, page 19-20; Hall para [0035] The response selector 118 may select a response from the above rankers according to their confidence score. Typically, the response 120 with the highest weighted score will be selected. Also see Neema et al. fig.14-15). While the term probability distribution is not expressly stated in the combined references, Hall provides for using scores, which are often derived from probability distributions, to make the selection, as would clearly be understood by a person of ordinary skill in the art.
Nevertheless, Sequeira et al. provides for selecting actions according to a probability distribution (see para [0018], what the RL agent has knowledge of is how to perform a particular action given a state, in this case selecting an action according to a probability distribution). Neema et al., Hall et al., Lecue et al., and Sequeira et al. are analogous art because they are from the same field of endeavor and the model analysis of Sequeira et al. is similar to that of Neema et al., Hall et al., and Lecue et al. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Sequeira et al. with that of Neema et al., Hall et al., and Lecue et al. because Sequeira et al. provides for the evaluation of model accuracy (see para [0083]).
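For illustration only (the softmax conversion, function names, and scores below are hypothetical, not the disclosure of any cited reference), selecting an action "according to a probability distribution" (cf. Sequeira et al. para [0018]) rather than always taking the top-scored response (Hall et al. para [0035]) can be sketched as:

```python
import math
import random

def softmax(scores):
    """Convert raw confidence scores into a probability distribution."""
    exps = [math.exp(s - max(scores)) for s in scores]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

def select_action(actions, scores, rng=random.random):
    """Sample one action with probability proportional to softmax(score),
    instead of deterministically picking the highest-scored action."""
    probs = softmax(scores)
    r, cum = rng(), 0.0
    for action, p in zip(actions, probs):
        cum += p
        if r < cum:
            return action
    return actions[-1]  # guard against floating-point rounding

probs = softmax([2.0, 1.0, 0.5])  # distribution over candidate responses
```

Under this scheme the highest-scored action is still the most likely selection, which is consistent with Hall's score-based selector while adding the distributional sampling that Sequeira describes.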
Conclusion
8. Claims 1-15 are rejected and THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDRE PIERRE-LOUIS whose telephone number is (571) 272-8636. The examiner can normally be reached M-F 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, EMERSON C PUENTE can be reached at 571-272-3652. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDRE PIERRE LOUIS/Primary Patent Examiner, Art Unit 2187 February 17, 2026