Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 24, 2026 has been entered.
REMARKS
On pages 6-8, Applicant argues that Qu et al. (Qu hereafter, US 20240420180 A1) in view of More et al. (Generating UML Diagrams from Natural Language Specifications, 2012) does not describe argued claim 1. Applicant argues that Qu describes models for predicting customer behavior…The customer data described in Qu is not the engineering or operations intent recited in the claims…Because Qu does not disclose or suggest the claimed type of input and indeed relies on a completely different class of data, the combination of references relied upon in the Office Action does not teach each and every limitation of the claim and, at least for this reason, fails to establish that Qu anticipates or renders obvious the required input structure. Applicant's argument is not persuasive, as discussed below. For example, the instant specification merely provides exemplary disclosure of the limitation of "engineering intent…an engineering workflow." Further, the instant specification does not limit the argued limitations to any particular engineering discipline or operations. Therefore, the disclosure as cited in the previous Office Action reasonably describes the argued limitation under the broadest reasonable interpretation (BRI). Qu describes that product marketing and promotion systems 140 include, but are not limited to, automatic targeted advertisement generation (e.g., paper and/or electronic targeted mailings, web-based advertisements, and native promotions within a software application such as a native application or a web application) and business intelligence reports (e.g., for guiding sales agents during sales calls). Under BRI, this disclosure is applicable to business engineering or operations.
Applicant further asserts that the Requirement analysis to Provide Instant Diagrams (RAPID) tool described in More is just that: a tool that assists requirements analysts with faster, more consistent class-model creation from textual specifications. The description of the RAPID tool is reasonably directed to business engineering or operations under BRI.
Lastly, Applicant asserts "assuming arguendo that RAPID were applied to Qu, there is no teaching, suggestion, or motivation that the resulting output would be a controlled data format output comprising an ontological engineering intent representation interpretable by tools in an engineering workflow, as required by claim 1. Qu produces predictive customer-behavior models, not ontological engineering intent. More produces UML diagrams, not ontological representations of engineering intent suitable for engineering workflows as recited. The combination fails to meet this limitation of claim 1." Applicant's argument is not persuasive because the crux of the argued limitations has been addressed above.
PENDING MATTERS
Claims 1-15, filed February 24, 2026, are examined on the merits.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 10, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Qu et al. (Qu hereafter, US 20240420180 A1) in view of More et al. (Generating UML Diagrams from Natural Language Specifications, 2012).
Claim 1, Qu discloses a computer-implemented method for formalizing an uncontrolled data format input, comprising the steps:
obtaining by an input interface an uncontrolled data format input ([0042], e.g. the system may have access to non-textual data 217 regarding the customer, such as numerical data. These non-textual data 217 may include data collected from public information provided by the customer (e.g., on a website), from third-party data sources, from public sources, and the like, and may also include data collected directly from the customer during sign-up);
determining by an input processing algorithm an embedding of the obtained uncontrolled data format input in an embedding space resulting from a plurality of preprocessed embeddings ([0096], e.g. the customer feature embeddings may be computed based on the descriptions available through the advertising service and the customer feature embedding), and wherein the embedding is represented by an embedding vector, wherein the embedding vector comprises a plurality of vector entries specifying the embedding ([0043], e.g. the feature extractor 218 converts data into a format suitable for inclusion in the customer feature embedding 216 (or feature vector). These conversions may include, for example, normalizing input data values into specified ranges and/or applying mathematical operations to the input data values (e.g., converting input values such as revenue or company size to a normalized log scale ranging from 0 to 1), converting multiple choice responses to a one-hot encoding);
formalizing by an output processing algorithm a controlled data format output using the embedding ([0043], e.g. the feature extractor 218 converts data into a format suitable for inclusion in the customer feature embedding 216 (or feature vector). These conversions may include, for example, normalizing input data values into specified ranges and/or applying mathematical operations to the input data values (e.g., converting input values such as revenue or company size to a normalized log scale ranging from 0 to 1), converting multiple choice responses to a one-hot encoding); and
outputting by an output interface the controlled data format output ([0035], e.g. output data (e.g., predictions) computed by the system 210 may be stored in one or more memory circuits of one or more computing devices, where the intermediate results and the output predictions may be computed based on the data input are computed using one or more processing circuits of the one or more computing devices).
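As technical background only, the feature conversions Qu [0043] describes (normalizing numeric values such as revenue onto a 0-to-1 log scale, and converting multiple-choice responses to a one-hot encoding) can be sketched as follows; the function names and sample values are illustrative and are not taken from Qu:

```python
import math

def normalize_log_scale(value, max_value):
    """Map a positive numeric input (e.g., revenue or company size)
    onto a normalized 0-to-1 log scale, per the kind of conversion
    Qu [0043] describes."""
    if value <= 1:
        return 0.0
    return min(math.log(value) / math.log(max_value), 1.0)

def one_hot(choice, choices):
    """Convert a multiple-choice response to a one-hot encoding."""
    return [1.0 if c == choice else 0.0 for c in choices]

# Assemble a toy feature embedding (feature vector) from mixed inputs.
revenue_feature = normalize_log_scale(50_000, max_value=1_000_000)
industry_feature = one_hot("retail", ["retail", "finance", "healthcare"])
feature_vector = [revenue_feature] + industry_feature
```

The resulting vector concatenates the normalized numeric feature with the one-hot entries, consistent with a feature extractor that produces a single embedding vector from heterogeneous inputs.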
However, Qu does not disclose input comprising engineering or operations intent provided as natural language input …by transforming the uncontrolled data format input to the controlled data format output, wherein the controlled data format output comprises an ontological engineering intent representation interpretable by tools in an engineering workflow.
More discloses input comprising engineering or operations intent (Abstract, e.g. "Requirement analysis to Provide Instant Diagrams (RAPID)" is a desktop tool to assist requirements analysts and Software Engineering students to analyze textual requirements, finding core concepts and their relationships, and extracting UML diagrams) provided as natural language input (page 21, Section 2.2, e.g. OpenNLP POS tagger (lexical) takes the English text as input and outputs the corresponding POS tags for each word; On the other hand, OpenNLP Chunker (syntactic) chunks the sentence into phrases (Noun phrase, verb phrase, etc.) according to English language grammar)…by transforming the uncontrolled data format input to the controlled data format output, wherein the controlled data format output comprises an ontological engineering intent representation interpretable by tools in an engineering workflow (page 20, Section 2.1, e.g. normalizing NL requirements to remove ambiguous requirements and identify incomplete requirements, page 23, Section 3, e.g. RAPID can open textual requirements from different sources including Word documents (DOC), text files (TXT), rich text files (RTF), and hypertext documents (HTML). The UML diagrams are visually represented, and Figure 2, e.g. UML Diagrams).
More discloses that the RAPID tool assists analysts by providing an efficient and fast way to produce the class diagram from their requirements, and that it supports good interaction with users by providing a modern and human-centered user interface (page 19, Section 1). One of ordinary skill in the art, prior to the effective filing date of the instant invention, would have been motivated by More to improve the system of Qu. Therefore, it would have been obvious to one of ordinary skill in the art to use the system of Qu with the RAPID tool of More. The benefit would be an efficient and fast way to produce class diagrams from requirements, with good user interaction through a modern and human-centered user interface.
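As technical background only, the kind of concept extraction More's RAPID tool performs can be sketched in drastically simplified form. RAPID itself uses OpenNLP POS tagging and chunking; the capitalization heuristic and names below are purely illustrative and are not More's implementation:

```python
import re

def extract_candidate_classes(requirement):
    """Toy illustration of RAPID-style concept extraction: treat
    capitalized nouns in a requirement sentence as candidate UML
    classes. (RAPID proper identifies concepts via POS tags and
    syntactic chunks, not this heuristic.)"""
    words = re.findall(r"[A-Za-z]+", requirement)
    # Skip the sentence-initial word; keep other capitalized tokens.
    return sorted({w for w in words[1:] if w[0].isupper()})

classes = extract_candidate_classes(
    "The Customer places an Order that contains one or more Items.")
```

Such extracted concepts would then become class candidates in the generated UML class diagram.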
Claim 2, Qu as modified discloses the uncontrolled data format input comprises engineering or operations content ([0042], e.g. the system may have access to non-textual data 217 regarding the customer, such as numerical data. These non-textual data 217 may include data collected from public information provided by the customer (e.g., on a website), from third-party data sources, from public sources, and the like, and may also include data collected directly from the customer during sign-up).
Claim 3, Qu as modified discloses determining by the input processing algorithm the embedding of the obtained uncontrolled data format input comprises:
identifying the engineering or operations content in the uncontrolled data format input ([0093], e.g. apply these onboarded customer product propensities to a product filter 533 to identify highest propensity products for the onboarded customer); and
determining the embedding based on the identified engineering or operations content ([0043], e.g. the feature extractor 218 converts data into a format suitable for inclusion in the customer feature embedding 216 (or feature vector). These conversions may include, for example, normalizing input data values into specified ranges and/or applying mathematical operations to the input data values (e.g., converting input values such as revenue or company size to a normalized log scale ranging from 0 to 1), converting multiple choice responses to a one-hot encoding).
Claim 4, Qu as modified discloses the uncontrolled data format input comprises a natural language input or an uncontrolled text input ([0027], e.g. Types of input data 120 include, but are not limited to, descriptions of the customer (e.g., in text), customer behavior data (e.g., interactions with a platform), and customer financial data (e.g., volume of transactions, size of transactions, geographic distribution of transactions, etc.)).
Claim 5, Qu as modified discloses determining, by the input processing algorithm, the embedding of the obtained uncontrolled data format input comprises:
identifying preprocessed embeddings in the embedding space, which are similar to the embedding with respect to a predetermined metric ([0037], e.g. Each of the propensities 212 represents a degree of product fit between the product and the given customer (e.g., a likelihood, probability, or other numerical metric)); and
determining the embedding based on the similar preprocessed embeddings ([0043], e.g. the feature extractor 218 converts data into a format suitable for inclusion in the customer feature embedding 216 (or feature vector). These conversions may include, for example, normalizing input data values into specified ranges and/or applying mathematical operations to the input data values (e.g., converting input values such as revenue or company size to a normalized log scale ranging from 0 to 1), converting multiple choice responses to a one-hot encoding).
Claim 6, Qu as modified discloses the predetermined metric comprises a semantically meaningful metric ([0037], e.g. Each of the propensities 212 represents a degree of product fit between the product and the given customer (e.g., a likelihood, probability, or other numerical metric)).
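As technical background only, a "semantically meaningful metric" over embedding vectors, as recited in claims 5-6, is commonly realized with cosine similarity; the following sketch is illustrative and is not drawn from Qu:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors: 1.0 means the
    vectors point in the same direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest_preprocessed(query, preprocessed):
    """Identify the preprocessed embedding most similar to the query
    embedding with respect to the predetermined metric."""
    return max(preprocessed, key=lambda emb: cosine_similarity(query, emb))

candidates = [[1.0, 0.0], [0.6, 0.8], [0.0, 1.0]]
best = nearest_preprocessed([0.7, 0.7], candidates)
```

Here the query embedding is matched against a small set of preprocessed embeddings, returning the one with the highest similarity score.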
Claim 10, Qu as modified discloses formalizing the controlled data format output using the embedding vector comprises mapping the embedding vector to the controlled data format output ([0053], e.g. the mapping of the outputs to propensities for corresponding products).
Claim 15 is directed to a device having the same steps as claim 1. The claim is similarly rejected under the same rationale as claim 1, supra.
Claims 7-9, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Qu et al. (Qu hereafter, US 20240420180 A1) in view of More et al. (Generating UML Diagrams from Natural Language Specifications, 2012), as applied to claims 1-6, 10, and 15 above, and further in view of Mandt et al. (Mandt hereafter, US 20180157644 A1).
Claim 7, Qu discloses the claimed method except for the limitation of “a dimension of the embedding vector is larger than 10.” Mandt discloses “a dimension of the embedding vector is larger than 10” ([0025], e.g. the dynamic embedding model 104 would include 100 pairs of word embedding matrices 1051-100 and context embedding matrices 1061-100 comprising a total of 100 million word embedding vectors and 100 million context embedding vectors).
Mandt discloses that the method 700 allows the analysis module 102 to consider all time steps while remaining efficient ([0053]). One of ordinary skill in the art, prior to the effective filing date of the instant invention, would have been motivated by Mandt to improve the method of Qu. Therefore, it would have been obvious to one of ordinary skill in the art to use the method of Qu with the embedding vector of Mandt. The benefit would be that the method remains efficient.
Claim 8, Qu as modified discloses “the dimension of the embedding vector is larger than 50” (Mandt, [0025], e.g. the dynamic embedding model 104 would include 100 pairs of word embedding matrices 1051-100 and context embedding matrices 1061-100 comprising a total of 100 million word embedding vectors and 100 million context embedding vectors).
Claim 9, Qu as modified discloses “the dimension of the embedding vector is larger than 100” (Mandt, [0025], e.g. the dynamic embedding model 104 would include 100 pairs of word embedding matrices 1051-100 and context embedding matrices 1061-100 comprising a total of 100 million word embedding vectors and 100 million context embedding vectors).
Claim 11, Qu as modified discloses the output processing algorithm comprises an output machine learning model that formalizes the controlled data format output (Mandt, Abstract, e.g. a machine learning data model that associates words with corresponding usage contexts over a window of time, according to a diffusion process).
Claim 12, Qu as modified discloses the input processing algorithm further comprises an input machine learning model that determines the embedding (Mandt, Abstract, e.g. a machine learning data model that associates words with corresponding usage contexts over a window of time, according to a diffusion process).
Claims 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Qu et al. (Qu hereafter, US 20240420180 A1) in view of More et al. (Generating UML Diagrams from Natural Language Specifications, 2012), as applied to claims 1-6, 10, and 15 above, and further in view of Nelson et al. (Nelson hereafter, US 20240394291 A1).
Claim 13, Qu discloses the claimed invention except for the input machine learning model is based on a Large Language Model. Nelson discloses the input machine learning model is based on a Large Language Model ([0005], e.g. Embedding vectors are generated for each domain-specific term using a pre-trained large language model (trained on non-domain-specific content, such as internet content) using descriptive text gathered for each term).
Nelson discloses that these improved vectors can be further used to improve the accuracy of semantic search systems, which can then be used to improve the accuracy of Generative AI "grounding" (e.g., the Retrieval-Augmented Generation (RAG) model) systems that provide factual content for Generative AI from domain-specific databases, and that additional techniques for further improving the accuracy of Generative AI systems are disclosed ([0005]). One of ordinary skill in the art, prior to the effective filing date of the instant invention, would have been motivated by Nelson to improve the method of Qu. Therefore, it would have been obvious to one of ordinary skill in the art to use the method of Qu as modified with the embedding vector of Nelson. The benefit would be improved accuracy of semantic search systems.
Claim 14, Qu as modified discloses the input machine learning model and/or the output machine learning model comprises a neural network (Nelson, [0070], e.g. the embeddings at 430 can include other calculations or computational methods including combinations optimized by machine learning or neural networks).
RELEVANT PRIOR ART
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Li et al. (Deep semantic mining of big multimedia data advertisements based on needs ontology construction, 2022) discloses in order to improve user query intent recognition and analysis in e-commerce platform, we propose an intent recognition method based on ontology and matter mapping. We try to introduce the needs ontology combined with BERT-CRF model into semantic mining to solve the problem of inefficient accurate recommendation due to the lack of needs ontology support. From the intention classification of the given commodity type and the index of each model test, the results are good. We conducted extensive experiments using the GoodsKG corpus and obtained an accuracy improvement of 3.3% compared to the base model BERT. It also proved that the method has substantial application value and will also provide a good reference for large data analysis. Through ontology construction, the cutting-edge deep learning technology is combined with multimedia computing, and large-scale multimedia advertising data is deeply semantically mined, which enriches the knowledge discovery calculation methods of multimedia data and enhances the perception of multimedia data (Abstract).
CONCLUSION
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Patent applicants with problems or questions regarding electronic images that can be viewed in the Patent Application Information Retrieval system (PAIR) can now contact the USPTO's Patent Electronic Business Center (Patent EBC) for assistance. Representatives are available to answer your questions daily from 6 am to midnight (EST). The toll free number is (866) 217-9197. When calling please have your application serial or patent number, the type of document you are having an image problem with, the number of pages and the specific nature of the problem. The Patent Electronic Business Center will notify applicants of the resolution of the problem within 5-7 business days. Applicants can also check PAIR to confirm that the problem has been corrected. The USPTO's Patent Electronic Business Center is a complete service center supporting all patent business on the Internet. The USPTO's PAIR system provides Internet-based access to patent application status and history information. It also enables applicants to view the scanned images of their own application file folder(s) as well as general patent information available to the public.
For all other customer support, please call the USPTO Call Center (UCC) at 800-786-9199. The USPTO's official fax number is 571-272-8300.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Cheyne D. Ly, whose telephone number is (571) 272-0716. The examiner can normally be reached Monday through Friday from 8 AM to 4 PM ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Neveen Abel-Jalil, can be reached on 571-270-0474.
/Cheyne D Ly/
Primary Examiner, Art Unit 2152
3/7/2026