Prosecution Insights
Last updated: April 19, 2026
Application No. 18/792,939

GENERALIZED ENTERPRISE CATALOG CLASSIFICATION FOR SHORTHAND ITEM DESCRIPTORS

Non-Final OA: §101, §103, §112, §DP
Filed: Aug 02, 2024
Examiner: FRUNZI, VICTORIA E.
Art Unit: 3689
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Punchh Inc.
OA Round: 1 (Non-Final)
Grant Probability: 24% (At Risk)
OA Rounds: 1-2
Time to Grant: 4y 3m
Grant Probability With Interview: 48%

Examiner Intelligence

Career Allow Rate: 24% (68 granted / 284 resolved; -28.1% vs TC avg)
Interview Lift: +23.8% for resolved cases with interview (a strong lift)
Typical Timeline: 4y 3m average prosecution; 50 applications currently pending
Career History: 334 total applications across all art units
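The headline examiner figures are internally consistent and can be checked directly. A quick sketch, using only the numbers reported on this page:

```python
# Cross-check the examiner statistics reported above.
granted, resolved = 68, 284       # career grants vs resolved cases
total_apps, pending = 334, 50     # career applications vs currently pending

# Career allow rate: 68 / 284, displayed on the page as 24%
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # Career allow rate: 23.9%

# Resolved plus currently pending should account for every application filed
assert resolved + pending == total_apps
```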

Statute-Specific Performance

§101: 35.9% (-4.1% vs TC avg)
§103: 38.3% (-1.7% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 284 resolved cases.
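The per-statute deltas all point at the same Tech Center baseline. A sketch reconstructing it from the figures above, assuming the dashboard computes delta as examiner rate minus TC average:

```python
# Recover the Tech Center average estimate implied by each "vs TC avg" delta.
examiner_rate = {"101": 35.9, "103": 38.3, "102": 10.7, "112": 10.9}
delta_vs_tc = {"101": -4.1, "103": -1.7, "102": -29.3, "112": -29.1}

# delta = examiner_rate - tc_avg, so tc_avg = examiner_rate - delta
tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_avg)  # every statute implies the same 40.0% TC average estimate
```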

Office Action

Rejections: §101, §103, §112, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is in response to Application No. 18/792,939, filed on 8/2/2024. Claims 1-20 are currently pending and have been examined. Claims 1-20 have been rejected as follows.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 13, 14, 17, and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 13, 14, 17, and 18 recite the limitation "the second model". There is insufficient antecedent basis for this limitation in the claims. Claim 1, from which claims 13, 14, 17, and 18 depend, recites a first model, and the instant specification in at least [0026] and [0030] provides a plurality of potential models which could be the first and second models of the claims. The lack of proper antecedent basis for "the second model" therefore renders claims 13, 14, 17, and 18 indefinite: it is unclear which model "the second model" refers to, or whether it was intended to be drafted as the first model of claim 1, for which there is antecedent basis.
For the purposes of claim interpretation, the model recited in the claims is interpreted as the first model in view of the dependency, as best understood by the examiner. Clarification is needed.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function. Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

Claim 20: an item descriptor normalization module for receiving a first descriptor of an item [corresponding acts or structure is found in [0026] and [0075] of the instant specification]

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection.
A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-15 and 19-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-14 of U.S. Patent No. 12182844 in view of Green (US 2005/0288920).

Claim chart (Instant Application 17339302 vs. US PAT 12182844):

Instant Application, claim 1:
A non-transitory computer readable storage medium comprising stored instructions, the instructions when executed by one or more processors causing the one or more processors to perform a method comprising: receiving a first descriptor of an item, the first descriptor comprising an ordered sequence of characters; inputting the first descriptor into a first model, the first model configured to predict one or more characters for insertion between adjacent characters of the ordered sequence of characters; receiving, as output from the first model, a second descriptor of the item comprising the adjacent characters and one or more characters inserted adjacent in the ordered sequence; and determining, based on the second descriptor, an identification of an item of an enterprise catalog corresponding to the second descriptor.

US PAT 12182844, claim 1:
A non-transitory computer readable storage medium comprising stored instructions, the instructions when executed by one or more processors causing the one or more processors to perform operations, the instructions comprising instructions to: receive a shorthand descriptor of an item (interpreted as first descriptor of an item); input the shorthand descriptor into a first model, the first model being a probabilistic model configured to predict characters for insertion between adjacent characters in input text; receive, as output from the first model, a normalized descriptor of the item (interpreted as second descriptor of an item) corresponding to the shorthand descriptor, wherein the first model determines the normalized descriptor by: receiving a plurality of candidate item descriptors as output from the first model, each of which correspond to the shorthand descriptor, the plurality of candidate item descriptors including adjacent characters of the shorthand descriptor and one or more additional characters inserted between the adjacent characters; identifying a candidate item descriptor with a highest probability of matching the shorthand descriptor relative to the plurality of candidate item descriptors; and assigning the candidate item descriptor with the highest probability of matching the shorthand descriptor as the normalized descriptor; determine one or more categories corresponding to the normalized descriptor; input the normalized descriptor and the one or more categories into a second model, wherein the second model is trained on data corresponding to an enterprise catalog using a supervised learning process; and receive as output from the second model an identification of an item included in the enterprise catalog corresponding to the normalized descriptor.

Instant Application, claim 2:
The non-transitory computer readable storage medium of Claim 1, wherein the method further comprises: determining one or more categories corresponding to the second descriptor; wherein determining the identification further comprises: inputting the second descriptor and the one or more categories into a second model, the second model trained on item descriptors of an enterprise catalog and categories of the item descriptors; and receive, as output from the second model, the identification of the item.

US PAT 12182844, claim 1 (continued):
determine one or more categories corresponding to the normalized descriptor; input the normalized descriptor and the one or more categories into a second model, wherein the second model is trained on data corresponding to an enterprise catalog using a supervised learning process; and receive as output from the second model an identification of an item included in the enterprise catalog corresponding to the normalized descriptor.

Instant Application, claim 3:
The non-transitory computer-readable storage medium of claim 1, wherein determining the second descriptor further comprises: receiving a plurality of candidate item descriptors as output from the first model, each of which correspond to the first descriptor; identify a candidate item descriptor with a highest probability of matching the first descriptor relative to the plurality of candidate item descriptors; and assign the candidate item descriptor with the highest probability of matching the first descriptor as the second descriptor.

US PAT 12182844, claim 1 (continued):
receiving a plurality of candidate item descriptors as output from the first model, each of which correspond to the shorthand descriptor, the plurality of candidate item descriptors including adjacent characters of the shorthand descriptor and one or more additional characters inserted between the adjacent characters; identifying a candidate item descriptor with a highest probability of matching the shorthand descriptor relative to the plurality of candidate item descriptors; and assigning the candidate item descriptor with the highest probability of matching the shorthand descriptor as the normalized descriptor;

Instant Application, claim 4:
The non-transitory computer-readable storage medium of claim 3, wherein the first model is a probabilistic model, and wherein the method further comprises: receiving, as output from the probabilistic model, the plurality of candidate item descriptors, each candidate item descriptor comprising adjacent characters of the first descriptor and one or more characters inserted between the adjacent characters.

US PAT 12182844, claim 1 (continued):
receiving a plurality of candidate item descriptors as output from the first model, each of which correspond to the shorthand descriptor, the plurality of candidate item descriptors including adjacent characters of the shorthand descriptor and one or more additional characters inserted between the adjacent characters;

Instant Application, claim 5:
The non-transitory computer-readable storage medium of claim 4, wherein each plurality of candidate item descriptor output by the probabilistic model is a normalized item descriptor having a probability of corresponding to the first descriptor that exceeds a threshold probability.

US PAT 12182844, claim 2:
The non-transitory computer-readable storage medium of claim 1, wherein the plurality of candidate item descriptors output by the probabilistic model are normalized item descriptors having a probability of corresponding to the shorthand descriptor that exceeds a threshold probability.

Instant Application, claim 6:
The non-transitory computer-readable storage medium of claim 4, wherein the first model comprises: a first language model corresponding to a domain configured to predict one or more first normalized item descriptors for shorthand descriptors, the one more first normalized item descriptors comprising adjacent characters of the first descriptor and corresponding to the domain, and a second language model configured to predict one or more second normalized item descriptors for shorthand descriptors, the one or more second normalized item descriptors comprising adjacent characters of the first descriptor and corresponding to a corrected formatting of the first descriptor, and wherein determining the plurality of candidate item descriptors further comprises: determine the plurality of candidate item descriptors using normalized item descriptors output by the first and second language models based on the adjacent characters of the first descriptor.

US PAT 12182844, claim 3:
The non-transitory computer-readable storage medium of claim 1, wherein the first model includes: a first language model corresponding to a domain configured to predict one or more first normalized item descriptors for shorthand descriptors, the one more first normalized item descriptors including adjacent characters of the shorthand descriptors and corresponding to the domain, and a second language model configured to predict one or more second normalized item descriptors for shorthand descriptors, the one or more second normalized item descriptors including adjacent characters of the shorthand descriptors and corresponding to a corrected formatting of the shorthand descriptors, and wherein the instructions to determine the plurality of candidate item descriptors further comprise instructions to: determining the plurality of candidate item descriptors using normalized item descriptors output by the first and second language models based on the adjacent characters of the shorthand descriptor.

Instant Application, claim 7:
The non-transitory computer-readable storage medium of claim 3, wherein determining the one or more categories further comprises: for each candidate item descriptor of the plurality of candidate item descriptors: compare the candidate item descriptor to a knowledge base of information corresponding to a domain; and determine, based on the comparison, one or more context categories for the candidate item descriptor.

US PAT 12182844, claim 4:
The non-transitory computer-readable storage medium of claim 1, wherein the instructions to determine the one or more categories comprise instructions to: for each candidate item descriptor of the plurality of candidate item descriptors: compare the candidate item descriptor to a knowledge base of information corresponding to a domain; and determine, based on the comparison, one or more context categories for the candidate item descriptor.

Instant Application, claim 8:
The non-transitory computer-readable storage medium of claim 7, wherein the method further comprises: for each candidate item descriptor of the plurality of candidate item descriptors: compare the one or more context categories for the candidate item descriptor to the one or more context categories for other candidate item descriptors of the plurality of candidate item descriptors; and determining, based on the comparison of the one or more context categories, a likelihood of the candidate item descriptor matching the first descriptor; and responsive to the determined likelihood exceeding a verification threshold criterion, select the candidate item descriptor as input to the second model.

US PAT 12182844, claim 5:
The non-transitory computer-readable storage medium of claim 4, wherein the instructions further comprise instructions to: for each candidate item descriptor of the plurality of candidate item descriptors: compare the one or more context categories for the candidate item descriptor to the one or more context categories for other candidate item descriptors of the plurality of candidate item descriptors; and determining, based on the comparison of the one or more context categories, a likelihood of the candidate item descriptor matching the shorthand descriptor; and responsive to the determined likelihood exceeding a verification threshold criterion, select the candidate item descriptor for input to the second model.

Instant Application, claim 9:
The non-transitory computer-readable storage medium of claim 8, wherein identifying the candidate item descriptor with the highest probability of matching the first descriptor further comprises: determine, for each candidate item descriptor of the plurality of candidate item descriptors, a probability that the one or more context categories of the candidate item descriptor matches a context category corresponding to the first descriptor; and select the candidate item descriptor with the highest probability of matching the first descriptor as the candidate item descriptor having the highest probability that the one or more context categories of the candidate item descriptor matches the context category corresponding to the first descriptor.

US PAT 12182844, claim 6:
The non-transitory computer-readable storage medium of claim 5, wherein the instructions to identify the candidate item descriptor with the highest probability of matching the shorthand descriptor comprise instructions to: determine, for each candidate item descriptor of the plurality of candidate item descriptors, a probability that the one or more context categories of the candidate item descriptor matches a context category corresponding to the shorthand descriptor; and select the candidate item descriptor with the highest probability of matching the shorthand descriptor as the candidate item descriptor having the highest probability that the one or more context categories of the candidate item descriptor matches the context category corresponding to the shorthand descriptor.

Instant Application, claim 10:
The non-transitory computer-readable storage medium of claim 9, wherein selecting the candidate item descriptor with the highest probability of matching the first descriptor further comprises: for a candidate item descriptor of the plurality of candidate item descriptors: determine a first context category for a first term of the candidate item descriptor and a second context category for a second term of the candidate item descriptor; and compare the first context category and second context category to a matching context; and select the candidate item descriptor with the highest probability of matching the first descriptor based on the comparison of the first context category and second context category.

US PAT 12182844, claim 7:
The non-transitory computer-readable storage medium of claim 6, wherein the instructions to select the candidate item descriptor with the highest probability of matching the shorthand descriptor comprise instructions to: for a candidate item descriptor of the plurality of candidate item descriptors: determine a first context category for a first term of the candidate item descriptor and a second context category for a second term of the candidate item descriptor; and compare the first and second categories to determine if the first and second categories correspond to a matching context; and select the candidate item descriptor with the highest probability of matching the shorthand descriptor based on the comparison of the first and second categories of the candidate item descriptor.

Instant Application, claim 11:
The non-transitory computer-readable storage medium of claim 7, wherein the knowledge base of information is a triplestore.

US PAT 12182844, claim 8:
The non-transitory computer-readable storage medium of claim 4, wherein the knowledge base of information is a triplestore.

Instant Application, claim 12:
The non-transitory computer-readable storage medium of claim 7, wherein the knowledge base is an unsupervised model comprising a plurality of clusters of non-normalized item descriptors corresponding to a plurality of context categories, and wherein determining the one or more context categories further comprises: for each candidate item descriptor of the plurality of candidate item descriptors: input the candidate item descriptor into the unsupervised model; and receive the one or more context categories as output from the unsupervised model.

US PAT 12182844, claim 9:
The non-transitory computer-readable storage medium of claim 4, wherein the knowledge base is an unsupervised model including a plurality of clusters of non-normalized item descriptors corresponding to a plurality of context categories, and wherein the instructions to determine the one or more context categories comprise instructions to: for each candidate item descriptor of the plurality of candidate item descriptors: input the candidate item descriptor into the unsupervised model; and receive the one or more context categories as output from the unsupervised model.

Instant Application, claim 13:
The non-transitory computer-readable storage medium of claim 1, wherein the second model outputs probabilities that the second descriptor corresponds to a set of items in the enterprise catalog, and wherein the method further comprises determine the identification of the item based on the item having a highest probability of corresponding to the second descriptor relative to other items from the set of items in the enterprise catalog.

US PAT 12182844, claim 10:
The non-transitory computer-readable storage medium of claim 1, wherein the second model outputs probabilities that the normalized descriptor corresponds to a set of items in the enterprise catalog, and wherein the instructions further comprise instructions to: determine the identification of the item based on the item having a highest probability of corresponding to the normalized descriptor relative to other items from the set of items in the enterprise catalog.

Instant Application, claim 14:
The non-transitory computer-readable storage medium of claim 1, wherein the second model is trained by: receiving a training first item descriptor; inputting the training first item descriptor into the second model; receiving, as output from the second model, a training second item descriptor corresponding to the first item descriptor; determining one or more categories corresponding to the training second item descriptor; and training the second model using the training second item descriptor and the one or more categories corresponding to the training second item descriptor.

US PAT 12182844, claim 11:
The non-transitory computer-readable storage medium of claim 1, wherein the instructions to train the second model comprise instructions to: receive a training shorthand item descriptor; input the training shorthand item descriptor into the second model; receive, as output from the second model, a training normalized item descriptor corresponding to the shorthand item descriptor; determine one or more categories corresponding to the training normalized item descriptor; and train the second model using the training normalized item descriptor and the one or more categories corresponding to the training normalized item descriptor.

Instant Application, claim 15:
The non-transitory computer-readable storage medium of claim 1, wherein the instructions further comprise instructions to: determine, using the identified item, a customer recommendation for one or more items of the enterprise catalog; and provide the customer recommendation to a client device.

US PAT 12182844, claim 12:
The non-transitory computer-readable storage medium of claim 1, wherein the instructions further comprise instructions to: determine, using the identified item, a customer recommendation for one or more items included in the enterprise catalog; and provide the customer recommendation to a client device.

Instant Application, claim 16:
The non-transitory computer-readable storage medium of Claim 1, wherein the first descriptor is a shorthand descriptor and the second descriptor is a normalized descriptor.

Instant Application, claim 17:
The non-transitory computer-readable storage medium of Claim 1, wherein the second model is a supervised model.

Instant Application, claim 18:
The non-transitory computer-readable storage medium of Claim 1, wherein the second model is an unsupervised model.

Claims 19-20 of the instant application recite parallel language to claim 1; claims 13-14 of US PAT 12182844 recite parallel language to its claim 1.

Claim 1 of US PAT 12182844 does not expressly disclose the element of an ordered sequence of characters. However, Green discloses this element, shown in Figure 5 as "structured content" and described at [0010]: "The invention is applicable to structured content such as business forms or product descriptions as well as to more open content such as information searches outside of a business context. In such applications, the invention provides a system for semantic transformation that works and scales." And at [0087]: "As discussed above, much content relating to product descriptions and other structured content is not free-flowing sentences, but is an abbreviated structure called a 'noun phrase'. Noun phrases are typically composed of mixtures of nouns (N), adjectives (A), and occasionally prepositions (P). The mixtures of nouns and adjectives may be nested." This example shows the ordered sequence of abbreviated terms then converted to unabbreviated terms.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the descriptors of US PAT 12182844 to include an ordered sequence of characters, as taught in Green, in order to assist with finding an item through a more easily searchable schema (see [0073]).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5 and 7-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-18 are directed to a computer readable medium, claim 19 to a method, and claim 20 to a system. Thus, each independent claim, on its face, is directed to one of the statutory categories of 35 U.S.C. § 101.

Step 2A, Prong 1: The independent claims (1, 19, and 20) recite:

Claim 1: A non-transitory computer readable storage medium comprising stored instructions, the instructions when executed by one or more processors causing the one or more processors to perform a method comprising: receiving a first descriptor of an item, the first descriptor comprising an ordered sequence of characters; inputting the first descriptor into a first model, the first model configured to predict one or more characters for insertion between adjacent characters of the ordered sequence of characters; receiving, as output from the first model, a second descriptor of the item comprising the adjacent characters and one or more characters inserted adjacent in the ordered sequence; and determining, based on the second descriptor, an identification of an item of an enterprise catalog corresponding to the second descriptor.
Claim 19: A method for classifying shorthand item descriptors in accordance with an enterprise catalog, the method comprising: receiving a first descriptor of an item, the first descriptor comprising an ordered sequence of characters; inputting the first descriptor into a first model, the first model configured to predict one or more characters for insertion between adjacent characters of the ordered sequence of characters; receiving, as output from the first model, a second descriptor of the item comprising the adjacent characters and one or more characters inserted adjacent in the ordered sequence; determining, based on the second descriptor, an identification of an item of an enterprise catalog corresponding to the second descriptor.

Claim 20: A system for classifying shorthand descriptors in accordance with an enterprise catalog, the system comprising: an item descriptor normalization module for receiving a first descriptor of an item; a first model for receiving the first descriptor as input, first model being configured to predict one or more characters for insertion between adjacent characters of the ordered sequence of characters, and outputting a second descriptor of the item, the second descriptor of the item comprising the adjacent characters and one or more characters inserted between adjacent characters of the ordered sequence of characters; a catalog matching model for determining, based on the second descriptor, an identification of an item of an enterprise catalog corresponding to the second descriptor.
These limitations, except for the italicized portions, under their broadest reasonable interpretations, recite certain methods of organizing human activity for managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) as well as commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations). The claimed invention recites steps for determining an item from an enterprise catalog based on received and generated descriptors for the purposes of processing a transaction at a point of sale (set forth in specification [005]). The steps, under their broadest reasonable interpretation, specifically fall under sales activities. The Examiner notes that although the claim limitations are summarized, the analysis regarding subject matter eligibility considers the entirety of the claim and all of the claim elements individually, as a whole, and in ordered combination.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of:

Claim 1: A non-transitory computer readable storage medium comprising stored instructions, the instructions when executed by one or more processors causing the one or more processors to perform a method comprising:

Claim 20: A system for classifying shorthand descriptors in accordance with an enterprise catalog, the system comprising: an item descriptor normalization module

The additional elements emphasized above are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of processing data) such that they amount to no more than mere instructions to apply the exception using a generic computer component.
The limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application – MPEP 2106.05(f). Accordingly, these additional elements when considered individually or as a whole do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The independent claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed with respect to Step 2A Prong 2, the additional elements in the claims amount to no more than mere instructions to apply the judicial exception using a generic computer component. Even when considered as an ordered combination, the additional elements of claims 1, 19, and 20 do not add anything that is not already present when they are considered individually. Therefore, under Step 2B, there are no meaningful limitations in claims 1, 19, and 20 that transform the judicial exception into a patent eligible application such that the claims amount to significantly more than the judicial exception itself (see MPEP 2106.05). As such, independent claims 1, 19, and 20 are ineligible.

Dependent claims 2-5 and 7-18, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. §101 because the additional recited limitations fail to establish that the claims are not directed to the same abstract idea of Independent Claims 1, 19 and 20 without significantly more.
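For orientation, the three-stage pipeline recited in independent claims 1, 19, and 20 can be sketched as follows. This is a minimal illustrative stand-in, not the applicant's implementation: the expansion table and catalog entries are invented, and a dictionary lookup merely mimics a trained model that predicts characters for insertion between adjacent characters of a shorthand descriptor.

```python
# Toy stand-in for the claimed pipeline. EXPANSIONS and CATALOG are
# hypothetical; a real "first model" would be a trained predictor,
# not a lookup table.
EXPANSIONS = {"RES": "resistor", "W": "watt", "MW": "milliwatt"}
CATALOG = {"resistor 1/4 watt": "SKU-1001", "resistor 1/2 watt": "SKU-1002"}

def first_model(first_descriptor):
    """Insert characters between adjacent characters of each shorthand
    token, e.g. "RES" -> "resistor" (the claimed expansion step)."""
    return " ".join(EXPANSIONS.get(t, t) for t in first_descriptor.split())

def identify_item(second_descriptor):
    """Match the normalized (second) descriptor to an enterprise catalog."""
    return CATALOG.get(second_descriptor)

second = first_model("RES 1/4 W")   # the claimed "second descriptor"
item = identify_item(second)        # the claimed catalog identification
```

In this sketch, `first_model` plays the role of the claimed first model and `CATALOG` stands in for the enterprise catalog; both names are hypothetical.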
Claim 2 recites wherein the method further comprises: determining one or more categories corresponding to the second descriptor; wherein determining the identification further comprises: inputting the second descriptor and the one or more categories into a second model, the second model trained on item descriptors of an enterprise catalog and categories of the item descriptors; and receive, as output from the second model, the identification of the item. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.

Claim 3 recites wherein determining the second descriptor further comprises: receiving a plurality of candidate item descriptors as output from the first model, each of which correspond to the first descriptor; identify a candidate item descriptor with a highest probability of matching the first descriptor relative to the plurality of candidate item descriptors; and assign the candidate item descriptor with the highest probability of matching the first descriptor as the second descriptor. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.

Claim 4 recites wherein the first model is a probabilistic model, and wherein the method further comprises: receiving, as output from the probabilistic model, the plurality of candidate item descriptors, each candidate item descriptor comprising adjacent characters of the first descriptor and one or more characters inserted between the adjacent characters. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.

Claim 5 recites wherein each plurality of candidate item descriptor output by the probabilistic model is a normalized item descriptor having a probability of corresponding to the first descriptor that exceeds a threshold probability.
The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.

Claim 7 recites wherein determining the one or more categories further comprises: for each candidate item descriptor of the plurality of candidate item descriptors: compare the candidate item descriptor to a knowledge base of information corresponding to a domain; and determine, based on the comparison, one or more context categories for the candidate item descriptor. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.

Claim 8 recites wherein the method further comprises: for each candidate item descriptor of the plurality of candidate item descriptors: compare the one or more context categories for the candidate item descriptor to the one or more context categories for other candidate item descriptors of the plurality of candidate item descriptors; and determining, based on the comparison of the one or more context categories, a likelihood of the candidate item descriptor matching the first descriptor; and responsive to the determined likelihood exceeding a verification threshold criterion, select the candidate item descriptor as input to the second model. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.
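The candidate-selection step recited in claims 3-5 above can be illustrated with a short sketch. The probabilistic "first model" is assumed here to emit (descriptor, probability) pairs; the 0.5 threshold and the candidate strings are invented for illustration and do not come from the application.

```python
# Sketch of claims 3-5: filter candidates by a threshold probability
# (claim 5), then assign the highest-probability candidate as the
# second descriptor (claims 3-4). Threshold value is hypothetical.
def select_second_descriptor(candidates, threshold=0.5):
    # keep only candidates whose probability exceeds the threshold
    viable = [(d, p) for d, p in candidates if p > threshold]
    if not viable:
        return None
    # pick the candidate with the highest probability of matching
    return max(viable, key=lambda pair: pair[1])[0]

candidates = [("resistor 1/4 watt", 0.91),
              ("resistor 1/4 milliwatt", 0.62),
              ("register 1/4 watt", 0.30)]
best = select_second_descriptor(candidates)
```

Here `best` would be "resistor 1/4 watt", the surviving candidate with the highest score.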
Claim 9 recites wherein identifying the candidate item descriptor with the highest probability of matching the first descriptor further comprises: determine, for each candidate item descriptor of the plurality of candidate item descriptors, a probability that the one or more context categories of the candidate item descriptor matches a context category corresponding to the first descriptor; and select the candidate item descriptor with the highest probability of matching the first descriptor as the candidate item descriptor having the highest probability that the one or more context categories of the candidate item descriptor matches the context category corresponding to the first descriptor. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.

Claim 10 recites wherein selecting the candidate item descriptor with the highest probability of matching the first descriptor further comprises: for a candidate item descriptor of the plurality of candidate item descriptors: determine a first context category for a first term of the candidate item descriptor and a second context category for a second term of the candidate item descriptor; and compare the first context category and second context category to a matching context; and select the candidate item descriptor with the highest probability of matching the first descriptor based on the comparison of the first context category and second context category. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.

Claim 11 recites wherein the knowledge base of information is a triplestore. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.
Claim 12 recites wherein the knowledge base is an unsupervised model comprising a plurality of clusters of non-normalized item descriptors corresponding to a plurality of context categories, and wherein determining the one or more context categories further comprises: for each candidate item descriptor of the plurality of candidate item descriptors: input the candidate item descriptor into the unsupervised model; and receive the one or more context categories as output from the unsupervised model. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.

Claim 13 recites wherein the second model outputs probabilities that the second descriptor corresponds to a set of items in the enterprise catalog, and wherein the method further comprises determine the identification of the item based on the item having a highest probability of corresponding to the second descriptor relative to other items from the set of items in the enterprise catalog. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.

Claim 14 recites wherein the second model is trained by: receiving a training first item descriptor; inputting the training first item descriptor into the second model; receiving, as output from the second model, a training second item descriptor corresponding to the first item descriptor; determining one or more categories corresponding to the training second item descriptor; and training the second model using the training second item descriptor and the one or more categories corresponding to the training second item descriptor. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.
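The training loop recited in claim 14 above can be sketched very loosely as follows. Every name here is hypothetical: a dict-backed classifier stands in for whatever second model the application actually uses, and `categorize` stands in for the category-determination step of claims 2 and 14.

```python
# Loose sketch of claim 14's training loop with a toy "second model".
# All class and function names are invented for illustration.
class SecondModel:
    def __init__(self):
        self.memory = {}  # (descriptor, categories) -> item id

    def predict_descriptor(self, first_descriptor):
        # stand-in for "receiving, as output, a training second item descriptor"
        return first_descriptor.lower()

    def train(self, second_descriptor, categories, item_id):
        # "training the second model using the training second item
        # descriptor and the one or more categories"
        self.memory[(second_descriptor, tuple(sorted(categories)))] = item_id

    def identify(self, second_descriptor, categories):
        return self.memory.get((second_descriptor, tuple(sorted(categories))))

def categorize(descriptor):
    # stand-in for "determining one or more categories"
    return ["electronics"] if "resistor" in descriptor else ["general"]

model = SecondModel()
second = model.predict_descriptor("RESISTOR 1/4 WATT")
model.train(second, categorize(second), "SKU-1001")
```

After training, `model.identify("resistor 1/4 watt", ["electronics"])` would return the stored item identification.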
Claim 15 recites wherein the instructions further comprise instructions to: determine, using the identified item, a customer recommendation for one or more items of the enterprise catalog; and provide the customer recommendation to a client device. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application. The additional element of the client device is recited at a high level of generality and does not integrate the judicial exception into a practical application.

Claim 16 recites wherein the first descriptor is a shorthand descriptor and the second descriptor is a normalized descriptor. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.

Claim 17 recites wherein the second model is a supervised model. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.

Claim 18 recites wherein the second model is an unsupervised model. The limitation merely further limits the abstract idea and does not further integrate the judicial exception into a practical application.

For these reasons claims 1-5 and 7-20 are rejected under 35 USC 101.
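The recommendation step recited in claim 15 above can be sketched as follows. The attribute table and the shared-attribute rule are invented stand-ins for the attribute-based mined rules of the cited Darr reference, not a description of either system.

```python
# Illustrative sketch of claim 15's recommendation step: given the
# identified item, recommend other catalog items sharing an attribute.
# All SKUs and attributes are hypothetical.
NORMALIZED_CATALOG = {
    "SKU-1001": {"type": "resistor", "watts": "1/4"},
    "SKU-1002": {"type": "resistor", "watts": "1/2"},
    "SKU-2001": {"type": "capacitor", "watts": None},
}

def recommend(identified_sku, limit=2):
    """Recommend other items sharing the 'type' attribute (a stand-in
    for attribute-based recommendation rules)."""
    target = NORMALIZED_CATALOG[identified_sku]
    matches = [sku for sku, attrs in NORMALIZED_CATALOG.items()
               if sku != identified_sku and attrs["type"] == target["type"]]
    return matches[:limit]
```

For example, `recommend("SKU-1001")` would suggest the other resistor, SKU-1002; the resulting list is what the claim would provide to a client device.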
Note to Subject Matter Eligibility

Claim 6 recites wherein the first model comprises: a first language model corresponding to a domain configured to predict one or more first normalized item descriptors for shorthand descriptors, the one or more first normalized item descriptors comprising adjacent characters of the first descriptor and corresponding to the domain, and a second language model configured to predict one or more second normalized item descriptors for shorthand descriptors, the one or more second normalized item descriptors comprising adjacent characters of the first descriptor and corresponding to a corrected formatting of the first descriptor, and wherein determining the plurality of candidate item descriptors further comprises: determine the plurality of candidate item descriptors using normalized item descriptors output by the first and second language models based on the adjacent characters of the first descriptor.

While the claim recites an abstract idea directed to descriptors of items that are part of an enterprise catalog, the claim recites additional limitations that integrate the abstract idea into a practical application. Like that of "Subject Matter Eligibility Examples, Example 42", the claims recite specifically, "the one or more second normalized item descriptors comprising adjacent characters of the first descriptor and corresponding to a corrected formatting of the first descriptor". The limitation addresses the practical application of standardizing the formatting of the descriptor in order to allow the determination of the plurality of candidate item descriptors using normalized item descriptors output by the first and second language models based on the adjacent characters of the first descriptor. Therefore, the abstract idea is integrated into a practical application and thereby not rejected under 35 USC 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 15, 16, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Green (US 20050288920) in view of Darr (US 7698170).
Regarding claim 1, Green discloses: A non-transitory computer readable storage medium comprising stored instructions, the instructions when executed by one or more processors [0056] computer based tool causing the one or more processors to perform a method comprising: [0440] storage medium receiving a first descriptor of an item, the first descriptor comprising an ordered sequence of characters; (shown in Figure 5 as "structured content") [0010] The invention is applicable to structured content such as business forms or product descriptions as well as to more open content such as information searches outside of a business context. In such applications, the invention provides a system for semantic transformation that works and scales. [0087] As discussed above, much content relating to product descriptions and other structured content is not free-flowing sentences, but is an abbreviated structure called a `noun phrase`. Noun phrases are typically composed of mixtures of nouns (N), adjectives (A), and occasionally prepositions (P). The mixtures of nouns and adjectives may be nested. inputting the first descriptor into a first model, the first model configured to predict one or more characters for insertion between adjacent characters of the ordered sequence of characters; [0366] In the filter phrases step as shown in FIG. 7, an SME reviews this phrase data and determines which phrases should be translated. Once the SME has determined which phrases to translate, then a professional translator and/or machine tool translates the phrases (FIGS. 8-9) from the source language, here English, to the target language, here Spanish, using any associated classification information. A SOLx user interface could be used to translate the phrases, or the phrases are sent out to a professional translator as a text file for translation. The translated text is returned as a text file and loaded into SOLx. 
The translated phrases become the translation dictionary that is then used by the machine translation system.

receiving, as output from the first model, a second descriptor of the item comprising the adjacent characters and one or more characters inserted adjacent in the ordered sequence; and (shown in Figure 5 as "normalized content") [0362] In this example, various forms of the word resistor that appear on the original content, for example "RES" or "RESS", have been normalized to the form "resistor". The same is true for "W" being transformed to "watt" and "MW" to "milliwatt". Separation was added between text items, for example, "1/4 W" is now "1/4 watt" or "75OHM" is now "75 ohm". Punctuation can also be added or removed, for example, "RES,35.7" is now "resistor 35.7". Not shown in the screenshot: the order of the text can also be standardized by the normalization rules. For example, if the user always want a resistor description to of the form: [0363] resistor <ohms rating><tolerance><watts rating> [0364] the normalization rules can enforce this standard form, and the normalized content would reflect this structure. [0365] Another very valuable result of the normalization step can be to create a schematic representation of the content. In the phrase analysis step, as illustrated, the user is looking for the phrases in the now normalized content that still need to be translated to the target language.

determining, based on the second descriptor, an identification of an item[…] corresponding to the second descriptor. [0073] The present invention is based, in part, on the recognition that some content, including business content, often is not easily searchable or analyzable unless a schema is constructed to represent the content. There are a number of issues that a computational system must address to do this correctly. These include: deducing the "core" item; finding the attributes of the item; and finding the values of those attributes.
While Green discloses the determination of an item and its related attributes, the reference does not disclose: […] of an enterprise catalog However Darr teaches: […] of an enterprise catalog [Col. 11 40-50] Using a product detail knowledge database (such as contained in the domain model schema 252) and/or the normalized catalog database 253 that specify various product feature details for each transaction item, the server 211 invokes the normalizer process 202 to map or transform the generic product descriptors of the first data set (e.g., 216, 218) into a second data set (e.g., 254) that specifies additional details and/or features for the item of interest, such as more detailed product descriptor information. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the collection of items in Green to include an enterprise catalog, as taught in Darr, in order to efficiently generate retail recommendations which reduces programming complexity and expense, which may be readily adapted to new parts or part descriptions, and/or which is compatible with different configuration systems ([Col. 2 lines 60-67]). Regarding claim 19, Green discloses: A method for classifying shorthand item descriptors in accordance with an enterprise catalog, the method comprising: receiving a first descriptor of an item, the first descriptor comprising an ordered sequence of characters; (shown in Figure 5 as "structured content") [0010] The invention is applicable to structured content such as business forms or product descriptions as well as to more open content such as information searches outside of a business context. In such applications, the invention provides a system for semantic transformation that works and scales. 
[0087] As discussed above, much content relating to product descriptions and other structured content is not free-flowing sentences, but is an abbreviated structure called a `noun phrase`. Noun phrases are typically composed of mixtures of nouns (N), adjectives (A), and occasionally prepositions (P). The mixtures of nouns and adjectives may be nested.

inputting the first descriptor into a first model, the first model configured to predict one or more characters for insertion between adjacent characters of the ordered sequence of characters; [0366] In the filter phrases step as shown in FIG. 7, an SME reviews this phrase data and determines which phrases should be translated. Once the SME has determined which phrases to translate, then a professional translator and/or machine tool translates the phrases (FIGS. 8-9) from the source language, here English, to the target language, here Spanish, using any associated classification information. A SOLx user interface could be used to translate the phrases, or the phrases are sent out to a professional translator as a text file for translation. The translated text is returned as a text file and loaded into SOLx. The translated phrases become the translation dictionary that is then used by the machine translation system.

receiving, as output from the first model, a second descriptor of the item comprising the adjacent characters and one or more characters inserted adjacent in the ordered sequence; and (shown in Figure 5 as "normalized content") [0362] In this example, various forms of the word resistor that appear on the original content, for example "RES" or "RESS", have been normalized to the form "resistor". The same is true for "W" being transformed to "watt" and "MW" to "milliwatt". Separation was added between text items, for example, "1/4 W" is now "1/4 watt" or "75OHM" is now "75 ohm". Punctuation can also be added or removed, for example, "RES,35.7" is now "resistor 35.7".
Not shown in the screenshot: the order of the text can also be standardized by the normalization rules. For example, if the user always want a resistor description to of the form: [0363] resistor <ohms rating><tolerance><watts rating> [0364] the normalization rules can enforce this standard form, and the normalized content would reflect this structure. [0365] Another very valuable result of the normalization step can be to create a schematic representation of the content. In the phrase analysis step, as illustrated, the user is looking for the phrases in the now normalized content that still need to be translated to the target language. determining, based on the second descriptor, an identification of an item[…] corresponding to the second descriptor. [0073] The present invention is based, in part, on the recognition that some content, including business content, often is not easily searchable or analyzable unless a schema is constructed to represent the content. There are a number of issues that a computational system must address to do this correctly. These include: deducing the "core" item; finding the attributes of the item; and finding the values of those attributes. While Green discloses the determination of an item and its related attributes, the reference does not disclose: […] of an enterprise catalog However Darr teaches: […] of an enterprise catalog [Col. 11 40-50] Using a product detail knowledge database (such as contained in the domain model schema 252) and/or the normalized catalog database 253 that specify various product feature details for each transaction item, the server 211 invokes the normalizer process 202 to map or transform the generic product descriptors of the first data set (e.g., 216, 218) into a second data set (e.g., 254) that specifies additional details and/or features for the item of interest, such as more detailed product descriptor information. 
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the collection of items in Green to include an enterprise catalog, as taught in Darr, in order to efficiently generate retail recommendations which reduces programming complexity and expense, which may be readily adapted to new parts or part descriptions, and/or which is compatible with different configuration systems ([Col. 2 lines 60-67]). Regarding claim 20, Green discloses A system for classifying shorthand descriptors in accordance with an enterprise catalog, the system comprising: an item descriptor normalization module [0330] The NorTran (Normalization/Translation) server is designed to support this paradigm. FIG. 3 represents a high-level architecture of the NorTran platform 300. Each module is discussed below as it relates to the normalization/classification process. for receiving a first descriptor of an item; (shown in Figure 5 as "structured content") [0010] The invention is applicable to structured content such as business forms or product descriptions as well as to more open content such as information searches outside of a business context. In such applications, the invention provides a system for semantic transformation that works and scales. [0087] As discussed above, much content relating to product descriptions and other structured content is not free-flowing sentences, but is an abbreviated structure called a `noun phrase`. Noun phrases are typically composed of mixtures of nouns (N), adjectives (A), and occasionally prepositions (P). The mixtures of nouns and adjectives may be nested. 
a first model [0032-0035] for receiving the first descriptor as input, first model being configured to predict one or more characters for insertion between adjacent characters of the ordered sequence of characters, and outputting a second descriptor of the item, (shown in Figure 5 as "normalized content") the second descriptor of the item comprising the adjacent characters and one or more characters inserted between adjacent characters of the ordered sequence of characters; [0366] In the filter phrases step as shown in FIG. 7, an SME reviews this phrase data and determines which phrases should be translated. Once the SME has determined which phrases to translate, then a professional translator and/or machine tool translates the phrases (FIGS. 8-9) from the source language, here English, to the target language, here Spanish, using any associated classification information. A SOLx user interface could be used to translate the phrases, or the phrases are sent out to a professional translator as a text file for translation. The translated text is returned as a text file and loaded into SOLx. The translated phrases become the translation dictionary that is then used by the machine translation system. And see [0362]-[0365] for examples.

[…] an identification of an item […] corresponding to the second descriptor [0073] The present invention is based, in part, on the recognition that some content, including business content, often is not easily searchable or analyzable unless a schema is constructed to represent the content. There are a number of issues that a computational system must address to do this correctly. These include: deducing the "core" item; finding the attributes of the item; and finding the values of those attributes.

While Green discloses the determination of an item and its related attributes, the reference does not disclose: a catalog matching model for determining, based on the second descriptor, an identification of an item of an enterprise catalog […].
However Darr teaches: a catalog matching model for determining, based on the second descriptor, an identification of an item of an enterprise catalog […]. [Col. 11 40-50] Using a product detail knowledge database (such as contained in the domain model schema 252) and/or the normalized catalog database 253 that specify various product feature details for each transaction item, the server 211 invokes the normalizer process 202 to map or transform the generic product descriptors of the first data set (e.g., 216, 218) into a second data set (e.g., 254) that specifies additional details and/or features for the item of interest, such as more detailed product descriptor information. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the collection of items in Green to include a catalog matching model for determining, based on the second descriptor, an identification of an item of an enterprise catalog […], as taught in Darr, in order to efficiently generate retail recommendations which reduces programming complexity and expense, which may be readily adapted to new parts or part descriptions, and/or which is compatible with different configuration systems ([Col. 2 lines 60-67]). Regarding claim 15, Green in view of Darr teaches the limitations set forth above. While Green discloses the determination of an item and its related attributes, the reference does not disclose: wherein the instructions further comprise instructions to: determine, using the identified item, a customer recommendation for one or more items of the enterprise catalog; and provide the customer recommendation to a client device. However Darr discloses: wherein the instructions further comprise instructions to: determine, using the identified item, a customer recommendation for one or more items of the enterprise catalog; and provide the customer recommendation to a client device. [Col. 5 lines 60-Col. 
6 lines 15] To obtain more meaningful predictions from the order history 12 (which is typically referenced to generic product descriptors, such as the SKUs), the order history data 12 may be mapped or otherwise transformed into normalized transaction history data 30 that provides more detailed information identifying with greater specificity the attributes of the purchased products identified in the order history 12. The transformation module 22 performs the normalization by using the normalized catalog 28 and the domain model schema 24 to transform the native order history data 12 into a normalized transaction history 30. Data mining techniques may then be used to identify attribute-based associations contained in the normalized transaction history 30 and to generate rules 36 which the recommendation engine 38 selects from when determining what items to recommend, given a recommendation context 16. In particular, the analytics engine 34 generates attribute-based rules 36 using the normalized transaction history data 30 generated by the order history transformation module 22, and the mined rules 36 are used by the recommendation engine 38 to make recommendations to a user. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the collection of items in Green to include wherein the instructions further comprise instructions to: determine, using the identified item, a customer recommendation for one or more items of the enterprise catalog; and provide the customer recommendation to a client device, as taught in Darr, in order to efficiently generate retail recommendations which reduces programming complexity and expense, which may be readily adapted to new parts or part descriptions, and/or which is compatible with different configuration systems ([Col. 2 lines 60-67]). Regarding claim 16, Green in view of Darr teaches the limitations set forth above. 
Green further discloses: wherein the first descriptor is a shorthand descriptor (shown in Figure 5 as "structured content" and [0379] A second area 1304, in this example, functions as the normalization workbench interface and is used to perform the various configuration processes such as replacing various abbreviations and expressions with standardized terms or, in the illustrated example, defining a parse tree.) and the second descriptor is a normalized descriptor. (shown in Figure 5 as "normalized content"). Claims 2-5, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Green (US 20050288920) in view of Darr (US 7698170) in further view of Verma (US 11367116). Regarding claim 2, Green in view of Darr teaches the limitations set forth above. While the combination teaches the determination of item descriptors as normalized content and the determination of items from a catalog for a customer, the combination does not expressly disclose: determining one or more categories corresponding to the second descriptor; wherein determining the identification further comprises: inputting the second descriptor and the one or more categories into a second model, the second model trained on item descriptors of an enterprise catalog and categories of the item descriptors; and receive, as output from the second model, the identification of the item. However Verma teaches: determining one or more categories corresponding to the second descriptor; wherein determining the identification further comprises: inputting the second descriptor and the one or more categories into a second model, the second model trained on item descriptors of an enterprise catalog and categories of the item descriptors; (Verma, col 15, lines 22-28, "the matching engine 536 may receive input from a user via the administrative graphical user interface. The input may modify or confirm, for example, a match-class descriptor for combination of first and second item. 
For example, a user may manually override suggested match classes for individual item combinations. The manual overrides may provide data that can be used to re-tune attribute weights via supervised learning. For example, the reviewed matches may provide match scores, weights, attribute-type indicators, etc., as inputs to a machine learning algorithm, which may use the input to re-tune the model for the category of items. In some instances, the input may store a new match-class descriptor for the combination of items in the matching database, so that the manually programmed version may be used instead of the machine learning generated version") and receive, as output from the second model, the identification of the item. (Verma, col 4, lines 25-43, "the technology may identify attributes of items to determine corollary or matching items. For instance, the technology may combine algorithmic item matching with user assessments to build a catalog of similar items. The technology may automatically provide matching items based on a match class representing the level of similarity of the items.") Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the collection of items identified by their descriptors in Green in view of Darr to include determining one or more categories corresponding to the second descriptor; wherein determining the identification further comprises: inputting the second descriptor and the one or more categories into a second model, the second model trained on item descriptors of an enterprise catalog and categories of the item descriptors; and receive, as output from the second model, the identification of the item, as taught in Verma, in order to optimize the accuracy and efficiency of determining corollary or matching items [Col. 4 lines 45-47]. Regarding claim 3, Green in view of Darr teaches the limitations set forth above. 
Green further discloses [0368] The SOLx system can also then provide an estimation of the quality of the translation result (FIG. 10). Good translations would then be loaded into the run-time localization system for use in the source system architecture. Bad translations would be used to improve the normalization grammars and rules, or the translation dictionary. The grammars, rules, and translation dictionary form a model of the content. Once the model of the content is complete, a very high level of translations are of good quality. [0403] The CSE 1720 is a system initially not under GUI 1716 control that identifies terms and small text strings that repeat often throughout the data set and are good candidates for the initial normalization process. While the combination teaches the determination of item descriptors as normalized content and the determination of items from a catalog for a customer, the combination does not expressly disclose: wherein determining the second descriptor further comprises: receiving a plurality of candidate item descriptors as output from the first model, each of which correspond to the first descriptor; identify a candidate item descriptor with a highest probability of matching the first descriptor relative to the plurality of candidate item descriptors; and assign the candidate item descriptor with the highest probability of matching the first descriptor as the second descriptor. However Verma teaches: wherein determining the second descriptor further comprises: receiving a plurality of candidate item descriptors as output from the first model, each of which correspond to the first descriptor; identify a candidate item descriptor with a highest probability of matching the first descriptor relative to the plurality of candidate item descriptors; and assign the candidate item descriptor with the highest probability of matching the first descriptor as the second descriptor. [Col. 
13 lines 24-27] "a match score from classification probabilities for each match class may be computed using: score = 100p_1 + 80p_2 + 65p_3 + 40p_4, where p_i indicates the probability that the combination of items is in class i, where the subscripts indicate match classes of descending strength" Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the collection of items identified by their descriptors in Green in view of Darr to include wherein determining the second descriptor further comprises: receiving a plurality of candidate item descriptors as output from the first model, each of which correspond to the first descriptor; identify a candidate item descriptor with a highest probability of matching the first descriptor relative to the plurality of candidate item descriptors; and assign the candidate item descriptor with the highest probability of matching the first descriptor as the second descriptor, as taught in Verma, in order to optimize the accuracy and efficiency of determining corollary or matching items [Col. 4 lines 45-47]. Regarding claim 4, Green in view of Darr in further view of Verma teaches the limitations set forth above. Green further discloses [0368] The SOLx system can also then provide an estimation of the quality of the translation result (FIG. 10). Good translations would then be loaded into the run-time localization system for use in the source system architecture. Bad translations would be used to improve the normalization grammars and rules, or the translation dictionary. The grammars, rules, and translation dictionary form a model of the content. Once the model of the content is complete, a very high level of translations are of good quality. 
[0403] The CSE 1720 is a system initially not under GUI 1716 control that identifies terms and small text strings that repeat often throughout the data set and are good candidates for the initial normalization process. While the combination teaches the determination of item descriptors as normalized content and the determination of items from a catalog for a customer, the combination does not expressly disclose: wherein the first model is a probabilistic model, and wherein the method further comprises: receiving, as output from the probabilistic model, the plurality of candidate item descriptors, each candidate item descriptor comprising adjacent characters of the first descriptor and one or more characters inserted between the adjacent characters. However Verma teaches: wherein the first model is a probabilistic model, [Col. 12 lines 65-67] For example, the attribute-type indicator(s) and attribute value(s) for a combination of a first and second item may be input into a trained classifier, such as a random forest multi-class classifier to determine the probabilities. For instance, a random forest multi-class classifier may show up to a twenty percent increase in accuracy over other methods in a four class classification. and wherein the method further comprises: receiving, as output from the probabilistic model, the plurality of candidate item descriptors, each candidate item descriptor comprising adjacent characters of the first descriptor and one or more characters inserted between the adjacent characters. [Col. 
13 lines 24-27] "a match score from classification probabilities for each match class may be computed using: score = 100p_1 + 80p_2 + 65p_3 + 40p_4, where p_i indicates the probability that the combination of items is in class i, where the subscripts indicate match classes of descending strength" Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the collection of items identified by their descriptors in Green in view of Darr to include wherein the first model is a probabilistic model, and wherein the method further comprises: receiving, as output from the probabilistic model, the plurality of candidate item descriptors, each candidate item descriptor comprising adjacent characters of the first descriptor and one or more characters inserted between the adjacent characters, as taught in Verma, in order to optimize the accuracy and efficiency of determining corollary or matching items [Col. 4 lines 45-47]. Regarding claim 5, Green in view of Darr in further view of Verma teaches the limitations set forth above. While the combination teaches the determination of item descriptors as normalized content and the determination of items from a catalog for a customer, the combination does not expressly disclose: wherein each plurality of candidate item descriptor output by the probabilistic model is a normalized item descriptor having a probability of corresponding to the first descriptor that exceeds a threshold probability. However Verma teaches: wherein each plurality of candidate item descriptor output by the probabilistic model is a normalized item descriptor having a probability of corresponding to the first descriptor that exceeds a threshold probability. [Col. 
15 lines 20-30] In some implementations, at 406, the matching engine 536 may determine match-class descriptor based on a threshold level of the normalized match score for the combination of first item and second item. For example, the match score and corresponding match-class descriptor may be determined for a particular combination of items, such as is described in reference to FIG. 1. Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the collection of items identified by their descriptors in Green in view of Darr to include wherein each plurality of candidate item descriptor output by the probabilistic model is a normalized item descriptor having a probability of corresponding to the first descriptor that exceeds a threshold probability, as taught in Verma, in order to optimize the accuracy and efficiency of determining corollary or matching items [Col. 4 lines 45-47]. Regarding claim 13, Green in view of Darr in further view of Verma teaches the limitations set forth above. Green further discloses [0368] The SOLx system can also then provide an estimation of the quality of the translation result (FIG. 10). Good translations would then be loaded into the run-time localization system for use in the source system architecture. Bad translations would be used to improve the normalization grammars and rules, or the translation dictionary. The grammars, rules, and translation dictionary form a model of the content. Once the model of the content is complete, a very high level of translations are of good quality. [0403] The CSE 1720 is a system initially not under GUI 1716 control that identifies terms and small text strings that repeat often throughout the data set and are good candidates for the initial normalization process. 
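Verma's quoted match-score computation (score = 100p_1 + 80p_2 + 65p_3 + 40p_4, with the probabilities coming from a multi-class classifier) and the threshold-based match-class determination can be sketched as follows; the class names and cutoff values are assumptions for illustration, as Verma's quoted passages do not give concrete thresholds:

```python
# Class weights from the quoted Verma formula; subscripts denote match
# classes of descending strength (strongest match first).
WEIGHTS = (100, 80, 65, 40)

def match_score(probabilities):
    """score = 100*p1 + 80*p2 + 65*p3 + 40*p4, where the probabilities
    are the classifier's per-class outputs (summing to 1)."""
    return sum(w * p for w, p in zip(WEIGHTS, probabilities))

def match_class(score):
    """Map a score to a match-class descriptor. The class names and
    cutoffs below are illustrative assumptions, not Verma's values."""
    if score >= 90:
        return "exact"
    if score >= 70:
        return "close"
    if score >= 50:
        return "related"
    return "no-match"

probs = (0.7, 0.2, 0.05, 0.05)  # hypothetical classifier output
score = match_score(probs)       # 100*0.7 + 80*0.2 + 65*0.05 + 40*0.05 = 91.25
print(score, match_class(score))
```

The weighted sum collapses the four class probabilities into a single normalized score, which the quoted passage then compares against a threshold level to pick a match-class descriptor.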
While the combination teaches the determination of item descriptors as normalized content and the determination of items from a catalog for a customer, the combination does not expressly disclose: wherein the second model outputs probabilities that the second descriptor corresponds to a set of items in the enterprise catalog, and wherein the method further comprises determine the identification of the item based on the item having a highest probability of corresponding to the second descriptor relative to other items from the set of items in the enterprise catalog. However Verma teaches: wherein the second model outputs probabilities that the second descriptor corresponds to a set of items in the enterprise catalog, and wherein the method further comprises determine the identification of the item based on the item having a highest probability of corresponding to the second descriptor relative to other items from the set of items in the enterprise catalog. [Col. 13 lines 24-27] "a match score from classification probabilities for each match class may be computed using: score = 100p_1 + 80p_2 + 65p_3 + 40p_4, where p_i indicates the probability that the combination of items is in class i, where the subscripts indicate match classes of descending strength" Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the collection of items identified by their descriptors in Green in view of Darr to include wherein the second model outputs probabilities that the second descriptor corresponds to a set of items in the enterprise catalog, and wherein the method further comprises determine the identification of the item based on the item having a highest probability of corresponding to the second descriptor relative to other items from the set of items in the enterprise catalog, as taught in Verma, in order to optimize the accuracy and efficiency of determining corollary or 
matching items [Col. 4 lines 45-47]. Regarding claim 14, Green in view of Darr in further view of Verma teaches the limitations set forth above. Green further discloses [0368] The SOLx system can also then provide an estimation of the quality of the translation result (FIG. 10). Good translations would then be loaded into the run-time localization system for use in the source system architecture. Bad translations would be used to improve the normalization grammars and rules, or the translation dictionary. The grammars, rules, and translation dictionary form a model of the content. Once the model of the content is complete, a very high level of translations are of good quality. [0403] The CSE 1720 is a system initially not under GUI 1716 control that identifies terms and small text strings that repeat often throughout the data set and are good candidates for the initial normalization process. While the combination teaches the determination of item descriptors as normalized content and the determination of items from a catalog for a customer, the combination does not expressly disclose: receiving a training first item descriptor; inputting the training first item descriptor into the second model; receiving, as output from the second model, a training second item descriptor corresponding to the first item descriptor; determining one or more categories corresponding to the training second item descriptor; and training the second model using the training second item descriptor and the one or more categories corresponding to the training second item descriptor. 
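The training sequence recited above (run a training descriptor through the second model, derive categories for the result, then train on the descriptor-plus-categories pair) can be illustrated with a toy retrainable matcher; the `ToyCatalogMatcher` class, the category rules, and the SKU names are hypothetical stand-ins, not the claimed model or any cited reference's code:

```python
class ToyCatalogMatcher:
    """Hypothetical stand-in for the claimed second model: identifies a
    catalog item from a descriptor plus categories by token overlap."""

    def __init__(self):
        self.examples = []  # list of (token_set, item_id) training pairs

    def train(self, descriptor, categories, item_id):
        tokens = set(descriptor.lower().split()) | set(categories)
        self.examples.append((tokens, item_id))

    def identify(self, descriptor, categories):
        query = set(descriptor.lower().split()) | set(categories)
        best = max(self.examples, key=lambda ex: len(ex[0] & query),
                   default=(set(), None))
        return best[1]

def categorize(descriptor):
    """Hypothetical category-determination step for the training loop."""
    categories = set()
    if "sandwich" in descriptor.lower():
        categories.add("entree")
    if "soda" in descriptor.lower():
        categories.add("beverage")
    return categories

# Training loop per the recited steps: descriptor -> categories -> train.
model = ToyCatalogMatcher()
for descriptor, item_id in [("chicken sandwich", "SKU-001"),
                            ("orange soda", "SKU-002")]:
    model.train(descriptor, categorize(descriptor), item_id)
print(model.identify("grilled chicken sandwich", {"entree"}))
```

Verma's actual approach trains a multi-class classifier (e.g., a random forest) on attribute-type indicators and values; the token-overlap matcher here only sketches the flow of descriptors and categories through training and identification.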
However Verma teaches: receiving a training first item descriptor; inputting the training first item descriptor into the second model; receiving, as output from the second model, a training second item descriptor corresponding to the first item descriptor; determining one or more categories corresponding to the training second item descriptor; and training the second model using the training second item descriptor and the one or more categories corresponding to the training second item descriptor. [Shown in Figures 3-4 In some implementations, at 302, the matching engine 536 may train a multi-class classifier using attribute-type indicator(s) and attribute value(s) as independent features on match classes (e.g., using match-class descriptors). For example, an example method for training a machine learning model is described in reference to FIG. 4.] Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the collection of items identified by their descriptors in Green in view of Darr to include receiving a training first item descriptor; inputting the training first item descriptor into the second model; receiving, as output from the second model, a training second item descriptor corresponding to the first item descriptor; determining one or more categories corresponding to the training second item descriptor; and training the second model using the training second item descriptor and the one or more categories corresponding to the training second item descriptor, as taught in Verma, in order to optimize the accuracy and efficiency of determining corollary or matching items [Col. 4 lines 45-47]. Claims 7-11 are rejected under 35 U.S.C. 103 as being unpatentable over Green (US 20050288920) in view of Darr (US 7698170) in view of Verma (US 11367116) in further view of Hertz. 
Regarding claim 7, Green in view of Darr in further view of Verma teaches the limitations set forth above, but does not expressly disclose: wherein determining the one or more categories further comprises: for each candidate item descriptor of the plurality of candidate item descriptors: compare the candidate item descriptor to a knowledge base of information corresponding to a domain; and determine, based on the comparison, one or more context categories for the candidate item descriptor. However Hertz teaches: wherein determining the one or more categories further comprises: for each candidate item descriptor of the plurality of candidate item descriptors: compare the candidate item descriptor to a knowledge base of information corresponding to a domain; and determine, based on the comparison, one or more context categories for the candidate item descriptor. [0134] Data Acquisition, Transformation and Interlinking—The following describes one exemplary manner of implementing the SCAR system. SCAR accesses a plurality of data sources and obtains/collects electronic data representing documents including textual content as source data, this is referred to as the acquisition and curation process. Such collected and curated data is then used to build the knowledge graph. Data Source and Acquisition—In this exemplary implementation, the data used covers a variety of industries, including Financial & Risk (F&R), Tax & Accounting, Legal, and News. Each of these four major data categories can be further divided into various sub-categories. [0216] FIG. 15 is a flowchart of a method 1500 for identifying supply chain relationships. The first step 1502 provides for accessing a Knowledge Graph data store comprising a plurality of Knowledge Graphs, each Knowledge Graph related to an associated entity and including a first Knowledge Graph associated with a first company and comprising supplier-customer data. 
In the second step 1504 electronic documents are received by an input from a plurality of data sources via a communications network, the received documents comprise unstructured text. The third step 1506 performs, by a preprocessing interface, one or more of named entity recognition, relation extraction, and entity linking on the received electronic documents. In the fourth step 1508 the preprocessing interface generates a set of tagged data. The fifth step 1510 provides for the parsing of the electronic documents by the preprocessing interface into sentences and identification of a set of sentences with each identified sentence having at least two identified companies as an entity-pair. In step 1512 a pattern-matching module performs a pattern-matching set of rules to extract sentences from the set of sentences as supply chain evidence candidate sentences. Next in step 1514, a classifier adapted to utilize natural language processing on the supply chain candidate sentences calculates a probability of a supply-chain relationship between an entity-pair associated with the supply chain evidence candidate sentences. Finally, in step 1516 an aggregator aggregates at least some of the supply chain evidence candidates based on the calculated probability to arrive at an aggregate evidence score for a given entity-pair, wherein a Knowledge Graph associated with at least one company from the entity-pair is updated based on the aggregate evidence score. 
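The quoted Hertz method ends by aggregating per-sentence relationship probabilities into an evidence score per entity pair and updating the Knowledge Graph when a threshold is met. A minimal sketch of that aggregation step follows; the noisy-OR combination rule, the threshold value, and the company names are assumptions, since the quoted passages do not fix the aggregation formula:

```python
from collections import defaultdict

def aggregate_evidence(candidates, threshold=0.9):
    """candidates: iterable of ((company_a, company_b), probability)
    pairs from a sentence-level classifier. Combines evidence per
    entity pair with a noisy-OR rule (an assumed choice) and keeps
    the pairs whose aggregate score meets the threshold."""
    no_relation = defaultdict(lambda: 1.0)
    for pair, p in candidates:
        no_relation[pair] *= 1.0 - p  # probability all evidence is wrong
    return {pair: 1.0 - q for pair, q in no_relation.items()
            if 1.0 - q >= threshold}

# Hypothetical sentence-level evidence for two entity pairs.
evidence = [
    (("AcmeCo", "WidgetCorp"), 0.8),
    (("AcmeCo", "WidgetCorp"), 0.7),  # second independent sentence
    (("AcmeCo", "OtherInc"), 0.5),
]
print(aggregate_evidence(evidence))
```

Two moderately confident sentences for the same pair reinforce each other under noisy-OR, while a single weak sentence falls below the threshold, which matches the intent of aggregating evidence across data sources.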
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Green in view of Darr in further view of Verma to include wherein determining the one or more categories further comprises: for each candidate item descriptor of the plurality of candidate item descriptors: compare the candidate item descriptor to a knowledge base of information corresponding to a domain; and determine, based on the comparison, one or more context categories for the candidate item descriptor, as taught in Hertz, in order to process the large volumes of available data to detect indications of relationship and aggregate these indications across data sources ([0022]). Regarding claim 8, Green in view of Darr in view of Verma in further view of Hertz teaches the limitations set forth above. While Green in view of Darr does not disclose these limitations, Verma further discloses: wherein the method further comprises: for each candidate item descriptor of the plurality of candidate item descriptors: compare the one or more context categories for the candidate item descriptor to the one or more context categories for other candidate item descriptors of the plurality of candidate item descriptors; and determining, based on the comparison of the one or more context categories, a likelihood of the candidate item descriptor matching the first descriptor; and responsive to the determined likelihood exceeding a verification threshold criterion, select the candidate item descriptor as input to the second model. [Col. 15 lines 20-30] In some implementations, at 406, the matching engine 536 may determine match-class descriptor based on a threshold level of the normalized match score for the combination of first item and second item. For example, the match score and corresponding match-class descriptor may be determined for a particular combination of items, such as is described in reference to FIG. 1. and [Col. 
12 lines 25-40] Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the collection of items identified by their descriptors in Green in view of Darr to include wherein the method further comprises: for each candidate item descriptor of the plurality of candidate item descriptors: compare the one or more context categories for the candidate item descriptor to the one or more context categories for other candidate item descriptors of the plurality of candidate item descriptors; and determining, based on the comparison of the one or more context categories, a likelihood of the candidate item descriptor matching the first descriptor; and responsive to the determined likelihood exceeding a verification threshold criterion, select the candidate item descriptor as input to the second model, as taught in Verma, in order to optimize the accuracy and efficiency of determining corollary or matching items [Col. 4 lines 45-47]. Regarding claim 9, Green in view of Darr in view of Verma in further view of Hertz teaches the limitations set forth above. While Green in view of Darr does not disclose these limitations, Verma further discloses: wherein identifying the candidate item descriptor with the highest probability of matching the first descriptor further comprises: determine, for each candidate item descriptor of the plurality of candidate item descriptors, a probability that the one or more context categories of the candidate item descriptor matches a context category corresponding to the first descriptor; and select the candidate item descriptor with the highest probability of matching the first descriptor as the candidate item descriptor having the highest probability that the one or more context categories of the candidate item descriptor matches the context category corresponding to the first descriptor. [Col. 
12 lines 65-67] For example, the attribute-type indicator(s) and attribute value(s) for a combination of a first and second item may be input into a trained classifier, such as a random forest multi-class classifier to determine the probabilities. For instance, a random forest multi-class classifier may show up to a twenty percent increase in accuracy over other methods in a four class classification. and [Col. 15 lines 20-30] In some implementations, at 406, the matching engine 536 may determine match-class descriptor based on a threshold level of the normalized match score for the combination of first item and second item. For example, the match score and corresponding match-class descriptor may be determined for a particular combination of items, such as is described in reference to FIG. 1. and [Col. 12 lines 25-40] Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the collection of items identified by their descriptors in Green in view of Darr to include wherein identifying the candidate item descriptor with the highest probability of matching the first descriptor further comprises: determine, for each candidate item descriptor of the plurality of candidate item descriptors, a probability that the one or more context categories of the candidate item descriptor matches a context category corresponding to the first descriptor; and select the candidate item descriptor with the highest probability of matching the first descriptor as the candidate item descriptor having the highest probability that the one or more context categories of the candidate item descriptor matches the context category corresponding to the first descriptor, as taught in Verma, in order to optimize the accuracy and efficiency of determining corollary or matching items [Col. 4 lines 45-47]. 
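The claim 8 and claim 9 steps quoted above reduce to a verification-threshold filter followed by an argmax over per-candidate match probabilities. A minimal sketch, with hypothetical descriptors, probabilities, and a hypothetical 0.5 threshold:

```python
def verified_candidates(candidates, verification_threshold=0.5):
    """Claim 8 style verification: keep only candidates whose likelihood
    of matching the first descriptor exceeds the threshold criterion.
    The 0.5 default is an illustrative assumption."""
    return {d: p for d, p in candidates.items() if p > verification_threshold}

def select_candidate(candidates):
    """Claim 9 style selection: return the candidate whose context
    categories have the highest probability of matching the context
    category of the first descriptor."""
    return max(candidates, key=candidates.get)

# Hypothetical candidate descriptors with category-match probabilities.
candidates = {
    "CHICKEN SANDWICH": 0.92,
    "CHICKEN SALAD": 0.55,
    "CHEDDAR SNACK": 0.12,
}
print(verified_candidates(candidates))  # drops the low-probability candidate
print(select_candidate(candidates))     # highest-probability candidate
```

In Verma's framing the probabilities would come from the trained multi-class classifier; here they are supplied directly to isolate the filter-then-argmax logic.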
Regarding claim 10, Green in view of Darr in view of Verma in further view of Hertz teaches the limitations set forth above. While Green in view of Darr does not disclose these limitations, Verma further discloses: wherein selecting the candidate item descriptor with the highest probability of matching the first descriptor further comprises: for a candidate item descriptor of the plurality of candidate item descriptors: determine a first context category for a first term of the candidate item descriptor and a second context category for a second term of the candidate item descriptor; and compare the first context category and second context category to a matching context; and select the candidate item descriptor with the highest probability of matching the first descriptor based on the comparison of the first context category and second context category. [Col. 12 lines 65-67] For example, the attribute-type indicator(s) and attribute value(s) for a combination of a first and second item may be input into a trained classifier, such as a random forest multi-class classifier to determine the probabilities. For instance, a random forest multi-class classifier may show up to a twenty percent increase in accuracy over other methods in a four class classification. and [Col. 15 lines 20-30] In some implementations, at 406, the matching engine 536 may determine match-class descriptor based on a threshold level of the normalized match score for the combination of first item and second item. For example, the match score and corresponding match-class descriptor may be determined for a particular combination of items, such as is described in reference to FIG. 1. and [Col. 
12 lines 25-40] Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the collection of items identified by their descriptors in Green in view of Darr to include wherein selecting the candidate item descriptor with the highest probability of matching the first descriptor further comprises: for a candidate item descriptor of the plurality of candidate item descriptors: determine a first context category for a first term of the candidate item descriptor and a second context category for a second term of the candidate item descriptor; and compare the first context category and second context category to a matching context; and select the candidate item descriptor with the highest probability of matching the first descriptor based on the comparison of the first context category and second context category, as taught in Verma, in order to optimize the accuracy and efficiency of determining corollary or matching items [Col. 4 lines 45-47]. Regarding claim 11, Green in view of Darr in view of Verma in further view of Hertz teaches the limitations set forth above, but does not disclose: wherein the knowledge base of information is a triplestore. However Hertz teaches: wherein the knowledge base of information is a triplestore. [0031] The system of the first embodiment may also be characterized in one or more of the following ways. The system may further comprise a user interface adapted to receive an input signal from a remote user-operated device, the input signal representing a user query, wherein an output is generated for delivery to the remote user-operated device and related to a Knowledge Graph associated with a company in response to the user query. 
The system may further comprise a query execution module adapted to translate the user query into an executable query set and execute the executable query set to generate a result set for presenting to the user via the remote user-operated device. The system may further comprise a graph-based data model for describing entities and relationships as a set of triples comprising a subject, predicate and object and stored in a triple store. The graph-based data model may be a Resource Description Framework (RDF) model. The triples may be queried using SPARQL query language. The system may further comprise a fourth element added to the set of triples to result in a quad. The system may further comprise a machine learning-based algorithm adapted to detect relationships between entities in an unstructured text document. The classifier may predict a probability of a relationship based on an extracted set of features from a sentence. The extracted set of features may include context-based features comprising one or more of n-grams and patterns. The system may further comprise wherein updating the Knowledge Graph is based on the aggregate evidence score satisfying a threshold value.

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Green in view of Darr in further view of Verma to include wherein the knowledge base of information is a triplestore, as taught in Hertz, in order to process the large volumes of available data to detect indications of relationship and aggregate these indications across data sources ([0022]).

Claims 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Green (US 20050288920) in view of Darr (US 7698170) in further view of Misra (US 20210182912).

Regarding claims 17-18, Green in view of Darr teaches the limitations set forth above, but does not expressly disclose: wherein the second model is a supervised model.
wherein the second model is an unsupervised model.

However, Misra teaches: wherein the second model is a supervised model ([0072]: The exemplary techniques can reduce time, errors, and/or cost associated in creating labeled data sets for supervised predictive models) and wherein the second model is an unsupervised model ([0022]: In one or more embodiments, unsupervised models can be utilized to create feature sets that allow the creation of automated labels for large sets of documents, including textual documents).

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Green in view of Darr to include wherein the second model is a supervised model or wherein the second model is an unsupervised model, as taught in Misra, in order to reduce time, errors, and/or cost associated with creating labeled data sets ([0072]).

Subject Matter Free of Prior Art

Claims 6 and 12 are determined to be free of prior art. Claim 6 is rejected because it is subject to the double patenting rejection and depends from rejected claim 1; however, claim 6 is patent subject matter eligible and free of prior art. Claim 12 is likewise free of prior art; however, it is rejected under 35 U.S.C. 101 and the double patenting rejection, and it depends from rejected claims 1 and 7.

The closest prior art of record for claim 6 was determined to be Verma (cited above), which discloses in col. 15, lines 22-28: "the matching engine 536 may receive input from a user via the administrative graphical user interface. The input may modify or confirm, for example, a match-class descriptor for combination of first and second item. For example, a user may manually override suggested match classes for individual item combinations. The manual overrides may provide data that can be used to re-tune attribute weights via supervised learning. For example, the reviewed matches may provide match scores, weights, attribute-type indicators, etc., as inputs to a machine learning algorithm, which may use the input to re-tune the model for the category of items. In some instances, the input may store a new match-class descriptor for the combination of items in the matching database, so that the manually programmed version may be used instead of the machine learning generated version" and in col. 4, lines 25-43: "the technology may identify attributes of items to determine corollary or matching items. For instance, the technology may combine algorithmic item matching with user assessments to build a catalog of similar items. The technology may automatically provide matching items based on a match class representing the level of similarity of the items." However, neither this reference alone nor any combination of the references of record was determined to teach the claimed invention.

The closest prior art for claim 12 was determined to be Misra (cited above), which discloses at [0072] that the exemplary techniques can reduce time, errors, and/or cost associated in creating labeled data sets for supervised predictive models, and at [0022] that unsupervised models can be utilized to create feature sets that allow the creation of automated labels for large sets of documents, including textual documents. However, neither this reference alone nor any combination of the references of record was determined to teach the claimed invention.

The closest NPL was found to be "A Clustering-Based Combinatorial Approach to Unsupervised Matching of Product Titles," which discloses (Abstract): The constant growth of the e-commerce industry has rendered the problem of product retrieval particularly important. As more enterprises move their activities on the Web, the volume and the diversity of the product-related information increase quickly. These factors make it difficult for the users to identify and compare the features of their desired products.
Recent studies proved that the standard similarity metrics cannot effectively identify identical products, since similar titles often refer to different products and vice-versa. Other studies employed external data sources (search engines) to enrich the titles; these solutions are rather impractical mainly because the external data fetching is slow. In this paper we introduce UPM, an unsupervised algorithm for matching products by their titles. However, neither this reference alone nor any combination of the references of record was determined to teach the claimed invention.

Therefore, none of the cited references disclose or render obvious each and every feature of the claimed invention, and the claimed invention is determined to be free of the prior art. Although the claimed features could individually be taught, any combination of references teaching the claimed limitations would rest on a piecemeal analysis, since the references would only be combined and deemed obvious based on knowledge gleaned from the applicant's disclosure. Such a reconstruction is improper (i.e., hindsight reasoning). See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971). The examiner emphasizes that it is the interrelationship of the limitations that renders these claims free of the prior art/additional art. Therefore, it is hereby asserted by the Examiner that, in light of the above, the claims are free of prior art, as the references do not anticipate the claims and do not render them obvious, even with further modification, to a person of ordinary skill in the art.

Relevant Art Not Cited

Szarvas (US 10909442) discloses: Using the text sections as input, a machine learning model which includes respective portions corresponding to the different perspectives is trained to reconstruct the input using intermediary descriptors learned from the input. An indication that a second text source is recommended with respect to a first text source is generated using a set of the learned descriptors and transmitted.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTORIA E. FRUNZI, whose telephone number is (571) 270-1031. The examiner can normally be reached Monday-Friday, 7-4 (EST).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Marissa Thein, can be reached at (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VICTORIA E. FRUNZI/
Primary Examiner, Art Unit 3689
3/17/2026
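The Verma passages quoted in the claim 10 rejection describe feeding attribute comparisons for a pair of items into a trained classifier and then mapping a normalized match score to a match-class descriptor via thresholds. The score-and-threshold step can be sketched in a few lines; the weights, thresholds, class names, and sample items below are invented for illustration, and Verma describes a trained random forest rather than fixed hand-set weights:

```python
def match_score(item_a: dict, item_b: dict, weights: dict) -> float:
    """Weighted fraction of matching attribute values, normalized to [0, 1]."""
    total = sum(weights.values())
    matched = sum(w for attr, w in weights.items()
                  if item_a.get(attr) == item_b.get(attr))
    return matched / total if total else 0.0

def match_class(score: float) -> str:
    """Map a normalized match score to a match-class descriptor by threshold."""
    if score >= 0.9:
        return "identical"
    if score >= 0.6:
        return "similar"
    if score >= 0.3:
        return "related"
    return "no-match"

# Hypothetical catalog items with comparable attributes
a = {"brand": "Acme", "size": "large", "flavor": "classic"}
b = {"brand": "Acme", "size": "large", "flavor": "spicy"}
weights = {"brand": 2.0, "size": 1.0, "flavor": 1.0}

print(match_class(match_score(a, b, weights)))  # prints "similar" (score 0.75)
```

Swapping the fixed weights for a learned model emitting per-class probabilities would recover the shape of Verma's pipeline while keeping the thresholding step unchanged.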
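The triplestore at issue in the claim 11 rejection, as the quoted Hertz passage describes, stores knowledge as (subject, predicate, object) triples that can be queried by pattern. A minimal, stdlib-only sketch with hypothetical catalog data (an illustration of the data shape only, not Hertz's RDF/SPARQL implementation):

```python
from collections import defaultdict

class TripleStore:
    """Toy knowledge base of (subject, predicate, object) triples."""

    def __init__(self):
        self.triples = set()
        self.by_predicate = defaultdict(set)  # simple index for predicate queries

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))
        self.by_predicate[predicate].add((subject, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Pattern match; None acts as a wildcard (like a SPARQL variable)."""
        candidates = (
            {(s, predicate, o) for s, o in self.by_predicate[predicate]}
            if predicate is not None else self.triples
        )
        return [
            (s, p, o) for s, p, o in candidates
            if (subject is None or s == subject) and (obj is None or o == obj)
        ]

# Hypothetical facts linking shorthand item descriptors to canonical items
kb = TripleStore()
kb.add("LG CHZ BRGR", "expandsTo", "Large Cheeseburger")
kb.add("Large Cheeseburger", "inCategory", "Sandwiches")
kb.add("SM FF", "expandsTo", "Small French Fries")

print(kb.query(predicate="expandsTo"))
```

A real deployment would use an RDF framework and SPARQL, as Hertz suggests; the point here is only that the triple shape makes queries such as "which shorthand descriptors expand to which items?" a single pattern match.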
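The cited NPL concerns unsupervised matching of product titles. As a toy illustration (deliberately not the paper's UPM algorithm), token-set Jaccard similarity shows the basic idea, and also the weakness the abstract notes, since similar titles can still denote different products:

```python
def tokens(title: str) -> set:
    return set(title.lower().split())

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two product titles."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def best_match(query: str, candidates: list, threshold: float = 0.5):
    """Return the candidate title most similar to the query, if above threshold."""
    scored = [(jaccard(query, c), c) for c in candidates]
    score, title = max(scored)
    return title if score >= threshold else None

catalog = ["large cheeseburger meal", "small cheeseburger", "large fries"]
print(best_match("large cheeseburger", catalog))  # prints "large cheeseburger meal"
```

The abstract's caveat shows up immediately: "small cheeseburger" scores nonzero against "large cheeseburger" despite being a different product, which is why the paper's UPM algorithm goes beyond plain similarity metrics.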

Prosecution Timeline

Aug 02, 2024
Application Filed
Mar 17, 2026
Non-Final Rejection — §101, §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561733
DYNAMICALLY PRESENTING AUGMENTED REALITY CONTENT GENERATORS BASED ON DOMAINS
2y 5m to grant Granted Feb 24, 2026
Patent 12524795
SINGLE-SELECT PREDICTIVE PLATFORM MODEL
2y 5m to grant Granted Jan 13, 2026
Patent 12518309
SYSTEMS AND METHODS FOR REDUCING PERSONALIZED REAL ESTATE COLLECTION SUGGESTION DELAYS VIA BATCH GENERATION
2y 5m to grant Granted Jan 06, 2026
Patent 12417481
SYSTEMS AND METHODS FOR AUTOMATING CLOTHING TRANSACTION
2y 5m to grant Granted Sep 16, 2025
Patent 11810156
SYSTEMS, METHODS, AND DEVICES FOR COMPONENTIZATION, MODIFICATION, AND MANAGEMENT OF CREATIVE ASSETS FOR DIVERSE ADVERTISING PLATFORM ENVIRONMENTS
2y 5m to grant Granted Nov 07, 2023
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
24%
Grant Probability
48%
With Interview (+23.8%)
4y 3m
Median Time to Grant
Low
PTA Risk
Based on 284 resolved cases by this examiner. Grant probability derived from career allow rate.
