Prosecution Insights
Last updated: April 19, 2026
Application No. 18/479,737

SPARSE INTENT CLUSTERING THROUGH DEEP CONTEXT ENCODERS

Non-Final OA: §101 §102 §103 §112 §DP
Filed
Oct 02, 2023
Examiner
NGUYEN, LOAN T
Art Unit
2165
Tech Center
2100 — Computer Architecture & Software
Assignee
ADP, Inc.
OA Round
2 (Non-Final)
65%
Grant Probability
Favorable
2-3
OA Rounds
4y 1m
To Grant
88%
With Interview

Examiner Intelligence

Grants 65% — above average
65%
Career Allow Rate
223 granted / 343 resolved
+10.0% vs TC avg
Strong +24% interview lift
Without
With
+23.5%
Interview Lift
resolved cases with interview
Typical timeline
4y 1m
Avg Prosecution
30 currently pending
Career history
373
Total Applications
across all art units
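The headline figures in this panel are simple ratios of the career counts shown above; a quick sketch reproduces them (the Tech Center average of roughly 55% is inferred from the displayed "+10.0% vs TC avg" delta, not stated directly in the panel):

```python
# Reproduce the panel's headline examiner statistics from its raw counts.
granted = 223            # career allowances (from "223 granted / 343 resolved")
resolved = 343
tc_avg_allow = 0.55      # assumption: inferred from the displayed "+10.0% vs TC avg"

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.0%}")                    # 65%
print(f"Delta vs TC average: {career_allow_rate - tc_avg_allow:+.1%}")  # +10.0%
```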

Statute-Specific Performance

§101
15.8%
-24.2% vs TC avg
§103
44.9%
+4.9% vs TC avg
§102
17.0%
-23.0% vs TC avg
§112
17.2%
-22.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 343 resolved cases

Office Action

§101 §102 §103 §112 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This application claims the benefit of priority under 35 U.S.C. § 120 as a continuation of U.S. Patent Application No. 16/983,146, filed August 3, 2020. This communication is responsive to the amendment filed on 09/03/2025.

Status of claims: Claims 1-20 were canceled. Claims 21, 27 and 33 are amended. Claims 21-40 are presented for examination.

Response to Arguments

Applicant's arguments with respect to the amended and newly added limitations have been considered in the analysis of the rejections below.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

Claims 21-40 of the application are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-8, 10-14, and 16-20 of US Patent No. 11,775,408 B2, which contain lesser limitations. The difference in scope between the claimed subject matters is de minimis and unrelated to the overall appearance of the claims being compared. In addition, it would have been obvious to a person of ordinary skill in the art at the time the invention was made to remove limitations from the claims for the purpose of extending the breadth of intended usage of the invention. Although the claims at issue are not identical, they are not patentably distinct from each other.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 21-40 are rejected under 35 U.S.C.
101 because the claimed invention is directed to non-statutory subject matter.

Step 1: Claims 21, 33 and 39 are directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter.

Step 2A, Prong One: The claims recite the following limitations directed to an abstract idea: "determine…; identify…; identify…" are processes that, under their broadest reasonable interpretation, cover a mental process as a form of evaluation or judgment, but for the recitation of generic computer components. That is, nothing in the claim elements precludes the steps from practically being performed in a human mind. For example, the limitations "determine…; identify…; identify…", in the context of the claims, encompass steps that one can perform manually or mentally with the aid of pen and paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas.

Step 2A, Prong Two: This judicial exception is not integrated into a practical application. The claim recites the additional elements "receive…; transmit...", which represent extra-solution activity because they are mere nominal or tangential additions to the claim, a mere generic transmission and presentation of collected and analyzed data (see MPEP 2106.05(g)).

Step 2B: The claim recites the additional elements "receive…; transmit...". These are identified as insignificant extra-solution activity above; when re-evaluated, these elements are well-understood, routine, and conventional as evidenced by the court cases in MPEP 2106.05(d)(II), "i.
Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); … OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network);" and thus remain insignificant extra-solution activity that does not provide significantly more. The additional elements "a memory; a non-transitory computer-readable storage medium" are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic component (see MPEP 2106.05(f)). These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer (see MPEP 2106.05(h)). "A trained neural network" is a mere implementation using a computer; it at best generally links the abstract idea to a particular field of use or technological environment of machine learning (see MPEP 2106.05(h)). Looking at the claims as a whole does not change this conclusion, and the claims are ineligible under 101.

As per claims 23-27, 31, 34-38 and 40: the recited limitations are identified as insignificant extra-solution activity; when re-evaluated, these elements are well-understood, routine, and conventional as evidenced by the court cases in MPEP 2106.05(d)(II), "i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); … OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v.
Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network);" and thus remain insignificant extra-solution activity that does not provide significantly more. The additional limitations are identified as insignificant extra-solution activity; when re-evaluated, these elements are well-understood, routine, and conventional as evidenced by the court cases in MPEP 2106.05(d)(II), "iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334; i. … transmitting data over a network, … Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); … OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350."

Claims 28-30 recite limitations which, under their broadest reasonable interpretation, cover a mental process as a form of evaluation or judgment, but for the recitation of generic computer components. That is, nothing in the claim elements precludes the steps from practically being performed in a human mind. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same and shall set forth the best mode contemplated by the inventor of carrying out the invention.
Claims 21-40 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 21, 33 and 39 recite the amended limitation "a trained neural network", which is not mentioned in the specification. Appropriate correction is required.

Claims 21, 29, 33 and 39 recite the amended limitation "multi-dimensional vector space", which is not mentioned in the specification. Appropriate correction is required.

Claims 21, 28-29, 33 and 39 recite the amended limitation "a/cosine similarity metric(s)", which is not mentioned in the specification. Appropriate correction is required.

All dependent claims are rejected for the same reasons given in their respective parent claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 21-40 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 21, 33 and 39 are directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter.

Step 2A, Prong One: "determine …" is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, but for the recitation of generic computer components.
"identify …; identify …" is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, but for the recitation of generic computer components. That is, other than reciting "processor and memory", nothing in the claim elements precludes the steps from practically being performed in a human mind. For example, but for the "processor and memory" language, "identify", in the context of these claims, encompasses the user mentally, or manually with the aid of pen and paper, identifying a vector cluster labeled with an intent that matches the intent of the electronic report. If a claim limitation, under its broadest reasonable interpretation, covers mental processes but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas (concepts performed in the human mind including an observation, evaluation, judgment, and opinion). Accordingly, the claim recites an abstract idea.

Step 2A, Prong Two: This judicial exception is not integrated into a practical application. The additional element "receive…" amounts to a data-gathering step, which is considered to be insignificant extra-solution activity (see MPEP 2106.05(g)). "transmit…display…" represents extra-solution activity because it is a mere nominal or tangential addition to the claim, a mere generic transmission and presentation of collected and analyzed data (see MPEP 2106.05(g)).

Step 2B: Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The insignificant extra-solution activities identified above, which include the data-gathering and presenting steps, are recognized by the courts as well-understood, routine, and conventional activities when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d)(II): (i) Receiving or transmitting data over a network, e.g., using the Internet to gather data, buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); (v) Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). The claim is not patent eligible. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the obtaining and displaying steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims, as a whole, do not amount to significantly more than the abstract idea itself, because the claims do not effect an improvement to the functioning of a computer itself, and the claims do not move beyond a general link of the use of an abstract idea to a particular technological environment.

The dependent claims 22-32, 34-38 and 40, when analyzed and each taken as a whole, are held to be patent ineligible under 35 USC 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea. Claims 22-31: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The insignificant extra-solution activities identified above, which include the data-gathering and presenting steps, are recognized by the courts as well-understood, routine, and conventional activities when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d)(II): (i) Receiving or transmitting data over a network, e.g., using the Internet to gather data, buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); (v) Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). The claim is not patent eligible. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the obtaining and displaying steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The same rationale applies to claims 34-38.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 21-22, 24, 28-34, 36, and 39-40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Jayaraman et al. (US 10,459,962), hereinafter "Jayaraman".
As per claim 21, Jayaraman discloses a system, comprising: one or more processors coupled with memory, the one or more processors configured to:

- receive, from a client device, an instruction to create an electronic report based on a title and a description (col. 31 lines 33-42, learned paragraph vectors can be used as inputs into other supervised learning models, such as sentiment prediction models, wherein paragraph vectors are used as input with a corresponding sentiment label as output, and other metrics such as cosine similarity and nearest neighbors clustering algorithms can be applied to paragraph vectors to find or group paragraphs on similar topics within the corpus of paragraphs; and col. 34 lines 42-56, a first subset of fields of the incident report and additional incident reports in a corpus of incident reports is used to generate a first encoder that produces word vectors for words present in the specified first subset of fields, wherein the first subset of fields includes the "SHORT DESCRIPTION" (which may represent a user-provided "title" for the incident report), "PROBLEM DESCRIPTION," "CATEGORY," and "RESOLUTION" fields);

- determine, via a trained neural network, an intent of the electronic report based on the title and the description (col. 33 lines 12-44, identify clusters of related incident reports within the database, wherein matching paragraph vectors in the database are determined according to specific criteria, wherein a user creates a new incident report in the system, where the vector representation encodes a contextual meaning; and col.
34 lines 42-56, how text strings from a first subset of the fields can be used to generate word vectors for an incident report while one or more additional subsets of the fields can be used to generate one or more paragraph vectors for the incident report);

- identify a plurality of vector clusters that group, via the trained neural network, a plurality of vectors of features of electronic reports based on cosine similarity metrics computed between the plurality of vectors in a multi-dimensional vector space, wherein each of the plurality of vector clusters is labeled with a corresponding intent based on corresponding vectors of the plurality of vectors (col. 33 lines 12-44, identify clusters of related incident reports within the database, wherein matching paragraph vectors in the database are determined according to specific criteria; and col. 34 lines 42-56, how text strings from a first subset of the fields can be used to generate word vectors for an incident report while one or more additional subsets of the fields can be used to generate one or more paragraph vectors for the incident report; col. 35 lines 38-40, generate clusters of incident reports based on the word and paragraph vectors; col. 33 lines 29-44, a cosine similarity (or any other similarity metric) could be determined between the aggregate vectors in order to identify clusters of related incident reports within the database; col. 34 lines 3-65, a first subset of fields of the incident report and additional incident reports in a corpus of incident reports is used to generate a first encoder that produces word vectors for words present in the specified first subset of fields, wherein SELECTIVE GENERATION OF WORD AND PARAGRAPH VECTORS by training a neural network that includes the encoder as part of an input layer, wherein the word vectors are multi-dimensional vectors that represent words in a corpus of text and that are embedded in a semantically-encoded vector space; and col.
33 lines 39-51, a cosine similarity (or any other similarity metric) could be determined between the aggregate vectors in order to identify clusters of related incident reports within the database, to identify relevant incident reports related to the input incident report);

- identify a vector cluster of a plurality of vector clusters having a labeled intent that matches the intent determined for the electronic report (col. 33 lines 12-44, identify clusters of related incident reports within the database, wherein matching paragraph vectors in the database are determined according to specific criteria; and col. 34 lines 42-56, how text strings from a first subset of the fields can be used to generate word vectors for an incident report while one or more additional subsets of the fields can be used to generate one or more paragraph vectors for the incident report), wherein the plurality of vector clusters is grouped according to cosine similarities between vectors generated via a neural network from features of electronic reports stored in a database (col. 35 lines 38-40, generate clusters of incident reports based on the word and paragraph vectors; and col. 33 lines 29-44, a cosine similarity (or any other similarity metric) could be determined between the aggregate vectors in order to identify clusters of related incident reports within the database; and col. 34 lines 3-65, a first subset of fields of the incident report and additional incident reports in a corpus of incident reports is used to generate a first encoder that produces word vectors for words present in the specified first subset of fields, wherein SELECTIVE GENERATION OF WORD AND PARAGRAPH VECTORS by training a neural network that includes the encoder as part of an input layer, wherein the word vectors are multi-dimensional vectors that represent words in a corpus of text and that are embedded in a semantically-encoded vector space; and col.
33 lines 39-51, a cosine similarity (or any other similarity metric) could be determined between the aggregate vectors in order to identify clusters of related incident reports within the database, to identify relevant incident reports related to the input incident report); and

- transmit, to the client device, data to cause the client device to display one or more features from the vector cluster for inclusion in the electronic report (col. 33 lines 12-26, a user creates a new incident report in the system, wherein the input incident may have been typed into a web interface by a user and at a minimum would include a short problem description of an incident. This short problem description (and/or some other field(s) of the incident report) is passed to the ANN to produce a paragraph vector representation of the new incident text, where the vector representation encodes a contextual meaning).

As per claim 22, Jayaraman discloses the one or more processors are further configured to: generate the plurality of vector clusters from the electronic reports previously generated in response to instructions from the client device that provides the instruction to create the electronic report (col. 35 lines 38-40, generate clusters of incident reports based on the word and paragraph vectors; col. 33 lines 29-44, a cosine similarity (or any other similarity metric) could be determined between the aggregate vectors in order to identify clusters of related incident reports within the database; and col. 33 lines 12-26, a user creates a new incident report in the system, wherein the input incident may have been typed into a web interface by a user and at a minimum would include a short problem description of an incident. This short problem description (and/or some other field(s) of the incident report) is passed to the ANN to produce a paragraph vector representation of the new incident text, where the vector representation encodes a contextual meaning).
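The anticipation mapping above turns on grouping report vectors by cosine similarity and matching a new report's vector against labeled clusters. For readers less familiar with the technology, a minimal illustrative sketch of that idea follows; the toy vectors, threshold, and intent labels are invented for illustration and are not taken from the claims or from Jayaraman:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def assign_to_cluster(vector, centroids, threshold=0.8):
    """Return the label of the most similar cluster centroid, or None when no
    centroid meets the similarity threshold (a new cluster would then be started)."""
    best_label, best_sim = None, threshold
    for label, centroid in centroids.items():
        sim = cosine_similarity(vector, centroid)
        if sim >= best_sim:
            best_label, best_sim = label, sim
    return best_label

# Toy "paragraph vectors" for two intent-labeled clusters (invented for illustration).
centroids = {"password_reset": [0.9, 0.1, 0.0], "vpn_issue": [0.1, 0.9, 0.2]}
new_report_vector = [0.85, 0.15, 0.05]
print(assign_to_cluster(new_report_vector, centroids))  # password_reset
```

Any similarity metric could be substituted for cosine similarity here, which is the point the cited passage (col. 33) makes as well.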
As per claim 24, Jayaraman discloses the one or more processors are further configured to: generate the plurality of vectors via the trained neural network comprising an autoencoder neural network (abstract and col. 34 lines 3-65, SELECTIVE GENERATION OF WORD AND PARAGRAPH VECTORS by training a neural network that includes the encoder as part of an input layer, wherein the word vectors are multi-dimensional vectors that represent words in a corpus of text and that are embedded in a semantically-encoded vector space).

As per claim 28, Jayaraman discloses the one or more processors are further configured to: cluster the plurality of vectors into the plurality of vector clusters based on the cosine similarity metrics being greater than or equal to a threshold (col. 22 lines 51-52, W8=0.56137012; and col. 33 lines 29-44, a cosine similarity (or any other similarity metric) could be determined between the aggregate vectors in order to identify clusters of related incident reports within the database).

As per claim 29, Jayaraman discloses the one or more processors are further configured to: project the plurality of vectors into the multi-dimensional vector space having a predetermined number of dimensions (abstract; col. 33 lines 29-44, a cosine similarity (or any other similarity metric) could be determined between the aggregate vectors in order to identify clusters of related incident reports within the database; col. 34 lines 3-65, SELECTIVE GENERATION OF WORD AND PARAGRAPH VECTORS by training a neural network that includes the encoder as part of an input layer, wherein the word vectors are multi-dimensional vectors that represent words in a corpus of text and that are embedded in a semantically-encoded vector space); and cluster the plurality of vectors projected into the multi-dimensional vector space according to the cosine similarity metrics to create the plurality of vector clusters (col.
35 lines 38-40, generate clusters of incident reports based on the word and paragraph vectors; col. 33 lines 12-44, a cosine similarity could be determined between the aggregate vectors in order to identify clusters of related incident reports within the database, wherein a user creates a new incident report in the system, wherein the vector representation encodes a contextual meaning).

As per claim 30, Jayaraman discloses the one or more features comprise at least one of a field or a filter (col. 32 lines 40-49 and col. 34 lines 42-65, using a user filter to generate a first encoder that produces word vectors for words present in the specified first subset of fields).

As per claim 31, Jayaraman discloses the one or more processors are further configured to: receive, from the client device, a selection of a first feature from the one or more features (col. 26 lines 34-40, a word co-occurrence matrix can be decomposed into two much smaller matrices, each containing vector representations of words; col. 32 lines 40-49 and col. 34 lines 42-65, using a user filter to generate a first encoder that produces word vectors for words present in the specified first subset of fields); and create the electronic report with the first feature that is selected (col. 26 lines 34-40, a word co-occurrence matrix can be decomposed into two much smaller matrices, each containing vector representations of words; col. 32 lines 40-49 and col. 34 lines 42-65, using a user filter to generate a first encoder that produces word vectors for words present in the specified first subset of fields).

As per claim 32, Jayaraman discloses the one or more processors are further configured to: generate the plurality of vector clusters from the electronic reports previously generated by a same user of the client device in response to the instruction to create the electronic report (col. 35 lines 38-40, generate clusters of incident reports based on the word and paragraph vectors; and col.
33 lines 12-44, a user creates a new incident report in the system, wherein the input incident may have been typed into a web interface by a user and at a minimum would include a short problem description of an incident).

Claims 33-34 and 36 are method claims corresponding to the system of claims 21-22 and 24. Therefore, they are rejected under the same rationale as claims 21-22 and 24 above. Claims 39-40 are non-transitory computer-readable medium claims corresponding to claims 21-22. Therefore, they are rejected under the same rationale as claims 21-22 above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 23, 25-27, 35, and 37-38 are rejected under 35 U.S.C. 103 as being unpatentable over Jayaraman in view of Liu (US 2020/0089808 A1).

As per claim 23, Jayaraman discloses the invention as claimed, except "the vectors comprising compressed float vectors". Meanwhile, Liu discloses generating, via the trained neural network, the plurality of vectors comprising compressed float vectors (par. [0030], the semantic-signature-generator model comprises a neural network, and par. [0058], the vector representation in the whole vector space may be partitioned into multi-clusters and use a binary code to denote the cluster id.
The quantization may then be done for the float point vectors that fall into each cluster. The binary code of the cluster id plus the binary code of the quantization result are used as the converted final binary hashing signatures). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Jayaraman to include the features disclosed by Liu in order to access and retrieve search results in a faster, more efficient manner.

As per claim 25, Jayaraman discloses the invention as claimed, except "generate the vectors based on one or more multi-dimensional float vectors comprising floating point numbers with fractional parts that represent the electronic reports stored in the database". Meanwhile, Liu discloses generating the plurality of vectors based on one or more multi-dimensional float vectors comprising floating point numbers with fractional parts that represent the electronic reports stored in a database (par. [0031], a vector representation of an inventory listing may be generated by a deep learning model that projects a sequence of symbols, such as phrases, sentences, etc., into a multi-dimensional vector space, sometimes referred to as a semantic space; and par. [0058], the vector representation in the whole vector space may be partitioned into multi-clusters and use a binary code to denote the cluster id. The quantization may then be done for the float point vectors that fall into each cluster. The binary code of the cluster id plus the binary code of the quantization result are used as the converted final binary hashing signatures). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Jayaraman to include the features disclosed by Liu in order to access and retrieve search results in a faster, more efficient manner.
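Liu's cited paragraph [0058] describes converting float vectors into binary hashing signatures: a binary code for the cluster id concatenated with a binary quantization of the vector's components. A simplified sketch of that idea follows; the one-bit sign quantization, bit widths, and example vector are assumptions for illustration, not Liu's actual scheme:

```python
def binary_signature(vector, cluster_id, n_cluster_bits=4):
    """Concatenate a binary cluster id with a one-bit-per-component quantization
    of a float vector (sign quantization is an assumed, simplified scheme)."""
    cluster_bits = format(cluster_id, f"0{n_cluster_bits}b")
    component_bits = "".join("1" if x >= 0 else "0" for x in vector)
    return cluster_bits + component_bits

def hamming_distance(sig_a, sig_b):
    """Number of differing bits; small distances suggest semantically close vectors."""
    return sum(a != b for a, b in zip(sig_a, sig_b))

sig = binary_signature([0.7, -0.2, 0.05, -0.9], cluster_id=3)
print(sig)  # "00111010": cluster id 3 -> 0011, component signs -> 1010
```

Comparing such fixed-length binary signatures by Hamming distance is what makes retrieval "faster, more efficient" than comparing the original float vectors, which is the motivation the rejection cites.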
As per claim 26, Jayaraman discloses the invention as claimed, except "the features of the electronic reports stored in the database into one or more binary vectors." Meanwhile, Liu discloses encoding the features of the electronic reports stored in a database into one or more binary vectors (par. [0031], training of a deep neural network (DNN) or a convolutional neural network (CNN) may be performed using stored behavioral data, such as click-through and purchase data for an online marketplace; thus, vector representations having a close relationship will be semantically relevant based on stored behavioral data); and inputting the one or more binary vectors into the trained neural network to generate the plurality of vectors (pars. [0032] and [0058]-[0059], the set of binary numbers may be generated as a binary hashing signature of the vector representation; a search query binary hashing signature of the search query vector representation and an inventory binary hashing signature of the inventory vector representation will have a close distance). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Jayaraman to include the features disclosed by Liu in order to access and retrieve search results in a faster, more efficient manner.

As per claim 27, Jayaraman discloses the invention as claimed, except "encode the features into the one or more binary vectors via one-hot encoding a number of fields corresponding to the features." Meanwhile, Liu discloses encoding the features into the one or more binary vectors via one-hot encoding a number of fields corresponding to the features (pars. [0032]-[0033] and [0035]).
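The claim 27 limitation (one-hot encoding a number of fields into a binary vector) admits a simple sketch: each categorical field of a report contributes a block of bits with exactly one bit set, and the blocks are concatenated. The field names and vocabularies below are hypothetical examples, not drawn from Jayaraman or Liu.

```python
# Hypothetical report fields and their categorical vocabularies.
FIELDS = {
    'severity': ['low', 'medium', 'high'],
    'category': ['network', 'software', 'hardware'],
}

def one_hot_encode(report):
    """Concatenate a one-hot block per field into a single binary feature vector."""
    vector = []
    for field, vocab in FIELDS.items():
        block = [0] * len(vocab)          # one bit per vocabulary entry
        block[vocab.index(report[field])] = 1  # set the bit for this report's value
        vector.extend(block)
    return vector
```

For example, a report with `severity='high'` and `category='network'` would encode to `[0, 0, 1, 1, 0, 0]`: one set bit per field, suitable as the binary-vector input the claim recites.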
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Jayaraman to include the features disclosed by Liu in order to access and retrieve search results in a faster, more efficient manner.

Claims 35 and 37-38 are method claims corresponding to system claims 23 and 25-27. Therefore, they are rejected under the same rationale as claims 23 and 25-27 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOAN T NGUYEN whose telephone number is (571) 270-3103. The examiner can normally be reached Monday from 10:00 am - 6:00 pm and Thursday-Friday from 10:00 am - 2:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aleksandr Kerzhner, can be reached at (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-270-4103.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

1/24/2026
/LOAN T NGUYEN/
Examiner, Art Unit 2165

Prosecution Timeline

Oct 02, 2023
Application Filed
Mar 03, 2025
Non-Final Rejection — §101, §102, §103
Jul 10, 2025
Interview Requested
Aug 28, 2025
Applicant Interview (Telephonic)
Aug 28, 2025
Examiner Interview Summary
Sep 03, 2025
Response Filed
Feb 03, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602364
Scalable Object Storage
2y 5m to grant Granted Apr 14, 2026
Patent 12536370
ARBITRARY SIZE CONTENT ITEM GENERATION
2y 5m to grant Granted Jan 27, 2026
Patent 12517792
PROVIDING STATUS OF DATA STORAGE OPERATIONS WITHIN AN INFORMATION MANAGEMENT SYSTEM
2y 5m to grant Granted Jan 06, 2026
Patent 12517952
SEMI-STRUCTURED DATA DECOMPOSITION
2y 5m to grant Granted Jan 06, 2026
Patent 12511256
MULTI-SERVICE BUSINESS PLATFORM SYSTEM HAVING CUSTOM OBJECT SYSTEMS AND METHODS
2y 5m to grant Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

2-3
Expected OA Rounds
65%
Grant Probability
88%
With Interview (+23.5%)
4y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 343 resolved cases by this examiner. Grant probability derived from career allow rate.
