Prosecution Insights
Last updated: April 19, 2026
Application No. 17/743,996

APPARATUS FOR AUTOMATIC POSTING ACCEPTANCE

Non-Final OA (§101, §103)
Filed: May 13, 2022
Examiner: SENSENIG, SHAUN D
Art Unit: 3629
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Stynt, Inc.
OA Round: 11 (Non-Final)
Grant Probability: 14% (At Risk)
OA Rounds: 11-12
To Grant: 5y 2m
With Interview: 31%

Examiner Intelligence

Grants only 14% of cases: Career Allow Rate 14% (58 granted / 400 resolved; -37.5% vs TC avg)
Strong interview lift: +16.6% allow-rate lift for resolved cases with an interview
Typical timeline: 5y 2m avg prosecution (29 applications currently pending)
Career history: 429 total applications across all art units

Statute-Specific Performance

§101: 31.4% (-8.6% vs TC avg)
§103: 38.3% (-1.7% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 18.0% (-22.0% vs TC avg)
TC averages are estimates • Based on career data from 400 resolved cases
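The panel figures above are simple aggregates over an examiner's resolved cases: an allow rate is granted/resolved, and the interview lift is the allow-rate difference between interviewed and non-interviewed cases. A minimal sketch of that arithmetic; the record layout and the toy counts are hypothetical, not this examiner's actual case data:

```python
# Hypothetical resolved-case records; only two boolean fields are needed
# to reproduce the two dashboard ratios.
def allow_rate(cases):
    """Fraction of cases that ended in a grant."""
    if not cases:
        return 0.0
    return sum(1 for c in cases if c["granted"]) / len(cases)

def interview_lift(cases):
    """Allow-rate difference: cases with an interview minus cases without."""
    with_iv = [c for c in cases if c["interview"]]
    without_iv = [c for c in cases if not c["interview"]]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Toy data: 40 resolved cases, 10 granted overall; 4/10 granted with an
# interview vs. 6/30 without.
cases = (
    [{"granted": True,  "interview": True}] * 4
    + [{"granted": False, "interview": True}] * 6
    + [{"granted": True,  "interview": False}] * 6
    + [{"granted": False, "interview": False}] * 24
)
print(allow_rate(cases))                 # 0.25
print(round(interview_lift(cases), 2))   # 0.2
```

The dashboard's "+16.6%" lift is the same subtraction computed over the examiner's 400 resolved cases.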

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Status

This action is in response to papers filed on 2/9/2026. Claims 1 and 11 have been amended. Claims 8, 9, 18, and 19 have been cancelled. No claims have been added. Claims 1-7, 10-17, and 20 are pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7, 10-17, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: The claims are directed to a process (method, as introduced in Claim 11) and/or an apparatus (Claim 1); thus Claims 1-7, 10-17, and 20 fall within one of the four statutory categories. See MPEP 2106.03.

Step 2A, Prong 1: The claimed invention recites an abstract idea according to MPEP §2106.04. The independent claims recite the following limitations, which constitute the abstract idea.
Claims 1 and 11 recite (as represented by the language of Claim 1):

receive a plurality of postings from a plurality of hosting [sources] websites for job listings, wherein receiving each posting comprises: communicating a retrieval request for the posting to at least a hosting [source] website for job listings; and receiving the posting from the at least a hosting [source] website for job listings in response to the retrieval request;

extract relevant data from each posting of the plurality of postings, wherein extract relevant data from each posting comprises: identifying unnecessary data within the posting based on the retrieval request; and extracting the relevant data from the posting based on an identification of the unnecessary data;

for each posting of the plurality of postings, determine at least an acceptance datum, as a function of the extracted relevant data, wherein the at least an acceptance datum comprises a required set of qualifications for automatic posting acceptance that includes a narrower set of qualifications than the posting and determining the acceptance datum comprises: retrieving, from at least a candidate acceptance database, a plurality of candidate acceptances and associated keywords, wherein each candidate acceptance of the plurality of candidate acceptances is (i) associated with at least a keyword of the plurality of keywords; and (ii) represents a candidate datum of at least a current employee; for each candidate acceptance of the plurality of candidate acceptances, comparing the relevant extracted data from the posting with the at least a keyword associated with the candidate acceptance; selecting at least a candidate acceptance representing a candidate datum of at least a current employee, as a function of the comparison between the plurality of keywords and the relevant extracted data from the posting; and determining the at least an acceptance datum for the posting from the selected at least a candidate acceptance;

receive, from a user interface, a user input comprising a plurality of candidate datums;

for each candidate datum of the plurality of candidate datums, extract the candidate datum from the user input;

for each candidate datum of the plurality of candidate datums, classify the candidate datum using a classifier machine learning model to the at least an acceptance datum for a matching posting of the at least a plurality of postings, wherein classifying each candidate datum comprises: training the classifier machine learning model using classifier training data comprising a plurality of data entries correlating candidate datum elements as inputs to acceptance datum elements associated with the matching posting as outputs, wherein the candidate datum elements comprise wage index elements derived from the candidate datum, wherein the wage index elements comprise a geographical wage index comprising a range of compensation calculated as a function of a geographic area; and applying the trained classifier machine learning model to the candidate datum to generate, for each candidate datum, a corresponding acceptance datum for the matching posting;

for each candidate datum of the plurality of candidate datums, determine an acceptance compatibility score between the candidate datum and each posting of the plurality of postings, wherein determining the acceptance compatibility score comprises: filtering compatibility training data using a training data classifier; training a compatibility machine learning model using the filtered compatibility training data, wherein the filtered compatibility training data comprises a plurality of data entries correlating acceptance datum elements and posting data elements as inputs to acceptance compatibility score elements as outputs; and applying the trained compatibility machine learning model to the acceptance datum for the candidate datum and the extracted relevant data from each posting to generate the acceptance compatibility score for each posting of the plurality of postings;

for each candidate datum of the plurality of candidate datums, display in real time, using a graphical user interface, the acceptance compatibility score for each posting of the plurality of postings on a graphical user interface; and

where a candidate datum of the plurality of candidate datums is classified to an acceptance datum of a posting of the plurality of postings: automatically accept a candidate associated with the classified candidate datum by automatically notifying, using a remote device, an employer associated with the posting that the candidate is eligible to be automatically hired; and upload [save] the classified candidate datum and the relevant data from the associated posting to the candidate acceptance database.

The claim limitations set out above, as drafted, recite a process that, under its broadest reasonable interpretation, covers the performance of commercial interactions (including advertising, marketing or sales activities or behaviors; business relations) in the form of recruiting (including, but not limited to, employment recruiting as discussed in Applicant's specification). Other than reciting a computer implementation, nothing in the claim elements precludes the steps from encompassing the performance of commercial or legal interactions, which represents the abstract idea of certain methods of organizing human activity. But for the recitation of generic computer system components, the claimed invention merely recites a process for comparing candidate data to job data to determine matches and accept candidates.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application.
In particular, the claims recite additional elements such as: an apparatus for automatic posting acceptance, wherein the apparatus comprises: at least a processor; and a memory communicatively connected to the at least a processor, the memory containing instructions; [receiving, extracting, and retrieving data] by the at least a processor; [automatically accepting and notifying entities] by the at least a processor; websites [for hosting data]; a candidate acceptance database [for retrieving, storing, and uploading data]; a classifier machine learning model and machine learning process trained using training data configured to correlate the classified candidate datum inputs to acceptance datum elements associated with the matching posting as outputs (including using a training data classifier for filtering data for comparison to the correlated data); applying the trained classifier machine learning model to the candidate datum to generate an acceptance datum for a matching posting and an acceptance compatibility score output; user interfaces (including graphical user interfaces); and using a remote device [for notifications].

The additional elements cited above, beyond the abstract idea, are recited at a high level of generality and amount to no more than a generic recitation of basic functionality, i.e., mere instructions to apply the judicial exception using generic computer technology components. Accordingly, since the specification describes the additional elements in general terms, without describing their particulars, the additional elements may be broadly but reasonably construed as generic computing components being used to perform the judicial exception (see specification at [0032]; [0052]; [0063]; [0067]). Additionally, the process for training the model (including the "training data classifier") and applying the model is described in the specification at a high level of generality (see specification at [0020]; [0036]; etc.) and merely provides a tool for training the classifier, regardless of the types of data used for training or the model. The classifiers are described as performing generic steps that would be performed by any classifier, merely applied to the recited/selected data.

These additional claim elements merely recite the words "apply it" (or an equivalent) with the judicial exception, merely include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). Thus, the additional claim elements are not indicative of integration into a practical application, because: the claims do not involve improvements to the functioning of a computer, or to any other technology or technical field (MPEP 2106.05(a)); the claims do not apply the abstract idea with, or by use of, a particular machine (MPEP 2106.05(b)); the claims do not effect a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)); and the claims do not apply or use the abstract idea in some other meaningful way beyond generally linking its use to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (MPEP 2106.05(e)). The claims do not, for example, purport to improve the functioning of a computer, nor do they effect an improvement in any other technology or technical field. Accordingly, the additional elements do not impose any meaningful limits on practicing the abstract idea, and the claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements, individually or in combination, that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept at Step 2B. Thus, the claims are not patent eligible.

Dependent Claims: Claims 2-7, 10, 12-17, and 20 recite further elements related to the candidate and job matching steps of the parent claims. These activities fail to differentiate the claims from the related activities in the parent claims and fail to provide any material that would render the claimed invention significantly more than the identified abstract ideas, as outlined below.

Claims 2 and 12 recite "wherein accepting the candidate is determined using a machine learning model", which further narrows the abstract ideas of the parent claims but does not lead toward eligibility. The machine learning used in these steps is recited at a high level of generality and is only nominally and generically recited as a tool for performing these steps. Therefore, it does not integrate the abstract idea into a practical application or provide an inventive concept.

Claims 3 and 13 recite "wherein accepting the candidate includes a confirmation process", which further narrows the abstract ideas of the parent claims but does not lead toward eligibility.

Claims 4 and 14 recite "wherein the processor is further configured to determine an acceptance classification datum using fuzzy logic", which further narrows the abstract ideas of the parent claims but does not lead toward eligibility. Specifying that the processor is used to perform the determination does not integrate the abstract idea into a practical application or provide an inventive concept. Additionally, reciting the use of fuzzy logic to perform these steps does not necessarily tie the claimed invention to a technological environment in a meaningful manner; therefore, it does not integrate the abstract idea into a practical application or provide an inventive concept.

Claims 5 and 15 recite "wherein the candidate is automatically accepted as a function of a verification process", which further narrows the abstract ideas of the parent claims but does not lead toward eligibility.

Claims 6 and 16 recite "wherein the candidate is automatically accepted as a function of a geographic datum", which further narrows the abstract ideas of the parent claims but does not lead toward eligibility.

Claims 7 and 17 recite "wherein the candidate is automatically accepted as a function of a wage range verification", which further narrows the abstract ideas of the parent claims but does not lead toward eligibility.

Claims 10 and 20 recite "wherein the at least a processor is further configured to rank candidates as a function of the acceptance compatibility score", which further narrows the abstract ideas of the parent claims but does not lead toward eligibility. Specifying that the processor is used to perform the ranking does not integrate the abstract idea into a practical application or provide an inventive concept.

The dependent claims do not provide any new limitations or meaningful limits beyond the abstract idea that are not addressed above for the independent claims; therefore, they do not integrate the abstract idea into a practical application, nor do they provide significantly more than the abstract idea. Thus, after considering all claim elements, both individually and as a whole, it has been determined that the claims do not integrate the judicial exception into a practical application or provide an inventive concept. Therefore, Claims 2-7, 10, 12-17, and 20 are ineligible.
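Stripped of the legal analysis, the independent claims describe a concrete pipeline: pull postings, discard unnecessary fields, derive a required-qualification set (the "acceptance datum"), and automatically accept candidates whose extracted data covers it. A minimal sketch of that reading of the claims, with the claimed machine-learning classifier replaced by a plain set-coverage rule for brevity; all field names and the `unnecessary` list are invented for illustration:

```python
# Illustrative pipeline only; not the applicant's or examiner's actual code.
def extract_relevant(posting, unnecessary=frozenset({"benefits", "culture"})):
    """Drop fields identified as unnecessary; keep the relevant data."""
    return {k: v for k, v in posting.items() if k not in unnecessary}

def classify(candidate_quals, acceptance_datum):
    """True when the candidate covers the full required-qualification set."""
    return acceptance_datum <= candidate_quals

def accept_candidates(candidates, postings):
    """Auto-accept each candidate against each posting's acceptance datum."""
    accepted = []
    for cand in candidates:
        for post in postings:
            relevant = extract_relevant(post)
            if classify(cand["qualifications"], relevant["required"]):
                accepted.append((cand["name"], relevant["title"]))
    return accepted

postings = [{"title": "RDH", "required": {"license", "cpr"},
             "benefits": "401k", "culture": "fast-paced"}]
candidates = [{"name": "A", "qualifications": {"license", "cpr", "x-ray"}},
              {"name": "B", "qualifications": {"cpr"}}]
print(accept_candidates(candidates, postings))  # [('A', 'RDH')]
```

Candidate A covers the required set and is accepted; candidate B lacks "license" and is not.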
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 10-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Champaneria (Pub. No. US 2019/0019159 A1) in view of Scarborough et al. (Pub. No. US 2012/0078804 A1), in further view of Stewart et al. (Patent No. US 11,663,397 B1), in further view of Khanduja et al. (Pub. No. US 2021/0097495 A1).
In regards to Claims 1 and 11, Champaneria discloses: A Method/apparatus for automatic posting acceptance, comprising: at least a processor; and a memory communicatively connected to the at least a processor, the memory containing instructions configuring the at least a processor to: ([0029]) receive a plurality of postings from a plurality of hosting websites for job listings, wherein receiving each posting comprises: ([0041], job descriptions used by the system include jobs descriptions submitted to one or more locations (including job board and websites), the jobs posted at these locations are the same jobs that are retrieved and analyzed for matching by the system) for each posting of the plurality of postings, determine at least an acceptance datum, as a function of the extracting relevant data, wherein the at least an acceptance datum comprises a required set of qualifications for automatic posting acceptance that includes a narrower set of qualifications than the posting and determining the acceptance datum comprises: ([0007]; [0065]; identifies required qualifications for the job, required qualifications would represent the acceptance datum for each job, job descriptions can include other qualifications (preferred, etc.) 
indicating that the required attributes are narrower than the whole job description; [0125], shows that required qualifications can be extracted/parsed from the job descriptions; [0060], shows many-to-many matching (matching multiple jobs to multiple candidates), although several examples in the reference discuss one job or one candidate, this is not limiting as the method/system performs the processes on multiple candidates and jobs; [0008], shows an automated system for accepting a candidate based on matching criteria from resumes and job descriptions (automatic acceptance, “…a candidate score for the candidate by matching the hiring information… to the one or more job requirements…automatically sending, with the automated recruitment system, a contact message to the candidate. This contact message may be, for example, a list of top reasons to work for the employer, a more extensive job description, a link to apply for a position, or any other contact message that may be desired.”; [0001], further discusses the practice of minimum requirements as a narrower set of job attributes in recruiting) for each candidate acceptance of the plurality of candidate acceptances, comparing the relevant extracted data from the posting with the at least a keyword associated with the candidate acceptance; ([0055]; [0062], uses a semantic matching engine and natural language processor to determine matches between candidates and jobs, these engines analyze the language and any relationships between language and words to identify key points, concepts, or requirements, etc. 
in the job descriptions and resumes for matching, wherein the process used by the semantic matching engine does identify key points/concepts/requirements by analyzing the language and requirements which encompasses keyword matching and provides similar results) receive, from a user interface, a user input comprising a plurality of candidate datums; (at least [0049], plurality of candidate resumes are retrieved; [0011], can include resumes directly submitted by candidates) for each candidate datum of the plurality of candidate datums, extract the candidate datum from the user input; (at least [0008], resumes are parsed to extract candidate data) for each candidate datum of the plurality of candidate datums, determine an acceptance compatibility score between the candidate datum and each posting of the plurality of postings; (Abstract; [0008]; [0062]; matches between candidate data and job data extracted from resumes and job descriptions are matched and scored) where a candidate datum of the plurality of candidate datums is classified to an acceptance datum of a posting of the plurality of postings: (for each candidate datum of the plurality of candidate datums, where possible, classify the candidate datum to the at least an acceptance datum for a matching posting of the at least a plurality of postings; [0065], each qualification (or job datum in the job description) may be given a weight and matched (classified to each other) separately, each datum/qualification is matched and aggregated to determine the score for that candidate and job (additionally see also [0039], the matching methods may include “Bayesian classifiers”); [0062]-[0065]) automatically accept a candidate associated with the classified candidate datum by automatically notifying, using a remote device, an employer associated with the posting that the candidate is eligible to be automatically hired; ([0030], providing a candidate with a score for a particular job opening. 
The method and system may then collect any other information that may be necessary, and may be configured to notify employers about one or more high-scoring candidates that have been found for a job opening.”, see also [0008] regrading automatic acceptance); and upload the classified candidate datum and the relevant data from the associated posting to the candidate acceptance database ([0055], parsed data is stored in a database). Although Champaneria discloses retrieving a plurality of candidate associated keywords, wherein each candidate [matches] is (i) associated with at least a keyword of the plurality of keywords; and (ii) represents a candidate datum, as described above, Champaneria does not explicitly state that the acceptance datum is determined based on candidate datum of current employees. However, Scarborough teaches: retrieving, from at least a candidate acceptance database, a plurality of candidate acceptances and associated [candidate employment qualifications], wherein each candidate acceptance of the plurality of candidate acceptances is (i) associated with at least a [candidate employment qualifications]; and (ii) represents a candidate datum of at least a current employee (see at least [0044]-[0046]; [0170], the determined acceptance data Is based on data pertaining to previous and current employees; [0139]-[0144], shows a database of employee data, including pre-hire and post-hire data) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Champaneria so as to have included retrieving, from at least a candidate acceptance database, a plurality of candidate acceptances and associated [candidate employment qualifications], wherein each candidate acceptance of the plurality of candidate acceptances is (i) associated with at least a [candidate employment qualifications]; and (ii) represents a candidate datum of at least a current employee, as taught 
by Scarborough in order to increase the strength and efficiency of matching candidates to jobs by including valuable job experience data that can further define a candidates fit to the job and including only effective data (Scarborough, [0016]; [0047]; [0097]). One of ordinary skill in the art would recognize that the effective hiring data determined in Scarborough could be used as acceptance datum in the postings of Champaneria. Champaneria discloses selecting at least a candidate acceptance representing a candidate datum as a function of the comparison between the plurality of keywords and the relevant extracted data from the posting, as described above. Champaneria does not explicitly state that the acceptance datum is determined based on candidate datum of current employees. However, Scarborough teaches: selecting at least a candidate acceptance representing a candidate datum of at least a current employee and determining the at least an acceptance datum for the posting from the selected at least a candidate acceptance (see at least [0044]-[0046]; [0170], the determined acceptance data Is based on data pertaining to previous and current employees; [0051]; [0052]; [0093]-[0097], current employee data can be refined to remove ineffective acceptance data (for example, application question that would be related to job qualifications and acceptance criteria), this refined data would be related to a narrower set of qualifications since the ineffective qualifications of the current employees is removed (it is noted that acceptance datum is claimed as datum included in a [job] posting, as is recited in Champaneria, applied above)) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Champaneria so as to have included selecting at least a candidate acceptance representing a candidate datum of at least a current employee and determining the at least an acceptance datum 
for the posting from the selected at least a candidate acceptance, as taught by Scarborough in order to increase the strength and efficiency of matching candidates to jobs by including valuable job experience data that can further define a candidates fit to the job and including only effective data (Scarborough, [0016]; [0047]; [0097]). One of ordinary skill in the art would recognize that the effective hiring data determined in Scarborough could be used as acceptance datum in the postings of Champaneria. Champaneria/Scarborough discloses a method/system for extracting candidate data, classifying the candidate data to job acceptance datum, and using the classification to automatically accept candidates for jobs, as described above. Champaneria/Scarborough does not explicitly disclose the following steps for identifying unnecessary data and extracting relevant data, however, Stewart teaches: communicating a retrieval request for the posting to a hosting website for job listings; (col. 9, par. 2, “…upon a retrieval request [from] a user and/or from computing device 104…For example, posting datum 112 may be downloaded from a hosting website for job listings.”) receiving the posting from the at least a hosting website for job listings in response to the retrieval request; (col. 9, par. 2, “…upon a retrieval request [from] a user and/or from computing device 104…For example, posting datum 112 may be downloaded from a hosting website for job listings.”) extract relevant data from each posting of the plurality of postings, wherein extract relevant data from each posting comprises: identifying unnecessary data within an accumulation of data from the hosting website for job listings based on the retrieval request; (col. 9, par. 
2, “…computing device may extract needed information from database regarding the job position and avoid taking any information determined to be unnecessary.”) extracting the relevant data from the posting based on the identification of the unnecessary data; (col. 9, par. 2, “…computing device may extract needed information from database regarding the job position and avoid taking any information determined to be unnecessary.”) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Champaneria/Scarborough so as to have included the above steps for extracting only relevant data, as taught by Stewart in order to increase efficiency by avoiding the importing of any unnecessary information (Stewart, col. 9, par. 2, “…avoid taking any information determined to be unnecessary.”) Additionally, Stewart teaches: for each candidate datum of the plurality of candidate datums, classify the candidate datum using a classifier machine learning model to the at least an acceptance datum for a matching posting of the at least a plurality of postings, wherein classifying each candidate datum comprises: training the classifier machine learning model using classifier training data comprising a plurality of data entries correlating candidate datum elements as inputs to acceptance datum elements associated with the matching posting as outputs; (col. 12, lines 46-67, “…computing device 104 may implement a compatibility algorithm or generate a compatibility machine-learning module, such as machine-learning module 124, to determine a compatibility score 136 between user and job position.” (see also col. 12, lines 14-45; col. 13, par. 1); col. 13, par. 
2, “In one or more embodiments, a machine-learning process may be used to determine compatibility algorithm or to generate a machine-learning model that may directly calculate compatibility score 136.”, trained by correlating past inputs to past compatibility score outputs, “Training data may include inputs and corresponding predetermined outputs so that a machine-learning module may use the correlations between the provided exemplary inputs and outputs to develop an algorithm and/or relationship that then allows the machine-learning module to determine its own outputs for inputs…a machine-learning module may obtain a training set by querying a communicatively connected database that includes past inputs and outputs. Training data may include inputs from various types of databases, resources, and/or user inputs and outputs correlated to each of those inputs so that a machine-learning module may determine an output, such as compatibility score…”), applying the trained classifier machine learning model to the candidate datum to generate, for each candidate datum, a corresponding acceptance datum for the matching posting; (col. 12, lines 14-45, “A “classifier,” as used in this disclosure is a machine-learning model, such as a mathematical model, neural net, or program generated by a machine learning algorithm known as a “classification algorithm that sorts inputs into categories or bins of data, outputting the categories or bins of data and/or labels associated therewith. A classifier may be configured to output at least a datum that labels or otherwise identifies a set of data that are clustered together, found to be close under a distance metric, or the like. 
As used in this disclosure, a “candidate classifier” is a classifier that classifies users to a target resume or a job position description.”, “…correlates one or more of users and user datum to one or more job descriptions, description-specific data, and posting data.”) wherein determining the acceptance compatibility score comprises: filtering compatibility training data using a training data classifier; (col. 18, lines 18-22, “…training data may be filtered, sorted, and/or selected using one or more supervised and/or unsupervised machine-learning processes and/or models as described in further detail below; such models may include without limitation a training data classifier …”) training a compatibility machine learning model using the filtered compatibility training data, wherein the filtered compatibility training data comprises a plurality of data entries correlating acceptance datum elements and posting data elements as inputs to acceptance compatibility score elements as outputs; and applying the trained compatibility machine learning model to the acceptance datum for the candidate datum and the extracted relevant data from each posting to generate the acceptance compatibility score for each posting of the plurality of postings; (col. 12, lines 46-67, “…computing device 104 may implement a compatibility algorithm or generate a compatibility machine-learning module, such as machine-learning module 124, to determine a compatibility score 136 between user and job position.” (see also col. 12, lines 14-45; col. 13, par. 1); col. 13, par. 
2, “In one or more embodiments, a machine-learning process may be used to determine compatibility algorithm or to generate a machine-learning model that may directly calculate compatibility score 136.”, trained by correlating past inputs to past compatibility score outputs, “Training data may include inputs and corresponding predetermined outputs so that a machine-learning module may use the correlations between the provided exemplary inputs and outputs to develop an algorithm and/or relationship that then allows the machine-learning module to determine its own outputs for inputs…a machine-learning module may obtain a training set by querying a communicatively connected database that includes past inputs and outputs. Training data may include inputs from various types of databases, resources, and/or user inputs and outputs correlated to each of those inputs so that a machine-learning module may determine an output, such as compatibility score…”) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have further modified the system of Champaneria/Scarborough so as to have included determining the acceptance compatibility score comprises the above cited filtering and training steps including training data that comprises a plurality of data entries correlating acceptance datum elements and posting data elements as inputs to acceptance compatibility score elements as outputs and the above steps for applying the trained compatibility machine learning model to the datum to generate the acceptance compatibility score for each posting of the plurality of postings, as taught by Stewart. Champaneria/Scarborough discloses a “base” method/system for matching candidates to job requirements by determining compatibility between the candidate attributes and job attributes, as shown above. 
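As an editorial aside: the filter-train-apply sequence recited in the limitations above can be sketched in miniature. Everything in this sketch is hypothetical (the field names, the toy bucket-average “model,” and the sample data); it illustrates the shape of the claimed steps, not any party’s actual implementation.

```python
# Hypothetical sketch of the claimed pipeline: (1) filter compatibility
# training data with a training-data classifier, (2) train a compatibility
# model on the filtered entries, (3) apply it to score a candidate/posting pair.

def training_data_classifier(entry):
    """Toy 'training data classifier': keep entries with complete, in-range labels."""
    return entry["score"] is not None and 0.0 <= entry["score"] <= 1.0

def train_compatibility_model(entries):
    """Toy 'training': average label per skill-overlap bucket."""
    buckets = {}
    for e in entries:
        buckets.setdefault(round(e["overlap"], 1), []).append(e["score"])
    return {k: sum(v) / len(v) for k, v in buckets.items()}

def compatibility_score(model, candidate_skills, posting_skills):
    """Apply the trained model to a candidate/posting pair."""
    overlap = len(candidate_skills & posting_skills) / max(len(posting_skills), 1)
    return model.get(round(overlap, 1), overlap)  # fall back to raw overlap

training = [
    {"overlap": 0.5, "score": 0.6},
    {"overlap": 0.5, "score": 0.8},
    {"overlap": 1.0, "score": None},  # incomplete entry: filtered out
]
model = train_compatibility_model([e for e in training if training_data_classifier(e)])
score = compatibility_score(model, {"python", "sql"}, {"python", "sql", "ml", "etl"})
```

A real system would use learned models for both roles, as the claims recite; the point here is only the data flow: filter, then train on correlated input/output pairs, then apply.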
Stewart teaches a comparable method/system that is also for matching candidates to job requirements by determining compatibility between the candidate attributes and job attributes, as shown above. Stewart also teaches an embodiment in which determining the acceptance compatibility score comprises the above cited filtering and training steps including training data that comprises a plurality of data entries correlating acceptance datum elements and posting data elements as inputs to acceptance compatibility score elements as outputs and the above steps for applying the trained compatibility machine learning model to the datum to generate the acceptance compatibility score for each posting of the plurality of postings, as shown above. One of ordinary skill in the art would have recognized that the adaptation of determining the acceptance compatibility score comprising the above cited filtering and training steps including training data that comprises a plurality of data entries correlating acceptance datum elements and posting data elements as inputs to acceptance compatibility score elements as outputs and the above steps for applying the trained compatibility machine learning model to the datum to generate the acceptance compatibility score for each posting of the plurality of postings to Champaneria/Scarborough could be performed with the technical expertise demonstrated in the applied references. (See KSR [127 S Ct. at 1739] "The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.") Although Champaneria/Scarborough disclose user interfaces for interacting with the system, Champaneria/Scarborough does not explicitly disclose that the results are displayed to the user. 
However, Stewart teaches: for each candidate datum of the plurality of candidate datums, display in real time, using a graphical user interface, the acceptance compatibility score for each posting of the plurality of postings on a graphical user interface; (col. 16, par. 2, “…display component 128 may be configured to display posting datum 112, user datum 108, record recommendation 116, interaction preparation 120, compatibility score…user may view information and/or data displayed on display component 128 in real time… In one or more embodiments, display component may be configured to display received or determined information…”; col. 12, par. 1, “For example, a compatibility score may be a “2” for a set range of 1-10, where “1” represents a job position and user having a minimum compatibility and “10” represents job position and user having a maximum compatibility. In other non-limiting embodiments, compatibility score 136 may be a quality characteristic, such as a color coding, where each color is associated with a level of compatibility.”, characteristics, such as color coding, provide additional visual indications of the displayed output; col. 13, par. 1) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Champaneria/Scarborough so as to have included displaying the acceptance compatibility score on a graphical user interface, as taught by Stewart in order to allow users to see the results and understand how candidates relate to jobs (Stewart, col. 12, par. 1; col. 16, par. 2). Although Champaneria/Scarborough/Stewart disclose the use of candidate datum (including salary and location) in training and applying models for determining acceptance datum (as described above), Champaneria/Scarborough/Stewart does not explicitly disclose that the datum includes wage index comprising a range of compensation calculated as a function of a geographic area. 
However, Khanduja teaches: wherein the candidate datum elements comprise wage index elements derived from the candidate datum, wherein the wage index elements comprise a geographical wage index comprising a range of compensation calculated as a function of a geographic area; ([0038], determines salary data based on user data that includes data based on geographic area, the salary data is determined based on dimensions such as “…total compensation…median of the salary range, and statutory minimum wages of the respective location…”, these dimensions demonstrate an index of compensation ranges related to locations/area; [0049], shows a trained model that uses wage data to determine acceptance criteria) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Champaneria/Scarborough/Stewart so as to have included wherein the candidate datum elements comprise wage index elements derived from the candidate datum, wherein the wage index elements comprise a geographical wage index comprising a range of compensation calculated as a function of a geographic area, as taught by Khanduja in order to provide additional criteria for ensuring that the best-matched candidates are identified and offered jobs (Khanduja, [0005]; Champaneria, [0040]; [0057]; Stewart, col. 20, lines 42-43). In regards to Claims 2 and 12, Champaneria does not explicitly disclose, but Scarborough teaches: wherein accepting the candidate is determined using a machine learning model. 
([0113]; [0129]; [0166]; etc., the model provides data indicating what candidates fall within acceptable ranges that is used for determining acceptance) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have further modified the system of Champaneria so as to have included wherein accepting the candidate is determined using a machine learning model, as taught by Scarborough. Champaneria discloses a “base” method/system that automatically matches candidate data to job posting criteria and scores those matches to determine if a candidate should be accepted (i.e., acceptance classification), as shown above. Scarborough teaches a comparable method/system that trains a classifier using candidate mapped to desired job criteria data and uses that classifier to classify candidates in relation to job criteria, as shown above. Scarborough also teaches an embodiment in which the acceptance of the candidate is determined using a machine learning model, as shown above. One of ordinary skill in the art would have recognized that the adaptation of determining the acceptance of the candidate using a machine learning model to Champaneria could be performed with the technical expertise demonstrated in the applied references. (See KSR [127 S Ct. at 1739] "The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.") In regards to Claims 3 and 13, Champaneria discloses: wherein accepting the candidate includes a confirmation process. ([0057], the process for accepting candidates is provided a confirmation process in order to ensure the best possible matches) In regards to Claims 4 and 14, Champaneria does not explicitly disclose, but Scarborough teaches: wherein the processor is further configured to determine an acceptance classification datum using fuzzy logic. 
([0014]; [0077]; [0123]; [0129]; [0156]; and elsewhere throughout the reference, “For example, a neural network or a fuzzy logic system can be used to build a model that predicts a post-hire outcome.”) It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system of Champaneria so as to have included wherein the processor is further configured to determine the acceptance classification datum using fuzzy logic, as taught by Scarborough in order to allow the use of known techniques to translate attribute data into a form that can be understood and processed by the system (Scarborough, [0122]; [0123]). In regards to Claims 5 and 15, Champaneria discloses: wherein the candidate is automatically accepted as a function of a verification process. ([0030]; [0045], qualified candidates can be further vetted and/or interviewed (verification process)) In regards to Claims 6 and 16, Champaneria discloses: wherein the candidate is automatically accepted as a function of a geographic datum. ([0008]; [0115], candidate location information can be used as information relevant to hiring of a candidate) In regards to Claims 7 and 17, Champaneria discloses: wherein the candidate is automatically accepted as a function of a wage range verification. ([0008]; [0117], candidate salary information (including desired salary) can be used as information relevant to hiring of a candidate) In regards to Claims 10 and 20, Champaneria discloses: wherein the at least a processor is further configured to rank candidates as a function of the acceptance compatibility score. ([0056]; [0066], candidates are ranked based on scores) Relevant Prior Art not Relied Upon Asseer et al. (WO 2020/006634 A1). 
“…there is provided a computer implemented method for filling a job vacancy, the method comprises receiving a request for staffing for a job, wherein the request includes job requirements; screening job candidates for compliance with the job requirements; automatically identifying candidates that are compliant with the job requirements; optionally forwarding an automated interview request; forwarding a job offer to a skillful candidate; and closing the job post once the job is accepted by a candidate.” (see at least page 4, paragraph 1). Similar material also appears in CA 3104016 A1. Fang et al. (Pub. No. US 2018/0130024 A1). Uses machine learning models to score and rank candidates in relation to job pipeline and selects qualified candidates (see at least [0026]; [0031]; [0038]). Gomes et al. (Pub. No. US 2020/0327505 A1). “…the assessment…includes a binary assessment of “recommended” or “qualified” as to a given job opening, and wherein the configured processor automatically…“hires” the candidate (forwards the resume metadata of the candidate to the appropriate human resources entity for immediate hire into the associated opening), providing a “one-click,” combination resume review, assessment and hiring process that provides efficiencies in time and resources over conventional, separate, multi-step candidate assessment and hiring processes.” (see at least [0035]; [0022]). Ma et al. (Pub. No. US 2020/0311163 A1). Discloses machine learning models for scoring candidate, then using additional machine learning models for further filtering candidates into sub-groups (see at least [0046]; [0047]). Mondal et al. (Patent No. US 10,255,585 B2). Uses current employee data for correlation to job performance metrics (see at least column 1, BACKGROUND OF THE INVENTION; column 2, paragraph 8; column 5, last paragraph (to column 6); column 11, paragraph 1). Nemirovsky et al. (Pub. No. US 2022/0083871 A1). 
Uses trained machine learning (classifiers) to determine whether candidates are permitted to join a hiring marketplace based on extracted candidate attribute data (see at least [0003]; [0005]-[0010]; [0018]-[0036]; [0048]; [0062]; [0077]-[0082]; [0121]-[0125]; [0131]; Table 3; [0140]; [0155]; [0158]). Also discloses immutable attributes (see at least [0015]; [0135]; Claim 12). Neumann (Pub. No. US 2022/0060333 A1). Discloses training models with data, immutable sequential listings for storing data, and storing updated data to the immutable sequential listing (see at least [0030]-[0033]; [0056]; [0057]; [0060]-[0066]; Claim 19). Olivier (Patent No. US 6,480,885 B1). Uses fuzzy logic for determining rules for matching profiles (see at least Abstract; column 2, lines 59-64; column 20, lines 7-12). Similar material also appears in WO 0016209 A1. Siebach et al. (Pub. No. US 2016/0180234 A1). Uses trained classifiers to predict performance of a candidate for hire (see at least [0037]; [0051]). Shabtei et al. (Pub. No. US 2018/0315020 A1). Uses “one or more machine learning models can be utilized to automatically identify one or more candidates based on the candidate selection criteria” (see at least [0024]). Wen (CN 105787639 A). Uses artificial intelligence for selecting resumes based on selection conditions (see at least Claim 5; Claim 9). Yueng (Pub. No. US 2020/0402013 A1). Automatically matches and scores candidate attributes with job requirements including based on geography/location and salary range. Candidates are grouped based on shared attributes. Machine learning is incorporated (see at least [0028]-[0030]; [0034]-[0038]; [0043]-[0046]; [0057]; [0061]; Claim 14). Stewart et al. (Pub. No. US 2023/0237072 A1). 
Discloses retrieving candidate training data from a database, wherein the database comprises an immutable sequential listing; storing the updated candidate training data in the immutable sequential listing; and retrieving the updated candidate training data from the immutable sequential listing (see at least [0017]; [0028]; [0051]; [0064]-[0066]). Response to Arguments Applicant’s arguments filed 2/9/2026 have been fully considered but they are not persuasive. I. Rejection of Claims under 35 U.S.C. §101: Please see MPEP 2106.05(a), Improvements to the Functioning of a Computer or To Any Other Technology or Technical Field. Step 2A, Prong One: Applicant argues that, under the USPTO’s 2025 guidance addressing subject matter eligibility of artificial intelligence-related inventions, limitations directed to training a machine learning model do not recite an abstract idea. However, this is not correct, as merely including the training of a machine learning model (ML) alone does not automatically render the claims eligible. Applicant has not demonstrated how/why the particular training techniques would provide something significantly more than the abstract ideas. Step 2A, Prong Two: Applicant argues that the claims recite a specific improvement to machine-learning training, requiring the processor to filter compatibility training data using a training data classifier prior to training the compatibility model. This filtering step constrains the training process to higher-quality, more relevant data and directly improves model performance. This improvement constitutes a practical application under MPEP § 2106.04(d) because it applies any alleged abstract idea in a manner that improves the performance of a machine-learning system. However, Applicant has not provided evidence or background to support these assertions. Applicant has not demonstrated how/why the particular training techniques would provide the alleged improvements or practical application. 
For example, there is no evidence demonstrating how the filtering steps provide the improvement, such as what deficiencies exist in prior systems/methods, why prior systems could not or would not be able to apply this, how these deficiencies would be addressed by the training/filtering techniques, etc. As provided in the previous office action, regarding Example 47: Applicant asserts that the claims are eligible for the same reasons as Example 47. However, Applicant merely summarizes the example and then asserts that Applicant’s claimed invention provides a similar practical application because they have similar elements. Applicant fails to provide any evidence or analysis to demonstrate how/why applicant’s claims would be subject to the findings of Example 47. Merely reciting similar elements or components (such as machine learning) does not mean that the claims provide the same practical application. Applicant has not demonstrated how/why the claimed invention would be subject to the same analysis and findings of Example 47. For example, Applicant does not explain how the technical improvement in Example 47 (training and optimization of the ANN to improve network security in response to detected anomalies, as cited by Applicant) is comparable to improving data processing systems by enabling scalable, automated compatibility scoring and classification of candidate data across multiple postings and/or how it would provide the same practical application. Regarding Ex Parte Desjardins, as stated above, merely including training does not render the claims eligible. Applicant’s remarks fail to demonstrate how/why the training technique improves the ML performance or the accuracy and reliability (improves machine learning performance by requiring classifier-based filtering of training data prior to model training, improving the accuracy and reliability of the compatibility scores as a result). Merely describing a training process does not show how/why it is an improvement. 
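Stepping outside the parties’ dispute for a moment: the mechanism being argued over, that classifier-based filtering of training data prior to training changes (and, per Applicant, improves) the trained model, can be illustrated abstractly. The data and the mean-of-labels “model” below are invented for illustration and are not evidence for either position.

```python
# Hypothetical illustration: filtering training data before fitting changes
# what the fitted model computes. The "model" here is just the mean of the labels.

def fit_mean_model(entries):
    """Toy 'training': fit a constant model equal to the mean label."""
    scores = [e["score"] for e in entries]
    return sum(scores) / len(scores)

data = [
    {"score": 0.7},
    {"score": 0.9},
    {"score": 7.0},  # out-of-range label a training-data classifier might reject
]
unfiltered = fit_mean_model(data)
filtered = fit_mean_model([e for e in data if 0.0 <= e["score"] <= 1.0])
```

Whether such a change amounts to the claimed improvement in accuracy and reliability is exactly the evidentiary question the Examiner raises; the sketch shows only that the filtering step is not a no-op.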
Step 2B: In regards to Applicant’s Berkheimer remarks, Examiner points out that the 101 rejections do not rely on the claims being well-understood, routine, or conventional at this time. Additionally, Applicant has asserted that the claims are not well-understood, routine, or conventional, but has not provided any explanation or evidence to demonstrate why they would not be well-understood, routine, or conventional. Applicant’s remaining remarks fail for the same reason outlined above. Applicant fails to provide explanation or evidence to demonstrate how/why the alleged improvements would be achieved in a meaningful manner. Applicant does not demonstrate that the claim recites a specific technological implementation of machine-learning operations, such as training models with structured, correlated datasets and applying those trained models within a defined computational pipeline, or that such recited features constitute additional elements evidencing an improvement in computer functionality rather than an abstract idea in a manner comparable to the cited decisions (such as where the amended claims recite exactly this type of technological implementation). II. Rejection of Claims under 35 U.S.C. §103: Applicant’s remarks are drawn to the newly added claim material and are therefore moot in view of the newly provided prior art rejections, citations, and/or explanations, provided above. Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections. 
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). III. Additional Remarks regarding claim interpretation from the previous office actions: Examiner relied partly on [0020], [0021], and [0026] of the specification for interpretation of the machine learning models in the claims. Examiner notes that these descriptions are written broadly. For example, the specification does not provide a specific definition of “acceptance match datum” or how it is significantly different from “acceptance datum”. Also, as described in the specification, the model for classifying and the model for automatic acceptance could be read as different models or possibly the same model (performing the same activities and merely labeling the data differently). Although Examiner has not determined that this requires a 35 U.S.C. §112 rejection at this time, the claims may be provided a broad interpretation in light of this material. Examiner has attempted to provide prior art related to the manner in which Applicant is presenting the claimed invention (based on the claim language and previous interview discussion), however, Examiner notes that the claims are not limited to this interpretation and may be subject to a much broader interpretation under broadest reasonable interpretation in light of the specification. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAUN D SENSENIG whose telephone number is (571)270-5393. The examiner can normally be reached M-F: 10:00am-4:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda Jasmin can be reached on 571-272-6872. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /S.D.S/Examiner, Art Unit 3629 March 7, 2026 /SARAH M MONFELDT/Supervisory Patent Examiner, Art Unit 3629
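Editor's note for readers outside the art: two techniques recited in the dependent claims, fuzzy-logic acceptance classification (Claims 4 and 14) and ranking candidates by acceptance compatibility score (Claims 10 and 20), can be sketched as follows. The membership functions, thresholds, and labels are invented for illustration and come from no cited reference.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function: peaks at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def acceptance_classification(score):
    """Pick the acceptance label with the highest fuzzy membership (hypothetical sets)."""
    memberships = {
        "reject": triangular(score, -0.01, 0.0, 0.5),
        "review": triangular(score, 0.2, 0.5, 0.8),
        "accept": triangular(score, 0.5, 1.0, 1.01),
    }
    return max(memberships, key=memberships.get)

def rank_candidates(scored):
    """Rank candidates highest acceptance compatibility score first (name breaks ties)."""
    return sorted(scored, key=lambda c: (-c["score"], c["name"]))

candidates = [
    {"name": "A", "score": 0.35},
    {"name": "B", "score": 0.90},
    {"name": "C", "score": 0.55},
]
ranking = [c["name"] for c in rank_candidates(candidates)]
labels = {c["name"]: acceptance_classification(c["score"]) for c in candidates}
```

With triangular memberships, mid-range scores land in “review” and only scores near 1.0 clear “accept”; any deterministic tie-breaking rule works for the ranking.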

Prosecution Timeline

May 13, 2022
Application Filed
Aug 13, 2022
Non-Final Rejection — §101, §103
Sep 20, 2022
Interview Requested
Sep 28, 2022
Applicant Interview (Telephonic)
Sep 30, 2022
Response Filed
Sep 30, 2022
Examiner Interview Summary
Oct 20, 2022
Final Rejection — §101, §103
Dec 05, 2022
Request for Continued Examination
Dec 07, 2022
Response after Non-Final Action
Jan 17, 2023
Non-Final Rejection — §101, §103
Feb 13, 2023
Interview Requested
Feb 23, 2023
Applicant Interview (Telephonic)
Feb 25, 2023
Examiner Interview Summary
Mar 02, 2023
Response Filed
Mar 22, 2023
Final Rejection — §101, §103
Jun 01, 2023
Request for Continued Examination
Jun 05, 2023
Response after Non-Final Action
Jun 15, 2023
Non-Final Rejection — §101, §103
Aug 17, 2023
Interview Requested
Aug 24, 2023
Applicant Interview (Telephonic)
Aug 25, 2023
Examiner Interview Summary
Aug 31, 2023
Response Filed
Sep 27, 2023
Final Rejection — §101, §103
Apr 04, 2024
Request for Continued Examination
Apr 08, 2024
Response after Non-Final Action
Apr 20, 2024
Non-Final Rejection — §101, §103
May 03, 2024
Interview Requested
May 15, 2024
Applicant Interview (Telephonic)
May 17, 2024
Examiner Interview Summary
Jul 25, 2024
Response Filed
Nov 04, 2024
Final Rejection — §101, §103
Mar 05, 2025
Applicant Interview (Telephonic)
Mar 08, 2025
Examiner Interview Summary
Mar 10, 2025
Request for Continued Examination
Mar 12, 2025
Response after Non-Final Action
Mar 21, 2025
Non-Final Rejection — §101, §103
May 23, 2025
Interview Requested
May 29, 2025
Examiner Interview Summary
May 29, 2025
Applicant Interview (Telephonic)
Jun 27, 2025
Response Filed
Oct 01, 2025
Final Rejection — §101, §103
Feb 09, 2026
Request for Continued Examination
Mar 01, 2026
Response after Non-Final Action
Mar 07, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12548097
SYSTEM AND METHOD FOR ADVANCED MISSION PLANNING
2y 5m to grant Granted Feb 10, 2026
Patent 12511669
PROJECTION PROCESSING DEVICE, STORAGE MEDIUM, AND PROJECTION METHOD
2y 5m to grant Granted Dec 30, 2025
Patent 12505497
Inter-agency Communication System for Promoting Situational Awareness
2y 5m to grant Granted Dec 23, 2025
Patent 12411978
CHARTING LOGIC DECISION SUPPORT IN ELECTRONIC PATIENT CHARTING
2y 5m to grant Granted Sep 09, 2025
Patent 12380408
DESIGNING CONFLICT REDUCING OUTREACH STRATEGIES TO MITIGATE INEFFICIENCIES IN PROACTIVE SOURCING PROCESS
2y 5m to grant Granted Aug 05, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

11-12
Expected OA Rounds
14%
Grant Probability
31%
With Interview (+16.6%)
5y 2m
Median Time to Grant
High
PTA Risk
Based on 400 resolved cases by this examiner. Grant probability derived from career allow rate.
