Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office action is responsive to the Request for Continued Examination Transmittal received on 1/5/2026. Claims 1, 8, and 15 are amended. Claims 1-20 are pending and examined.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
The claims are directed to the abstract idea of collecting and analyzing information about users and systems to predict outcomes, rank individuals, and select participants for task assignment, which falls within the “mental processes” and “certain methods of organizing human activity” groupings.
The claims recite collecting information describing users and target systems; analyzing that information to predict likelihoods; ranking users based on the analysis; and selecting and communicating with users based on the ranking. These steps of evaluating information and making decisions about assigning people to tasks can be performed mentally or with pen and paper, and they also relate to managing human activity such as work assignments and researcher selection.
Step 2A: Even though the claims recite training and executing a machine learning model, including a neural network or gradient boosted decision tree, the model is used only to perform the abstract analysis of predicting and ranking users. The claims do not recite an improvement to computer functionality, machine learning technology, or data processing techniques, nor do they describe a specific solution to a technical problem. The model is applied in a generic manner to automate the abstract idea.
Step 2B: The claims do not include an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter. The additional elements recite conventional activities such as extracting data, training a machine learning model using historical data, ranking results, and communicating selections. These elements are well-understood, routine, and conventional techniques for data analysis. Finally, specifying particular types of machine learning models, such as a neural network or decision tree, does not add a meaningful limitation or technical improvement. The claims are directed to ineligible subject matter.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wang et al., Context- and Fairness-Aware In-Process Crowdworker Recommendation (March 7, 2022) (hereinafter Wang).
As to claim 1, Wang teaches a computer-implemented method for assigning target systems to users for identifying system defects (see at least abstract, worker recommendation for crowd testing tasks in order to improve efficiency), the method comprising:
training a machine learning model to receive as input, information describing a user and a target system and predict a score indicating a likelihood of the user successfully identifying and reporting a system defect in the target system, wherein the user is a researcher who analyzes a system to identify and report defects in the system, the training performed using training data describing historical records of past defect submissions by users for target systems, wherein a record is labeled to indicate whether a user successfully identified a valid defect (see at least abstract, pages 1-2; section 3.3, “learning based ranking”; section 3.3.1, “feature extraction”; and section 3.3.2, “ranking model training”);
receiving information describing a target system, wherein the target system is selected for analysis for identifying defects in the target system (see at least section 2.1, table 1);
receiving information describing a plurality of users (see section 2.1, background; section 3, fig. 4; and section 4.2);
extracting features describing the target system (see sections 3.1, 3.2.1, and 3.3.1);
repeating, for each user from the plurality of users: extracting features describing the user, wherein at least some of the features comprise statistical metrics determined from past submissions of defects by the user; providing the features describing the user and the features describing the target system as input to the machine learning model; and executing the machine learning model to predict a score for the user, the score indicating a likelihood of the user identifying and reporting a system defect in the target system (see sections 3.2.2, 3.3.1, and 3.3.2, feature extraction, resource context, and ranking);
ranking the plurality of users based on scores predicted for the users; selecting a subset of users from the plurality of users based on the ranking (see sections 3.3 and 3.3.2, ranking list of crowd workers and learning based ranking); and
communicating with the subset of users selected from the plurality of users (see section 1, introduction, “identifying appropriate workers for a particular testing task”; section 3, “approach,” iRec 2.0 can dynamically recommend a diverse set of capable crowd workers; section 3.3.2, “ranking,” the output is a ranked list of crowd workers used for recommendation; and section 6.1, “benefits,” discussion of applying the recommendation in practice, where workers are contacted or invited based on the ranking).
As to claim 2, Wang teaches the computer-implemented method of claim 1, wherein the features describing the target system represent one or more of, information describing an organization associated with the target system, a type of industry associated with the target system, and one or more types of technologies used by the target system (see section 2.1, “background,” table 1, test requirements include descriptions of the app/system being tested; section 3.2.1, “process context,” task features are captured through a task terms vector representing task requirements; and section 4.2, tasks span various domains, which indicates industry or organization category).
As to claim 3, Wang teaches the computer-implemented method of claim 1, wherein the features describing the user represent one or more of counts of various categories of past submissions of the user, counts of each priority of past submissions of the user, total number of past submissions of the user, number of accepted submissions of the user, and average priority of past submissions of the user (see section 2.1, a worker is associated with the historical reports that he or she submitted; section 3.3.2, resource context, activeness, the number of bugs submitted by a worker in past time and the number of reports submitted by a worker in past time; section 3.3.1, feature extraction, table 2, number of bugs at different time intervals, i.e., counts of past submissions, and features 8-12, number of reports at different time intervals, i.e., total submissions; and section 4.2, 80,200 submitted reports, which provide historical data for deriving the metrics).
As to claim 4, Wang teaches the computer-implemented method of claim 1, wherein the features describing the user represent user profile attributes describing one or more of interests of the user, qualifications of the user, past experience of the user, and skills of the user (see section 2.3, characterizing crowd workers, and fig. 2b, worker preferences across different skills and interests).
As to claim 5, Wang teaches the computer-implemented method of claim 1, wherein the features describing the user are extracted by crawling one or more websites describing user profiles (see section 3.3.2, machine learning model).
As to claim 6, Wang teaches the computer-implemented method of claim 1, wherein the machine learning model is a gradient boosted decision tree (see section 6.3, decision tree type of preferences and expertise).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Wang and further in view of Cran et al., U.S. Patent Pub. No. 2018/0103054 (hereinafter Cran).
As to claim 7, Wang teaches the computer-implemented method of claim 1.
Wang does not explicitly teach, but Cran teaches, wherein the machine learning model is a neural network (see at least paragraphs 0029 and 0098). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Cran with those of Wang to make the system more efficient by improving scalability and speed, allowing many distributed participants to work in parallel, and improving coverage and quality by identifying issues and generating or evaluating results or data.
Claims 8-20 do not introduce any additional features beyond those already recited in claims 1-7 and are therefore rejected on the same grounds.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Shin, U.S. Patent Pub. No. 2023/0379556, Crowd Source-Based Marking of Media Items at a Platform.
Auger, U.S. Patent Pub. No. 2025/0078030, Method and System for Crowd Sourcing in an Academic Environment.
Response to Arguments
Applicant's arguments filed 12/23/2025 regarding 35 U.S.C. 101 have been fully considered but they are not persuasive.
Applicant’s arguments with respect to the claim rejections under 35 U.S.C. § 102 as anticipated by Cran, U.S. Patent Pub. No. 2018/0103054, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
The applicant argues that the claims are not directed to methods of organizing human activity, and that the claimed invention is directed to patent-eligible subject matter because it addresses specific technological problems with concrete technological solutions that improve computer functionality.
Response: Although the claims are presented in a technological context involving defect identification and machine learning, the claimed subject matter is focused on gathering information about users and target systems, analyzing that information to predict likelihoods, ranking users, and selecting users for participation. These steps reflect information analysis and task assignment, which are forms of abstract activity regardless of the technological setting in which they are performed. The claims do not appear to address a specific technical problem arising from computer technology itself, nor do they recite a particular technical solution that improves computer functionality. The machine learning model is applied in a general manner to perform prediction based on historical data, and the claims do not describe changes to the operation of the computer, improvements to machine learning, or enhancements to data processing mechanisms. The claims remain directed to patent-ineligible subject matter.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARGON N NANO whose telephone number is (571)272-4007. The examiner can normally be reached 7:30 AM - 3:30 PM MST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nicholas Taylor, can be reached at 571-272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SARGON N NANO/Primary Examiner, Art Unit 2443