Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention recites a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1-20 are directed to a system, a method, and a computer program product. Thus, all the claims fall within the four potentially eligible categories of invention (here, a machine, a process, and an article of manufacture, respectively), satisfying Step 1 of the Subject Matter Eligibility (SME) test.
As per Prong One of Step 2A of the §101 eligibility analysis set forth in MPEP 2106, the Examiner notes that the claims recite mental processes and certain methods of organizing human activity. More specifically, the independent claims recite: adjusting one or more answers provided by an individual in an employment-based survey to one or more respective answers derived from a combination of human-computer interaction data of the individual and employment-based data of the individual. The claims recite steps to evaluate worker engagement and provide recommendations, which is managing personal behavior or relationships, i.e., certain methods of organizing human activity. In addition, the claims recite an evaluation of worker engagement. This evaluation can practically be performed in the mind or with pen and paper and is therefore considered a mental process. The nominal recitation of computer elements does not necessarily preclude the claim from reciting an abstract idea, as evidenced by the analysis at Prong Two of Step 2A.
Regarding Prong Two of Step 2A, a claim reciting an abstract idea must be analyzed to determine whether any additional elements in the claim integrate the judicial exception into a practical application. Limitations that are indicative of integration into a practical application include: Improvements to the functioning of a computer, or to any other technology or technical field, as discussed in MPEP 2106.05(a); Applying or using a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition – see the Vanda Memo; Applying the judicial exception with, or by use of, a particular machine, as discussed in MPEP 2106.05(b); Effecting a transformation or reduction of a particular article to a different state or thing, as discussed in MPEP 2106.05(c); and Applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception, as discussed in MPEP 2106.05(e) and the Vanda Memo issued in June 2018.
In this case, the independent claims do not include limitations that meet the criteria listed above; thus, the abstract idea is not integrated into a practical application. Independent claim 1 recites a system comprising a memory and a processor that executes computer-executable components stored in the memory. Independent claim 11 recites a computer-implemented method performed by a system operatively coupled to a processor. Independent claim 19 recites a computer program product comprising a computer readable storage medium having program instructions executable by a processor. In each of these instances, these additional limitations amount to using a computer as a tool to perform the abstract idea and/or instructions to implement the abstract idea on a computer. There is no integration into a practical application.
The dependent claims further limit the abstract idea, and some recite additional elements that do not integrate the abstract idea into a practical application. Dependent claims 2, 12 and 20 describe collecting data by a processor. This amounts to using a computer as a tool to perform the abstract idea of data gathering, which is considered an observation/evaluation and therefore a mental process. There is no integration into a practical application. Dependent claims 3 and 13 recite tabulating, by the system, data to generate a categorization for adjusting answers. This amounts to using a computer as a tool to perform the abstract idea of data analysis, which is considered an evaluation and therefore a mental process. There is no integration into a practical application. Dependent claims 4 and 14 recite combining HCI data and employment-based data to detect human bias. This is a mental process – an evaluation – implemented by a computer. This amounts to using a computer as a tool to perform the abstract idea. There is no integration into a practical application. Dependent claims 5 and 15 recite forming digital data and generating a mean average value from the digital data to adjust an answer. Dependent claims 6 and 16 recite generating a second score from the first score representing human bias. Both sets of claims recite mental processes – an evaluation – implemented by a computer. This amounts to using a computer as a tool to perform the abstract idea. There is no integration into a practical application. Dependent claims 7-9, 17 and 18 recite data comparisons and action recommendations in response to data comparisons. These claims also recite mental processes – an evaluation – implemented by a computer. This amounts to using a computer as a tool to perform the abstract idea. There is no integration into a practical application.
The use of machine learning is recited at a high level of generality and merely indicates a field of use or technological environment in which the abstract idea is performed. Dependent claim 10 recites a human entity analyzing bias thresholds to determine outliers, which make up the training data used to train the machine learning. A human analyzing bias is a mental process – an evaluation – implemented by a computer. This amounts to using a computer as a tool to perform the abstract idea. There is no integration into a practical application.
The claims do not include limitations beyond generally linking the use of the abstract idea to a particular technological environment. When considered individually and in combination, the system and software claim elements only contribute generic recitations of technical elements to the claims. It is readily apparent, for example, that the claim is not directed to any specific improvements of these elements. The invention is not directed to a technical improvement. When the claims are considered individually and as a whole, the additional elements noted above appear to merely apply the abstract concept to a technical environment in a very general sense.
Lastly, and in accordance with Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, and when considered individually and in combination, the additional elements amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-6, 8, 11-16, 19 and 20 are rejected under 35 U.S.C. 102(a)(1) and 35 U.S.C. 102(a)(2) as being anticipated by Valacich et al, US 2022/0020040.
As per claim 1, Valacich et al discloses a system, comprising: a memory that stores computer-executable components; and a processor that executes the computer-executable components stored in the memory, wherein the computer-executable components [0135-0138] comprise: an adjustment component that adjusts one or more answers provided by an individual in an employment-based survey to one or more new respective answers derived from a combination of human-computer interaction (HCI) data of the individual and employment-based data of the individual ([0034, 0059, 0071, 0092, 0125-0127] – survey results adjustment based on human computer interaction data combined with survey data associated with both academic and industry settings).
As per claim 2, Valacich et al discloses the system of claim 1, further comprising: a data collection component that collects the HCI data and the employment-based data of the individual, wherein the HCI data comprises digital device usage data of the individual, and wherein the employment-based data is sourced from an employer of the individual ([0059, 0062-0068] – various human-computer interaction (HCI) devices such as computer mice, touch pads, touch screens, keyboards, accelerometers, and so on, provide an array of data that is collected at millisecond intervals. Thus, all human-computer interaction devices (e.g., keyboard, mouse, touch screen, etc.) as well as screen and device orientation sensors (e.g., gyrometers and accelerometers) stream data with very fine detail and precision. This data can be used not only to interact with the survey system, but also to capture and measure the fine motor movements of users).
As per claim 3, Valacich et al discloses the system of claim 1, further comprising: a tabulation component that tabulates the HCI data into respective analysis ratings defined by a worker engagement team associated with an employer of the individual to generate a categorization for the individual for adjusting the one or more answers ([0111-0116, 0125-0126 and tables 6-8] HCI data is used to generate metrics associated with navigation efficiency, response behaviors and time metrics that are used to check for response bias, the metrics are used to statistically moderate the relationship between collected self-reported variables (survey items) and an outcome. If the moderating relationship is significant, this indicates that a response bias is present).
As per claim 4, Valacich et al discloses the system of claim 1, further comprising: a detection component that combines the HCI data and the employment-based data of the individual to detect human bias in the one or more answers provided by the individual ([0111-0116, 0125-0126 and tables 6-8] HCI data is used to generate metrics associated with navigation efficiency, response behaviors and time metrics that are used to check for response bias, the metrics are used to statistically moderate the relationship between collected self-reported variables (survey items) and an outcome. If the moderating relationship is significant, this indicates that a response bias is present).
As per claim 5, Valacich et al discloses the system of claim 1, wherein the combination of the HCI data and the employment-based data of the individual forms digital data, and wherein adjusting an answer of the one or more answers to a new answer comprises generating a first score based on a mean average value of individual scores derived from one or more values of the digital data ([0113-0118] – raw data metrics for each category are normalized by calculating the mean and standard deviation).
As per claim 6, Valacich et al discloses the system of claim 5, wherein the first score is used to generate a second score that is representative of an amount of human bias in the answer, and wherein the second score is equal to a difference between the first score and a manual survey score representative of the answer ([0113-0127] – the mean is subtracted from each raw value, and the normalized metrics are aggregated into meta variables used to adjust for response bias).
As per claim 8, Valacich et al discloses the system of claim 6, further comprising: a score decider engine that uses at least the second score to determine an amount of adjustment required for the answer, such that the human bias is reduced below a defined threshold ([0126-0127] – score adjustments used to detect bias based on the moderating relationship, which can be adjusted in order to reduce or remove bias).
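For illustration only, the score arithmetic recited in claims 5, 6 and 8 (a first score as a mean average, a second score as the difference from the manual survey score, and an adjustment that drives residual bias below a threshold) can be sketched as follows. This sketch is not part of the record or of any cited reference; the function names, example values, threshold, and adjustment rule are hypothetical assumptions supplied solely to clarify the claimed arithmetic:

```python
# Illustrative sketch of the claimed score arithmetic (all values hypothetical).

def first_score(individual_scores):
    # Claim 5: the first score is the mean average of individual scores
    # derived from the combined HCI and employment-based digital data.
    return sum(individual_scores) / len(individual_scores)

def second_score(first, manual_survey_score):
    # Claim 6: the second score (amount of human bias) equals the difference
    # between the first score and the manual survey score.
    return first - manual_survey_score

def adjust_answer(manual_survey_score, bias, threshold):
    # Claim 8: adjust the answer so that residual bias falls to or below
    # the defined threshold (a hypothetical adjustment rule).
    if abs(bias) <= threshold:
        return manual_survey_score
    # Shift the answer toward the data-derived score, leaving residual
    # bias exactly at the threshold.
    return manual_survey_score + bias - threshold * (1 if bias > 0 else -1)

scores = [3.0, 4.0, 5.0]           # hypothetical individual scores
f = first_score(scores)            # mean of the individual scores
b = second_score(f, 3.0)           # data-derived score minus survey answer
adjusted = adjust_answer(3.0, b, threshold=0.5)
print(f, b, adjusted)              # 4.0 1.0 3.5
```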
As per claim 11, Valacich et al discloses a computer-implemented method, comprising: adjusting, by a system operatively coupled to a processor, one or more answers provided by an individual in an employment-based survey to one or more new respective answers derived from a combination of human-computer interaction (HCI) data of the individual and employment-based data of the individual ([0034, 0059, 0071, 0092, 0125-0127] – survey results adjustment based on human computer interaction data combined with survey data associated with both academic and industry settings).
As per claim 12, Valacich et al discloses the computer-implemented method of claim 11, further comprising: collecting, by the system, the HCI data and the employment-based data of the individual, wherein the HCI data comprises digital device usage data of the individual, and wherein the employment-based data is sourced from an employer of the individual ([0059, 0062-0068] – various human-computer interaction (HCI) devices such as computer mice, touch pads, touch screens, keyboards, accelerometers, and so on, provide an array of data that is collected at millisecond intervals. Thus, all human-computer interaction devices (e.g., keyboard, mouse, touch screen, etc.) as well as screen and device orientation sensors (e.g., gyrometers and accelerometers) stream data with very fine detail and precision. This data can be used not only to interact with the survey system, but also to capture and measure the fine motor movements of users).
As per claim 13, Valacich et al discloses the computer-implemented method of claim 11, further comprising: tabulating, by the system, the HCI data into respective analysis ratings defined by a worker engagement team associated with an employer of the individual to generate a categorization for the individual for adjusting the one or more answers ([0111-0116, 0125-0126 and tables 6-8] HCI data is used to generate metrics associated with navigation efficiency, response behaviors and time metrics that are used to check for response bias, the metrics are used to statistically moderate the relationship between collected self-reported variables (survey items) and an outcome. If the moderating relationship is significant, this indicates that a response bias is present).
As per claim 14, Valacich et al discloses the computer-implemented method of claim 11, further comprising: combining, by the system, the HCI data and the employment-based data of the individual to detect human bias in the one or more answers provided by the individual ([0111-0116, 0125-0126 and tables 6-8] HCI data is used to generate metrics associated with navigation efficiency, response behaviors and time metrics that are used to check for response bias, the metrics are used to statistically moderate the relationship between collected self-reported variables (survey items) and an outcome. If the moderating relationship is significant, this indicates that a response bias is present).
As per claim 15, Valacich et al discloses the computer-implemented method of claim 11, wherein the combination of the HCI data and the employment-based data of the individual forms digital data, and wherein adjusting an answer of the one or more answers to a new answer comprises generating a first score based on a mean average value of individual scores derived from one or more values of the digital data ([0113-0118] – raw data metrics for each category are normalized by calculating the mean and standard deviation).
As per claim 16, Valacich et al discloses the computer-implemented method of claim 15, wherein the first score is used to generate a second score that is representative of an amount of human bias in the answer, and wherein the second score is equal to a difference between the first score and a manual survey score representative of the answer ([0113-0127] – the mean is subtracted from each raw value, and the normalized metrics are aggregated into meta variables used to adjust for response bias).
As per claim 19, Valacich et al discloses a computer program product for minimizing human bias in answers provided by an individual in an employment-based questionnaire, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to: adjust, by the processor, one or more answers provided by an individual in an employment-based survey to one or more new respective answers derived from a combination of human-computer interaction (HCI) data of the individual and employment-based data of the individual ([0034, 0059, 0071, 0092, 0125-0127] – survey results adjustment based on human computer interaction data combined with survey data associated with both academic and industry settings).
As per claim 20, Valacich et al discloses the computer program product of claim 19, wherein the program instructions are further executable by the processor to cause the processor to: collect, by the processor, the HCI data and the employment-based data of the individual, wherein the HCI data comprises digital device usage data of the individual, and wherein the employment-based data is sourced from an employer of the individual ([0059, 0062-0068] – various human-computer interaction (HCI) devices such as computer mice, touch pads, touch screens, keyboards, accelerometers, and so on, provide an array of data that is collected at millisecond intervals. Thus, all human-computer interaction devices (e.g., keyboard, mouse, touch screen, etc.) as well as screen and device orientation sensors (e.g., gyrometers and accelerometers) stream data with very fine detail and precision. This data can be used not only to interact with the survey system, but also to capture and measure the fine motor movements of users).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 7 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Valacich et al, US 2022/0020040, in view of Long et al, US 2020/0074294.
As per claim 7, Valacich et al fails to explicitly disclose, while Long et al discloses, wherein the individual scores are determined by mapping the one or more values of the digital data to a Likert scale ([0138] – employee survey data using a 5-point Likert scale). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the system of Valacich et al the ability to map to a Likert scale as taught by Long et al, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 17, Valacich et al fails to explicitly disclose, while Long et al discloses, wherein the individual scores are determined by mapping the one or more values of the digital data to a Likert scale ([0138] – employee survey data using a 5-point Likert scale). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the method of Valacich et al the ability to map to a Likert scale as taught by Long et al, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
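For illustration only, the Likert-scale mapping recited in claims 7 and 17 can be sketched as binning a continuous data value onto a 5-point scale. This sketch is not drawn from the Long et al reference; the value range, bin rule, and function name are hypothetical assumptions supplied solely to clarify what such a mapping involves:

```python
# Illustrative sketch: map a raw value onto an n-point Likert scale
# (hypothetical range and binning rule).

def to_likert(value, low=0.0, high=1.0, points=5):
    # Clamp out-of-range values to the scale endpoints.
    if value <= low:
        return 1
    if value >= high:
        return points
    # Divide [low, high] into `points` equal bins; bin index 0..points-1
    # maps to Likert values 1..points.
    return int((value - low) / (high - low) * points) + 1

print([to_likert(v) for v in (0.0, 0.19, 0.5, 0.81, 1.0)])  # [1, 1, 3, 5, 5]
```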
Claims 9, 10 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Valacich et al, US 2022/0020040, in view of Dodwell et al, US 2021/0174222.
As per claim 9, Valacich et al fails to explicitly disclose, while Dodwell et al discloses, a recommendation engine that uses machine learning to recommend one or more actions, based on the amount of adjustment, that an employer of the individual providing the answer can execute to maintain performance of the individual above a performance threshold ([0015, 0032] – machine learning model to generate recommendations to mitigate or reduce any risk of potential bias). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the system of Valacich et al the ability to generate recommendations as taught by Dodwell et al, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 10, Valacich et al fails to disclose, while Dodwell et al discloses, wherein training data used to train the machine learning to recommend the one or more actions comprises information based on a human entity analyzing bias thresholds for the answer to determine outliers ([0072]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the system of Valacich et al the ability to generate recommendations as taught by Dodwell et al, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 18, Valacich et al discloses the computer-implemented method of claim 16, further comprising: determining, by the system, using the second score, an amount of adjustment required for the answer, such that the human bias is reduced below a defined threshold ([0126-0127] – score adjustments used to detect bias based on the moderating relationship, which can be adjusted in order to reduce or remove bias). Valacich et al fails to explicitly disclose, while Dodwell et al discloses, recommending, by the system, using machine learning, one or more actions, based on the amount of adjustment, that an employer of the individual providing the answer can execute to maintain performance of the individual above a performance threshold ([0015] – machine learning model to generate recommendations to reduce any risk of potential bias). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the method of Valacich et al the ability to generate recommendations as taught by Dodwell et al, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Pertinent prior art is included in the attached PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNNA LOFTIS whose telephone number is (571)272-6736. The examiner can normally be reached M-F 7:00am-3:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Epstein can be reached at 571-270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JOHNNA LOFTIS
Primary Examiner
Art Unit 3625
/JOHNNA R LOFTIS/Primary Examiner, Art Unit 3625