Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The Action is responsive to the Amendments and Remarks filed on 11/21/2025. Claims 1-3 and 5-14 are pending. Claims 1 and 7 are written in independent form. Claim 4 has been cancelled.
Priority
Acknowledgement is made of a claim for priority under 35 U.S.C. § 119(a)-(d) or (f) as a National Stage entry of PCT/CN2021/074023, International Filing Date: 01/28/2021, which claims foreign priority to CN202010620982.X, filed 06/30/2020. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Objections
Claim 10 is objected to because of the following informalities:
The amendment to claim 10 appears to have introduced a typographical error in the recitation “serial numbers of the image…is mapped to simple serial numbers obtained after local re-sorting”. Based on the previously recited language, the claim is understood as reciting “serial numbers of the image…[[is]] are mapped to simple serial numbers obtained after local re-sorting”.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3 and 5-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claimed invention is directed to one or more abstract ideas without significantly more. The judicial exception is not integrated into a practical application, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis in support of these findings is provided below.
As per Independent Claim 1,
STEP 1: In accordance with Step 1 of the eligibility inquiry (as explained in MPEP 2106), the claimed system (claims 1-3 and 5-6) and method (claims 7-14) are directed to one of the eligible categories of subject matter and therefore satisfy Step 1.
STEP 2A Prong One: Independent claim 1 recites the following limitations directed to an abstract idea:
A database archiving management module configured to
periodically update data of a facial test database based on a usage management requirement, and
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating a usage management requirement, and based on the observation and evaluation, making a judgement and/or opinion to update data of a facial test database.
perform hierarchical classification management based on user permission allocation and according to data set annotation information and a data set identifier coding rule,
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating user permission allocation and, based on the observation and evaluation, performing hierarchical classification management.
An evaluation annotation functional module configured to
evaluate facial images and facial videos,
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating facial images and facial videos imported in large batches.
perform data preprocessing and image annotation by a facial testing algorithm and image processing, and
The limitation recites a mathematical concept of executing a mathematical formula/function in the form of a facial testing algorithm and image processing that perform data preprocessing and image annotation.
Perform face cutting and image quality evaluation prompting on the facial images, to generate preprocessed facial images that were cut and to be referenced in the image annotation,
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating facial images, and based on the observation and evaluation, performing face cutting and image quality evaluation prompting on the facial images resulting in preprocessed facial images that were cut and to be referenced in the image annotation.
Generate codes for the preprocessed facial images that were cut and to be referenced in the image annotation,
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating preprocessed facial images, and based on the observation and evaluation, making a judgement and/or opinion of (generating) codes for the preprocessed facial images.
Maintain uniqueness of data set identifiers and facial image codes, according to different facial information factors,
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating data set identifiers, facial image codes, and different facial information factors, and based on the observation and evaluation, making judgement and/or opinions that maintain the uniqueness of the data set identifiers and facial image codes.
generate a unique facial image code or a facial video code, to construct a large-scale normalized facial test database;
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by making a judgement and/or opinion of a unique facial image code or a facial video code to construct a large-scale normalized facial test database.
The aspect of the limitation related to constructing a large-scale normalized facial test database after generating a unique facial image code or a facial video code recites a mathematical concept of organizing information and manipulating information based on relationships into a normalized database.
STEP 2A Prong Two: Claim 1 recites that the steps are performed using “a facial recognition device”, “a storage server including a processor executing code including a database archiving management module”, “a facial test database”, “a client including another processor executing other code including an evaluation annotation functional module…and a testing service functional module”, and “a test database”, which is a high-level recitation of generic computer components and represents mere instructions to apply on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application.
The claims recite the following additional elements:
The evaluation annotation functional module configured to
Query individual data sets in different test databases by using one or more screening conditions,
The limitation recites an insignificant extra-solution activity as retrieval of data (i.e., mere data gathering) such as ‘obtaining information’ as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
Provide a test database matching condition required for testing in an actual application scenario, and
The limitation recites an insignificant extra-solution activity as providing/sending/receiving data (i.e., mere data gathering) such as ‘obtaining information’ as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
a testing service functional module configured to
call the database archiving management module,
The limitation recites an insignificant extra-solution activity as sending/receiving data (i.e., mere data gathering) such as ‘obtaining information’ as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
provide, for performance testing, a test database that meets a standard requirement, and
The limitation recites an insignificant extra-solution activity as providing/sending/receiving data (i.e., mere data gathering) such as ‘obtaining information’ as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
provide a test result feedback statistics service.
The limitation recites an insignificant extra-solution activity as providing/sending/receiving data (i.e., mere data gathering) such as ‘obtaining information’ as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
Viewing the additional limitations together and the claim as a whole, nothing provides integration into a practical application.
STEP 2B:
The conclusions for the mere implementation using a computer are carried over and do not provide significantly more.
With respect to “Query individual data sets in different test databases by using one or more screening conditions,” identified as insignificant extra-solution activity above, this is also well-understood, routine, and conventional (WURC) activity when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to “Provide a test database matching condition required for testing in an actual application scenario,” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to “call the database archiving management module,” identified as insignificant extra-solution activity above, this is also WURC, as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to “provide, for performance testing, a test database that meets a standard requirement,” identified as insignificant extra-solution activity above, this is also WURC, as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to “provide a test result feedback statistics service,” identified as insignificant extra-solution activity above, this is also WURC, as court-identified; see MPEP 2106.05(d)(II)(i).
Looking at the claim as a whole does not change this conclusion, and the claim is ineligible.
As per Dependent Claims 2-3 and 5-6,
STEP 1: In accordance with Step 1 of the eligibility inquiry (as explained in MPEP 2106), the claimed system (claims 1-3 and 5-6) and method (claims 7-14) are directed to one of the eligible categories of subject matter and therefore satisfy Step 1.
STEP 2A Prong One: Dependent claims 2-3 and 5-6 recite the following limitations directed to an abstract idea:
The limitation of Dependent Claim 2 includes the step(s) of:
The preprocessing database is configured to
perform data preprocessing,
The limitation recites a mathematical concept of executing a mathematical formula/function in the form of preprocessing data.
generate an annotated data set,
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, or by a human using a pen and paper, by observing and evaluating a dataset, and based on the observation and evaluation, making a judgement/opinion of annotations to generate an annotated data set.
the annotated data set in the built databases is
verified according to an evaluation result,
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind by observing and evaluating the annotated data set in the built databases and an evaluation result, and based on the observation and evaluation, making a judgement and/or opinion to verify the annotated data set.
subjected to a conformity check performed based on a technical requirement on test databases,
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating the annotated data set in the built databases and a technical requirement on test databases, and based on the observation and evaluation, performing a conformity check on the annotated data set.
The feedback database is configured to update data in the primary storage database.
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating data in a feedback database, and based on the observation and evaluation, updating data in the primary storage database.
The limitation of Dependent Claim 5 includes the step(s) of:
The device interface debugging module is configured to interact with the device to be tested by calling a test interface function, to push or obtain a facial image;
The limitation recites a mathematical concept of executing a mathematical formula/function in the form of a test interface function to push or obtain a facial image.
The test result module is configured to manage test results of the performance indicators comprising a Fault Acceptance Rate (FAR) and a Fault Rejection Rate (FRR) of the device to be tested.
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating test results of the performance indicators comprising the FAR and the FRR, and based on the observation and evaluation, making judgements and/or opinions that manage the test results.
STEP 2A Prong Two: The claim(s) recite the following additional elements:
The limitation of Dependent Claim 2 includes the step(s) of:
Wherein the database archiving management module comprises a primary storage database, a usage sub-database, an approval database, a preprocessing database and a feedback database;
The limitation recites an insignificant extra-solution activity as selecting a particular type of data/databases being used to represent the database archiving management module, as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The primary storage database comprises individual data sets of single individuals, and a facial image and facial video in each individual data set in a constructed target facial test database each having a unique irreversible identification code;
The limitation recites an insignificant extra-solution activity as selecting a particular type of data being used to represent the primary storage database, a constructed target facial test database, and subsequently each individual data set in the constructed target facial test database, as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The usage sub-database has a set scale and quantity obtained from the primary storage database according to a data set configuration rule and based on a performance test level requirement of a device to be tested, comprises a target set and a probe set meeting a sample distribution requirement, and is configured to test performance indicators comprising a Fault Acceptance Rate (FAR) and a Fault Rejection Rate (FRR) of the device to be tested;
The limitation recites an insignificant extra-solution activity as selecting a particular type of data being used to represent the usage sub-database and subsequently the test database, a target set, and a probe set, as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The preprocessing database is configured to
receive facial images or facial videos initially imported into the storage server in batches,
The limitation recites an insignificant extra-solution activity as sending/receiving data (i.e., mere data gathering) such as ‘obtaining information’ as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
provide an evaluation result,
The limitation recites an insignificant extra-solution activity as providing/sending/receiving data (i.e., mere data gathering) such as ‘obtaining information’ as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
save the annotated data set into the approval database; and
The limitation recites a high-level recitation of generic computer components, in the form of storing the annotated data set in the approval database, and represents mere instructions to apply on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application.
The annotated data set is saved into the primary storage database after being approved;
The limitation recites a high-level recitation of generic computer components, in the form of storing the annotated data set in the primary storage database, and represents mere instructions to apply on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application.
The feedback database comprises individual data sets built by the test user, mainly coming from data sets for which a data anomaly occurs during performance testing performed by the testing service functional module using the downloaded usage sub-database,
The limitation recites an insignificant extra-solution activity as selecting a particular type of data being used to represent the feedback database and subsequently the “individual data sets…coming from data sets for which a data anomaly occurs…”, as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The limitation of Dependent Claim 3 includes the step(s) of:
Wherein the database archiving management module further comprises a test result database and/or data logs, and
The limitation recites an insignificant extra-solution activity as selecting a particular type of data being used to represent the database archiving management module, as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The test result database is configured to store results of testing of performance indicators comprising the FAR and the FRR for data update association and statistical analysis of test database service application requirements; and
The limitation recites an insignificant extra-solution activity as selecting a particular type of data being used to represent the test result database, as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
the data logs comprise logs related to operations and audit of databases and test results in the database archiving management module.
The limitation recites an insignificant extra-solution activity as selecting a particular type of data being used to represent the data logs, as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The limitation of Dependent Claim 5 includes the step(s) of:
Wherein the testing service functional module comprises a database calling module, a device interface debugging module, a statistics and report module and a test result module;
The limitation recites an insignificant extra-solution activity as selecting a particular type of data/sub-modules being used to represent the testing service functional module, as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The database calling module is configured to download or upload an individual data set according to a requirement and an operation;
The limitation recites an insignificant extra-solution activity as uploading/sending/receiving/downloading data (i.e., mere data gathering) such as ‘obtaining information’ as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The statistics and report module is configured to provide data set statistics, project statistics, algorithm statistics and simulation test statistics;
The limitation recites an insignificant extra-solution activity as providing/sending/receiving data (i.e., mere data gathering) such as ‘obtaining information’ as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The limitation of Dependent Claim 6 includes the step(s) of:
Wherein the testing service functional module further comprise a user login module,
The limitation recites an insignificant extra-solution activity as selecting a particular type of data/sub-modules being used to represent the testing service functional module, as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The user login module configured to cooperate with the database archiving management module to perform a rights-based access operation on each sub-database in the facial test database according to rights of a user.
The limitation recites an insignificant extra-solution activity as accessing/sending/receiving data (i.e., mere data gathering) such as ‘obtaining information’ as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
Viewing the additional limitations together and the claim as a whole, nothing provides integration into a practical application.
STEP 2B:
The conclusions for the mere implementation using a computer are carried over and do not provide significantly more.
With respect to Claim 2 reciting “Wherein the database archiving management module comprises a primary storage database, a usage sub-database, an approval database, a preprocessing database and a feedback database;” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 2 reciting “The primary storage database comprises individual data sets of single individuals, and a facial image and facial video in each individual data set in a constructed target facial test database each having a unique irreversible identification code;” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 2 reciting “The usage sub-database has a set scale and quantity obtained from the primary storage database according to a data set configuration rule and based on a performance test level requirement of a device to be tested, comprises a target set and a probe set meeting a sample distribution requirement, and is configured to test performance indicators comprising a Fault Acceptance Rate (FAR) and a Fault Rejection Rate (FRR) of the device to be tested;” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 2 reciting “The preprocessing database is configured to receive facial images or facial videos initially imported into the storage server in batches,” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to Claim 2 reciting “The preprocessing database is configured to provide an evaluation result,” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to Claim 2 reciting “The feedback database comprises individual data sets built by the test user, mainly coming from data sets for which a data anomaly occurs during performance testing performed by the testing service functional module using the downloaded usage sub-database,” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 3 reciting “Wherein the database archiving management module further comprises a test result database and/or data logs,” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 3 reciting “The test result database is configured to store results of testing of performance indicators comprising the FAR and the FRR for data update association and statistical analysis of test database service application requirements;” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 3 reciting “the data logs comprise logs related to operations and audit of databases and test results in the database archiving management module,” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 5 reciting “Wherein the testing service functional module comprises a database calling module, a device interface debugging module, a statistics and report module and a test result module;” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 5 reciting “The database calling module is configured to download or upload an individual data set according to a requirement and an operation;” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to Claim 5 reciting “The statistics and report module is configured to provide data set statistics, project statistics, algorithm statistics and simulation test statistics;” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to Claim 6 reciting “Wherein the testing service functional module further comprise a user login module,” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 6 reciting “the user login module configured to cooperate with the database archiving management module to perform a rights-based access operation on each sub-database in the facial test database according to rights of a user,” identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(i).
Looking at the claims as a whole does not change this conclusion, and the claims are ineligible.
As per Independent Claim 7,
STEP 1: In accordance with Step 1 of the eligibility inquiry (as explained in MPEP 2106), the claimed system (claims 1-3 and 5-6) and method (claims 7-14) are directed to one of the eligible categories of subject matter and therefore satisfy Step 1.
STEP 2A Prong One: Independent claim 7 recites the following limitations directed to an abstract idea:
Automatically evaluating facial images and assigning unique face information codes to the facial images, to build a facial test database;
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating unique face information codes, facial images, and a data set identification and coding rule, and based on the observation and evaluation, making a judgement and/or opinion of unique face information codes for the facial images.
The aspect of the limitation related to building a test database of a required category recites a mathematical concept of organizing information and manipulating information based on relationships into a facial test database.
Form a target set and a probe set;
The limitation recites a mathematical concept of organizing information and manipulating information based on relationships into a target set and a probe set.
Performing data preprocessing and image annotation, including
Performing face cutting and image quality evaluation prompting on the facial images, to generate preprocessed facial images that were cut and to be referenced in the image annotation; and
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating facial images, and based on the observation and evaluation, performing face cutting and image quality evaluation prompting on the facial images resulting in preprocessed facial images that were cut and to be referenced in the image annotation.
Generating codes for the preprocessed facial images that were cut and to be referenced in the image annotation, and
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating preprocessed facial images, and based on the observation and evaluation, making a judgement and/or opinion of (generating) codes for the preprocessed facial images.
Maintaining uniqueness of data set identifiers and facial image codes, according to different facial information factors.
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating data set identifiers, facial image codes, and different facial information factors, and based on the observation and evaluation, making judgement and/or opinions that maintain the uniqueness of the data set identifiers and facial image codes.
STEP 2A Prong Two: Claim 7 recites that the steps are performed using “a facial recognition device”, which is a high-level recitation of generic computer components and represents mere instructions to apply on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application.
The claim recites the following additional elements:
Importing facial images in large batches, and
The limitation recites an insignificant extra-solution activity as sending/receiving data (i.e., mere data gathering) such as ‘obtaining information’ as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
Downloading a test database of a required scale;
The limitation recites an insignificant extra-solution activity as sending/receiving/downloading data (i.e., mere data gathering) such as ‘obtaining information’ as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
Viewing the additional limitations together and the claim as a whole, nothing provides integration into a practical application.
STEP 2B:
The conclusions for the mere implementation using a computer are carried over and do not provide significantly more.
With respect to “Importing facial images in large batches,” identified as insignificant extra-solution activity above, this is also WURC, as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to “Downloading a test database of a required scale;” identified as insignificant extra-solution activity above, this is also WURC, as court-identified; see MPEP 2106.05(d)(II)(i).
Looking at the claim as a whole does not change this conclusion, and the claim is ineligible.
As per Dependent Claims 8-14,
STEP 1: In accordance with Step 1 of the eligibility inquiry (as explained in MPEP 2106), the claimed system (claims 1-3 and 5-6) and method (claims 7-14) are directed to one of the eligible categories of subject matter and therefore satisfy Step 1.
STEP 2A Prong One: Dependent claims 8-14 recite the following limitations directed to an abstract idea:
The limitation of Dependent Claim 8 includes the step(s) of:
Wherein the test database management method further comprises:
data encryption and desensitization is implemented with reference to a mapping relation for use.
The limitation recites a mathematical concept of executing a mathematical formula/function in the form of data encryption and desensitization with reference to a mapping relationship.
The limitation of Dependent Claim 9 includes the step(s) of:
Wherein the test database is a test sub-database formed according to a data set configuration and usage rule and
The limitation recites a mathematical concept of organizing information and manipulating data according to the data set configuration and usage rule.
a data set information and code mapping table that simply sorts and numbers data after processing based on the mapping relation is configured to be viewed.
The limitation recites a mathematical concept of executing a mathematical formula/function in the form of processing data based on mapping relationship and simply sorting and numbering data using a code mapping.
The limitation of Dependent Claim 10 includes the step(s) of:
only an image for which feature value extraction fails in the test result of a current test and a facial image or facial video in the test database are authorized through access and query of an automatic test system, and
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating an image for which feature value extraction fails in the test results of the current test and a facial image or facial video in the test database, and based on the observing and evaluating, making a judgement and/or opinion to authorize through access and query of an automatic test system.
serial numbers of the image for which feature value extraction fails in the test result of the current test and the facial image or facial video in the test database are mapped to simple serial numbers obtained after local resorting.
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating serial numbers of the image for which feature value extraction fails in the test result of the current test and the facial image or facial video in the test database and simple serial numbers obtained after local resorting, and based on the observation and evaluation, mapping the serial numbers.
The limitation of Dependent Claim 13 includes the step(s) of:
wherein the unique face information codes are assigned to the facial images according to a data set identification and coding rule,
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating unique face information codes, facial images, and a data set identification and coding rule, and based on the observation and evaluation, making a judgement and/or opinion to assign unique face information codes to the facial images.
Wherein the data set identification and coding rule is configured to perform hierarchical classification management according to different test databases and individual data sets in the different test databases, and
The limitation recites a mathematical concept of executing a mathematical formula/function in the form of hierarchical classification management according to different test databases and individual data sets in the different test databases.
assign different names, where identifiers are unique.
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating where identifiers are unique, and based on the observation and evaluation, making a judgement and/or opinion of names to be assigned.
The limitation of Dependent Claim 14 includes the step(s) of:
the unique face information codes are assigned to the facial images according to a data set identification and coding rule, and
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, by observing and evaluating unique face information codes, facial images, and a data set identification and coding rule, and based on the observation and evaluation, making a judgement and/or opinion to assign unique face information codes to the facial images.
the data set identification and coding rule is configured to form a dictionary table based on influencing factors of images according to a facial data set identifier superposition manner corresponding to a database, for automatic generation of codes which are unique.
The limitation recites a mathematical concept of organizing information and manipulating data to form a dictionary table based on influencing factors of images according to a facial data set identifier superposition manner corresponding to a database, for automatic generation of codes which are unique.
STEP 2A Prong Two: The claim(s) recite the following additional elements:
The limitation of Dependent Claim 8 includes the step(s) of:
the test database is downloaded according to a data security mechanism during use,
The limitation recites an insignificant extra-solution activity of sending/receiving/downloading data (i.e., mere data gathering), such as ‘obtaining information’ as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
The limitation of Dependent Claim 9 includes the step(s) of:
based on a requirement of a single project test, [the test database] is downloaded after authorization and stored in a ciphertext manner,
The limitation recites an insignificant extra-solution activity of sending/receiving/downloading data (i.e., mere data gathering), such as ‘obtaining information’ as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
The limitation also recites that the data is “stored,” which is a high-level recitation of generic computer components and represents mere instructions to apply the exception on a computer as in MPEP 2106.05(f), and does not provide integration into a practical application.
The limitation of Dependent Claim 10 includes the step(s) of:
Wherein a data set in the test database for which a data anomaly occurs during performance indicator testing is displayed in a form of a test result,
The limitation recites an additional element that does not amount to significantly more because it recites sending or displaying data in the form of a test result, and MPEP 2106.05(d) identifies performing basic computer functions such as sending and receiving data as well-understood, routine, and conventional activity.
The limitation of Dependent Claim 11 includes the step(s) of:
Wherein the mapping relation is a correspondence between complete information, including the annotation information and codes, of data sets in the test database and viewable annotation information and codes of data sets used for performance testing.
The limitation recites an insignificant extra-solution activity as selecting a particular type of data being used to represent the mapping relation, as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The limitation of Dependent Claim 12 includes the step(s) of:
feeding back a test result and a data usage status during use, and
The limitation recites an insignificant extra-solution activity of feeding back/sending/receiving a test result and a data usage status (i.e., mere data gathering), such as ‘obtaining information’ as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
uploading a data set for which anomaly occurs,
The limitation recites an insignificant extra-solution activity of uploading/sending/receiving a data set (i.e., mere data gathering), such as ‘obtaining information’ as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
form a self-loop update mode for the test database.
The limitation recites an insignificant extra-solution activity as selecting a particular type of data/mode being used to represent the test database, as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
Viewing the additional limitations together and the claim as a whole, nothing provides integration into a practical application.
STEP 2B:
The conclusions for the mere implementation using a computer are carried over and do not provide significantly more.
With respect to Claim 8 reciting “the test database is downloaded according to a data security mechanism during use,” identified as insignificant extra-solution activity above, this is also WURC activity when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to Claim 9 reciting “based on a requirement of a single project test, [the test database] is downloaded after authorization and stored in a ciphertext manner,” identified as insignificant extra-solution activity above, this is also WURC activity when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to Claim 10 reciting “Wherein a data set in the test database for which a data anomaly occurs during performance indicator testing is displayed in a form of a test result,” identified as insignificant extra-solution activity above, this is also WURC activity when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to Claim 11 reciting “Wherein the mapping relation is a correspondence between complete information, including the annotation information and codes, of data sets in the test database and viewable annotation information and codes of data sets used for performance testing.” identified as insignificant extra-solution activity above, this is also WURC activity when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 12 reciting “feeding back a test result and a data usage status during use,” identified as insignificant extra-solution activity above, this is also WURC activity when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to Claim 12 reciting “uploading a data set for which anomaly occurs,” identified as insignificant extra-solution activity above, this is also WURC activity when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to Claim 12 reciting “form a self-loop update mode for the test database.” identified as insignificant extra-solution activity above, this is also WURC activity when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
Looking at the claim as a whole does not change this conclusion and the claim is ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 5-7, 10, and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Huijgens et al. (U.S. Pre-Grant Publication No. 2013/0129159, hereinafter referred to as Huijgens) and further in view of Varadarajan et al. (U.S. Pre-Grant Publication No. 2011/0055266, hereinafter referred to as Varadarajan) and Foreign Publication CN104091173 B, hereinafter referred to as CN173.
Regarding Claim 1:
Huijgens teaches a facial test database management system for testing a facial recognition device, the facial test database management system comprising:
A storage server including a processor executing code including a database archiving management module; and
Huijgens teaches “One or more of the modules in the facial recognition system 10 can be implemented in software, hardware, firmware, or any combination thereof. In certain embodiments, the module(s) may be implemented in software or firmware that is stored in a memory and/or associated components and that are executed by a processor, or any other processor(s) or suitable instruction execution system. In software or firmware embodiments, the logic may be written in any suitable computer language. One of ordinary skill in the art will appreciate that any process or method descriptions associated with the operation of the facial recognition system 10 may represent modules, segments, logic or portions of code which include one or more executable instructions for implementing logical functions or steps in the process.” and “the modules may be embodied in any non-transitory computer readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.” (Para. [0082]).
A client including another processor executing other code including
Huijgens teaches “One or more of the modules in the facial recognition system 10 can be implemented in software, hardware, firmware, or any combination thereof. In certain embodiments, the module(s) may be implemented in software or firmware that is stored in a memory and/or associated components and that are executed by a processor, or any other processor(s) or suitable instruction execution system” (Para. [0082]) and “the methods illustrated in FIG. 4 may be implemented in one or more general, multi-purpose or single purpose processors. Such processors execute instructions, either at the assembly, compiled or machine-level, to perform that process.” (Para. [0083]), thereby teaching the use of multiple processors executing instructions to perform the process.
an evaluation annotation functional module is configured to
Huijgens teaches “Reference image pre-processing is shown generally as a reference image pre-processing module 146.” (Para. [0071] & Fig. 8)
evaluate facial images and facial videos,
Huijgens teaches “The input portion 12 generally is the front end of the facial recognition system 10 and typically accepts data input into the facial recognition system 10. The input portion 12 also starts a processing loop in the library portion 14. The input portion 12 of the facial recognition system 10 includes an input module 22 that is coupled to and receives input information or data from a Facial Recognition Technology (FERET) Database 24 or other suitable source of one or more reference facial images. The FERET database 24 is a conventional database of facial images that often is used in many facial recognition applications. Additional data corresponding to the received facial images is input to the facial recognition system 10” (Para. [0023]) and “The image pre-processing module or logic 136 also can be used to improve a reference image, e.g., a reference image provided or supplied by the face image database 134. Reference image pre-processing is shown generally as a reference image pre-processing module 146.” (Para. [0071]). Huijgens further teaches “a subject or probe face image that is captured live using a 2D still camera or using a frame captured from a video stream” (Para. [0069]), thereby teaching facial images and facial videos being captured.
perform data preprocessing by a facial testing algorithm and image processing, and
Huijgens teaches “Using available processing modules, the facial recognition system 10 initially converts a two dimensional face image, e.g., a face image captured from a photo or camera or a photo scanner, into a three dimensional (3D) model or version of the face image. Then, using suitable processing modules, the 3D version of the captured face image can be rotated or otherwise adjusted to improve the pose and lighting of the face before the improved face image is converted back to a 2D image” (Para. [0020]).
Perform face cutting and image quality evaluation prompting on the facial images, to generate preprocessed facial images that were cut and to be referenced in the image annotation,
Huijgens teaches “each configuration file can include a test mode option indicating the type of test to be performed using the image associated with the configuration file, e.g., a False Accept Rate (FAR) test, a False Reject Rate (FRR) test, a CyberExtruder False Accept Rate (CEFAR) test, a Quality test, or other suitable tests that comply with the configuration format, including customized tests. Also, each configuration file can include a FERETPath option indicating the path to the FERET reference images.” (Para. [0041]), where “Each configuration file can include other options, such as whether or not to use CyberExtruder pose and lighting correction processing (or other suitable pose and lighting correction processing), the amount of images for comparison, whether or not to use saved templates for generated output files, the path to those saved templates, the output directory for generated output files and how many processors (cpus) to use.” (Para. [0042]) and “the output generated by the pose and lighting correction class 56 is an image, which can be saved in an appropriate location, e.g., on disk storage space in a temporary directory.” (Para. [0037]).
Generate codes for the preprocessed facial images that were cut and to be referenced in the image annotation,
Huijgens teaches “the facial recognition system 160, in verification (or authentication) mode, uses a biometric Identity verification system based on face recognition, using any suitable 2D face recognition process module to measure the similarity score between a reference image that has an associated known identity, and a subject or probe face image that is captured live using a 2D still camera or using a frame captured from a video stream.” (Para. [0075]) and Varadarajan teaches “the content is annotated based on the text-oriented, video-oriented, and audio-oriented analyses at the level of granularity of interest (170), and the annotations are stored in a database (180).” (Para. [0033]).
Huijgens further teaches “each configuration file can include a FERETPath option indicating the path to the FERET reference images.” (Para. [0041]) and “each configuration file can include other options, such as whether or not to use CyberExtruder pose and lighting correction processing (or other suitable pose and lighting correction processing), the amount of images for comparison, whether or not to use saved templates for generated output files, the path to those saved templates, the output directory for generated output files and how many processors (cpus) to use.” (Para. [0042]).
Maintain uniqueness of data set identifiers and facial image codes, according to different facial information factors,
Huijgens teaches “each configuration file can include a FERETPath option indicating the path to the FERET reference images” (Para. [0041]), thereby teaching maintaining uniqueness of data according to different facial information factors.
Query individual data sets in different test databases by using one or more screening conditions,
Huijgens teaches “the facial recognition system described herein can be used in any suitable application. For example, the facial recognition system described herein can be used in an identification mode application, i.e., in which a captured subject or probe facial image is compared with a plurality of reference facial images (e.g., from a facial image database) to determine a match or potential match between the subject facial image and one or more of the reference facial images.” (Para. [0021]).
Provide a test database matching condition required for testing in an actual application scenario, and
Huijgens teaches “the facial recognition system described herein can be used in any suitable application. For example, the facial recognition system described herein can be used in an identification mode application, i.e., in which a captured subject or probe facial image is compared with a plurality of reference facial images (e.g., from a facial image database) to determine a match or potential match between the subject facial image and one or more of the reference facial images.” (Para. [0021]).
generate a unique facial image code or a facial video code; and
Huijgens teaches “The classes also include a settings class 44 and a CTS Config class 46. The settings class 44 is a parser that parses the parameters of the input command line received by the input module 22 and gives a meaning to the command line parameters. The settings class 44 is separated from the CTS Config class 46 to ensure that the system stays modular and to also ensure that the settings class 44 meets the requirements of cohesion (i.e., single-responsibility principle)” (Para. [0034]) where “The classes also include a DataProducer class 48. The DataProducer class 48 produces data, e.g., the test data needed to make a test report. The DataProducer class 48 also uses the face recognition library 34 and the pose and lighting correction library 36 to generate the data. The DataProducer class 48 includes various functions, such as CE_Match, CE_Quality, Match and Quality, which will be described in greater detail hereinbelow.” (Para. [0035]).
A testing service functional module configured to
call the database archiving management module,
Huijgens teaches “The test cases are globally sorted in the following categories: Constructors, Data accessors, function returns and Type checking” (Para. [0027]).
provide, for performance testing, a test database that meets a standard requirement, and
Huijgens teaches “a step 72 of receiving input data. As discussed hereinabove, the input module 22 of the facial recognition system 10 receives input information or data from the FERET Database 24 or other suitable database of reference facial images. The input module 22 of the facial recognition system 10 also receives face image information through one or more configuration files 26, which are given to the facial recognition system 10 via command lines.” (Para. [0054]) and “If the determining step 84 determines that the input data parsing operation was performed successfully (Y), the method 70 proceeds to a step 86 of performing the appropriate test.” (Para. [0057]) where “as part of the test performing step 86, as discussed hereinabove, available pose correction and/or lighting and/or resolution processing can be used to improve the appearance of a face image prior to that face image being used in any facial recognition processing. In an identification mode application, available image improvement processing can be used to enhance the quality of a captured subject or probe facial image as well as one or more of a plurality of reference facial images from a facial image database. In a verification mode application, available image improvement processing can be used to enhance the quality of a captured subject or probe facial image and/or an associated known identity reference facial image (e.g., a passport image).” (Para. [0058]).
provide a test result feedback statistics service.
Huijgens teaches “The DataProducer class 48 produces data, e.g., the test data needed to make a test report.” (Para. [0035]) and “each configuration file can include a test mode option indicating the type of test to be performed using the image associated with the configuration file, e.g., a False Accept Rate (FAR) test, a False Reject Rate (FRR) test, a CyberExtruder False Accept Rate (CEFAR) test, a Quality test, or other suitable tests that comply with the configuration format, including customized tests” (Para. [0041]).
Huijgens explicitly teaches all of the elements of the claimed invention as recited above except:
periodically update data of a facial test database based on a usage management requirement,
perform hierarchical classification management based on user permission allocation;
image annotation by a facial testing algorithm and image processing, and
construct a large-scale normalized facial test database;
However, in the related field of endeavor of hierarchical image processing, Varadarajan teaches:
perform hierarchical classification management based on user permission allocation;
Varadarajan teaches “note that most of the classifiers may have some sort of pre-processing, say regionalization, as part of their classification process. Further, as part of the post-processing in some of the classifiers, the context, say, as defined by the path in the hierarchy, gets used in reducing the ambiguity, and thereby enhancing the recognition accuracy. Observer this aspect in the C-SeaShore classifier.” (Para. [0156]). Varadarajan further teaches “one kind of video/image analysis is for machine processing while the other kind of video/image analysis is for providing information directly or indirectly to users. Note that video/image compression falls into the first kind while the video/image annotation is of second kind. For example, video/image annotations help in supporting semantics based end user queries on videos and relevance based ad targeting while watching the videos” (Para. [0002]).
image annotation by a facial testing algorithm and image processing, and
Varadarajan teaches “depicts an overview of a video analysis system. A content retrieval system (100) obtains a content to be annotated from the content database (110). The content is typically a multimedia video and a multi-modal analysis is performed to extract as much of information as possible with an end objective of annotating of the content.” (Para. [0033]).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Varadarajan and Huijgens at the time that the claimed invention was effectively filed, to have combined the hierarchical image processing, as taught by Varadarajan, with the systems and methods for face recognition using a face image database, as taught by Huijgens.
One would have been motivated to make such combination because Varadarajan teaches “The known systems perform syntactic and semantic analyses of the images in an isolated manner to address the issues related to the processing complexity. The present invention provides a system and method to enhance the overall image recognition accuracy by building on top of the well known proposed systems by exploiting the hierarchical domain semantics.” (Para. [0014]).
Varadarajan and Huijgens explicitly teach all of the elements of the claimed invention as recited above except:
periodically update data of a facial test database based on a usage management requirement,
construct a large-scale normalized facial test database;
However, in the related field of endeavor of human characteristic recognition, CN173 teaches:
periodically update data of a facial test database based on a usage management requirement,
CN173 teaches “periodically updating the human body characteristic database” (Page 11 Paragraph 6)
construct a large-scale normalized facial test database;
CN173 teaches “according to the correction coordinate of each key feature point calculated by dividing each part of the face image, the position of the key characteristic point of each human face image mark is not completely the same, so divided from the sub-area image of each human face image size is not the same, the position of each key feature points are different, thus performing normalization processing for each key characteristic point.” (Page 10 Paragraph 1).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of CN173, Varadarajan, and Huijgens at the time that the claimed invention was effectively filed, to have combined the sex identification of humans in an image, as taught by CN173, with the hierarchical image processing, as taught by Varadarajan, and the systems and methods for face recognition using a face image database, as taught by Huijgens.
One would have been motivated to make such combination because CN173 teaches “a method based on human body feature identification method, device and system of network camera, can improve the recognition efficiency of target human body characteristic, accuracy and real-time performance.” (Page 2 Paragraph 4), and it would have been obvious to a person having ordinary skill in the art that incorporating recognition of extra features, such as the gender and sex of the human in the image, to improve recognition efficiency would improve the combined recognition systems of Huijgens and Varadarajan.
Regarding Claim 2:
CN173, Varadarajan, and Huijgens further teach:
Wherein the database archiving management module comprises a primary storage database, a usage sub-database, an approval database, a preprocessing database and a feedback database;
Huijgens teaches “One or more of the modules in the facial recognition system 10 can be implemented in software, hardware, firmware, or any combination thereof. In certain embodiments, the module(s) may be implemented in software or firmware that is stored in a memory and/or associated components and that are executed by a processor, or any other processor(s) or suitable instruction execution system” (Para. [0082]).
The primary storage database comprises individual data sets of single individuals, and a facial image and facial video in each individual data set in a constructed target facial test database each having a unique irreversible identification code;
Huijgens teaches “each configuration file can include a FERETPath option indicating the path to the FERET reference images” (Para. [0041]) thereby teaching storing data with a unique irreversible identification code (path).
The usage sub-database has a set scale and quantity obtained from the primary storage database according to a data set configuration rule and based on a performance test level requirement of a device to be tested, comprises a target set and a probe set meeting a sample distribution requirement, and is configured to test performance indicators comprising a Fault Acceptance Rate (FAR) and a Fault Rejection Rate (FRR) of the device to be tested;
Huijgens teaches “The configuration files input to the facial recognition system 10 can have any suitable format that is recognized by the facial recognition system 10. Also, each configuration file includes a number of option parameters for use by the various components and processing modules in the facial recognition system 10. For example, each configuration file can include a test mode option indicating the type of test to be performed using the image associated with the configuration file, e.g., a False Accept Rate (FAR) test, a False Reject Rate (FRR) test, a CyberExtruder False Accept Rate (CEFAR) test, a Quality test, or other suitable tests that comply with the configuration format, including customized tests. Also, each configuration file can include a FERETPath option indicating the path to the FERET reference images.” (Para. [0041]) where “when the FAR test is chosen, the number of images needs to be defined because the FAR test uses the image amount setting to determine the sample size and how many images need to load for the test.” (Para. [0043])
The preprocessing database is configured to receive facial images or facial videos initially imported into the storage server in batches, perform data preprocessing, provide an evaluation result, generate an annotated data set, and save the annotated data set into the approval database, and
Huijgens teaches “The facial recognition system 130 also includes an image pre-processing module or logic 136, which is configured to perform image pre-processing, e.g., as discussed hereinabove, to improve captured face images. The image pre-processing module or logic 136 can be used to improve a subject or probe image, e.g., a subject or probe image captured by a camera 138 of a subject 142. Subject image pre-processing is shown generally as a subject image pre-processing module 144. The image pre-processing module or logic 136 also can be used to improve a reference image, e.g., a reference image provided or supplied by the face image database 134. Reference image pre-processing is shown generally as a reference image pre-processing module 146.” (Para. [0071]) and “the output generated by the pose and lighting correction class 56 is an image, which can be saved in an appropriate location, e.g., on disk storage space in a temporary directory.” (Para. [0037]).
The annotated data set is verified according to the evaluation result, subjected to a conformity check performed based on a technical requirement on test databases, and saved into the primary storage database after being approved; and
Huijgens teaches “the facial recognition system 160, in verification (or authentication) mode, uses a biometric Identity verification system based on face recognition, using any suitable 2D face recognition process module to measure the similarity score between a reference image that has an associated known identity, and a subject or probe face image that is captured live using a 2D still camera or using a frame captured from a video stream.” (Para. [0075]) and Varadarajan teaches “the content is annotated based on the text-oriented, video-oriented, and audio-oriented analyses at the level of granularity of interest (170), and the annotations are stored in a database (180).” (Para. [0033]). Therefore, Huijgens teaches verifying or authenticating aspects of the images based on a conformity/comparison check and Varadarajan teaches archiving and saving the annotations into the database.
The feedback database comprises individual data sets built by the test user, mainly coming from data sets for which a data anomaly occurs during performance testing performed by the testing service functional module using the downloaded usage sub-database, and is configured to update data in the primary storage database.
Huijgens teaches “The DataProducer class 48 produces data, e.g., the test data needed to make a test report. The DataProducer class 48 also uses the face recognition library 34 and the pose and lighting correction library 36 to generate the data. The DataProducer class 48 includes various functions, such as CE_Match, CE_Quality, Match and Quality,” (Para. [0035]) and “the FaceRec class 58 is a pure abstract class to give a blueprint of the functions needed by a face recognition class. The FaceRec class 58 ensures that the testing program is relatively adaptable to be used with different face recognition processing modules with relatively few modifications.” (Para. [0038]).
CN173 further teaches “periodically updating the human body characteristic database,” (Page 11 Paragraph 6).
Regarding Claim 3:
CN173, Varadarajan, and Huijgens further teach:
Wherein the database archiving management module further comprises a test result database and/or data logs, and the test result database is configured to store results of testing of performance indicators comprising the FAR and the FRR for data update association and statistical analysis of test database service application requirements; and the data logs comprise logs related to operations and audit of all databases and test results in the database archiving management module.
Huijgens teaches “If the determining step 88 determines that the testing was performed successfully (Y), the method 70 proceeds to a step 92 of processing the test results (shown generally as results 94).” (Para. [0060]) and “The output module 30 typically is responsible for generating or providing the results from the core module 28, e.g., as one or more files, such as a CSV (Comma-separated values) file.” (Para. [0024]).
Huijgens further teaches auditing all of the databases and test results by teaching “the Quality test assesses the quality of all of the images that are frontal and quarter turned.” (Para. [0043]) and “[t]o measure the False Accept Rate, every image of a person in the dataset is matched against all of the images of the different persons in the dataset.” (Para. [0045]).
Regarding Claim 5:
CN173, Varadarajan, and Huijgens further teach:
Wherein the testing service functional module comprises a database calling module, a device interface debugging module, a statistics and report module and a test result module;
Huijgens teaches “One or more of the modules in the facial recognition system 10 can be implemented in software, hardware, firmware, or any combination thereof. In certain embodiments, the module(s) may be implemented in software or firmware that is stored in a memory and/or associated components and that are executed by a processor, or any other processor(s) or suitable instruction execution system” (Para. [0082]).
The database calling module is configured to download or upload an individual data set according to a requirement and an operation;
Huijgens teaches “After such preprocessing is performed, these functions call the Match or Quality function and pass the preprocessed images as parameters.” (Para. [0053]).
The device interface debugging module is configured to interact with a device to be tested by calling a test interface function, to push or obtain a facial image;
Huijgens teaches “The method 70 also includes a step 84 of determining whether or not the parsing operation or step 82 was performed successfully. If the determining step 84 determines that the input data parsing operation was not performed successfully (N), the method 70 proceeds to the step 76 of displaying an error message. The method 70 then either ends or returns to the start of the method 70 (i.e., the end/return step 78). If the determining step 84 determines that the input data parsing operation was performed successfully (Y), the method 70 proceeds to a step 86 of performing the appropriate test.” (Para. [0057]).
The statistics and report module is configured to provide data set statistics, project statistics, algorithm statistics and simulation test statistics; and
Huijgens teaches “The DataProducer class 48 produces data, e.g., the test data needed to make a test report.” (Para. [0035]) and “The configuration files input to the facial recognition system 10 can have any suitable format that is recognized by the facial recognition system 10. Also, each configuration file includes a number of option parameters for use by the various components and processing modules in the facial recognition system 10. For example, each configuration file can include a test mode option indicating the type of test to be performed using the image associated with the configuration file, e.g., a False Accept Rate (FAR) test, a False Reject Rate (FRR) test, a CyberExtruder False Accept Rate (CEFAR) test, a Quality test, or other suitable tests that comply with the configuration format, including customized tests. Also, each configuration file can include a FERETPath option indicating the path to the FERET reference images.” (Para. [0041]).
The test result module is configured to manage test results of performance indicators comprising a Fault Acceptance Rate (FAR) and a Fault Rejection Rate (FRR) of the device to be tested.
Huijgens teaches “The DataProducer class 48 produces data, e.g., the test data needed to make a test report.” (Para. [0035]) and “The configuration files input to the facial recognition system 10 can have any suitable format that is recognized by the facial recognition system 10. Also, each configuration file includes a number of option parameters for use by the various components and processing modules in the facial recognition system 10. For example, each configuration file can include a test mode option indicating the type of test to be performed using the image associated with the configuration file, e.g., a False Accept Rate (FAR) test, a False Reject Rate (FRR) test, a CyberExtruder False Accept Rate (CEFAR) test, a Quality test, or other suitable tests that comply with the configuration format, including customized tests. Also, each configuration file can include a FERETPath option indicating the path to the FERET reference images.” (Para. [0041]).
Regarding Claim 6:
CN173, Varadarajan, and Huijgens further teach:
Wherein the testing service functional module further comprises a user login module configured to cooperate with the database archiving management module to perform a rights-based access operation on each sub-database in the facial test database according to rights of a user.
Huijgens teaches “The face matching module 148 provides the ranking list to the identification module 132, which makes the ranking list available to a suitable a business application 152 and/or a user. The ranking list is used by the user and/or a machine within the business application 152 to associate an identity to the subject image.” (Para. [0073]) thereby teaching rights-based access by the business application and/or user. Huijgens further teaches “The verification module 162 is coupled to an accept/reject controller 164, which is configured to allow or disallow a subject to gain entrance or otherwise be accepted based on the accept/reject determination made by the verification module 162 and delivered to the accept/reject controller 164” (Para. [0076]).
Regarding Claim 7:
Some of the limitations herein are similar to some or all of the limitations of Claim 1.
CN173, Varadarajan, and Huijgens further teach a test database management method for testing a facial recognition device, comprising:
Downloading a test database of a required scale, to form a target set and a probe set.
CN173 teaches “extracting human body characteristic from the pre-set human body characteristic database by the server according to the human body characteristic training human body characteristic recognition classifier;” (Page 2 Paragraph 8).
Varadarajan teaches “Obtain the set of image annotations associated with the various hierarchies” (Para. [0151]) and “obtaining of a total number of pairs based on said plurality of labels” (Claim 4).
Regarding Claim 10:
CN173, Varadarajan, and Huijgens further teach:
Wherein a data set in the test database for which a data anomaly occurs during performance indicator testing is displayed in a form of a test result, only an image for which feature value extraction fails in the test result of the current test and a facial image or facial video in the test database are authorized through access and query of an automatic test system, and serial numbers of the image for which feature value extraction fails in the test result of the current test and the facial image or facial video in the test database are mapped to simple serial numbers obtained after local re-sorting.
Huijgens teaches “The method 70 also includes a step 74 of determining whether or not the received input data is valid data. If the determining step 74 determines that the input data is not valid data (N), the method 70 proceeds to a step 76 of displaying an error message. The method 70 then either ends or returns to the start of the method 70, which is shown generally as an end/return step 78.” (Para. [0055]).
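For illustration only (not part of the record): the serial-number mapping recited in Claim 10 can be sketched as taking the original database serial numbers of the images for which feature value extraction failed, locally re-sorting them, and assigning simple sequential numbers. The function and serial-number formats below are hypothetical.

```python
def map_to_simple_serials(failed_serials):
    """Map original database serial numbers of images for which feature
    value extraction failed to simple serial numbers (1, 2, 3, ...)
    obtained after local re-sorting."""
    return {orig: new for new, orig in enumerate(sorted(failed_serials), start=1)}

# Hypothetical original serial numbers of failed images.
mapping = map_to_simple_serials(["IMG-0453", "IMG-0007", "IMG-1290"])
```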
Regarding Claim 12:
CN173, Varadarajan, and Huijgens further teach:
feeding back a test result and a data usage status during use; and uploading a data set for which an anomaly occurs, to form a self-loop update mode for the test database.
Huijgens teaches “If the determining step 88 determines that the testing was performed successfully (Y), the method 70 proceeds to a step 92 of processing the test results (shown generally as results 94).” (Para. [0060]).
CN173 teaches “periodically updating the human body characteristic database, and extracting features from the human body characteristic database for updating to re-train the human characteristic recognition classifiers, keeping the real-time property of data.” (Page 11 Paragraph 6) thereby teaching a self-loop update mode for updating and retraining.
Regarding Claim 13:
CN173, Varadarajan, and Huijgens further teach:
Wherein the unique face information codes are assigned to the facial images according to a data set identification and coding rule, and
Huijgens teaches “the facial recognition system 160, in verification (or authentication) mode, uses a biometric Identity verification system based on face recognition, using any suitable 2D face recognition process module to measure the similarity score between a reference image that has an associated known identity, and a subject or probe face image that is captured live using a 2D still camera or using a frame captured from a video stream.” (Para. [0075]) and Varadarajan teaches “the content is annotated based on the text-oriented, video-oriented, and audio-oriented analyses at the level of granularity of interest (170), and the annotations are stored in a database (180).” (Para. [0033]).
Huijgens further teaches “each configuration file can include a FERETPath option indicating the path to the FERET reference images.” (Para. [0041]) and “each configuration file can include other options, such as whether or not to use CyberExtruder pose and lighting correction processing (or other suitable pose and lighting correction processing), the amount of images for comparison, whether or not to use saved templates for generated output files, the path to those saved templates, the output directory for generated output files and how many processors (cpus) to use.” (Para. [0042]).
Huijgens additionally teaches “The classes also include a settings class 44 and a CTS Config class 46. The settings class 44 is a parser that parses the parameters of the input command line received by the input module 22 and gives a meaning to the command line parameters. The settings class 44 is separated from the CTS Config class 46 to ensure that the system stays modular and to also ensure that the settings class 44 meets the requirements of cohesion (i.e., single-responsibility principle)” (Para. [0034]) where “The classes also include a DataProducer class 48. The DataProducer class 48 produces data, e.g., the test data needed to make a test report. The DataProducer class 48 also uses the face recognition library 34 and the pose and lighting correction library 36 to generate the data. The DataProducer class 48 includes various functions, such as CE_Match, CE_Quality, Match and Quality, which will be described in greater detail hereinbelow.” (Para. [0035]).
the data set identification and coding rule is configured to perform hierarchical classification management according to different test databases and individual data sets in the different test databases, and assign different names, where identifiers are unique.
Varadarajan teaches “Video-oriented analysis (140) involves analyzing of the sequence of frames that are part of the content. Such frames are extracted and a hierarchical image processing is performed (150) based on the database of domain hierarchical semantics (160)” (Para. [0033]).
Huijgens teaches “each configuration file can include a FERETPath option indicating the path to the FERET reference images.” (Para. [0041]), thereby teaching assigning different names/paths with unique identifiers.
Regarding Claim 14:
CN173, Varadarajan, and Huijgens further teach:
Wherein the unique face information codes are assigned to the facial images according to a data set identification and coding rule, and
Huijgens teaches “the facial recognition system 160, in verification (or authentication) mode, uses a biometric Identity verification system based on face recognition, using any suitable 2D face recognition process module to measure the similarity score between a reference image that has an associated known identity, and a subject or probe face image that is captured live using a 2D still camera or using a frame captured from a video stream.” (Para. [0075]) and Varadarajan teaches “the content is annotated based on the text-oriented, video-oriented, and audio-oriented analyses at the level of granularity of interest (170), and the annotations are stored in a database (180).” (Para. [0033]).
Huijgens further teaches “each configuration file can include a FERETPath option indicating the path to the FERET reference images.” (Para. [0041]) and “each configuration file can include other options, such as whether or not to use CyberExtruder pose and lighting correction processing (or other suitable pose and lighting correction processing), the amount of images for comparison, whether or not to use saved templates for generated output files, the path to those saved templates, the output directory for generated output files and how many processors (cpus) to use.” (Para. [0042]).
Huijgens additionally teaches “The classes also include a settings class 44 and a CTS Config class 46. The settings class 44 is a parser that parses the parameters of the input command line received by the input module 22 and gives a meaning to the command line parameters. The settings class 44 is separated from the CTS Config class 46 to ensure that the system stays modular and to also ensure that the settings class 44 meets the requirements of cohesion (i.e., single-responsibility principle)” (Para. [0034]) where “The classes also include a DataProducer class 48. The DataProducer class 48 produces data, e.g., the test data needed to make a test report. The DataProducer class 48 also uses the face recognition library 34 and the pose and lighting correction library 36 to generate the data. The DataProducer class 48 includes various functions, such as CE_Match, CE_Quality, Match and Quality, which will be described in greater detail hereinbelow.” (Para. [0035]).
Wherein the data set identification and coding rule is configured to form a dictionary table based on influencing factors of images according to a facial data set identifier superposition manner corresponding to a database, for automatic generation of codes which are unique.
Varadarajan teaches “710 provides a table depicting the class hierarchy. Note that this class hierarchy is based on semantic relationships among the labels associated with the classes. Each node in the hierarchy has a Class ID and is bound with one or more classifiers. For example, Class ID 1 is bound with two classifiers: C-Day and C-Night. C-Day is a specialist classifier to analyze an input image to recognize whether the input image is a day time image. Similarly, the other classifiers are also specialist classifiers. In particular, note that most of the classifiers may have some sort of pre-processing, say regionalization, as part of their classification process. Further, as part of the post-processing in some of the classifiers, the context, say, as defined by the path in the hierarchy, gets used in reducing the ambiguity, and thereby enhancing the recognition accuracy. Observer this aspect in the C-SeaShore classifier.”
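For illustration only (not part of the record): the dictionary-table-based automatic code generation recited in Claim 14 can be sketched as superposing a database identifier, abbreviations looked up from a table of influencing factors, and a running counter. The factor names, abbreviations, and helper below are hypothetical.

```python
# Hypothetical dictionary table of influencing factors and abbreviations.
FACTOR_TABLE = {
    "pose":     {"frontal": "F", "quarter": "Q"},
    "lighting": {"indoor": "I", "outdoor": "O"},
}

def generate_code(db_id, factors, counter):
    """Superpose the database identifier, the factor abbreviations from
    the dictionary table, and a running counter into a unique code."""
    parts = [FACTOR_TABLE[name][value] for name, value in sorted(factors.items())]
    return f"{db_id}-{''.join(parts)}-{counter:06d}"

code = generate_code("DB01", {"pose": "frontal", "lighting": "indoor"}, 42)
```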
Claim(s) 8, 9, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over CN173, Varadarajan, and Huijgens, and further in view of Foreign Publication CN111310734A, hereinafter referred to as CN734.
Regarding Claim 8:
CN173, Varadarajan, and Huijgens explicitly teach all of the elements of the claimed invention as recited above except:
Wherein the test database is downloaded according to a data security mechanism during use, and data encryption and desensitization is implemented with reference to a mapping relation for use.
However, in the related field of endeavor of facial recognition, CN734 teaches:
Wherein the test database is downloaded according to a data security mechanism during use, and data encryption and desensitization is implemented with reference to a mapping relation for use.
CN734 teaches “the current encryption scheme field of computers, either symmetrical encryption algorithm, asymmetrical encryption algorithm, a hash algorithm such as widely application of classical encryption algorithm, or homomorphic encryption algorithm of the research hot flashes occur again in recent years, because the encryption process destroys the face picture in two-dimensional space can be interpreted, so the ciphertext after encryption thereof can not be directly applied to the facial recognition. If again decrypt the ciphertext and then performing face recognition, then the information after decrypting it still has the risk of privacy leakage.” (page 4 Paragraph 7).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of CN734, CN173, Varadarajan, and Huijgens at the time that the claimed invention was effectively filed, to have combined the protection of user privacy, taught by CN734, with the sex identification of humans in an image, as taught by CN173, the hierarchical image processing, as taught by Varadarajan, and the systems and methods for face recognition using a face image database, as taught by Huijgens.
One would have been motivated to make such combination because CN734 teaches protecting user privacy (Abstract), and it would have been obvious to a person having ordinary skill in the art that the users of business applications taught in the combination of CN173, Varadarajan, and Huijgens would benefit from having their privacy protected.
Regarding Claim 9:
CN734, CN173, Varadarajan, and Huijgens further teach:
Wherein the test database is a test sub-database formed according to a data set configuration and usage rule and based on a requirement of a single project test, is downloaded after authorization and stored in a ciphertext manner, and a data set information and code mapping table that simply sorts and numbers data after processing based on the mapping relation is configured to be viewed.
CN734 teaches “the current encryption scheme field of computers, either symmetrical encryption algorithm, asymmetrical encryption algorithm, a hash algorithm such as widely application of classical encryption algorithm, or homomorphic encryption algorithm of the research hot flashes occur again in recent years, because the encryption process destroys the face picture in two-dimensional space can be interpreted, so the ciphertext after encryption thereof can not be directly applied to the facial recognition. If again decrypt the ciphertext and then performing face recognition, then the information after decrypting it still has the risk of privacy leakage.” (page 4 Paragraph 7) and “in this example, normalized by scale and quantization operations to desensitization, the obtained result as the ciphertext, still is a characteristic pattern of multiple channels, it has spatial information, can be directly applied to the recognition model based on deep learning, it can avoid the risk of privacy leakage directly applied the original feature map has higher safety.” (Page 5 Paragraph 11).
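For illustration only (not part of the record): the viewable data set information and code mapping table recited in Claim 9 can be sketched as simply sorting and numbering the data set codes while replacing each original code with an irreversible digest, so the stored table stays desensitized. The helper name and code formats below are hypothetical.

```python
import hashlib

def build_code_mapping(dataset_codes):
    """Build a viewable data-set information and code mapping table:
    entries are simply sorted and numbered, and each original code is
    replaced by an irreversible digest so stored data stays desensitized."""
    table = {}
    for idx, code in enumerate(sorted(dataset_codes), start=1):
        digest = hashlib.sha256(code.encode()).hexdigest()[:12]
        table[idx] = digest
    return table

table = build_code_mapping(["FACE-2021-003", "FACE-2021-001"])
```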
Regarding Claim 11:
CN734, CN173, Varadarajan, and Huijgens further teach:
Wherein the mapping relation is a correspondence between complete information, including the annotation information and codes, of data sets in the test database and viewable annotation information and codes of data sets used for performance testing.
Varadarajan teaches “This is more so when the analysis is required to extract information from the images for providing of the same directly or indirectly to the users.” (Abstract) and “For a successful annotation of an image, it is necessary to undertake the semantic analysis of the image: the image is analyzed to identify the prominent objects in the image so as provide that annotation based on these recognized objects” (Para. [0002]). Varadarajan further teaches in Fig. 7B an illustrative result of image annotations being displayed.
Response to Amendment
Applicant’s Amendments to the Claims and Abstract, filed on 11/21/2025, are acknowledged and accepted.
In light of the Amendments filed on 11/21/2025, the claim objections to claims 2 and 4 are withdrawn.
In light of the Amendments filed on 11/21/2025, the 101 rejection to claims 1-6 for being directed to non-statutory subject matter, due to not falling within at least one of the four categories of patent eligible subject matter, has been withdrawn.
Response to Arguments
On pages 9-11 of the Remarks filed on 11/21/2025, Applicant states that the independent claims 1 and 7 have been amended in such a way “that such aspects constitute significantly more than an abstract idea, and are neither conventional nor, as discussed below, disclosed by the cited art”. Upon further consideration of Applicant’s arguments and the amended limitations, the 101 rejection of the claims for being directed toward an abstract idea without significantly more is maintained, and the amended limitations have been fully addressed in the rejection above.
On page 14 of the Remarks filed on 11/21/2025, Applicant argues that “Huijgens does not suggest any means or process to ... perform face cutting and image quality evaluation prompting on the facial images, to generate preprocessed facial images that were cut and to be referenced in the image annotation . . . generate codes for the preprocessed facial images that were cut and to be referenced in the image annotation…”. Upon further review, Applicant’s argument is not persuasive, and the amended scope of the independent claims has been addressed in the prior-art rejection above.
On page 15 of the Remarks filed on 11/21/2025, Applicant argues that “Huijgens does not suggest ... maintain uniqueness of data set identifiers and facial image codes, according to different facial information factors, query individual data sets in different test databases by using one or more screening conditions, provide a test database matching condition required for testing in an actual application scenario, and generate a unique facial image code or a facial video code, to construct a large-scale normalized facial test database ...”
Upon further review, Applicant’s argument is not persuasive, and the amended scope of the independent claims has been addressed in the prior-art rejection above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Kim et al. (U.S. Pre-Grant Publication No. 2021/0019345) teaches a method for selecting an image of interest to construct a retrieval database including receiving an image captured by an imaging device, detecting an object of interest in the received image, selecting an image of interest based on at least one of complexity of the image in which the object of interest is detected and image quality of the object of interest, and storing information related to the image of interest in the retrieval database, and an image control system performing the same.
Movellan et al. (U.S. Pre-Grant Publication No. 2018/0012067) teaches implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
Foreign Publication CN 113596158 A teaches a scene-based algorithm configuration method and device relating to the technical field of the Internet of Things, capable of configuring an algorithm packet to an edge domain node according to the requirement of the user’s real-time scene analysis and analyzing the data received from the terminal device according to the configured algorithm packet, improving the user experience. In the specific scheme, the cloud centre receives the sampling data sent by the first edge domain node; the cloud centre determines a plurality of algorithm packets corresponding to the data type of the sampling data, where the data type of the sampling data reflects the scene in which the terminal device obtained the sampling data; the cloud centre applies each algorithm packet of the plurality of algorithm packets to the sampling data, obtaining the algorithm result corresponding to each algorithm packet; and the cloud centre determines the target algorithm packet from the plurality of algorithm packets according to the algorithm result corresponding to each algorithm packet, the target algorithm packet being used for analyzing the data collected by the terminal device.
Foreign Publication CN 104166833 A teaches a human face detection method. It mainly comprises the following steps: first inputting the image and pre-processing it, then performing face detection, where the detection section uses a human-face detecting method based on statistics applicable to face recognition, video transmission, video monitoring, and other uses, with final result output. Using the method, face detection can be carried out precisely.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT F MAY whose telephone number is (571)272-3195. The examiner can normally be reached Monday-Friday 9:30am to 6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney can be reached on 571-270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT F MAY/Examiner, Art Unit 2154 3/3/2026
/BORIS GORNEY/Supervisory Patent Examiner, Art Unit 2154