DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continuation Application
This Application is a continuation of U.S. Application No. 18/050,894, filed on 10/28/2022, now U.S. Patent No. 12,417,665 (“Parent Application”). See MPEP § 201.07. In accordance with MPEP § 609.02(II)(A)(2) and MPEP § 2001.06(b) (last paragraph), the Examiner has reviewed and considered the prior art cited in the Parent Application. Also in accordance with MPEP § 2001.06(b) (last paragraph), all documents cited or considered ‘of record’ in the Parent Application are now considered cited or ‘of record’ in this application. Additionally, Applicant(s) is/are reminded that a listing of the information cited or ‘of record’ in the Parent Application need not be resubmitted in this application unless Applicant(s) desires the information to be printed on a patent issuing from this application. See MPEP § 609.02(II)(A)(2).
Information Disclosure Statement
The Information Disclosure Statement filed 09/15/2025 was considered. An initialed copy of the Form PTO-1449 is enclosed herewith.
Acknowledgements
This Office Action is in response to the claims filed on 09/15/2025.
Claims 1-21 were introduced.
Claims 1-21 are pending.
Claims 1-21 were examined.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 12,417,665. Although the claims at issue are not identical, they are not patentably distinct from each other because the current claims represent a broader embodiment of the patented claims, as follows:
Current Application 19/329,463
US Patent 12,417,665 B1
1. A computer program product stored on a non-transitory computer storage medium comprising machine readable program code for causing, when executed, a computing device to perform process steps, comprising:
receiving one or more user entered identification data elements at a server hosting a voter registration system for registration of a user as a voter within a jurisdiction;
receiving, at the server, an image of a photo identification (ID) card of the user, the photo ID card having a barcode containing an encoding of one or more user identification data elements;
decoding the barcode, at the server, to extract one or more decoded user identification data elements encoded in the barcode;
comparing the one or more user entered identification elements with the one or more decoded user identification data elements; and
registering a user with the voter registration system when there is a match between the one or more user entered identification elements with the one or more decoded user identification data elements.
1. A computer program product stored on a non-transitory computer storage medium comprising machine readable program code for causing, when executed, a computing device to perform process steps, comprising:
receiving one or more user entered identification data elements at a server hosting a voter registration system for registration of a user as a voter within a jurisdiction;
receiving, at the server, an image of a photo identification (ID) card of the user, the photo ID card having a barcode containing an encoding of one or more user identification data elements,
the image of the photo ID card captured within a contained instance of a session between a computing device of the user and the server;
decoding the barcode, at the server, to extract one or more decoded user identification data elements encoded in the barcode;
comparing the one or more user entered identification elements with the one or more decoded user identification data elements;
receiving, at the server, a corresponding image from a governmental data repository, the corresponding image referenced by the one or more decoded user identification data elements;
comparing, via an artificial intelligence (AI) platform implementing a machine learning (ML) model trained to distinguish one or more facial recognition attributes of an image to identify a person according to a tolerance threshold, an image of a person from the photo ID to the corresponding image;
registering the user with the voter registration system in an initial verification of the user in response to a match between the one or more user entered identification elements with the one or more decoded user identification data elements; and
registering the user as a voter with the jurisdiction in a supplemental verification in response to matching of the corresponding image with the image of the person from the photo ID
according to the tolerance threshold.
4. The computer program product of claim 3, further comprising:
receiving, at the server, a ballot containing one or more candidates for an elected office or a ballot initiative for an election cycle within the jurisdiction.
2. The computer program product of claim 1, further comprising:
receiving, at the server, a ballot containing one or more candidates for an elected office or a ballot initiative for an election cycle within the jurisdiction.
5. The computer program product of claim 4, further comprising:
receiving, at the server, a contained instance image of the user captured within the session between the computing device of the user and the server;
comparing the contained instance image of the user with the image of the person from the photo ID by the AI platform; and
when there is a matching of the contained instance image with the image of the person from the photo ID, verifying the user as a voter.
3. The computer program product of claim 2, further comprising:
receiving, at the server, a contained instance image of the user captured within the session between the computing device of the user and the server;
comparing the contained instance image of the user with the image of the person from the photo ID by the AI platform; and
when there is a matching of the contained instance image with the image of the person from the photo ID, verifying the user as a voter.
6. The computer program product of claim 5, further comprising:
specifying a tolerance threshold for the matching;
the tolerance threshold establishing a closeness of the matching between the contained instance image of the user and the image of the person from the photo ID; and
matching the contained instance image with the image of the person from the photo ID according to the tolerance threshold.
4. The computer program product of claim 3, wherein the tolerance threshold establishes a closeness of the matching between the contained instance image of the user and the image of the person from the photo ID.
7. The computer program product of claim 5, further comprising:
transmitting the ballot to the computing device of the user when the user is verified as a voter.
5. The computer program product of claim 3, further comprising:
transmitting the ballot to the computing device of the user when the user is verified as a voter.
8. The computer program product of claim 7, further comprising:
receiving, at the server, a voter selection of the one or more candidates or a voter choice on the ballot initiative for the ballot.
6. The computer program product of claim 5, further comprising:
receiving, at the server, a voter selection of the one or more candidates or a voter choice on the ballot initiative for the ballot.
9. The computer program product of claim 8, further comprising:
transmitting, from the server, a confirmation that the voter selection or the voter choice for the ballot has been received.
7. The computer program product of claim 6, further comprising:
transmitting, from the server, a confirmation that the voter selection or the voter choice for the ballot has been received.
10. The computer program product of claim 8, further comprising:
transmitting, by the server, the voter selection or the voter choice for the ballot to the governmental data repository.
8. The computer program product of claim 6, further comprising:
transmitting, by the server, the voter selection or the voter choice for the ballot to the governmental data repository.
11. The computer program product of claim 10, further comprising:
receiving, at the server, a confirmation from the governmental data repository that the voter selection or the voter choice for the ballot has been tabulated.
9. The computer program product of claim 8, further comprising:
receiving, at the server, a confirmation from the governmental data repository that the voter selection or the voter choice for the ballot has been tabulated.
Examiner notes current claims 12-21 were omitted for brevity. Claims 12-21 correspond in scope to current claims 1-8, 10 and 11, and are similarly rejected for the subject matter of patented claims 10-18, according to the same rationale applied above for claims 1-11.
Claim Objections
Claim 12 is objected to because of the following informalities: Claim 12 recites “A system for online voter registration and a voting”. Examiner interprets the language as “A system for online voter registration and voting”. The language is objected to because "a voting" could be interpreted as an element claimed in addition to "a system", when it appears to be recited as part of "online voter registration and voting", as evidenced by the subsequent language "a server hosting the online voter registration and voting". Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
With respect to the Eligibility Step 1 of the Alice/Mayo two-part test of the subject matter eligibility analysis (see MPEP 2106), in the instant case, claims 1-11 are directed to a product, and claims 12-21 are directed to a system. Therefore, these claims fall within the four statutory categories of invention.
Following step 2A, prong one of the analysis, the language of the independent claims reciting an abstract idea is marked in bold below:
a. “receiving one or more user entered identification data elements at a server hosting a voter registration system for registration of a user as a voter within a jurisdiction;”
b. “receiving, at the server, an image of a photo identification (ID) card of the user, the photo ID card having a barcode containing an encoding of one or more user identification data elements;”
c. “decoding the barcode, at the server, to extract one or more decoded user identification data elements encoded in the barcode;”
d. “comparing the one or more user entered identification elements with the one or more decoded user identification data elements”
e. “registering a user with the voter registration system when there is a match between the one or more user entered identification elements with the one or more decoded user identification data elements”
Therefore, the portions highlighted in bold above recite collecting information, analyzing it and displaying certain results of the collection and analysis, which is an abstract idea grouped within the mental processes grouping of abstract ideas in prong one of step 2A. The claims are grouped within mental processes because the steps recited describe extracting and processing information from a user and a hard copy document (i.e., extracting data from a barcode and from user input), analyzing it (comparing the extracted and processed data), and displaying certain results of the collection and analysis (i.e., registering a user upon successful data verification), which is a concept that can be performed in the human mind or by pen and paper. Accordingly, the claims recite an abstract idea.
With respect to step 2A, prong two of the analysis, this judicial exception is not integrated into a practical application. In particular, the additional elements of the claims include: a computer program product stored on a non-transitory computer storage medium; a server and a program product; a voter registration system; and a barcode. Specifically, with respect to using the computer program product stored on a non-transitory computer storage medium and the server and program product to perform the recited steps/functions, these additional elements perform steps or functions such as: “receiving… data…”, “receiving… card…”, “decoding… barcode… to extract… data…”, “comparing… data… with… data…”, and “registering a user… when there is a match…”. These additional elements are recited at a high level of generality such that they represent no more than mere instructions to apply the exception using a generic computer component, which only serves to use computers as a tool to perform the abstract idea. Therefore, these elements do not integrate the abstract idea into a practical application because they require no more than a computer performing functions that correspond to acts required to carry out the abstract idea. The additional elements of a voter registration system and a barcode amount to generally linking the use of the judicial exception to a particular technological environment or field of use. Accordingly, the additional elements above do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, following the analysis of step 2A, prong two, the claims are still directed to an abstract idea.
With respect to step 2B of the analysis, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional computer elements include: a computer program product stored on a non-transitory computer storage medium; a server and a program product; a barcode; and a voter registration system. The computer program product stored on a non-transitory computer storage medium and the server and program product perform the steps/functions of “receiving… data…”, “receiving… card…”, “decoding… barcode… to extract… data…”, “comparing… data… with… data…”, and “registering a user… when there is a match…”, and amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept beyond the abstract idea of collecting information, analyzing it and displaying certain results of the collection and analysis. The remaining additional elements of a voter registration system and a barcode amount to generally linking the use of the judicial exception to a particular technological environment or field of use. As discussed above, taking the claim elements separately, these additional elements perform the steps or functions that correspond to the actions required to perform the abstract idea. Viewed as a whole, the combination of elements recited in the claims merely recites the concept of collecting information, analyzing it and displaying certain results of the collection and analysis. Therefore, the independent claims are not eligible.
Examiner notes that, for elements recited in the dependent claims which were previously analyzed as additional elements of the independent claims above (i.e. computer program product stored on a non-transitory computer storage medium; a server and a program product, barcode), the assessment of these elements under step 2A and step 2B for the dependent claims is inherited from the analysis of the independent claims and omitted for brevity, unless noted by Examiner below.
Dependent claims 2, 3, 13 and 14 further recite the following additional language, in which elements which merely further define the identified abstract idea are marked in bold below:
f) wherein the image of the photo ID card is captured within a contained instance of a session between a computing device of the user and the server.
g) further comprising: receiving, at the server, a corresponding image from a governmental data repository, the corresponding image referenced by the one or more decoded user identification data elements; comparing, via an artificial intelligence (AI) platform implementing a machine learning (ML) model trained to distinguish one or more facial recognition attributes of an image to identify a person, an image of a person from the photo ID and the corresponding image; and registering the user as a voter with the jurisdiction when there is a matching of the corresponding image with the image of the person from the photo ID.
With respect to claims 2 and 13, the claims recite item f) above, language which does not introduce additional elements/functions. The additional language merely represents statements directed to non-functional descriptive material, describing additional details regarding the manner in which the image "is captured". Those statements are insufficient to significantly alter the eligibility analysis. Further, this language further elaborates the abstract idea of collecting information, analyzing it and displaying certain results of the collection and analysis identified in the analysis of independent claims 1 and 12. The additional elements/functions, alone or in combination, are insufficient to integrate the abstract idea into a practical application because they do not pertain to an improvement to the functioning of a computer or to another technology. The additional elements/functions, alone or in combination, do not offer significantly more than the abstract idea, because they merely recite additional instructions to implement the abstract idea on a computer. Examiner notes the claims further describe characteristics of the previously identified additional elements and amount to no more than mere instructions to apply the exception using generic computer components (data capture elements).
Therefore, while the additional language f) of dependent claims 2 and 13 slightly modifies the analysis provided with respect to independent claims 1 and 12, these additional elements/functions are insufficient to render the dependent claims eligible, as detailed above. Therefore, these dependent claims are also ineligible.
With respect to claims 3 and 14, the claims recite item g) above, which represent the additional elements/functions of retrieving additional data and performing an additional comparison of the received data with the retrieved data via an AI platform. This language further elaborates the abstract idea of collecting information, analyzing it and displaying certain results of the collection and analysis identified in the analysis of claims 1, 2, 12 and 13 above. The additional elements/functions integrate the abstract idea into a practical application by implementing the judicial exception with a particular machine or manufacture. Therefore, claims 3 and 14 are eligible under step 2A of the analysis.
Therefore, the additional language g) of dependent claims 3 and 14 significantly alters the analysis provided with respect to independent claims 1 and 12. These additional elements/functions are sufficient to render the dependent claims eligible, as detailed above. Therefore, dependent claims 3 and 14 are eligible. Since claims 3 and 14 are eligible, their dependent claims 4-11 and 15-21 are also eligible due to their dependency on claims 3 and 14.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2-11 and 13-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 2, 3, 5-7, 13, 14 and 16-18 recite the language “the user” multiple times. Examiner notes claims 1 and 12 recite both "registration of a user as a voter within a jurisdiction" and "registering a user with the voter registration system". It is not clear whether those are the same or distinct users, and this duality renders the scope of the language "the user" in the dependent claims unclear, since it is unclear which “a user” the claims are referring to (claims 1 and 12 introduce “a user” more than once, in lines 5 and 14, and in lines 8 and 17, respectively). See MPEP 2173.05(e): “… if two different levers are recited earlier in the claim, the recitation of “said lever” in the same or subsequent claim would be unclear where it is uncertain which of the two levers was intended”. Dependent claims 3-11 and 14-21 are also rejected since they depend on claims 2 and 13, respectively.
Claims 13, 15 and 16 recite the language “the server” in lines 3, 2 and 2, respectively. Examiner notes claim 12 recites both "a server hosting the online voter registration and voting" and "a server hosting a voter registration system". It is not clear whether those are the same or distinct servers, and this duality renders the scope of the language "the server" in the dependent claims unclear, since it is unclear which “a server” the claims are referring to (claim 12 introduces “a server” more than once, in lines 2 and 7). See MPEP 2173.05(e): “… if two different levers are recited earlier in the claim, the recitation of “said lever” in the same or subsequent claim would be unclear where it is uncertain which of the two levers was intended”. Dependent claims 14-21 are also rejected since they depend on claim 13.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Dashiff et al. (US 2015/0221153 A1), hereinafter Dashiff, in view of Durham, III et al. (US 2021/0051017 A1), hereinafter Durham.
With respect to claims 1 and 12, Dashiff teaches a system for online voter registration and voting; and a computer program product stored on a non-transitory computer storage medium (Methods and apparatus for voter registration and voting using mobile communication devices) comprising:
a server hosting the online voter registration and voting, the server configured to communicate with one or more user computing devices and a data repository of a governmental entity (see Fig. 1, enterprise server 130, government voting agency server 160, communication device (user) 110, validation information database 184 and/or voter registration number database 164, paragraphs [0021], [0024], [0028], [0029], [0033], and paragraph [0035]: “The government validation agency server 180 can be, for example, a web server, an application server, a proxy server, a telnet server, a file transfer protocol (FTP) server, a mail server, a list server, a collaboration server and/or the like. The government validation agency server 180 can be, for example, a server associated with the Department of Motor Vehicles (DMV) in a particular state... The memory 182 includes a validation information database 184 that can be look-up table that includes the identifiers (name, date of birth, gender, residential address, driver's license number, voter registration number, social security number, passport number, etc.) associated with a population (e.g., both legitimate voters and non-voters such as underage US citizens, felons, permanent resident aliens, non-permanent resident aliens, etc.) in, for example, a state, a county, or a specific voting district.”);
a program product comprising machine-readable program code for causing, when executed, the server to perform process steps, comprising: receiving one or more user entered identification data elements at a server hosting a voter registration system for registration of a user as a voter within a jurisdiction (see Fig. 2, user authentication information 210, paragraph [0039]: “At 210, the voter registration application installation module 118 (located in the communication device 110) can send user identification information to the validation module 140 (located in the enterprise server 130) via the network 150. Note that a user of a communication device 110 may or may not be a legitimate voter in a specific voting district and/or may or may not be (externally) registered to vote. Moreover, at 210, the user identification information can include an identifier or a set of identifiers unique to each user of the communication device 110. The identifier(s) associated with each user can be, a user login, a user password, a personal identification number (PIN), a driver's license number, a social security number, and/or the like.”);
receiving an image of a photo identification (ID) card of the user (see Fig. 2, voter registration identification information 220, "possession factor", paragraph [0043]: “Upon successful installation of the voter registration file in the communication device 110, a voter (e.g., a user of the communication device) can take a photograph of the voter's driving license (or state-issued identification card) with the image acquisition module 120 (more generally, a "possession factor" associated with the voter)... The "possession factor" can refer to an object or article that is unique to a voter and such an object or article is expected to be in possession of the voter only. Examples of "possession factor" can include a driver's license card, a government-issued identification card, a social security card, a passport, a voter registration card, and/or the like...”);
registering a user with the voter registration system when [user is approved] (see paragraph [0045]: “The validation module 140 can receive the voter registration identification information and can send at least a portion of the voter registration identification information (also referred to as voter registration information) to the government validation module 188, at 222. In some instances, the government validation module 188 can register the user to vote based on the voter registration identification information. Similarly stated, the user can be added to the government's database of registered voters based on signal 222. In other instances, the government validation module 188 can validate voter registration information associated with a voter in a specific voting district by comparing the presented voter registration information with the corresponding voter information stored in, for example, the validation information database 184, at 224. Upon successful registration and/or validation of the voter registration information, the government validation module 188 can send a voter registration validation signal to the validation module 140 via the network, at 226.”; paragraph [0048]: “Thus at this point, the voter is registered and ready to vote in a subsequent election...").
Dashiff does not explicitly disclose a system and product comprising: the photo ID card having a barcode containing an encoding of one or more user identification data elements; decoding the barcode to extract one or more decoded user identification data elements encoded in the barcode; comparing the one or more user entered identification elements with the one or more decoded user identification data elements; and [user is approved] when there is a match between the one or more user entered identification elements with the one or more decoded user identification data elements.
However, Durham discloses a system and product (Mobile voting and voting verification system and method) comprising:
the photo ID card having a barcode containing an encoding of one or more user identification data elements (see paragraph [0032]: “...The request may also include a photograph of an identification card, passport, etc., e.g., a driver's license, of the user. A photograph of the user and/or the user's identification card may also include metadata such as the time, location, etc. at which the photograph was taken, which metadata may also be included in the verification request. The verification request may also include the voter's electronic signature, which may be captured via the voter's mobile device...”; paragraph [0046]: “... According to still another exemplary embodiment, if the user's picture identification card includes a barcode or other type of encoded information..."); decoding the barcode to extract one or more decoded user identification data elements encoded in the barcode; comparing the one or more user entered identification elements with the one or more decoded user identification data elements (see paragraph [0046]: “...According to still another exemplary embodiment, if the user's picture identification card includes a barcode or other type of encoded information, the mobile voter verification system may decode the information contained in the barcode and determine whether the information corresponds to the bibliographic information received from the registrar server and/or the mobile carrier billing system..."); and
[user is approved] when there is a match between the one or more user entered identification elements with the one or more decoded user identification data elements (see paragraph [0046]: “...By ensuring that the various pieces of bibliographic information and/or signature of the user correspond to the bibliographic information and/or signature stored in the registrar server and/or the mobile carrier billing system, the mobile voter verification system provides additional voter verification before providing the ballot to the user.”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the additional voter verification as disclosed by Durham in the system and product of Dashiff, the motivation being to increase security and the ability to verify that each person casting a vote is who they claim to be, and that each voter is only able to vote once (see Durham, paragraph [0003]).
With respect to claims 2 and 13, the combination of Dashiff and Durham teaches all the subject matter of the system and product as described above with respect to claims 1 and 12. Furthermore, Dashiff discloses a system and product wherein the image of the photo ID card is captured within a contained instance of a session between a computing device of the user and the server (see paragraph [0043]: “Upon successful installation of the voter registration file in the communication device 110, a voter (e.g., a user of the communication device) can take a photograph of the voter's driving license (or state-issued identification card) with the image acquisition module 120 (more generally, a "possession factor" associated with the voter)..."). The motivation for combining the references remains unaltered from the motivation described above in conjunction with the rejection of the independent claims.
Claims 3, 4, 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Dashiff (US 2015/0221153 A1), in view of Durham (US 2021/0051017 A1), in view of Boyd et al. (US 11,531,737 B1), hereinafter Boyd.
With respect to claims 3 and 14, the combination of Dashiff and Durham teaches all the subject matter of the system and product as described above with respect to claims 2 and 13. Dashiff further teaches a system and product registering the user as a voter with the jurisdiction when there is a matching of the corresponding image with the image of the person from the photo ID (see paragraph [0045]: “The validation module 140 can receive the voter registration identification information and can send at least a portion of the voter registration identification information (also referred to as voter registration information) to the government validation module 188, at 222. In some instances, the government validation module 188 can register the user to vote based on the voter registration identification information. Similarly stated, the user can be added to the government's database of registered voters based on signal 222. In other instances, the government validation module 188 can validate voter registration information associated with a voter in a specific voting district by comparing the presented voter registration information with the corresponding voter information stored in, for example, the validation information database 184, at 224. Upon successful registration and/or validation of the voter registration information, the government validation module 188 can send a voter registration validation signal to the validation module 140 via the network, at 226.”; paragraph [0048]: “Thus at this point, the voter is registered and ready to vote in a subsequent election...").
The combination of Dashiff and Durham does not explicitly teach a system and product further comprising: receiving, at the server, a corresponding image from a governmental data repository, the corresponding image referenced by the one or more decoded user identification data elements; comparing, via an artificial intelligence (AI) platform implementing a machine learning (ML) model trained to distinguish one or more facial recognition attributes of an image to identify a person, an image of a person from the photo ID and the corresponding image; and registering the user as a voter with the jurisdiction when there is a matching of the corresponding image with the image of the person from the photo ID.
However, Boyd discloses a system and product (Biometric identity disambiguation) further comprising:
receiving, at the server, a corresponding image from a governmental data repository, the corresponding image referenced by the one or more decoded user identification data elements (see Fig. 4, block 414, col. 32 lines 54-65: “Bock 414 is representative of querying a gallery of references based on information received from a person that asserts an identity, e.g., biographically asserts. For example, upon reaching a touchpoint included in a TSA checkpoint a person scans his/her driver's license to inform the touchpoint of who he/or she is. In response, a gallery of references is questioned to obtain those references that are generally similar to the asserted biographic information. Those of skill in the art will appreciate that the querying of block 414 may include obtaining references from a data structure such as that of the central resource if a gallery has not been built, such as if the person is a “walk-up.””);
comparing, via an artificial intelligence (AI) platform implementing a machine learning (ML) model trained to distinguish one or more facial recognition attributes of an image to identify a person, an image of a person from the photo ID and the corresponding image (see Fig. 4, block 416, col. 32, line 66 to col. 33, line 3: “A calculation is performed (Block 416) of the biographic similarity between the asserted identity (e.g., the biographic information included in for example an identification document (ID)) by a person seeking identification and that in the references from the query.”; col. 17, line 57 to col. 18, line 5: “As should be recognized, a biographic comparator 132 conducts this biographic similarity analysis using an algorithm (implemented as a program of computer executable instructions, designed to cause the computing resource calculate how similar and or not similar the supplied biographic information is to references in the data structure or supplied in another manner, e.g., supplied as part of identification such as if a person asserts an identity in a mDL 126 (composed of biographic/biometric information) during an overall identification procedure. Such algorithms may implement a variety of approaches including, but not limited, to heuristic, ad hoc, artificial intelligence, machine learning approaches to electronically determine, based on a calculation whether or not or to what extent asserted biographic information corresponds to information in various references.”; Claim 13: "The system of claim 12, wherein either or both of the current biometric information and the prior biometric information comprises a fingerprint image, a facial image, or an iris scan.").
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the systems, devices and methods for use in biometric identification as disclosed by Boyd in the system and product of Dashiff and Durham, the motivation being to resolve ambiguity, particularly in situations in which a unique authoritative identifier is unavailable for use in locating relevant references to serve as the basis for biometric matching and identification (see Boyd, Abstract).
With respect to claims 4 and 15, the combination of Dashiff, Durham and Boyd teaches all the subject matter of the system and product as described above with respect to claims 3 and 14. Furthermore, Dashiff discloses a system and product further comprising:
receiving, at the server, a ballot containing one or more candidates for an elected office or a ballot initiative for an election cycle within the jurisdiction (see paragraph [0019]: “The ballot module can be operable to obtain voter identification information, for example, by causing the validation module to obtain the possession factor and/or the inherence factor during a voting time period, which can occur after the registration time period. The ballot module can also be operable to obtain any other suitable identification information, such as a knowledge factor or an indication of approval from a pre-authorized poll worker via any suitable module, such as an input/output module and/or device (e.g., a keyboard/monitor, touchscreen, etc.). The ballot module can be operable to allow a user of the apparatus to cast a vote. For example, the ballot module can be operable to receive a representation of a ballot and an authentication code, for example, in response to the network module sending the registration information. The ballot module can then send a representation of a selection of at least one question on the ballot. The ballot module can also send the authentication code and the voter identification information such that a voting server tallies the selection of the question on the ballot when the possession factor or the inherency factor (or any other suitable identification information) obtained during the voting time period matches information stored in a registration database and when the sent authentication code matches the received authentication code.”; paragraph [0057]: “Upon successful validation of a voter, the validation module 340 sends a voter validation signal to the voter registration number generator module 368 via the network, at 414. The voter registration number generation module 368 can generate and/or define a virtual voter ballot, at 416. 
The virtual voter ballot can also be associated with a unique identifier (e.g., the authentication code) for added security purposes. The voter registration number generation module 368 can send the virtual voter ballot to the validation module 340 via the network, at 418. Subsequently or concurrently, the voter registration number generation module 368 can also send the virtual voter ballot to the voter registration application 316 via the network, at 420...”).
The motivation for combining the references remains unaltered from the motivation described above in conjunction with the rejection of the independent claims.
Claims 5-11 and 16-21 are rejected under 35 U.S.C. 103 as being unpatentable over Dashiff (US 2015/0221153 A1), in view of Durham (US 2021/0051017 A1), in view of Boyd (US 11,531,737 B1), in view of Benkreira et al. (US 2020/0042773 A1), hereinafter Benkreira.
With respect to claims 5 and 16, the combination of Dashiff, Durham and Boyd teaches all the subject matter of the system and product as described above with respect to claims 4 and 15. Furthermore, Dashiff discloses a system and product further comprising:
receiving, at the server, a contained instance image of the user captured within the session between the computing device of the user and the server (see "inherence factor", paragraph [0043]: “Upon successful installation of the voter registration file in the communication device 110, a voter (e.g., a user of the communication device) can... take a self-photograph of the voter's face (more generally, an "inherence factor" associated with the voter) with the image acquisition module 120... The "inherence factor" can refer to a physical feature or attribute that is unique to the voter and can be used to identify the voter. Examples of "inherence factor" can include facial features, fingerprint patterns, retinal patterns, iris patterns, birth marks, voice print and/or the like...”).
The combination of Dashiff, Durham and Boyd does not explicitly teach a system and product further comprising: comparing the contained instance image of the user with the image of the person from the photo ID by the AI platform; when there is a matching of the contained instance image with the image of the person from the photo ID, verifying the user.
However, Benkreira discloses a system and product further comprising:
comparing the contained instance image of the user with the image of the person from the photo ID by the AI platform (see paragraph [0044]: “In 201, the process (e.g., a process performed by a system such as the identity verification server 102) may receive an image (e.g., an image 150) including both a live facial image 152 of a user 116 and an identity document 154 that has a photograph of the user (e.g., a photo ID of the user). In one implementation, the image is captured by a camera of a client device 118 and transmitted via network 115. In some implementations, the image capture may be performed by an enrollment or account access application 122 available to all users of the client device 118…”; paragraph [0045]: “In the example of FIG. 2, 201 may comprise receiving a selfie taken by a user while that user was holding a photo ID (e.g., visible in the frame of the selfie). However, the image may be captured by an ATM that provides an enrollment application to a specific user for the purpose of signing up the user for a new account. In certain implementations, 201 may be performed by the image processor 104.”; paragraph [0046]: “In 202, the system may calculate a facial match score by comparing facial features in the live facial image to facial features in the photograph on the photo ID (identity document). In the example of FIG. 2, 202 may comprise performing facial recognition. For example, the system may use the image captured by the camera to perform the facial recognition and verify or determine a likelihood or probability that the person shown in the live facial image is the same person as is shown in the photo ID. In certain implementations, 202 may be performed by the facial recognition module 108.”); and
when there is a matching of the contained instance image with the image of the person from the photo ID, verifying the user as a voter (see paragraph [0052]: “In addition, in 206, the user's identity may be verified based at least in part on a combination of facial recognition as well as OCR from the identity document (e.g., ID card) to verify that the face of the user in the selfie matches the face shown in the photograph on the identity document. As another example, the user identity may be verified based at least in part on recognizing a name from the identity document using OCR, and verifying that the recognized name corresponds to a name associated with an existing or closed user account. For instance, the identity verification server 102 may access previously collected user information 112 for a particular user to assist in verifying that user's identity when signing up for a new account or new service.”; paragraph [0053]: “At 207, the system may output the identity verification status. In the example of FIG. 2, 207 may comprise providing the status to a display device (e.g., the interactive display 114 of the client device 118).”).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the facial recognition information processing, including machine learning techniques, as disclosed by Benkreira in the system and product of Dashiff, Durham and Boyd, the motivation being to improve security for user transactions, reduce the instance of fraud, improve security when verifying a user, and improve identity verification results over time (see Benkreira, paragraphs [0016] and [0038]).
With respect to claims 6 and 17, the combination of Dashiff, Durham, Boyd and Benkreira teaches all the subject matter of the system and product as described above with respect to claims 5 and 16. Furthermore, Benkreira discloses a system and product further comprising:
specifying a tolerance threshold for the matching; the tolerance threshold establishing a closeness of the matching between the contained instance image of the user and the image of the person from the photo ID (see Fig. 1, facial recognition module 108, paragraph [0035], paragraph [0038]: “…the identity verification module 110 may compare the facial match score calculated by the facial recognition module 108 to a predetermined, tunable, facial match threshold to determine a confidence level representing whether the individual in the live facial image is the same person depicted in the photograph in the identity document. In some implementations, the document validity score and the facial match scores may be expressed as numeric values (e.g., percentages or numbers indicating a confidence level that the identity document is valid and the person depicted in the live facial image and the photograph is the same individual). For example, a 75% facial match score may indicate that 75% of the distinguishing facial characteristics detected in the live facial image and in the photograph match. By using sets of training data of facial image pairs to train a machine learning model, the identity verification module 110 may improve identity verification results over time.”; Fig. 2, paragraph [0046]: “In 202, the system may calculate a facial match score by comparing facial features in the live facial image to facial features in the photograph on the photo ID (identity document). In the example of FIG. 2, 202 may comprise performing facial recognition. For example, the system may use the image captured by the camera to perform the facial recognition and verify or determine a likelihood or probability that the person shown in the live facial image is the same person as is shown in the photo ID...”); and
matching the contained instance image with the image of the person from the photo ID according to the tolerance threshold (see paragraph [0050]: “At 206, the system may determine, based on comparing the facial match score to a predetermined facial match threshold and comparing the document validity score to a predetermined document validity threshold, an identity verification status of the user. The thresholds may be numeric values (e.g., percentages) that must be met before the system deems the identity document to be valid and the facial images (in the live facial image and photograph) to be a match. For example, the facial match threshold may be a percentage ranging from about 60% to 100%, such as 65%, 70%, 75%, or 80%, and the document validity threshold may be a percentage ranging from about 70% to 100%, such as 75%, 80%, 85%, or 90% In certain implementations, 206 may include a feedback loop whereby the user is prompted when the facial match threshold is not met. For instance, if a confidence level representing whether the individual in the live facial image 152 is the same person depicted in the photograph in the identity document 154 is too low (e.g., below the facial match threshold), 206 may include prompting the user via the interactive display 114 to provide more data (e.g., “Re-take selfie,” “Take a close-up,” or the like) or alter the conditions (e.g., “turn on the lights,” “turn off flash”, “take off your sunglasses”, or the like).”; paragraph [0051]: “In some implementations, the different percentages for the facial match threshold and the document validity threshold might be weighted differently and combined together to create an overall confidence level...”).
The reasons for combining the references remain unaltered from the reasons described in conjunction with the rejection of claims 1, 3, 5, 12, 14 and 16 above.
With respect to claims 7 and 18, the combination of Dashiff, Durham, Boyd and Benkreira teaches all the subject matter of the system and product as described above with respect to claims 5 and 16. Furthermore, Dashiff discloses a system and product further comprising: transmitting the ballot to the computing device of the user when the user is verified as a voter (see paragraph [0057]: “Upon successful validation of a voter, the validation module 340 sends a voter validation signal to the voter registration number generator module 368 via the network, at 414. The voter registration number generation module 368 can generate and/or define a virtual voter ballot, at 416. The virtual voter ballot can also be associated with a unique identifier (e.g., the authentication code) for added security purposes. The voter registration number generation module 368 can send the virtual voter ballot to the validation module 340 via the network, at 418. Subsequently or concurrently, the voter registration number generation module 368 can also send the virtual voter ballot to the voter registration application 316 via the network, at 420..."; paragraph [0058]: “A legitimate and registered voter (e.g., user of the communication device 310) can use the voter registration application 316 to review and fill out (or complete) the virtual voter ballot, at 422. Completing the virtual voter ballot can include selecting an answer to at least one question on the ballot such as, for example, entering the voter registration number, selecting the name of a candidate for state legislator and/or a candidate for state governor and/or a candidate for the US House of Representatives and/or a candidate for the US Senate and/or a candidate for US president and/or a specific ballot initiative (e.g., legalization of same sex marriage in a state, limiting access to abortion services in a state, etc.), answering a question for an opinion poll, providing feedback on an advocacy group initiative, etc.”).
The motivation for combining the references remains unaltered from the motivation described above in conjunction with the rejection of claims 1, 3, 5, 12, 14 and 16 above.
With respect to claims 8 and 19, the combination of Dashiff, Durham, Boyd and Benkreira teaches all the subject matter of the system and product as described above with respect to claims 7 and 18. Furthermore, Durham discloses a system and product further comprising: receiving, at the server, a voter selection of the one or more candidates or a voter choice on the ballot initiative for the ballot (see paragraph [0065]: “Assuming the correct unique code is entered, the mobile voting system 108 may provide to the RBI servers 405 a ballot corresponding to the user. The RBI servers 405 may then transmit the ballot to the cloud browsers 410 for rendering in HTML or other suitable format. The RBI servers 405 may then transmit or stream an image of the rendered ballot to the user's mobile phone or device 100. As explained above, the user may then use the mobile phone or device 100 to make selections on the ballot, and those selections are transmitted to the RBI servers 405, and ultimately to the cloud browsers 410 for entry on the rendered ballot. As explained above, because only an image of the ballot is provided to the mobile phone or device 100, any malware or other malicious software residing on the mobile phone or device 100 will not be able to determine what selections the user is making.”. Examiner notes Dashiff also teaches this limitation, see paragraph [0059]: “The voter registration application 316 sends the completed virtual voter ballot (e.g., the answer to the at least one question) to the validation module 340 via the network, at 424.”). The motivation for combining the references remains unaltered from the motivation described above in conjunction with the rejection of claims 1, 3, 5, 12, 14 and 16 above.
With respect to claim 9, the combination of Dashiff, Durham, Boyd and Benkreira teaches all the subject matter of the system as described above with respect to claim 8. Furthermore, Durham discloses a system further comprising: transmitting, from the server, a confirmation that the voter selection or the voter choice for the ballot has been received (see paragraph [0066]: “Once the user has completed his or her ballot, for example, by selecting a “submit” option on the ballot, the cloud browsers 410 transmit the completed ballot to the mobile voting system 108 via the RBI servers 405. The mobile voting system 108 may transmit the completed ballot to, or allow the completed ballot to be retrieved by, an entity responsible for tallying votes, such as county election systems. Once the ballot has been completed, the mobile voting system 108 may notify the mobile voting verification system 115 that the voter has completed voting. The mobile voting verification system 115 may then update a log of voters that have already voted so that any future voter verification requests from the voter or the voter's mobile telephone number will be rejected.”). The motivation for combining the references remains unaltered from the motivation described above in conjunction with the rejection of claims 1, 3, 5, 12, 14 and 16 above.
With respect to claims 10 and 20, the combination of Dashiff, Durham, Boyd and Benkreira teaches all the subject matter of the system and product as described above with respect to claims 8 and 19. Furthermore, Dashiff discloses a system and product further comprising: transmitting, by the server, the voter selection or the voter choice for the ballot to the governmental data repository (see paragraph [0060]: “Note that in some instances, the completed voter ballot is sent by the voter registration application 316 to only the validation module 340 (and not the voter registration number generator module 368, and/or the analysis module 396). In such instances, the validation module 340 can periodically or substantially periodically send copies of completed virtual voter ballots to the voter registration number generator module 368, and/or the analysis module 396..."). The motivation for combining the references remains unaltered from the motivation described above in conjunction with the rejection of claims 1, 3, 5, 12, 14 and 16 above.
With respect to claims 11 and 21, the combination of Dashiff, Durham, Boyd and Benkreira teaches all the subject matter of the system and product as described above with respect to claims 10 and 20. Furthermore, Dashiff discloses a system and product further comprising: receiving, at the server, a confirmation from the governmental data repository that the voter selection or the voter choice for the ballot has been tabulated (see paragraph [0061]: “In some instances, after all the votes for a specific voting district in an election has been cast, the enterprise server 330 can aggregate and display the results of the voting if the data files for votes cast is identically recorded in the validation module 340, and the voter registration number generator module 368, and the analysis module 396. If the data files for votes are not identically recorded in the validation module 340 and the voter registration number generator module 368 and the analysis module 396, a signal indicating voting fraud can be generated by the validation module 340. Hence, periodic or substantially periodic transmission of completed virtual voter ballots to the three modules can help detect voting irregularities and thus can assist in implementing accurate methods to overcome such voting irregularities.”). The motivation for combining the references remains unaltered from the motivation described above in conjunction with the rejection of claims 1, 3, 5, 12, 14 and 16 above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Non-Patent Literature
T. Kuklinski and B. Monk (NPL 2008, listed in PTO-892 as page 1, reference "U") disclose “The Use of ID Reader-Authenticators in Secure Access Control and Credentialing”, including reading IDs in full color at high resolution with multiple light sources, extracting image fields such as photos, as well as data fields, whether from text, barcodes, magnetic stripes, or embedded chips.
Patent Literature
Pribble et al. (US 2020/0394429 A1) disclose image analysis and processing pipeline with real-time feedback and autocapture capabilities, and visualization and configuration system, including a real-time (or near real-time) image analysis and processing pipeline that facilitates capturing of high-resolution images of documents (e.g., ID cards, passports, personal checks, bank checks, and/or the like).
Kuklinski et al. (US 2017/0132866 A1) disclose a self-learning system and methods for automatic document recognition, authentication, and information extraction, including reading any data that is embedded on an ID. See FIG. 29, Data Extraction Method. There are a number of possible sources of such data. Non-encoded text data is printed on the ID and can be read by traditional Optical Character Recognition (OCR) methods.
Castelblanco Cruz et al. (US 11,144,752 B1) disclose physical document verification in uncontrolled environments, including separating the physical document from the background by semantic segmentation utilizing an artificial neural network trained using an augmented dataset generated by applying geometric transformations over different backgrounds. Features of the pre-processed image are extracted to determine a document type. In response to determining the document type of the physical document, the method includes verifying, utilizing a machine learning classifier, whether the physical document is authentic based on the extracted features relative to expected features for the corresponding document type.
Lev (WO 2006136958 A2) discloses system and method of improving the legibility and applicability of document pictures using form based image enhancement, including a document database containing additional information about special fields or areas in the document, e.g. boxes for handwritten input, ticker boxes, places for a photo ID, pre-printed information, barcode location, etc. This information is used in the processing stage to optimize the resulting image quality by applying different processing to the different parts of the document.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDUARDO D CASTILHO whose telephone number is (571)270-1592. The examiner can normally be reached Mon-Fri 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patrick McAtee can be reached at (571) 272-7575. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDUARDO CASTILHO/Primary Examiner, Art Unit 3698