DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 22 December 2025 has been entered.
Allowable Subject Matter
Claims 10 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
With regards to claims 10 and 20, each of these claims recites the same patentable features as were found allowable in parent application 18/055,293, which issued as United States Patent No. 11,853,406 on 26 December 2023. Applicant filed a terminal disclaimer on 20 May 2025, which was accepted. Accordingly, these claims are allowable for the same reasons as were presented in the parent application.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 5-9, 11, 13 and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kaehler (U.S. PG Pub. No. 2017/0351909) in view of Smith et al. (U.S. PG Pub. No. 2016/0125180).
With regards to claim 1, the limitations of this claim are obvious over the teachings of the prior art, as evidenced by the references discussed in the following pages.
The Kaehler reference
Kaehler discloses receiving, by a computer processor (“augmented reality device”), a user request (i.e., inspector triggering ARD authentication of a traveler at an airport security checkpoint) and image data from an electronic source comprising: i) a facial image of a user, and ii) an identity document, comprising a photograph of the user at ¶ [0160] and FIG. 15 (block 1510). See, also: ¶¶ [0025]-[0027] (“For example, at an airport security checkpoint, a traveler usually presents his or her identification document (e.g., a driver's license or passport) to an inspector who may wear the ARD… The inspector may view the traveler (as well as other persons in the traveler's environment) and the traveler's documents through the ARD. The ARD can image the traveler and the traveler's documents and detect linkages among the traveler's documents and the traveler (or others in the environment, such as traveling companions).”), ¶ [0029], ¶ [0103] and FIG. 12A; ¶ [0129], ¶¶ [0149]-[0150] and FIGS. 12B, 14.
[Annotated image: media_image1.png (greyscale, 561 × 535)]
Kaehler discloses determining, by the computer processor, an authenticity of the user based at least in part on at least one facial recognition technique configured to detect a presence of a live facial image, comprising the facial image of the user and the identity document at: ¶¶ [0141]-[0144] (“Multiple faces may exist in the image of the environment. The system can use facial recognition techniques such as a wavelet-based cascade algorithm or DNN to locate these faces… [T]he AR system can detect a first face among the multiple faces in the image using one or more filters. For example, as described with reference to FIG. 12A, one of the filters may be the distance between a face and the AR system which acquires the image. The system may determine that the first face may be the face that has the closest distance to the device… In some implementations, the second face may be a face on a document such as a driver's license. The AR system can detect the second face by searching within a document. The AR system can distinguish a face in the document from a physical face by tracking movements of keypoints.”) and FIG. 13; ¶ [0160] (“The AR system may detect a first document, a second document, and a person in the image”); ¶¶ [0167]-[0168] and FIG. 15 (blocks 1544, 1554). See, also: ¶¶ [0029], [0104]-[0106], [0120], [0125]-[0127]. Kaehler does not specify determining, by the computer processor, an authenticity of the user based at least in part on a proximity of a presence of a registered mobile device associated with the user. However, this limitation was known in the art as evidenced by the Smith reference discussed below.
Kaehler discloses calculating, by the computer processor, a facial match score (e.g., distance between feature vectors or “confidence score”) by comparing a plurality of first facial features extracted from the facial image to a plurality of second facial features extracted from the photograph at ¶¶ [0167]-[0168] and FIG. 15 (blocks 1544, 1554). See, also: ¶¶ [0029], [0104]-[0106], [0120], [0125]-[0127].
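For illustration only, the following sketch (supplied by this editor, not disclosed by Kaehler; the function names, the distance-to-similarity mapping, and the 0.8 threshold are all hypothetical) shows one way a facial match score based on the distance between feature vectors, with a threshold test of the kind Kaehler's confidence score implies, could be computed:

```python
# Hypothetical sketch of a feature-vector facial match score; NOT Kaehler's
# actual method. Feature extraction is assumed to happen upstream.
import math

def facial_match_score(features_a, features_b):
    """Return a similarity score in (0, 1]; 1.0 means identical vectors.

    The score is a monotone transform of the Euclidean distance between
    the two feature vectors, so smaller distance yields a higher score.
    """
    if len(features_a) != len(features_b):
        raise ValueError("feature vectors must have the same length")
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(features_a, features_b)))
    return 1.0 / (1.0 + distance)

def identity_verified(score, threshold=0.8):
    """Threshold test: the identity is verified only if the score passes."""
    return score >= threshold

# Example feature vectors (hypothetical values):
live = [0.12, 0.85, 0.33]      # features from the live facial image
photo = [0.11, 0.86, 0.35]     # features from the document photograph
stranger = [0.90, 0.10, 0.70]  # features from an unrelated face
```

A closely matching pair produces a score near 1.0 and passes the threshold, while an unrelated face produces a low score and fails it.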
Kaehler discloses calculating a document validity score (“confidence score”) by comparing at least one item of the identity document to data objects known to be present in an identity document type associated with the identity document at ¶ [0128] (“Besides using confidence score to verify a person's identity, the confidence score may be used to verify the validity of a document…”), ¶¶ [0163]-[0164] and FIG. 15 (blocks 1542, 1544). See, also, ¶ [0030] (“For example, the ARD can extract a name from a driver's license…”), ¶¶ [0155]-[0158], [0161]-[0163] and FIG. 15 (blocks 1520, 1532), ¶¶ [0113]-[0115] (“A document may also include hidden information (not directly perceivable by a person when the document is illuminated with light within the human visible spectrum). Hidden information may be encoded in a label or may contain a reference to another data source (such as an identifier that can be used to query a database to retrieve additional information associated with the document). For example, as shown in FIG. 12B, the document (e.g. airline ticket 5470) may include an optical label such as a quick response (QR) code 5470 or a bar code.”), ¶ [0117] (“The ARD can determine the authenticity of the document based on information (explicit or hidden) in the document. The ARD may perform such verification by communicating with another data source and looking up information acquired from the image of the document in that data source. For example, where the document shows an individual's street address, the ARD may look up the street address in a database and determine whether the street address exists.”); ¶ [0122] (“As another example, the factor for verifying the linkage between a person and a document may include matching hair colors. The ARD can obtain the hair color of the person 5030 from the image 1200a. The ARD can compare this information with the hair color described on the driver's license 5150.
In the section 5130a of the driver license 5150a, John Doe's hair color is brown. If the ARD determines that the hair color of the person 5030 is also brown, then the ARD may determine a match exists for the hair color.”), ¶¶ [0153]-[0154] (“For example, the AR system may use text recognition and extract the expiration date of an identification document from the image of the identification document… Besides expiration date, the categories of information may also include, for example, birthday, expiration date, departure time, hair color, eye color, iris code, etc.”). Kaehler further discloses identifying a type of the identity document at ¶¶ [0151]-[0152] (“As another example, the AR system may be configured to only identify certain types of documents such as identification documents or airline tickets, and exclude other documents such as flyers or informational notices. The AR system can also identify the first and the second document based on content in the two documents.”).
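For illustration only, the following sketch (supplied by this editor, not disclosed by Kaehler; the field names, sample records, and scoring rule are all hypothetical) shows one way a document validity score could be computed by comparing extracted document fields against the fields expected for the identified document type:

```python
# Hypothetical sketch of a document validity score; NOT Kaehler's actual
# method. The score is the fraction of expected fields that were both
# extracted from the document image and agree with a reference record.
EXPECTED_DL_FIELDS = {"name", "birthday", "expiration_date", "hair_color", "eye_color"}

def document_validity_score(extracted, reference):
    """Return the fraction of expected driver's-license fields that are
    present in the extracted data and match the reference data source."""
    matches = sum(
        1 for field in EXPECTED_DL_FIELDS
        if field in extracted and extracted[field] == reference.get(field)
    )
    return matches / len(EXPECTED_DL_FIELDS)

# Hypothetical extraction result and reference record:
extracted = {"name": "John Doe", "hair_color": "brown", "expiration_date": "2030-01-01"}
reference = {"name": "John Doe", "hair_color": "brown", "expiration_date": "2030-01-01",
             "birthday": "1990-05-04", "eye_color": "blue"}
```

With three of the five expected fields extracted and matching, this sketch yields a score of 0.6, which could then be compared against a validity threshold.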
Kaehler discloses determining, by the computer processor, an identity verification status of the user based on the facial match score at ¶¶ [0169]-[0170] and FIG. 15 (block 1560). See, also, ¶¶ [0029]-[0030] (“The ARD can identify information on the document (e.g., an image of the face of the person who was issued the document) and identify relevant features of the person (e.g., facial or other body features). The ARD can compare the information from the document with the features of the person and calculate a confidence level. When the confidence level is higher than a threshold, the ARD may determine that the person presenting the document is indeed the person described by the document”), ¶ [0128] (“If the confidence score is below a certain threshold, the ARD may determine that the document is invalid.”), ¶ [0168] (“The AR system can calculate a confidence score based on one or [more] factors. The AR system can determine whether a linkage exists based on whether the confidence score passes a threshold.”).
Kaehler discloses controlling, by the computer processor, a response (e.g., outputting authentication results) to the user request (i.e., inspector triggering ARD authentication of a traveler at an airport security checkpoint) based on the identity verification status of the user for the user request at ¶¶ [0027]-[0029]; to wit: “[T]he ARD may display a border around the passport photograph and around the traveler, and a virtual graphic showing the likelihood of a match between the traveler and the person shown in the photograph (e.g., the facial characteristics of the traveler match the photo on the passport). The inspector can use the virtual information displayed by the ARD to pass the traveler through security (in the event of a high degree of match for the linkage between the photo and the traveler).” See, also, ¶¶ [0133]-[0137], [0156]-[0157] and FIGS. 12A, 12B.
The Smith reference
Smith discloses determining, by a computer processor, an authenticity of the user based at least in part on a proximity of a presence of a registered mobile device associated with the user at ¶¶ [0116]-[0122]; to wit: “[T]he trusted execution environment authenticating user identity includes authenticating a near field communication (NFC) device and analyzing the sensory data to detect whether a user is in a vicinity of the user device… [P]rovisioning the NFC device upon detecting that the NFC device may be used as an authentication factor.” See, also, ¶ [0048]; to wit: “In one embodiment, status manager 335 receives and analyzes signals from a proximity sensor (e.g., infrared, ultrasonic, Bluetooth, etc.) to determine whether a user is still in the vicinity of local computing device 102. In such an embodiment, status manager 335 may receive the signals from the secured input/output module 206 and/or biometric capturing device 128. In a further embodiment, mirror pass module 330 installs the private key in identity protection module 350 once authentication is successfully performed. Identity protection module 350 is a resource manager that uses the private key received from mirror pass module 330 to establish secure access to a resource at one or more remote computing devices (e.g., remote computing device 106). In one embodiment, mirror pass module 330 disables and removes the private key from identity protection module upon detection that the authenticated user is no longer present.” At the time of the filing of the present application, it would have been obvious to a person of ordinary skill in the art to determine, by a computer processor, an authenticity of the user based at least in part on a proximity of a presence of a registered mobile device associated with the user, as taught by Smith, before calculating a facial match score, as taught by Kaehler.
The motivation for doing so comes from Smith, which discloses, “[T]rusted execution environment 202 implements a flexible identity verification mechanism that adapts a challenge/response authentication based on a given situation. For instance, when a room is dark and not appropriate for face recognition trusted execution environment 202 platform senses user presence and lack of light for good face recognition and automatically presents alternate authentication mechanisms.” (¶ [0065]). Therefore, it would have been obvious to combine Smith with Kaehler to obtain the invention specified in this claim.
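For illustration only, the following sketch (supplied by this editor, not disclosed by either reference; the device identifiers, RSSI values, and thresholds are all hypothetical) shows how the proposed Kaehler/Smith combination could gate identity verification on device proximity before applying the facial comparison:

```python
# Hypothetical sketch of the combined teaching; NOT either reference's
# actual implementation. A registered mobile device must be detected in
# proximity (Smith) before the facial match score is consulted (Kaehler).
def device_in_proximity(registered_device_id, nearby_devices, min_rssi=-70):
    """True if the user's registered device appears in the scan results
    (e.g., a Bluetooth/NFC scan) with adequate signal strength (dBm)."""
    return nearby_devices.get(registered_device_id, -999) >= min_rssi

def verify_user(registered_device_id, nearby_devices, facial_match_score,
                face_threshold=0.8):
    """Return 'verified' only if the proximity check passes AND the
    facial match score meets the threshold; otherwise 'not_verified'."""
    if not device_in_proximity(registered_device_id, nearby_devices):
        return "not_verified"
    return "verified" if facial_match_score >= face_threshold else "not_verified"

# Hypothetical scan result mapping device IDs to observed RSSI:
scan = {"device-1234": -55, "device-9999": -90}
```

In this sketch, a strong facial match alone is insufficient when the registered device is absent, mirroring the rationale for combining the proximity factor with facial recognition.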
With regards to claim 3, Kaehler discloses outputting the identity verification status comprises presenting, on a display device, the identity verification status at ¶¶ [0029], [0133]-[0137], [0156]-[0157] and FIGS. 12A, 12B. Kaehler further discloses the identity verification status indicates that the identity of the user is verified in response to determining that the facial match score exceeds the facial match threshold at ¶¶ [0029], [0133]-[0137], and that the document validity score exceeds the document validity threshold at [0156]-[0157].
With regards to claim 5, Kaehler discloses the recognized data objects include a bar code and a Quick Response (QR) code at ¶¶ [0113]-[0115] (“A document may also include hidden information (not directly perceivable by a person when the document is illuminated with light within the human visible spectrum). Hidden information may be encoded in a label or may contain a reference to another data source (such as an identifier that can be used to query a database to retrieve additional information associated with the document). For example, as shown in FIG. 12B, the document (e.g. airline ticket 5470) may include an optical label such as a quick response (QR) code 5470 or a bar code.”).
With regards to claim 6, Kaehler discloses determining the identity verification status of the user is further based on comparing the recognized data objects to user profile data for the user retrieved from a data store at ¶¶ [0111]-[0112], [0114], [0117], [0128], [0164].
With regards to claim 7, Kaehler discloses determining the identity verification status of the user is further based on extracting, based on comparing the recognized characters to characteristics present in the identified type of the identity document, one or more secondary characteristics (e.g., “other identifying information for the document (e.g., age, height, gender)”) of the user from the image and comparing the one or more user characteristics to the facial features in the live facial image and the facial features in the photograph at ¶¶ [0029], [0122], [0168].
With regards to claim 8, Kaehler discloses the characteristics present in the identified type of the identity document include one or more of: hair color (¶¶ [0113], [0122], [0125]), gender (¶¶ [0029], [0113], [0125]), height (¶¶ [0029], [0113]).
With regards to claim 9, Kaehler discloses determining the identity verification status of the user is further based on comparing the one or more user characteristics to user profile data for the user retrieved from a data store at ¶¶ [0111]-[0113], [0114], [0117], [0128], [0164].
With regards to claim 11, the steps performed by the apparatus of this claim are obvious over the combination of Kaehler and Smith for the same reasons as were provided in the discussion of claim 1, which recites a method performing these same steps.
With regards to claim 13, the steps performed by the apparatus of this claim are obvious over the combination of Kaehler and Smith for the same reasons as were provided in the discussion of claim 3, which recites a method performing these same steps.
With regards to claim 15, the steps performed by the apparatus of this claim are obvious over the combination of Kaehler and Smith for the same reasons as were provided in the discussion of claim 5, which recites a method performing these same steps.
With regards to claim 16, the steps performed by the apparatus of this claim are obvious over the combination of Kaehler and Smith for the same reasons as were provided in the discussion of claim 6, which recites a method performing these same steps.
With regards to claim 17, the steps performed by the apparatus of this claim are obvious over the combination of Kaehler and Smith for the same reasons as were provided in the discussion of claim 7, which recites a method performing these same steps.
With regards to claim 18, the steps performed by the apparatus of this claim are obvious over the combination of Kaehler and Smith for the same reasons as were provided in the discussion of claim 8, which recites a method performing these same steps.
With regards to claim 19, the steps performed by the apparatus of this claim are obvious over the combination of Kaehler and Smith for the same reasons as were provided in the discussion of claim 9, which recites a method performing these same steps.
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kaehler (U.S. PG Pub. No. 2017/0351909) in view of Smith et al. (U.S. PG Pub. No. 2016/0125180), and further in view of Sun et al. (U.S. PG Pub. No. 2018/0089499).
With regards to claim 2, Kaehler discloses calculating a facial match score comprises extracting a first set of facial features (e.g., “distances among certain facial features”) from the live facial image, extracting a second set of facial features from the photograph and comparing, using a facial recognition algorithm, the first set of facial features to the second set of facial features at ¶ [0120], to wit: “For example, when the two faces have similar distances between two eyes and similar distances from nose to mouth, the ARD may determine that the two faces are likely to be the same.” See, also, ¶¶ [0105]-[0107]. But, Kaehler does not disclose calculating the facial match score as a percentage of facial features in common between the live facial image and the photograph. However, this limitation was known in the art:
Sun discloses calculating the facial match score as a percentage of facial features in common between first and second images; to wit: “It is noted that, the face image matching the facial feature information can be understood as that, when the similarity between the facial feature information and the facial feature information of the face image stored in the database exceeds a preset ratio, the face recognition device determines that there is a face image matching the facial feature information in the database. The preset ratio can be set to 85%, 90%, or 95%, which is not limited in the embodiments of the present disclosure.” (¶¶ [0066]-[0068]). At the time of the filing of the present application, it would have been obvious to a person of ordinary skill in the art to calculate a facial match score as a percentage of facial features in common between a first set of facial features and a second set of facial features, as taught by Sun, as a substitute for calculating a facial match score by comparing distances among certain facial features, as taught by Kaehler. This combination is a simple substitution of one known element for another to obtain predictable results. The prior art contained a method, taught by Kaehler, which differed from the claimed method only in the manner of calculating a facial match score. The substituted element, calculating a facial match score as a percentage of facial features in common, and its function were known in the art. One of ordinary skill in the art could have substituted calculating a facial match score as a percentage of facial features into the method taught by Kaehler, and the results would have been predictable; to wit, substantially the same facial recognition results would be produced.
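For illustration only, the following sketch (supplied by this editor, not disclosed by Sun; the tolerance and function names are hypothetical, while the preset ratios 85%, 90%, and 95% come from Sun's quoted disclosure) shows one way a match score expressed as a percentage of features in common could be computed and compared against a preset ratio:

```python
# Hypothetical sketch of a percentage-based facial match score; NOT Sun's
# actual method. A feature "in common" is one whose values agree within
# a small tolerance; the score is the percentage of such features.
def percent_features_in_common(features_a, features_b, tolerance=0.05):
    """Return the percentage (0-100) of corresponding feature values that
    agree within `tolerance`."""
    if len(features_a) != len(features_b) or not features_a:
        raise ValueError("need two equal-length, non-empty feature lists")
    common = sum(1 for a, b in zip(features_a, features_b)
                 if abs(a - b) <= tolerance)
    return 100.0 * common / len(features_a)

def is_match(features_a, features_b, preset_ratio=90.0):
    """Declare a match when the percentage exceeds the preset ratio
    (Sun's quoted examples: 85%, 90%, or 95%)."""
    return percent_features_in_common(features_a, features_b) >= preset_ratio
```

This highlights the substitution the rejection relies on: the comparison step changes from a distance calculation to a percentage of agreeing features, while the downstream threshold decision is unchanged.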
With regards to claim 12, the steps performed by the apparatus of this claim are obvious over the combination of Kaehler, Smith and Sun for the same reasons as were provided in the discussion of claim 2, which recites a method performing these same steps.
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kaehler (U.S. PG Pub. No. 2017/0351909) in view of Smith et al. (U.S. PG Pub. No. 2016/0125180), and further in view of Eckel et al. (U.S. PG Pub. No. 2014/0363057).
With regards to claim 4, Kaehler discloses calculating a document validity score (“confidence score”) at ¶ [0128] (“Besides using confidence score to verify a person's identity, the confidence score may be used to verify the validity of a document…”), ¶¶ [0163]-[0164] and FIG. 15 (blocks 1542, 1544). See, also, ¶¶ [0030], [0155]-[0158]. But, Kaehler does not disclose calculating the document validity score comprises comparing the recognized data objects to a watermark security feature present in the identified type of the identity document. However, this limitation was known in the art:
Eckel discloses calculating a document validity score (e.g., numerical correlation level) comprises comparing a recognized data object to a watermark security feature present in the identified type of the identity document at ¶¶ [0057]-[0060]; to wit: “In some implementations, the personally identifiable information (PII) retrieved from the digital watermark may be correlated with the extracted personally identifiable information from the machine-readable codes (224)... The correlation may yield a numerical correlation level… The correlation level may then be compared against the threshold level… If, however, the correlation level is less than the threshold level, then the retrieved PII from the digital watermark may be determined as uncorrelated... If the retrieved PII from the digital watermark is determined as not correlated with the extracted PII from the machine-readable code of a MRZ, then a presumption may be raised against the authenticity of the identification document.” At the time of the filing of the present application, it would have been obvious to a person of ordinary skill in the art to compare a recognized data object to a watermark security feature present in the identified type of the identity document, as taught by Eckel, when calculating a document validity score, as taught by Kaehler. The motivation for doing so comes from Eckel, which discloses, “[T]he identity of a passenger who possesses a digital watermarked identification document that complies with the requirements of a federally mandated process and whose face matches that on the identification document may be verified with a higher degree of confidence.” (Eckel, ¶ [0029]). Therefore, it would have been obvious to combine Eckel with Kaehler to obtain the invention specified in this claim.
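For illustration only, the following sketch (supplied by this editor, not disclosed by Eckel; the field names, correlation rule, and 0.75 threshold are all hypothetical) shows one way PII retrieved from a digital watermark could be correlated with PII extracted from a machine-readable zone (MRZ) and the resulting correlation level compared against a threshold:

```python
# Hypothetical sketch of watermark/MRZ PII correlation; NOT Eckel's
# actual method. The correlation level is the fraction of watermark PII
# fields that agree with the corresponding MRZ extraction.
def pii_correlation_level(watermark_pii, mrz_pii):
    """Return the fraction of watermark PII fields matching the MRZ PII."""
    if not watermark_pii:
        return 0.0
    agree = sum(1 for field, value in watermark_pii.items()
                if mrz_pii.get(field) == value)
    return agree / len(watermark_pii)

def document_presumed_authentic(watermark_pii, mrz_pii, threshold=0.75):
    """A below-threshold correlation raises a presumption against the
    authenticity of the identification document."""
    return pii_correlation_level(watermark_pii, mrz_pii) >= threshold

# Hypothetical watermark and MRZ extractions:
wm = {"name": "JOHN DOE", "dob": "1990-05-04", "doc_no": "X123"}
mrz = {"name": "JOHN DOE", "dob": "1990-05-04", "doc_no": "X123",
       "nationality": "USA"}
```

When every watermark field agrees with the MRZ the correlation level is 1.0 and the document passes; a disagreeing field drops the level below the threshold and raises the presumption against authenticity.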
With regards to claim 14, the steps performed by the apparatus of this claim are obvious over the combination of Kaehler, Smith and Eckel for the same reasons as were provided in the discussion of claim 4, which recites a method performing these same steps.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID F DUNPHY, whose telephone number is (571) 270-1230. The examiner can normally be reached 9 am - 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID F DUNPHY/Primary Examiner, Art Unit 2668