DETAILED ACTION
This action is in response to the original filing dated 29 August 2023. Claims 1-3 are pending and have been considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 2 is objected to because of the following informalities: claim 2 recites “the legal file is configured to sent to an official government depository.” Examiner suggests “the legal file is configured to be sent to an official government depository”. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-3 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1-3 recite the limitation “the notarization event”. There is insufficient antecedent basis for this limitation in the claim.
Claims 1-3 recite the limitation “the visual person” within the biometric data analysis step. There is insufficient antecedent basis for this limitation in the claim.
Claim 2 recites the limitation “the official legal mark”. There is insufficient antecedent basis for this limitation in the claim.
Claims Interpreted as Invoking 35 U.S.C. 112(f)/Sixth Paragraph
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “liveliness engine configured to”, “analysis system to obtain”, “DNA analyzer to determine”, “hand geometry analyzer to determine”, “signature analyzer to determine” and “AI system to generate” in claims 1-3.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 are rejected under 35 U.S.C. 103 as being unpatentable over Dawson et al. (US 2022/0021528 A1) in view of Silverstein et al. (US 2023/0254300 A1) and further in view of Sant Anselmo (US 2011/0213700 A1).
As for independent claim 1, Dawson teaches a system comprising:
an enrollment system, comprising: a liveness test engine driven by artificial intelligence, wherein the liveness engine is configured to test whether a real person is communicating with the automatic system, wherein the liveness test engine is configured to tell a real person from an AI-generated person via a liveness test, wherein the liveness test is configured to utilize facial features, image and video analysis, facial expressions and micro movement analysis, voice verification, and behavior analysis to tell a real person from an AI-generated person, wherein the liveness test is configured to utilize questioning and answering, or gesture judgment or body language response to tell a real person from any AI-generated person, wherein the liveness test engine is configured to ask a person before a screen communicating with the liveness test engine to perform certain gestures, body poses or facial expressions, and via video recognition to determine whether performances of the person before the screen exceeds a pre-determined threshold, wherein when the performances exceed the pre-determined threshold, the person before the screen will be deemed a real person [(e.g. see Dawson paragraphs 0053, 0095, 0125, 0127) ”the user device 104 may receive instructions from the onboarding system 108 that direct the user to turn their head to show their face from a different angle at a particular time, which may increase the reliability of determining the identity of the user (e.g., decreasing the risk of spoofing). In some embodiments, the artificial intelligence module may determine whether the person depicted is the same person … the application on the user device may lead a “liveliness” test to capture the images, which may be in the form of continuous image frames sampled at a suitable rate or non-continuous frames sampled at a predefined slower rate. During the liveliness test, the user may be instructed to change their face by winking, blinking, opening mouth, etc. to demonstrate that the user is indeed active and participating in with the onboarding process … the application may use a camera on the user device to capture one or more “live” images of the user to perform a liveliness test. This may include opening their mouth, blinking their eyes, etc. to capture different images of the user to avoid spoofing using an image of the user. The service provider may use the livestream sequence of images and the image from the passport to verify that the user is the same. UI 810 depicts that the liveliness test has been passed and that a user identifier (“ID”) has been generated for the user … utilize an artificial intelligence model to compare the identity image and the plurality of images to confirm that the same person is depicted in all of the images. If this is the case, the service provider 102 may confirm that the user has passed the liveliness test. If not, the test may fail and the user may be informed”].
a biometric data analysis system including a data input interface to obtain, for a user who already passes the liveness test to generate the visual person to execute the legal event, biometric data of the user to identify the user according to information of the user using any one or combination of: [(e.g. see Dawson paragraphs 0053, 0095) ”UI 810 depicts that the liveliness test has been passed and that a user identifier (“ID”) has been generated for the user. In some embodiments, the user identifier may further indicate that one or more cryptographic keys have also been generated, in association with the user account. Thus, in this example, the user's face is depicted, the ID, and a message … the onboarding system 108 may be configured to receive other user biometric data in addition to or instead of the user image data. For example, a fingerprint, a faceprint, eye scan, body scan, or other biometric data may be used to confirm the identity of the user”].
a DNA analyzer to determine unique blood patterns, a hand geometry analyzer to determine unique hand patterns, a fingerprint scanner to determine unique fingerprint patterns, a signature analyzer to determine unique signature patterns, a facial recognizer to capture unique facial characteristics, a voice analyzer to determine unique vocal patterns, and an electro-optical photographic system including a static image photographic system, and a dynamic video image photographic system, to record physical images of the user [(e.g. see Dawson paragraphs 0046, 0053) ”Some non-limiting examples include the user showing their face from different angles via a livestream video (e.g., upon receiving one or more instructions), recording and verifying the user's voice, saying, “I approve,” verifying the user's signature, etc. In some embodiments, the system may regularly update user identity information (e.g., images of the user, biometric data (e.g., fingerprints, voice recordings, video feeds, etc.). Accordingly, the system improves upon accuracy and reliability when verifying a person's identity when conducting a transaction … a fingerprint, a faceprint, eye scan, body scan, or other biometric data may be used to confirm the identity of the user”]. Examiner notes the use of “any one or combination of:” language in the claim.
a legal document generator including at least one processor configured to obtain global positioning system (GPS) coordinates corresponding to a location where the legal event would occur [(e.g. see Dawson paragraph 0068) ”The user device 306 may also include geo-location devices (e.g., a global positioning system (GPS) device or the like) for providing and/or recording geographic location information associated with the user device 306”].
Dawson does not specifically teach an AI system to generate a visual person via machine learning models trained on and associated with the biometric data of the user, a user interface to interact with the visual person, or wherein the legal document generator is configured to activate the visual person to execute the legal event. However, in the same field of invention, Silverstein teaches:
an AI system to generate a visual person via machine learning models trained on and associated with the biometric data of the user [(e.g. see Silverstein paragraphs 0022, 0032, 0033) ”avatar model engine 232 may be configured to create, store, update, and maintain a three-dimensional animation model of a subject, as disclosed herein. The three-dimensional animation model may include, in addition to encoders and decoders, tools such as a guide mesh tool 242 and a ray marching tool 244. In some embodiments, avatar model engine 232 may access one or more machine learning models stored in a training database 252. A database 252 includes training archives and other data files that may be used by avatar model engine 232 in the training of a machine learning model, according to the input of the user through application 222 … Guide mesh tool 242 determines facial expression parameters (z) based on input images, upon a classification scheme that is learned by training. In some embodiments, guide mesh tool 242 includes a head pose encoder to determine a rotation (e.g., a matrix, r) and a translation (e.g., a vector, t) of the head of a person in the input images … With the advent of modern bio-technologies such as DNA sequencing, iris pattern recognition, and fingerprint recognition, identification codes may also include biological data, perhaps established at birth. Accordingly, embodiments as disclosed herein include the processing and handling of the above identification techniques associated to a subject-based avatar”].
a user interface to interact with the visual person [(e.g. see Silverstein paragraph 0049 and Fig. 6) ”FIG. 6 illustrates a contractual scenario 600 with subject-based avatars 632A and 632B (hereinafter, collectively referred to as “subject-based avatars 632”) interacting in a virtual environment 651 by a virtual reality application 622, according to some embodiments. Network server 630, mobile devices 610-1A and 610-1B (hereinafter, collectively referred to as “mobile devices 610-1”), VR headsets 610-2A and 610-2B (hereinafter, collectively referred to as “headsets 610-2”), are as described heretofore (cf. client devices 110, 310, and 410 and servers 130, 330, and 430). Mobile devices 610-1 and headsets 610-2 will be referred to, collectively, as “client devices 610”). Subject-based avatar 632A (632B) is associated with a user 602A (602B), and the association may be verified via the authentication mechanisms and credentials described in this disclosure (cf. FIGS. 3-4)”].
wherein the legal document generator is configured to activate the visual person to execute the legal event [(e.g. see Silverstein paragraphs 0049, 0050 and Fig. 6) ”FIG. 6 illustrates a contractual scenario 600 with subject-based avatars 632A and 632B (hereinafter, collectively referred to as “subject-based avatars 632”) interacting in a virtual environment 651 by a virtual reality application 622, according to some embodiments. Network server 630, mobile devices 610-1A and 610-1B (hereinafter, collectively referred to as “mobile devices 610-1”), VR headsets 610-2A and 610-2B (hereinafter, collectively referred to as “headsets 610-2”), are as described heretofore (cf. client devices 110, 310, and 410 and servers 130, 330, and 430). Mobile devices 610-1 and headsets 610-2 will be referred to, collectively, as “client devices 610”). Subject-based avatar 632A (632B) is associated with a user 602A (602B), and the association may be verified via the authentication mechanisms and credentials described in this disclosure (cf. FIGS. 3-4) … virtual reality application 622 may include a “legal/administrative” flag 627A and 627B (hereinafter, collectively referred to as “flags 627”) in the display for each of users 602. Flags 627 may indicate that “the terms of the contract being signed by the parties is subject to the laws of [jurisdiction]” in virtual environment 651. Accordingly, flags 627 may prompt users 602 to accept the legal/administrative terms, in which case subject-based avatars 632 become legal entities protected by or under the obligations of, the specific jurisdiction”].
Therefore, considering the teachings of Dawson and Silverstein, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add an AI system to generate a visual person via machine learning models trained on and associated with the biometric data of the user, a user interface to interact with the visual person, and wherein the legal document generator is configured to activate the visual person to execute the legal event, as taught by Silverstein, to the teachings of Dawson because it allows a true digital representation of an individual to become authenticated as a legal entity (e.g. see Silverstein paragraphs 0003, 0021, 0050).
Dawson and Silverstein do not specifically teach wherein the legal document generator comprises a time stamp retriever that is configured to connect to an official time provider via a network and to retrieve an official current time corresponding to the notarization event or wherein the document of the legal event is recorded with biometric data of the user and the location and time of the legal event is recorded with the document via the legal document generator and the time stamp retriever. However, in the same field of invention, Sant Anselmo teaches:
wherein the legal document generator comprises a time stamp retriever that is configured to connect to an official time provider via a network and to retrieve an official current time corresponding to the notarization event [(e.g. see Sant Anselmo paragraphs 0005, 0017, 0019, 0022 and claims) ”wherein the target object is marked and notarized according to localized regulations … notarization applications include, but are not limited to: … legal documents … a request to electronically notarize a target object, electronically notarizing, by a second computer, the target object … such electronic notarization service that as a brief overview, provides an "Official Time Keeping Source" i.e., the National Institute of Standards and Technology (NIST) for non-military time-related applications and the United States Naval Observatory (USNO) for military time-related applications to provide continuous unbiased time and date information to the time stamp enterprise for the various embodiments … a processor which includes a time retrieval unit to connect to an official time provider via a network and to retrieve an official current time”].
wherein the document of the legal event is recorded with biometric data of the user and the location and time of the legal event is recorded with the document via the legal document generator and the time stamp retriever [(e.g. see Sant Anselmo paragraphs 0022, 0236, 0341 and Figs. 6A-B and 8) ”a request to electronically notarize a target object, electronically notarizing, by a second computer, the target object … time stamping of transactional documents, either in physical hardcopy or in electronic versions, with machine-readable time stamp symbols containing the time, date, type and location of transaction … The example embodiments incorporate any number of biometric forms of digital identification represented into a machine-readable time stamp as shown in FIG. 6B and the time stamp can then be positively linked to an individual. Examples of biometric forms of identification are: Vital Identification Statistics, Physical Characteristics (604) etc., Photograph or Facial Image Recognition Technologies (605), Retina Scan patents (606), Digital Signature Technologies (607), Fingerprint or Handprint Recognition Technologies (608), DNA Sequence Technologies (609)”].
Therefore, considering the teachings of Dawson, Silverstein and Sant Anselmo, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add wherein the legal document generator comprises a time stamp retriever that is configured to connect to an official time provider via a network and to retrieve an official current time corresponding to the notarization event and wherein the document of the legal event is recorded with biometric data of the user and the location and time of the legal event is recorded with the document via the legal document generator and the time stamp retriever, as taught by Sant Anselmo, to the teachings of Dawson and Silverstein because it provides the user working from their home, office, library or field location, with a fast, convenient, automatic and extremely accurate means for notarization (e.g. see Sant Anselmo paragraph 0005).
As for independent claim 2, Dawson, Silverstein and Sant Anselmo teach a system. Claim 2 recites substantially the same limitations as claim 1 and is therefore rejected under the same rationale as claim 1. Further, Dawson teaches:
data storage to store a legal file associated with the legal mark, wherein the legal file is configured to sent to an official government depository [(e.g. see Dawson paragraphs 0050, 0061, 0100) ”an organization that may typically store large amounts of data (e.g., a bank, an investment firm, a government organization) may have flexible options for storing the data. For example, data may be stored to any suitable location (e.g., local storage within the organization … the user may have authored a new document and may upload the document to the service provider 1403 via the application … the data storage and validation system 114 may be configured to store data (e.g., documents”].
Dawson and Silverstein do not specifically teach a marking system to mark the legal document with the official legal mark. However, Sant Anselmo teaches:
a marking system to mark the legal document with the official legal mark [(e.g. see Sant Anselmo paragraphs 0005, 0022 and claims) ”wherein the target object is marked and notarized according to localized regulations … notarization applications include, but are not limited to: … legal documents … a request to electronically notarize a target object, electronically notarizing, by a second computer, the target object”].
The motivation to combine is the same as that used for claim 1.
As for independent claim 3, Dawson, Silverstein and Sant Anselmo teach a system. Claim 3 recites substantially the same limitations as claim 1 and is therefore rejected under the same rationale as claim 1. Further, while independent claim 1 recites “utilize questioning and answering, or gesture judgment or body language response”, claim 3 recites “utilize questioning and answering, gesture judgement and body language response”. Dawson teaches all three methods in paragraphs 0053, 0088, 0125 [” may also include questions and/or answers (Q/A) to security questions (e.g., place of birth, mother's maiden name, etc.) … the user device 104 may receive instructions from the onboarding system 108 that direct the user to turn their head to show their face from a different angle at a particular time, which may increase the reliability of determining the identity of the user (e.g., decreasing the risk of spoofing). In some embodiments, the artificial intelligence module may determine whether the person depicted is the same person … the application on the user device may lead a “liveliness” test to capture the images, which may be in the form of continuous image frames sampled at a suitable rate or non-continuous frames sampled at a predefined slower rate. During the liveliness test, the user may be instructed to change their face by winking, blinking, opening mouth, etc.”].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Patent 10,360,464 B1 issued to McKay et al. on 23 July 2019. The subject matter disclosed therein is pertinent to that of claims 1-3 (e.g. liveness detection and biometric authentication).
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER J FIBBI whose telephone number is (571) 270-3358. The examiner can normally be reached Monday - Thursday (8am-6pm).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Bashore, can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER J FIBBI/Primary Examiner, Art Unit 2174