Prosecution Insights
Last updated: April 19, 2026
Application No. 18/832,978

METHOD FOR DETERMINING AN ACCESS RIGHT OF A USER, REQUESTING COMPUTER DEVICE, AUTHENTICATING COMPUTER DEVICE, AND AUTHENTICATING SYSTEM

Final Rejection — §103, §112
Filed: Jul 25, 2024
Examiner: YANG, HAN
Art Unit: 2493
Tech Center: 2400 — Computer Networks
Assignee: TRINAMIX GMBH
OA Round: 2 (Final)
Grant Probability: 92% (Favorable)
OA Rounds: 3-4
To Grant: 2y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 92% (818 granted / 887 resolved; +34.2% vs TC avg; above average)
Interview Lift: +11.3% (moderate; based on resolved cases with interview)
Typical Timeline: 2y 3m avg prosecution (21 applications currently pending)
Career History: 908 total applications across all art units
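The headline figures above follow directly from the raw counts. As a sanity check, here is a minimal sketch, assuming the page simply rounds the career allow rate to the nearest percent and takes the "+34.2% vs TC avg" delta against that same rate (the page does not state its rounding conventions):

```python
# Reproduce the dashboard's headline numbers from its raw counts.
# Rounding behavior is an assumption; the page does not document it.

granted = 818    # from "818 granted / 887 resolved"
resolved = 887

allow_rate_pct = 100 * granted / resolved      # ~92.22, displayed as 92%
delta_vs_tc = 34.2                             # "+34.2% vs TC avg"
implied_tc_avg = allow_rate_pct - delta_vs_tc  # ~58.0% implied TC average

print(round(allow_rate_pct), round(implied_tc_avg, 1))  # prints: 92 58.0
```

Under those assumptions, the Tech Center's average allow rate implied by this panel is about 58%.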

Statute-Specific Performance

§101: 4.6% (-35.4% vs TC avg)
§103: 38.9% (-1.1% vs TC avg)
§102: 33.3% (-6.7% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)
Tech Center averages are estimates, based on career data from 887 resolved cases.
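The four per-statute deltas are mutually consistent: subtracting each delta from the examiner's rate recovers the same Tech Center baseline. A quick check of that, using the figures from the rows above (what the percentages measure, e.g. allowance after each rejection type, is not stated on the page):

```python
# Recover the implied Tech Center baseline from each statute row
# (examiner rate minus the "vs TC avg" delta). All four rows point at
# the same ~40% average, i.e. the baseline the deltas are drawn from.

rows = {
    "§101": (4.6, -35.4),
    "§103": (38.9, -1.1),
    "§102": (33.3, -6.7),
    "§112": (12.2, -27.8),
}

implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
print(implied_tc_avg)  # every statute maps to 40.0
```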

Office Action

§103 §112
DETAILED ACTION

Notice of AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The present Office action is responsive to communications received on 1/22/2026. Claims 1-20 are pending.

Response to Arguments

The arguments/remarks filed by the applicant on 1/22/2026 have been fully considered and are addressed in the following. Applicant's arguments, "Anantharaman does not disclose receiving a detector signal comprising biometric information including a point-cloud facial image captured at the requesting computer device, as recited in amended claim 1." (see p. 8, ¶1, filed 1/22/2026), with respect to the amended claims overcoming the cited prior art references of the rejection of claims 1, 12, and 14 under 35 U.S.C. § 102, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn; however, upon further search and consideration, a new ground of rejection, as necessitated by amendment, is made in view of newly cited prior art Lin. Please refer to the "Claim Rejections - 35 USC § 103" section below for detailed analysis.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) are: "a user access manager unit configured to manage an access to the requesting computer device based on the received user access right information" in claim 12.

The written description discloses the corresponding structure, material, or acts for performing the entire claimed function and clearly links the structure, material, or acts to the function in [0119]. Therefore, the claim is not indefinite and is not rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4, 5, 8, and 10-16 are rejected under 35 U.S.C. 103 as being unpatentable over Anantharaman (US 20140230018 A1) in view of Lin (US 20210168306 A1).

Regarding claim 1, Anantharaman teaches a method for determining an access right of a user to a requesting computer device (home device, Fig. 1: 108, 116), the method comprising, by an authenticating computer device (Fig. 1: Home Biometric Security Server 102):

receiving (S1) a detector signal comprising biometric information; (Fig. 2: 200; [0044]: "At block 200, a home biometric security server (server) receives a request from a home device"; [0046]: "At block 204, the server extracts the biometric data from the request")

authenticating (S2) the user based on the detector signal; (Fig. 2: 208; [0047])

determining (S3) an access right information (user profile including permissions) of the user based on the authentication and based on a prestored access right information indicating user rights associated with one or multiple users, ([0046]: "The server includes a database of user profiles with associated biometric data") the access right information indicating an extent to which the user is allowed to access the requesting computer device; and (Fig. 2: 214, 216; [0051]: "At block 216, the server adds the user profile to the response[...]. [...] the user profile can include a variety of information, including [...] permissions for all devices connected to the network[...]. For example, the home device can request only the user identifier and the set of permissions related to the requesting home device"; [0067]: "The user profile 324 can contain a variety of information, including [...] a set of permissions indicating what operations the user is permitted to do on each device. The user profile 324 could be implemented in a manner similar to an access control list")

transmitting (S4) the access right information of the user to the requesting computer device. (Fig. 2: 216, 218, 246; [0051]: "At block 216, the server adds the user profile to the response"; [0061]: "At block 246, the server sends the response to the home device...")

Anantharaman teaches a detector signal comprising biometric information, but does not explicitly teach a detector signal comprising biometric information including a point-cloud facial image captured at the requesting computer device. This aspect of the claim is identified as a difference.
However, Lin in an analogous art explicitly teaches a detector signal comprising biometric information including a point-cloud facial image captured at the requesting computer device. (Lin [0006]: "The imaging device provided herein includes an infrared projector and an infrared camera. The infrared projector includes an infrared source, a light reflective section, a light filtering section, and at least one driving component. The infrared camera is configured to emit infrared light… The light filtering section is configured to receive the infrared light reflected by the light reflective component and let the infrared light pass through to be projected on an object to form a point cloud… The infrared camera is configured to capture an image of the object according to the point cloud."; [0041]: "In case of face recognition for unlocking, the 3D imaging device 12 may capture a facial image of a user and provide the facial image to the at least one processor 13, whereby the at least one processor 13 can compare the facial image with a preset facial image template to determine whether the user is a legal or registered user of the terminal. If the facial image matches with the facial image template, it indicates that the facial recognition is successful and the screen 16 of the terminal device 10 can be unlocked.")

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the "biometrics based authentication and authorization" concept of Anantharaman and the "infrared projector/imaging device" approach of Lin. One of ordinary skill in the art would have been motivated to provide the more accurate and higher resolution 3D shape measurement techniques, such as super resolution imaging, 3D measurement, 3D sensors/scanners, and point clouds, needed by modern applications (Lin [0003], [0029-0032]).

Regarding claim 2, Anantharaman in view of Lin teaches all the features with respect to claim 1, as outlined above.
The combination further teaches capturing (S6) a biometric representation of the user by an infrared (IR) light point projector of the requesting computer device, the biometric representation including the point-cloud facial image, wherein the detector signal corresponds to the biometric representation or to a low-level representation of the biometric representation. (Anantharaman Fig. 2, steps 204, 208; [0026], [0047]: capturing of a biometric representation of a user by the home device, and comparing, by the server, extracted biometric features with prestored biometric features.) (Lin [0006] and [0041], as quoted in the rejection of claim 1 above: an infrared projector and camera project a point cloud onto an object and capture a facial image for face recognition.)

Regarding claim 4, Anantharaman in view of Lin teaches all the features with respect to claim 1, as outlined above.
Anantharaman further teaches, in the requesting computer device, at least partly allowing or prohibiting (S9) an access to the requesting computer device based on the access right information received from the authenticating computer device. ([0040], Fig. 4, steps 408-414 and 422, and [0081-0083]: allowing and prohibiting access to the requesting home device based on the permissions included in the user profile.)

Regarding claim 5, Anantharaman in view of Lin teaches all the features with respect to claim 1, as outlined above. Anantharaman further teaches wherein the step of authenticating (S2) the user based on the detector signal includes, in the authenticating computer device: extracting facial features from the point-cloud facial image; obtaining prestored user information from a database, the prestored user information indicating prestored biometric features associated with one or multiple users; comparing the extracted facial features with the prestored biometric features; and determining an identity of the user associated with the point-cloud facial image based on a result of the comparison between the extracted facial features and the prestored biometric features, or determining that the user does not correspond to any of the one or multiple users whose prestored biometric features are prestored in the prestored user information. (Fig. 2, steps 204, 208; [0026], [0047]: capturing of a biometric representation of a user by the home device, and comparing, by the server, extracted biometric features with prestored biometric features.) Here, Lin discloses the point-cloud facial image in [0006] and [0041].

Regarding claim 8, Anantharaman in view of Lin teaches all the features with respect to claim 1, as outlined above. Anantharaman further teaches wherein the extent to which each contact is allowed to access the requesting computer device is defined by a main user of the requesting computer device. ([0041]: transferring permissions to a guest user.)
Regarding claims 10-16, the scope of the claims is similar to that of claims 1-2, 4, 5, and 8, respectively. Accordingly, the claims are rejected using a similar rationale.

Claims 3 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Anantharaman (US 20140230018 A1) in view of Lin (US 20210168306 A1) and Streit (US 20200351097 A1).

Regarding claim 3, Anantharaman in view of Lin teaches all the features with respect to claim 1, as outlined above. Anantharaman further teaches generating at least one feature vector as a low-level representation of the captured biometric information. (Fig. 2, steps 204, 208; [0026], [0047]: capturing of a biometric representation of a user by the home device, and comparing, by the server, extracted biometric features with prestored biometric features.) But Anantharaman does not explicitly teach using a trained machine learning model. This aspect of the claim is identified as a difference.

However, Streit in an analogous art explicitly teaches using a trained machine learning model. ([0053]: "the system is configured to provide one to many search and/or matching on encrypted biometrics in polynomial time. According to one embodiment, the system takes input biometrics and transforms the input biometrics into feature vectors (e.g., a list of floating point numbers (e.g., 128, 256, or within a range of at least 64 and 10240, although some embodiments can use more feature vectors)). According to various embodiments, the number of floating point numbers in each list depends on the machine learning model being employed.")

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the "biometrics based authentication and authorization" concept of Anantharaman and the "privacy-enabled biometric processing" approach of Streit. One of ordinary skill in the art would have been motivated to perform such a modification to employ machine learning to address processes that are time consuming, error prone, and frequently nearly impossible to perform manually (Streit [0106]).

Regarding claim 17, the scope of the claim is similar to that of claim 3. Accordingly, the claim is rejected using a similar rationale.

Claims 6-7 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Anantharaman (US 20140230018 A1) in view of Lin (US 20210168306 A1) and Quintuna (EP 2458548 A1, listed in IDS).

Regarding claim 6, Anantharaman in view of Lin teaches all the features with respect to claim 1, as outlined above. But Anantharaman does not explicitly teach automatically generating the prestored access right information by the requesting computer device and/or by the authenticating computer device based on a user list provided by the requesting computer device, the user list being a list of contacts provided on the requesting computer device. This aspect of the claim is identified as a difference.

However, Quintuna in an analogous art explicitly teaches automatically generating the prestored access right information by the requesting computer device and/or by the authenticating computer device based on a user list provided by the requesting computer device, the user list being a list of contacts provided on the requesting computer device. (Fig. 5A, [0005], [0006], and [0010]: collecting data related to the communications between a user and their contacts, automatically grouping the contacts into different groups based on a level of communications between the user and the user's contacts, and defining an access level for each group, with each access level granting access to some part of the user's data.)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the "biometrics based authentication and authorization" concept of Anantharaman and the "dynamic access control rules" approach of Quintuna. One of ordinary skill in the art would have been motivated to perform such a modification to permit a dynamic assignment of authority to access content that does not have to be actively managed by the user (Quintuna [0006]).

Regarding claims 7 and 18-19, the scope of the claims is similar to that of claim 6. Accordingly, the claims are rejected using a similar rationale.

Claims 9 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Anantharaman (US 20140230018 A1) in view of Lin (US 20210168306 A1) and Aley-Raz (US 20100131273 A1).

Regarding claim 9, Anantharaman in view of Lin teaches all the features with respect to claim 1, as outlined above. But Anantharaman does not explicitly teach, by the requesting computer device and/or by the authenticating computer device: determining (S7) a liveliness of the user based on the captured biometric information about the user. This aspect of the claim is identified as a difference.

However, Aley-Raz in an analogous art explicitly teaches determining (S7) a liveliness of the user based on the captured biometric information about the user. ([0008]: "generating a first matching score based on a comparison between: (a) a voice-print from a first text-dependent audio sample received at an enrollment stage, and (b) a second text-dependent audio sample received at an authentication stage; generating a second matching score based on a text-independent audio sample; and generating a liveness score by taking into account at least the first matching score and the second matching score.")

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the "biometrics based authentication and authorization" concept of Anantharaman and the "liveness detection utilizing voice biometrics" approach of Aley-Raz. One of ordinary skill in the art would have been motivated to perform such a modification to indicate that the speaker interacting with the system is both alive and authentic, as well as to provide a higher level of accuracy (Aley-Raz [0060-0061]).

Regarding claim 20, the scope of the claim is similar to that of claim 9. Accordingly, the claim is rejected using a similar rationale.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAN YANG, whose telephone number is (408) 918-7638. The examiner can normally be reached Monday to Friday, 9:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carl Colin, can be reached at 571-272-3862. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAN YANG/
Primary Examiner, Art Unit 2493
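The reply-period rules in the conclusion reduce to simple date arithmetic. A sketch, assuming the Mar 15, 2026 mailing date shown in the Prosecution Timeline (the naive month-addition helper is illustrative only, not a docketing rule):

```python
from datetime import date

def add_months(d, months):
    # Naive month addition; assumes the day-of-month exists in the
    # target month, which holds for the 15th used here.
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, d.day)

mailed = date(2026, 3, 15)               # Final Rejection mailing date
reply_due = add_months(mailed, 3)        # shortened statutory period (3 months)
absolute_cutoff = add_months(mailed, 6)  # hard limit, with 37 CFR 1.136(a) extensions

print(reply_due, absolute_cutoff)  # 2026-06-15 2026-09-15
```

Note the advisory-action wrinkle in the text above: filing a first reply within two months can shift how extension fees are calculated, but the six-month statutory cutoff never moves.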

Prosecution Timeline

Jul 25, 2024
Application Filed
Oct 18, 2025
Non-Final Rejection — §103, §112
Jan 22, 2026
Response Filed
Mar 15, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596818: Permission Management Method and Terminal Device (2y 5m to grant; granted Apr 07, 2026)
Patent 12597449: INDUCTIVE ENERGY HARVESTING AND SIGNAL DEVELOPMENT FOR A MEMORY DEVICE (2y 5m to grant; granted Apr 07, 2026)
Patent 12593621: PHASE CHANGE MULTILAYER HETEROSTRUCTURE WITH MULTIPLE HEATERS (2y 5m to grant; granted Mar 31, 2026)
Patent 12592828: SYSTEM AND METHOD FOR PARALLEL MANUFACTURE AND VERIFICATION OF ONE-TIME-PASSWORD AUTHENTICATION CARDS (2y 5m to grant; granted Mar 31, 2026)
Patent 12586627: REFRESH PERFORMANCE OPTIMIZATIONS FOR DRAM TECHNOLOGIES WITH SUB-CHANNEL AND/OR PSEUDO-CHANNEL CONFIGURATIONS (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 92%
With Interview (+11.3%): 99%
Median Time to Grant: 2y 3m
PTA Risk: Moderate
Based on 887 resolved cases by this examiner. Grant probability derived from career allow rate.
