Prosecution Insights
Last updated: April 19, 2026
Application No. 18/195,707

SYSTEMS AND METHODS FOR CONTINUAL DEVICE AUTHENTICATION IN VIRTUAL ENVIRONMENTS

Status: Non-Final OA (§103, §112)
Filed: May 10, 2023
Examiner: KHAN, SHER A
Art Unit: 2497
Tech Center: 2400 — Computer Networks
Assignee: BANK OF AMERICA CORPORATION
OA Round: 3 (Non-Final)
Grant Probability: 85% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (above average; 284 granted / 333 resolved; +27.3% vs TC avg)
Interview Lift: +23.3% among resolved cases with an interview
Typical Timeline: 2y 7m average prosecution; 12 applications currently pending
Career History: 345 total applications across all art units

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 51.1% (+11.1% vs TC avg)
§102: 2.4% (-37.6% vs TC avg)
§112: 18.6% (-21.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 333 resolved cases.

Office Action

Grounds of rejection: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/13/2026 has been entered.

Response to Amendments and Arguments

In the Remarks dated 01/13/2026, Applicant argues against the drawing objection issued for Fig. 1A. Examiner has considered this argument and found it persuasive; the drawing objection against Fig. 1A has been withdrawn. However, the drawing objections against Figs. 1B and 1C are maintained. Applicant has amended all independent claims (1, 11, and 16), added new dependent claims 25 and 26, and cancelled claims 23 and 24.
Applicant argues in the Remarks that the cited art does not teach the amended limitations "determine one or more biometric indicators defined by the first avatar of the first user, wherein the one or more biometric indicators defined by the first avatar of the first user comprise a digital representation of at least one physical feature of the first user physically formed on the first avatar in the virtual reality environment and viewable on the first avatar" and "determine authentication credentials associated with the first user based on the one or more biometric indicators of the first avatar that are physically formed on the first avatar." Examiner reviewed the amended language and the associated arguments but found them unpersuasive, because the disclosure/specification does not explicitly support the recitations "physically formed" and "viewable on the first avatar." Applicant did not identify where support for these amended limitations may be found, and Examiner could not locate them in the disclosure/specification. Paragraphs 0059 and 0070 of the disclosure mention a digital (virtual) representation of biometric features, but nowhere state that these biometric features are "physically formed," nor do they explicitly recite "viewable on the first avatar."

Applicant also argued that the cited arts, alone or in combination, do not teach "the biometric indicators defined by the first avatar of the first user comprise a digital representation of at least one physical feature of the first user physically formed on the first avatar in the virtual reality environment and viewable on the first avatar," the amended part of independent claims 1, 11, and 16. Examiner considered these arguments but found them moot, as Examiner has changed the grounds of rejection.

Drawing Objection
The drawings (Figs. 1B and 1C) are objected to because these figures contain blank boxes and numbers. Applicant must supply suitable legends (a few short words of text that identify these blank boxes). A proposed drawing correction or corrected drawings are required in reply to this Office action to avoid abandonment of the application. The objection to the drawings will not be held in abeyance.

The following are direct quotations of 37 CFR 1.84(n) and (o):

(n) Symbols. Graphical drawing symbols may be used for conventional elements when appropriate. The elements for which such symbols and labeled representations are used must be adequately identified in the specification. Known devices should be illustrated by symbols which have a universally recognized conventional meaning and are generally accepted in the art. Other symbols which are not universally recognized may be used, subject to approval by the Office, if they are not likely to be confused with existing conventional symbols, and if they are readily identifiable.

(o) Legends. Suitable descriptive legends may be used subject to approval by the Office, or may be required by the examiner where necessary for understanding of the drawing. They should contain as few words as possible.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.-The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.
112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 1 (including dependent claims 2-3, 5-8, 10, and 25-26) is rejected under 35 U.S.C. 112(a), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 1 recites, in the amended claim body, the limitations "determine one or more biometric indicators defined by the first avatar of the first user, wherein the one or more biometric indicators defined by the first avatar of the first user comprise a digital representation of at least one physical feature of the first user physically formed on the first avatar in the virtual reality environment and viewable on the first avatar" and "determine authentication credentials associated with the first user based on the one or more biometric indicators of the first avatar that are physically formed on the first avatar." These limitations are not supported by the instant application as originally filed, and Applicant failed to provide support (i.e., page(s), line(s), and drawing(s)) for the newly added claim language. Applicant is advised to provide claim language that is clear, concise, and consistent with the specification, and to be mindful not to improperly use language that is clearly not supported.
The Examiner respectfully requests that Applicant provide the page(s), line(s), and figure(s) of the instant application that support the claim limitations, and/or any supportive comments that would help clarify and resolve this issue. Regarding claims 11 and 16, these amended claims recite limitations similar to claim 1 and are rejected for the same reasons as set forth for claim 1. Dependent claims 13-15, 21-22, and 18-20 are also rejected under 112(a) for their dependence on claims 11 and 16, respectively.

Due to the 112(a) rejections of the current claim language, the Examiner has given the language a reasonable interpretation, and the claims are rejected as broadest and best interpreted. Applicant is welcome to point out where in the specification the Examiner can find support for this language if Applicant believes otherwise. This list of examples is not intended to be exhaustive; the Examiner respectfully requests that Applicant review all claims and clarify the issues listed above as well as any other issues that are not listed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1-2, 11, 16, 21-22, and 25-26 are rejected under 35 USC 103 as being unpatentable over Adams (US 20180157820 A1) in view of Baughman (US 20100046806 A1) and Bhogat (US 20090187405).

Regarding claim 1, Adams teaches a system for continual device authentication in virtual environments, the system comprising: at least one non-transitory storage device; and at least one processor coupled to the at least one non-transitory storage device, wherein the at least one processor is configured to: receive a request for authentication of a first user device associated with a first user, wherein the first user device is associated with an existing session of a virtual reality environment for the first user and the associated first user device. [0063] FIG. 2 is a schematic diagram of an embodiment of a virtual reality user device employed by virtual reality system 100. Virtual reality user device 200 is configured to receive virtual authentication object 128, and display virtual authentication object 128 to user 102 via the display of virtual reality environment 126.]

Although Adams teaches receiving a request from a first user associated with a first device that is associated with the virtual reality environment, Adams does not explicitly teach, but Baughman teaches: determine one or more biometric indicators defined by the first avatar of the first user, wherein the one or more biometric indicators comprise a digital representation of at least one physical feature of the first user physically formed on the first avatar in the virtual reality environment and viewable on the first avatar. [0099] Embodiments may map the user's real world cognitive, behavioral, and/or physical traits onto the user's avatar.
For example, physiological, behavioral, and/or cognitive biometric information may be obtained and transferred to the user's virtual biometric wallet, wherein the biometric information may be used to deduce that the user is angry in the real world. In embodiments, this deduction may be used to make the user's avatar appear angry in the VU, thereby augmenting the avatar's emotions based on the user's real world emotions.

[0100] A user's real world behavior traits may also be mapped onto the user's avatar to provide a more realistic virtual experience. For example, a person's real world behavioral traits such as gait, voice, tics, and signature may be stored in the user's virtual biometric wallet for avatar identification and verification. Real world cognitive traits such as thoughts and intelligence may also be translated to the virtual biometric wallet to make the user's avatar more representative of the user and/or for avatar identification and verification purposes.

[0101] In addition to behavioral and cognitive traits, embodiments may translate real physiological traits such as fingerprint, face, palm, pina shape, hand knife, iris, retina, DNA and signature to the virtual biometric wallet. This allows the user's avatar to have, e.g., the same fingerprints, face, and/or iris, etc., as the real world user and heightens the user's virtual experience. Additionally, this information also aids in identification and verification of the user via the user's avatar.
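Baughman's "virtual biometric wallet" idea quoted above can be reduced to a short sketch: real-world traits are stored in a wallet object and applied onto an avatar representation. All class, field, and value names below are illustrative assumptions, not taken from the reference.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualBiometricWallet:
    """Illustrative wallet holding a user's real-world biometric traits."""
    physiological: dict = field(default_factory=dict)  # e.g. fingerprint, iris
    behavioral: dict = field(default_factory=dict)     # e.g. gait, voice, tics

    def apply_to_avatar(self, avatar: dict) -> dict:
        """Map stored traits onto the avatar's rendered representation."""
        avatar = dict(avatar)                          # do not mutate the input
        avatar["features"] = dict(self.physiological)  # features shown on avatar
        avatar["mannerisms"] = dict(self.behavioral)   # behaviors the avatar mimics
        return avatar

wallet = VirtualBiometricWallet(
    physiological={"fingerprint": "fp-template-01", "iris": "iris-template-01"},
    behavioral={"gait": "gait-profile-01"},
)
avatar = wallet.apply_to_avatar({"id": "avatar-1"})
```

After `apply_to_avatar`, the avatar dict carries the wallet's traits, which is the property the rejection relies on for identification and verification via the avatar.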
determine authentication credentials associated with the first user based on the one or more biometric indicators of the first avatar that are physically formed on the first avatar. [0014] In yet another aspect of the invention, a method for verifying virtual users comprises providing a computer infrastructure being operable to: ascertain characteristics and behavioral traits about a real world user; link one or more of the real world user characteristics and the behavioral traits together; apply the characteristics and behavioral traits to the virtual user; and authenticate the virtual user based on the characteristics (biometric indicators as credentials) and the behavioral traits (biometric indicators as credentials) of the real world user.]

Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Adams with the disclosure of Baughman. The motivation or suggestion would have been to implement a system operable to: ascertain biometric data; link multiple types of the biometric data together using one or more analytic algorithms; transfer the biometric data to a virtual construct; and apply the biometric data to a virtual representation of a user using the virtual construct. (abstract, para 0001-0014, Baughman)

Although Adams and Baughman teach a virtual reality environment, they do not teach, but Bhogat teaches: determine a first avatar for the first user that is a digital representation of at least a portion of the first user in the virtual reality environment. [0010] The problems identified above are in large part addressed by the systems, arrangements, methods and media disclosed herein to identify an avatar with an online service which can utilize audio biometrics.
The method can include prompting a client application with a request for an utterance, processing the reply to the request, and creating a voiceprint or a voice profile of the speaker or participant. The voiceprint can be associated with an avatar, and when an utterance is received, the avatar can be identified by comparing the utterance to the voiceprint. Such a compare function can be utilized to identify avatars as they move from on-line service to on-line service, and in some embodiments voice biometrics can be utilized to authenticate an avatar for specific activities.]

Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Adams and Baughman with the disclosure of Bhogat. The motivation or suggestion would have been to implement a system that will provide efficient, reliable, and improved techniques for utilizing voice biometrics to authenticate an avatar for specific activities. (abstract, para 0001-0009, Bhogat)

Regarding claim 2, Adams teaches wherein the processor is further configured to cause the first device to render the virtual reality environment for the first user. [please see para 0005] The virtual reality user device displays the virtual authentication object to the user via the display. The user may manipulate the virtual authentication object to enter an authentication code. The virtual reality user device detects the dynamic gestures performed by the user and forms an authentication request. The information about the detected gestures may include spatial information describing how the user manipulated the virtual authentication object and may include information about how fast or slow the user manipulated the virtual authentication object.

Regarding claims 11 and 16, these claims are interpreted to be the same as claim 1 and are rejected for the same reasons as set forth for claim 1.
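Bhogat's voiceprint flow described above (enroll a voiceprint, then identify the avatar by comparing a new utterance against it) reduces to a nearest-match search over feature vectors. The sketch below stubs out feature extraction; the function names and the 0.9 similarity threshold are assumptions for illustration only.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def identify_avatar(utterance_features, voiceprints, threshold=0.9):
    """Return the avatar whose enrolled voiceprint best matches, or None."""
    best_avatar, best_score = None, threshold
    for avatar_id, print_features in voiceprints.items():
        score = cosine_similarity(utterance_features, print_features)
        if score >= best_score:
            best_avatar, best_score = avatar_id, score
    return best_avatar

# Enrolled voiceprints (toy 3-dimensional feature vectors).
voiceprints = {"avatar-1": [0.9, 0.1, 0.3], "avatar-2": [0.1, 0.8, 0.5]}
print(identify_avatar([0.88, 0.12, 0.31], voiceprints))  # prints "avatar-1"
```

A real system would extract features from audio (e.g. spectral embeddings) rather than use hand-written vectors, but the compare-and-threshold structure is the same.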
Regarding claim 21, Adams teaches further comprising code that, when executed, causes the apparatus to cause the first device to render the virtual reality environment for the first user. [please see paragraph 0061: …display of virtual reality environment…]

Regarding claim 22, Adams teaches wherein the determined authentication credentials further comprise one or more unique device identifiers associated with the first user device. [please see paragraph 0085: …identifying…user device…stored credential…identify and authenticate user…]

Regarding claim 25, although Adams and Bhogat teach a virtual reality environment, they do not explicitly teach, but Baughman teaches wherein the at least one physical feature of the first user physically formed on the first avatar comprises a fingerprint of the first user. [0100] A user's real world behavior traits may also be mapped onto the user's avatar to provide a more realistic virtual experience. For example, a person's real world behavioral traits such as gait, voice, tics, and signature may be stored in the user's virtual biometric wallet for avatar identification and verification. Real world cognitive traits such as thoughts and intelligence may also be translated to the virtual biometric wallet to make the user's avatar more representative of the user and/or for avatar identification and verification purposes. [0101] In addition to behavioral and cognitive traits, embodiments may translate real physiological traits such as fingerprint, face, palm, pina shape, hand knife, iris, retina, DNA and signature to the virtual biometric wallet. This allows the user's avatar to have, e.g., the same fingerprints, face, and/or iris, etc., as the real-world user and heightens the user's virtual experience. Additionally, this information also aids in identification and verification of the user via the user's avatar.
Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Adams and Bhogat with the disclosure of Baughman. The motivation or suggestion would have been to implement a system operable to: ascertain biometric data; link multiple types of the biometric data together using one or more analytic algorithms; transfer the biometric data to a virtual construct; and apply the biometric data to a virtual representation of a user using the virtual construct. (abstract, para 0001-0014, Baughman)

Regarding claim 26, although Adams and Bhogat teach a virtual reality environment, they do not explicitly teach, but Baughman teaches wherein the at least one physical feature of the first user physically formed on the first avatar comprises a retina of the first user. [please see paragraphs 0100-0101 of Baughman, quoted in full under claim 25 above: real physiological traits such as fingerprint, face, palm, iris, retina, DNA, and signature may be translated to the virtual biometric wallet, allowing the user's avatar to have the same features as the real-world user and aiding identification and verification of the user via the user's avatar.]
Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Adams and Bhogat with the disclosure of Baughman. The motivation or suggestion would have been to implement a system operable to: ascertain biometric data; link multiple types of the biometric data together using one or more analytic algorithms; transfer the biometric data to a virtual construct; and apply the biometric data to a virtual representation of a user using the virtual construct. (abstract, para 0001-0014, Baughman)

Claim 3 is rejected under 35 USC 103 as being unpatentable over Adams (US 20180157820 A1) in view of Baughman (US 20100046806 A1), Bhogat (US 20090187405), and Thief (US 20230139813 A1).

Regarding claim 3, although Adams, Baughman, and Bhogat teach a virtual reality environment, they do not explicitly teach, but Thief teaches wherein the received authentication credentials further comprise one or more unique device identifiers associated with the first user device. [please see para 0073: At step 908, the credentials may be received from the computer system. In an embodiment, the VR or AR device may receive the credentials from the computer system over the local network. In an embodiment, the credentials may comprise a digital certificate (unique identifier), cryptographic information, an authentication token, or other credentials.]

Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Adams, Baughman, and Bhogat with the disclosure of Thief. The motivation or suggestion would have been to implement a system that will provide efficient and improved techniques for authenticating access to a video platform from a virtual environment.
(abstract, para 0001 & 0022, Thief)

Claims 5, 13, and 18 are rejected under 35 USC 103 as being unpatentable over Adams in view of Baughman, Bhogat, and Chambon (US 20210141892).

Regarding claims 5, 13, and 18, although Adams, Baughman, and Bhogat teach a virtual reality environment, they do not clearly teach, but Chambon teaches wherein the at least one processor is further configured to: cause presentation of a plurality of virtual input objects in the virtual reality environment; receive one or more user inputs via the plurality of virtual input objects in the virtual reality environment; and determine the authentication credentials associated with the first user based upon the one or more user inputs. [please see para 0034, 0059, 0084]

Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Adams, Baughman, and Bhogat with the disclosure of Chambon. The motivation or suggestion would have been to implement a system utilizing a predefined group of virtual interactive objects that may form a preselected password of the user for gaining access to proprietary media content on a content platform. (abstract, para 0002-0007, Chambon)

Claims 6, 14, and 19 are rejected under 35 USC 103 as being unpatentable over Adams in view of Baughman, Bhogat, Chambon (US 20210141892), and Sundar (US 20190379671 A1).

Regarding claims 6, 14, and 19, although Adams, Baughman, and Bhogat teach a virtual reality environment, they do not clearly teach, but Sundar teaches wherein the one or more user inputs received via the plurality of virtual input objects in the virtual reality environment correspond to one or more unique device identifiers associated with the first user device.
[please see para 0019]

Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Adams, Baughman, Bhogat, and Chambon with the disclosure of Sundar. The motivation or suggestion would have been to implement a system utilizing a predefined group of virtual interactive objects that may form a preselected password of the user for gaining access to proprietary media content on a content platform. (abstract, para 0002-0007, Chambon)

Claims 7, 15, and 20 are rejected under 35 USC 103 as being unpatentable over Adams in view of Baughman, Bhogat, Chambon (US 20210141892), and Borunda (US 11429939 B1).

Regarding claims 7, 15, and 20, although Adams, Baughman, Bhogat, and Chambon teach a virtual reality environment, they do not explicitly teach, but Borunda teaches wherein the at least one processor is further configured to randomize the plurality of virtual input objects in the virtual reality environment for subsequent authentication of the first user device during the existing session of the virtual reality environment for the first user and the associated first user device. [please see Col. 2, lines 55-65, and Col. 3, lines 1-3]

Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Adams, Baughman, Bhogat, and Chambon with the disclosure of Borunda. The motivation or suggestion would have been to implement a system that uses user identification information that identifies the user and verification information required to verify the user's identity via a virtual reality platform.
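The randomization of virtual input objects recited in claims 7, 15, and 20 can be pictured as re-shuffling a virtual "keypad" before each authentication round, so the spatial selection pattern differs even when the underlying password repeats. The object names and layout model below are illustrative assumptions, not taken from the cited references.

```python
import random

def randomized_layout(objects, rng):
    """Shuffle the virtual input objects into a fresh on-screen order."""
    layout = list(objects)
    rng.shuffle(layout)
    return layout

def credentials_from_selection(layout, selected_positions):
    """Recover the entered password from the positions tapped in the scene."""
    return [layout[i] for i in selected_positions]

objects = ["cube", "sphere", "cone", "torus", "prism"]
rng = random.Random(7)                      # fixed seed for a repeatable demo
layout = randomized_layout(objects, rng)

# The user taps the positions where their password objects currently appear.
password = ["sphere", "cone"]
positions = [layout.index(obj) for obj in password]
assert credentials_from_selection(layout, positions) == password
```

Because the layout is re-shuffled for each subsequent authentication, an observer who records tap positions in one round learns nothing useful about the next.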
(abstract, Col. 1, lines 15-20 and 45-65; Col. 2, lines 1-55, Borunda)

Claim 8 is rejected under 35 USC 103 as being unpatentable over Adams in view of Baughman, Bhogat, and Sloane (US 20220398301 A1).

Regarding claim 8, although Adams, Baughman, and Bhogat teach a virtual reality environment, they do not clearly teach, but Sloane teaches wherein the at least one processor is further configured to iteratively authenticate the first user device for accessing the virtual reality environment based on authentication credentials iteratively determined during a time period associated with the existing session of the virtual reality environment. [0027] Continuous authentication is generally considered to be superior to traditional forms of (login-based) authentication because while login-based authentication checks a user's identity only once, at the start of a login session, continuous authentication recognizes the correct user for the duration of ongoing work. Continuous authentication is thus able to spot the moment at which an unauthorized person seizes control of the session, immediately ending the session, logging the account out, and protecting critical systems and data. In an authenticated session hosted by an augmented reality application, the present invention uses digital visual elements (virtual objects) to implement continuous authentication. In addition, the present invention uses ranging technology to identify positional information of real-world objects captured in the real-time visual feed and uses the positional information to determine authorized access. [0062] FIG. 3 illustrates a process flow for implementing continuous authentication based on virtual object manipulation 300, in accordance with an embodiment of the invention. As shown in block 302, the process flow includes receiving, from the computing device of the user, a request for secondary authorized access.
Similar to primary authorized access, the secondary authorized access is the ability to access the restricted place or resource.] It would be obvious to a person of ordinary skill that continuous authentication authenticates a user while an existing session is ongoing.

Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Adams, Baughman, and Bhogat with the disclosure of Sloane. The motivation or suggestion would have been to implement a system providing continuous authentication, which is generally considered superior to traditional (login-based) authentication: while login-based authentication checks a user's identity only once, at the start of a login session, continuous authentication recognizes the correct user for the duration of ongoing work, and is thus able to spot the moment at which an unauthorized person seizes control of the session, immediately ending the session, logging the account out, and protecting critical systems and data. (abstract, para 0001-0007, Sloane)

Claim 10 is rejected under 35 USC 103 as being unpatentable over Adams in view of Baughman, Bhogat, and Arie (US 20130076788 A1).

Regarding claim 10, although Adams, Baughman, and Bhogat teach a virtual reality environment, they do not explicitly teach, but Arie teaches wherein the at least one processor is further configured to halt presentation of the existing session of the virtual reality environment for the first user and the associated first user device in response to a failure to receive valid authentication credentials associated with the first user.
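The iterative-authentication limitation of claim 8 and the session-halt limitation of claim 10 together describe a simple control loop: re-check credentials at intervals during the session and stop presenting the session on the first failure. A minimal sketch, with a hypothetical checker callback and session model of my own naming:

```python
def run_session(credential_checks, session):
    """Re-authenticate each round; halt the session on the first failure.

    credential_checks: the credentials presented at each re-authentication.
    session: dict with a "verify" callback and an "active" flag.
    Returns the round number at which authentication failed, or None.
    """
    for round_no, credentials in enumerate(credential_checks, start=1):
        if not session["verify"](credentials):
            session["active"] = False       # halt presentation of the session
            return round_no
    return None                             # every periodic check passed

session = {"active": True, "verify": lambda c: c == "valid-token"}
failed_at = run_session(["valid-token", "valid-token", "stolen"], session)
# failed_at == 3 and session["active"] is False
```

The point of the loop is the one highlighted in Sloane's quoted passage: the session is terminated at the moment a check fails, not only at login.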
[0277] It comprises the following verifications:

- [0278] Title icon valid
- [0279] Objects Images validity
- [0280] Multimedia Video, Audio, Text, PDF, images, Animation, Augmented reality files validity
- [0281] Multimedia Video, Audio, Text, PDF, images, Weblinks, Youtube links, Animation, Augmented reality URL links validity
- [0282] Objects Images relation to Video, Audio, Text, PDF, images, Weblinks, Youtube links, Animation, Augmented reality data
- [0283] Verification that all object images have Video, Audio, Text, PDF, images, Weblinks, Youtube links, Animation, Augmented reality relations
- [0284] Total size of title Content data

[0285] Preparation (1602): system 400 prepares the data and files needed by the object feature generator (1604). This may be performed, for example, from computer 1410 by a system manager. This process may create a list of the object images and multimedia expressions; each row of the file will have the object image name and then the multimedia expression names.

[0286] Object Features Generator (1604): processing on the object images to extract the object features needed by the object recognition algorithm to recognize the title's objects. The algorithm uses the object images and multimedia expressions as input and outputs object features.

[0287] Title-Content ready (1606): update the database on a new (or modified) title-content and issue a success message to the website page. The title content may be compressed to save storage space and improve user download time.

[0288] Display Error (1608): in case of validation process (1600) failure, the process will halt and an error message will be displayed on the web page, including instructions on how to fix the error and what to do next.]

Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Adams, Baughman, and Bhogat with the disclosure of Arie.
The motivation or suggestion would have been to implement a system that will provide efficient and improved techniques for user-personalized and dynamically personalized content management in a virtual/augmented reality environment. (para 0001-0008, Arie)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHER A KHAN, whose telephone number is (571) 272-8574. The examiner can normally be reached M-F, 8:00 am-5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Eleni A Shiferaw, can be reached at 571-272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/SHER A KHAN/
Primary Examiner, Art Unit 2497

Prosecution Timeline

May 10, 2023
Application Filed
May 31, 2025
Non-Final Rejection — §103, §112
Sep 03, 2025
Response Filed
Oct 10, 2025
Final Rejection — §103, §112
Jan 13, 2026
Request for Continued Examination
Jan 25, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598069
MONITORING IN DISTRIBUTED COMPUTING SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12562909
LINKING DIGITAL AND PHYSICAL NON-FUNGIBLE ITEMS
2y 5m to grant Granted Feb 24, 2026
Patent 12537670
KEY SHARD VERIFICATION FOR KEY STORAGE DEVICES
2y 5m to grant Granted Jan 27, 2026
Patent 12530491
SELECTIVE DELETION OF SENSITIVE DATA
2y 5m to grant Granted Jan 20, 2026
Patent 12526157
IDENTITY AUTHENTICATION METHOD AND APPARATUS, AND DEVICE, CHIP, STORAGE MEDIUM AND PROGRAM
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
85%
Grant Probability
99%
With Interview (+23.3%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 333 resolved cases by this examiner. Grant probability derived from career allow rate.
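As a quick sanity check on the card figures above, the career allow rate is simply grants over resolved cases; only the 284/333 counts come from this page.

```python
# Career allow rate as shown on the examiner card: granted / resolved.
granted, resolved = 284, 333
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 85.3%, displayed as 85%
```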
