Prosecution Insights
Last updated: April 19, 2026
Application No. 18/976,523

Title: Personalized Keyword Log

Non-Final Office Action (§101, §102, §103)

Filed: Dec 11, 2024
Examiner: TRAN, TRANG U
Art Unit: 2422
Tech Center: 2400 — Computer Networks
Assignee: Orcam Technologies Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 79% (above average; 719 granted / 915 resolved; +20.6% vs TC avg)
Interview Lift: +15.9% (strong; allow rate for resolved cases with an interview vs. without)
Typical Timeline: 2y 10m average prosecution; 20 applications currently pending
Career History: 935 total applications across all art units
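
The headline figures above reduce to simple arithmetic over the examiner's resolved cases. A minimal sketch of that arithmetic; the additive treatment of interview lift is an assumption, though it happens to reproduce the displayed 94%:

```python
# Reproduce the headline examiner statistics from the raw counts shown above.
granted = 719
resolved = 915

allow_rate = granted / resolved            # 0.7858... -> displayed as 79%
print(f"Career allow rate: {allow_rate:.1%}")

# Assumption: the dashboard treats interview lift as an additive bump in
# percentage points over the base allow rate (78.6% + 15.9 pts = 94.5% -> 94%).
interview_lift_pts = 15.9
with_interview = allow_rate * 100 + interview_lift_pts
print(f"Allow rate with interview: {with_interview:.0f}%")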

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§102: 35.2% (-4.8% vs TC avg)
§103: 45.9% (+5.9% vs TC avg)
§112: 2.7% (-37.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 915 resolved cases.
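
Each delta is simply the examiner's rate minus the Tech Center average. Back-solving the displayed deltas implies the same TC average estimate, 40%, for every statute; that figure is inferred here, not reported directly:

```python
# Examiner's statute-specific rates (%) next to the Tech Center average.
# Back-solving each displayed delta gives the same TC average, 40% (estimate):
# e.g. 6.2 + 33.8 = 40.0 and 45.9 - 5.9 = 40.0.
examiner_rate = {"§101": 6.2, "§102": 35.2, "§103": 45.9, "§112": 2.7}
TC_AVG = 40.0  # inferred estimate, not a reported figure

for statute, rate in examiner_rate.items():
    print(f"{statute}: {rate:.1f}% ({rate - TC_AVG:+.1f}% vs TC avg)")
```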

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 52 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter, as follows. Claim 52 does not fall within at least one of the four categories of patent-eligible subject matter because it is directed toward a computer-readable medium. The specification does not clearly define a computer-readable medium, and the broadest reasonable interpretation of a claim drawn to a computer-readable medium (also called a machine-readable medium, among other variations) typically covers transitory propagating signals per se. Because the broadest reasonable interpretation of claim 52 covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as non-statutory subject matter. However, the Examiner respectfully submits that a claim drawn to a computer-readable medium covering both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments, and thereby avoid a rejection under 35 U.S.C. § 101, by adding the limitation "non-transitory" to the claim.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 23-28, 33-37 and 39-52 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dahir et al. (US Patent No. 10,819,667 B2).

In considering claim 23, Dahir et al. discloses all the claimed subject matter. Note: 1) the claimed "at least one microphone configured to capture voices from an environment of a user and output at least one audio signal" is met by the microphones 330, 340 positioned to capture audio from users 301-304 (Fig. 3, col. 6, lines 20-64); 2) the claimed "at least one processor programmed to execute a method" is met by the processor 220 (or an independent processor of interface 210) (Figs. 2-3, col. 6, lines 20-64); 3) the claimed "analyzing the at least one audio signal to identify a conversation" and "logging the conversation" is met by the conversation analysis process 248, which may include a fingerprint analyzer 408 configured to analyze audio data 402 in order to identify a primary user associated with the conversations captured in audio data 402 and log his or her conversations (Fig. 4, col. 6, line 65 to col. 7, line 59); 4) the claimed "analyzing the at least one audio signal to automatically identify words spoken during the logged conversation" is met by the keyword extractor 404, which may extract keywords found in audio data 402 using Natural Language Processing (NLP) (Fig. 4, col. 7, line 60 to col. 8, line 61); 5) the claimed "comparing the identified words to a user-defined list of key words to identify at least one key word spoken during the logged conversation" is met by the processor, which may use natural language processing (NLP) or another technique to compare keywords spoken in the conversations to a list of keywords associated with the topic (Figs. 4-5, col. 12, lines 10-52); 6) the claimed "associating, in at least one database, the identified spoken key word with the logged conversation" is met by the memory 240, which stores the list of keywords (Figs. 4-5, col. 12, lines 10-52); and 7) the claimed "providing, to the user, at least one of an audible or visible indication of the association between the at least one spoken key word and the logged conversation" is met by the UI process 418, which may be configured to provide display and/or audio data to a UI regarding the conversations associated with a particular topic, based on conversation data 416 (Figs. 4-5, col. 8, lines 42-65 and col. 12, lines 10-52).

In considering claim 24, the claimed "wherein the at least one of the audible or visible indication of the association between the at least one spoken key word and the logged conversation is provided after a predetermined time period or during a future encounter conversation" is met by the UI process 418, which may be configured to provide display and/or audio data to a UI regarding the conversations associated with a particular topic, based on conversation data 416 (Figs. 4-5, col. 8, lines 42-65 and col. 12, lines 10-52).

In considering claim 25, the claimed "wherein the method further comprises analyzing the at least one audio signal to distinguish a voice of the user from other sounds captured by the at least one microphone" is met by the fingerprint analyzer 408, which may identify the voice of the primary user by matching portions of audio data 402 to voice fingerprints supplied by the primary user, such as a pre-recording by the primary user (Fig. 4, col. 6, line 65 to col. 7, line 59).

In considering claim 26, the claimed "wherein identifying the at least one key word spoken during the logged conversation comprises identifying, in the audio signal, representations of key words spoken by the user or by at least one other" is met by the keyword extractor 404, which may extract keywords found in audio data 402 using Natural Language Processing (NLP) (Fig. 4, col. 7, line 60 to col. 8, line 61).

In considering claim 27, the claimed "wherein the method further comprises identifying the at least one individual" is met by the fingerprint analyzer 408, which may identify the voice of the primary user (Fig. 4, col. 6, line 65 to col. 7, line 59).

In considering claim 28, the claimed "wherein identifying the at least one individual comprises recognizing, based on analysis of the audio signal, a voice of the at least one individual" is met by the fingerprint analyzer 408, which may identify the voice of the primary user by matching portions of audio data 402 to voice fingerprints supplied by the primary user, such as a pre-recording by the primary user (Fig. 4, col. 6, line 65 to col. 7, line 59).

In considering claim 33, the claimed "wherein the method further comprises analyzing the at least one audio signal to acquire a first measurement of at least one voice characteristic" is met by the topic identifier 406, which may employ a word frequency measure, such as the term frequency-inverse document frequency (TF-IDF) of the various keywords used in a given conversation, to weight the keywords used in the conversation (Fig. 4, col. 7, line 60 to col. 8, line 61).

In considering claim 34, the claimed "wherein the at least one voice characteristic comprises at least one of a pitch, a tone, a rate of speech, a volume, a center frequency, a frequency distribution, or a responsiveness of the voice" is met by the word frequency measure, such as the term frequency-inverse document frequency (TF-IDF) of the various keywords used in a given conversation, to weight the keywords used in the conversation (Fig. 4, col. 7, line 60 to col. 8, line 61).

In considering claim 35, the claimed "wherein the method further comprises applying a voice classification rule to classify at least a portion of the audio signal into one of a plurality of voice classifications based on the at least one voice characteristic" is met by the machine learning model 410, one function of which is to learn the context characteristics of the conversations of the primary user and, in turn, use these characteristics to update conversation data 416 (Fig. 4, col. 8, line 62 to col. 10, line 54).

In considering claim 36, the claimed "wherein the portion of the audio signal comprises a representation of the identified at least one key word" is met by the machine learning model 410, which may assess the context characteristic(s) of the conversations associated with a given topic to identify other conversations that may also be related to that topic (Fig. 4, col. 8, line 62 to col. 10, line 54).

In considering claim 37, the claimed "wherein applying the voice classification rule comprises applying the voice classification rule to a component of the audio signal associated with a voice of the user or another individual" is met by the machine learning model 410, one function of which is to learn the context characteristics of the conversations of the primary user and, in turn, use these characteristics to update conversation data 416 (Fig. 4, col. 8, line 62 to col. 10, line 54).

In considering claim 39, Dahir et al. discloses all the claimed subject matter. Note: 1) the claimed "wherein the method further comprises: associating, in the at least one database, the voice classification with the identified at least one spoken key word and the logged conversation" is met by the memory 240, which stores the list of keywords (Figs. 4-5, col. 12, lines 10-52); and 2) the claimed "providing, to the user, at least one of an audible or visible indication of the association between the voice classification, the at least one spoken key word, and the logged conversation" is met by the UI processor 418, which may be configured to provide display and/or audio data to a UI regarding the conversations associated with a particular topic, based on conversation data 416 (Figs. 4-5, col. 8, lines 42-65 and col. 12, lines 10-52).

In considering claim 40, the claimed "wherein the voice classification rule is based on at least one of a neural network or a machine learning algorithm trained on one or more training examples" is met by the machine learning model 410 (Fig. 4, col. 8, line 62 to col. 10, line 54).
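
As background for the TF-IDF measure the examiner cites for claims 33-34, here is a minimal, generic illustration of term frequency-inverse document frequency weighting of conversation keywords. This is the textbook formulation with made-up data, not code from Dahir:

```python
import math

# Generic term frequency-inverse document frequency (TF-IDF) weighting of
# conversation keywords; textbook formulation, invented data.
conversations = [
    ["budget", "meeting", "budget", "deadline"],   # conversation 0
    ["meeting", "lunch"],                          # conversation 1
    ["deadline", "budget"],                        # conversation 2
]

def tf_idf(word: str, doc: list[str], corpus: list[list[str]]) -> float:
    tf = doc.count(word) / len(doc)                # term frequency
    df = sum(1 for d in corpus if word in d)       # document frequency
    return tf * math.log(len(corpus) / df)         # tf * idf

doc = conversations[0]
weights = {w: round(tf_idf(w, doc, conversations), 3) for w in set(doc)}
print(weights)  # "budget" weighs most: it appears twice in this conversation
```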
In considering claim 41, the claimed "wherein the method further comprises applying a context classification rule to classify the environment of the user into one of a plurality of contexts including at least a work context and a social context, based on information provided by at least one of the audio signal, an image signal, an external signal, or a calendar entry" is met by the privacy module 414, which may provide control over the audio capture device(s) 422 based on the conversation participants identified by fingerprint analyzer 408, the location(s) associated with the participants, and/or other factors … for example, this may be for work-related conversations and locations (Fig. 4, col. 10, line 9 to col. 11, line 62).

In considering claim 42, the claimed "wherein the context classification rule is based on at least one of a neural network or a machine learning algorithm trained on one or more training examples" is met by the machine learning model 410, one function of which is to learn the context characteristics of the conversations of the primary user (Fig. 4, col. 8, line 62 to col. 10, line 54).

In considering claim 43, the claimed "wherein the plurality of contexts include at least a work context and a social context" is met by the privacy module 414, which may provide control over the audio capture device(s) 422 based on the conversation participants identified by fingerprint analyzer 408, the location(s) associated with the participants, and/or other factors … for example, this may be for work-related conversations and locations (Fig. 4, col. 10, line 9 to col. 11, line 62).

In considering claim 44, the claimed "wherein the external signal is one of a location signal or a Wi-Fi signal" is met by the location(s) associated with the participants, and/or other factors … for example, this may be for work-related conversations and locations (Fig. 4, col. 10, line 9 to col. 11, line 62).

In considering claim 45, the claimed "wherein: the at least one processor is included in a secondary computing device configured to be wirelessly linked to the at least one microphone, and the secondary computing device comprises at least one of a smart phone, a mobile device, a laptop computer, a tablet computer, a desktop computer, a smart speaker, an in-home entertainment system, or an in-vehicle entertainment system" is met by the computer 375 or phone 370, which may capture audio, such as when user 304 interacts with computer 375 or phone 370 (e.g., when user 304 interacts with a videoconferencing application, etc.) (Fig. 3, col. 6, lines 27-64).

In considering claim 46, the claimed "wherein providing an indication of the association comprises providing the indication via the secondary computing device" is met by the computer 375 or phone 370 (Fig. 3, col. 6, lines 27-64).

In considering claim 47, the claimed "wherein analyzing the at least one audio signal to identify a conversation comprises identifying at least one of a start time of the conversation, an end time of the conversation, a context classification of the conversation, a context classification of the association, a voice classification of the association, or participants in the conversation" is met by the fingerprint analyzer 408, which may identify the voice of the primary user (Fig. 4, col. 6, line 65 to col. 7, line 59).
In considering claim 48, the claimed "wherein logging the conversation comprises identifying the conversation in the at least one database by at least one of a start time of the conversation, an end time of the conversation, a context classification of the conversation, a context classification of the association, a voice classification of the association, or participants in the conversation" is met by the UI processor 418, which may be configured to provide display and/or audio data to a UI regarding the conversations associated with a particular topic, based on conversation data 416 (Figs. 4-5, col. 8, lines 42-65 and col. 12, lines 10-52).

In considering claim 49, the claimed "wherein the at least one key word is determined dynamically in response to a word spoken by an individual in the logged conversation" is met by the keyword extractor 404, which may extract keywords found in audio data 402 using Natural Language Processing (NLP) (Fig. 4, col. 7, line 60 to col. 8, line 61).

In considering claim 50, the claimed "further comprising identifying an intonation in which the at least one key word is said" is met by the topic identifier 406 (Fig. 4, col. 7, line 60 to col. 8, line 61).

Claim 51 is rejected for the same reason as discussed in claim 23 above. Claim 52 is rejected for the same reason as discussed in claim 23 above.

Claim Rejections - 35 USC § 103

6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. Claims 29-32 and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Dahir et al. (US Patent No. 10,819,667 B2) in view of Mikhailov (US Patent No. 11,423,889 B2).

In considering claim 29, Dahir et al. disclose all the limitations of the instant invention as discussed in claims 25-28 above, except for the claimed "further comprising a camera configured to capture images from an environment of the user and output an image signal, wherein identifying the at least one individual comprises recognizing, based on analysis of the image signal, the at least one individual." Mikhailov teaches that VA 130 includes a camera (not shown in FIG. 1 or FIG. 2) that may interact with a processor to identify the speaker and/or participant using image recognition techniques (Fig. 1, col. 5, line 9 to col. 6, line 36). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the camera as taught by Mikhailov into Dahir et al.'s system in order to capture the speaker along with the audio signal.

In considering claim 30, the claimed "wherein recognizing the at least one individual, based on the analysis of the image signal, comprises recognizing at least one of a face, a posture, or a gesture of the at least one individual represented by the image signal" is met by the image recognition techniques used to identify a speaker and/or participant (Fig. 2, col. 6, lines 25-47 of Mikhailov).
The motivation for the combined references has been discussed in claim 29 above.

In considering claim 31, the claimed "wherein the camera and the at least one microphone are included in a common housing" is met by user 302, who may be carrying an audio capture device, such as a smart phone 370, a wearable device (e.g., smart watch, etc.), or the like (Fig. 3, col. 6, lines 27-64 of Dahir et al.). The motivation for the combined references has been discussed in claim 29 above.

In considering claim 32, the claimed "wherein the common housing is configured to be worn by the user" is met by user 302, who may be carrying an audio capture device, such as a smart phone 370, a wearable device (e.g., smart watch, etc.), or the like (Fig. 3, col. 6, lines 27-64 of Dahir et al.). The motivation for the combined references has been discussed in claim 29 above.

In considering claim 38, Dahir et al. disclose all the limitations of the instant invention as discussed in claims 23, 33 and 35 above, except for the claimed "wherein the plurality of voice classifications denote the speaker's mood." Mikhailov teaches that, in some embodiments, the speech segmentation package may identify a gender of the speaker, an age of the speaker, a dialect of the speaker, an accent of the speaker, a tone of voice, an emotional content of the speech of the speaker, or any other aspects that uniquely identify the speaker based on the audio characteristics of the received data (Fig. 1, col. 13, line 65 to col. 14, line 22). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate identifying the emotional content of the speech of the speaker, as taught by Mikhailov, into Dahir et al.'s system in order to accurately identify the spoken words of the audio signal.

Conclusion

8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wyss et al. (US Patent No. 11,468,897 B2) disclose systems and methods related to automated transcription of voice communications. Ashoori et al. (US Patent No. 10,885,080 B2) disclose cognitive ranking of terms used during a conversation.

9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRANG U TRAN, whose telephone number is (571) 272-7358. The examiner can normally be reached M-F, 10:00 AM - 6:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JOHN W. MILLER, can be reached at 571-272-7353. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

January 20, 2026
/TRANG U TRAN/
Primary Examiner, Art Unit 2422
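
To make the claim-23 mapping above concrete, the sketch below illustrates the recited keyword-log pipeline: words identified in a logged conversation are compared against a user-defined key-word list, matches are associated with the conversation in a database, and an indication of the association is surfaced to the user. All names here are hypothetical; this illustrates the claim language only, not Dahir's or the applicant's implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the claim-23 method steps (all names invented):
# identify words in a logged conversation, compare them to a user-defined
# key-word list, associate matches with the conversation in a database,
# and provide the user an indication of the association.

@dataclass
class Conversation:
    conversation_id: int
    transcript: list[str]                    # words identified from the audio
    matched_keywords: set[str] = field(default_factory=set)

def log_and_match(conv: Conversation,
                  user_keywords: set[str],
                  database: dict[int, Conversation]) -> None:
    # Compare the identified words to the user-defined list of key words.
    conv.matched_keywords = ({w.lower() for w in conv.transcript}
                             & {k.lower() for k in user_keywords})
    # Associate the spoken key words with the logged conversation.
    database[conv.conversation_id] = conv
    # Provide a visible indication of the association.
    for kw in sorted(conv.matched_keywords):
        print(f'Key word "{kw}" was spoken in conversation {conv.conversation_id}')

db: dict[int, Conversation] = {}
log_and_match(Conversation(1, ["remember", "the", "budget", "deadline"]),
              {"budget", "deadline"}, db)
```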

Prosecution Timeline

Dec 11, 2024
Application Filed
Jan 22, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603986
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12598288
METHOD AND DEVICE FOR DETECTING POWER STABILITY OF IMAGE SENSOR
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12596077
Passive Camera Lens Smudge Detection
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591995
METHOD AND APPARATUS FOR DEFORMATION MEASUREMENT, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12576717
DRIVING ASSISTANCE APPARATUS
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 94% (+15.9%)
Median Time to Grant: 2y 10m
PTA Risk: Low

Based on 915 resolved cases by this examiner. Grant probability is derived from the career allow rate.
