Prosecution Insights
Last updated: April 18, 2026
Application No. 18/964,250

AUTOMATED NAME READER SYSTEM AND METHOD

Final Rejection: §103, §112
Filed: Nov 29, 2024
Examiner: GIULIANI, GIUSEPPI J
Art Unit: 2153
Tech Center: 2100 — Computer Architecture & Software
Assignee: The Regents of the University of Michigan
OA Round: 2 (Final)
Grant Probability: 58% (Moderate)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 65%

Examiner Intelligence

Career Allow Rate: 58% (162 granted / 279 resolved; +3.1% vs TC avg)
Interview Lift: +7.2% on resolved cases with interview (moderate lift)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 304 across all art units (25 currently pending)

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 279 resolved cases
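The headline figures above hang together; a quick check using only the numbers shown on this page confirms the career allow rate, the pending count, and the interview-adjusted estimate:

```python
# Sanity-check the examiner statistics using only figures shown above
# (granted, resolved, total applications, and the interview lift).
granted, resolved, total = 162, 279, 304

career_allow_rate = granted / resolved   # fraction of resolved cases granted
pending = total - resolved               # applications still open

print(f"Career allow rate: {career_allow_rate:.1%}")  # ~58.1%, shown as 58%
print(f"Currently pending: {pending}")                # 25

# The 65% "With Interview" figure is roughly the base rate plus the
# +7.2% interview lift reported above.
with_interview = career_allow_rate + 0.072
print(f"With interview: {with_interview:.0%}")        # ~65%
```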

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This action is in response to the applicant’s response filed 22 December 2025, which is in response to the USPTO office action mailed 4 September 2025. Claims 1-5 and 14 are amended. Claims 1-20 are currently pending.

Response to Arguments

With respect to the 35 USC §103 rejections of claims 1-20, the applicant’s arguments have been fully considered but have not been deemed persuasive. First, the applicant argues “According to the Office Action, the motivation to combine Correa and Selvaggi is to improve the effectiveness of announcements and the ease of making announcements. Paragraph [0049] of Selvaggi is cited for this motivation. Although Selvaggi does discuss improving effectiveness and ease of announcements, paragraph [0049] discusses this motivation in connection with preparing ‘frequent types of announcements.’ This motivation is completely contrary to the automated name calling system of Correa. In Correa, the field of the invention, more specifically column 1, lines 7-13, indicates that the automated name calling system is used for reading a student's name at an event such as a graduation ceremony. This type of special event only happens a few times a year. Further, calling a student's name at a graduation ceremony is certainly not a ‘frequent type of announcement’ as a student will likely only graduate from an institution once. Thus, the motivation provided by paragraph [0049] of Selvaggi as cited in the Office Action is inapplicable to the automated name calling system of Correa.” (Remarks, pg. 6). Respectfully, this argument is not persuasive. 
In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, Correa discloses providing a human reader with a name pronunciation via a headset (Correa, [Col. 4 Lines 37-40]). The examiner interprets this reads on “outputting the named user audio via audio playback over an electronic audio speaker” (e.g. claim 1 lines 8-9). As noted in the rejection Correa does not explicitly teach an audio file. However, Selvaggi discloses a database which stores a speech recording of a person speaking a preferred name (Selvaggi, [0046]). The examiner interprets the speech recording stored in the database is an audio file. The combination of Correa with Selvaggi provides the reader with the speech recording of the preferred name in order to assist in the pronunciation. This combination would have been desirable because this both improves the effectiveness of announcements and the ease of making the announcements (Selvaggi, [0049]). Note also, the examiner interprets that, within the context of a graduation ceremony, the calling of student’s names is a “frequent type of announcement” because, in general, many names will be called during the course of the ceremony. Therefore, this argument is not persuasive. Next, the applicant argues “There is no reason to combine the announcement system of Selvaggi with the automated name calling system of Correa. 
As explained by column 4, lines 37-40 of Correa, providing the reader with the pronunciation of the student’s name over the headset 68 aids the reader in reading name of the student aloud. In all embodiments of Correa, a human speaker is included such that the speaker reads the name of each student aided by various components of the automated name calling system such as the display 64 and the headset 68. Using an announcement system to play a synthesized speech version of a person’s name, such as that disclosed in Selvaggi, would be contrary to the purpose of Correa. As explained previously, Correa maintains the use of a live speaker during special events and aids that person by helping the speaker improve the pronunciation of student’s names. Eliminating the speaker altogether in favor of outputting an audio file via audio playback over an electronic audio speaker would not have been obvious in view of the cited references. Further, there is no motivation to combine Correa and Selvaggi because one of ordinary skill in the art would not look at an announcement system playing ‘frequent types of announcement(s)’ to modify an aid system provided to a live speaker at a graduation ceremony.” (Remarks, pg. 7). In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). In this case, Correa discloses providing a reader with a name pronunciation via a headset (Correa, [Col. 4 Lines 37-40]). Selvaggi discloses a database which stores a speech recording of a person speaking a preferred name (Selvaggi, [0046]). 
The combination of Correa with Selvaggi provides the reader with the speech recording of the preferred name via the headset in order to assist in the reader’s pronunciation, rather than eliminating the reader altogether as argued. Accordingly, this argument is not persuasive. Last, the applicant argues “Accordingly, the proposed modification changes the principle of operation of Correa. Pursuant to MPEP § 2143.01(VI), if the proposed modification of Correa changes the principle of operation thereof, a prima facie case of obviousness has not been established. Here, amended claim 1 is directed to ‘outputting the named user audio file via audio playback over an electronic audio speaker.’ The Office Action proposed modifying an automated name calling system using a human speaker with a public announcement system. However, this combination of references would require ‘substantial reconstruction and redesign of the elements’ of Correa and ‘change the basic principle under which the [Correa] construction was designed to operate.’ In re Ratti, 270 F.2d 810, 813 (C.C.P.A. 1959). As explained previously, Correa was designed to operate via providing aid to a human speaker. Elimination of the human speaker such that an electronic announcement system announces student’s names during a graduation ceremony changes the basic principle of operation and would require substantial redesign of Correa’s automated name calling system.” (Remarks, pg. 7). In response to applicant's argument that the proposed modification changes the principle of operation of Correa, the test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. 
See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981). As noted above, the combination of Correa with Selvaggi provides the reader with the speech recording of the preferred name via the headset in order to assist in the reader’s pronunciation, rather than eliminating the reader altogether as argued. Therefore, the combination of references would not require “substantial reconstruction and redesign of the elements” of Correa nor “change the basic principle under which the [Correa] construction was designed to operate”, because the combination merely plays the speech recording via the headset rather than eliminating the human speaker. Accordingly, this argument is not persuasive.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. 
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a user code generation subsystem”, “a named user audio presentation subsystem” and “a named user visual presentation subsystem” in claim 18, and “an on-spot interface computer system comprising a named user information entry computer” in claim 19.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claims 7 and 9 recite the limitation “and/or” (e.g. claim 7 line 3). It is unclear to the examiner which limitations are required or optional. The claims are rendered indefinite due to this lack of clarity. Note, the dependent claims are also rejected because they do not remedy the deficiencies inherited from their parent claim. Appropriate action is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-14 and 18-20 are rejected under 35 U.S.C. 
103 as being unpatentable over Correa et al., US 10,762,404 B1 (hereinafter “Correa” – as cited in the IDS filed 25 July 2025) in view of SELVAGGI, US 2021/0350784 A1 (hereinafter “Selvaggi”).

Claim 1: Correa teaches a method of outputting audio for one or more named users, comprising carrying out the following steps automatically using an audio presentation system: receiving a named user code (Correa, [Col. 2 Lines 65-66] note a name calling ceremony such as a graduation, [Col. 3 Lines 24-28] note clothing accessories 28 may further include a QR code 30 which can be scanned by a user's electronic device such as phones, tablets, and the like. When said QR code 30 is scanned by said user, the information stored within RFID tag 26 is made available to said user; i.e. an RFID tag and a QR read on a named user codes, [Fig. 5] note 120, [Col. 4 lines 27-31] note In scanning step 120, said user begins walking to a platform of said name calling ceremony. Before said user walks the platform to have their name called, said user's RFID tag 26 is scanned by an RFID reader 22 in the form of a glove 23 or a stationary reader 21; i.e. scanning reads on receiving); identifying a named user based on the named user code (Correa, [Col. 3 Line 67]-[Col. 4 Lines 1-2] note Once said RFID tag 26 has been scanned, the information stored within said RFID tag 26 is transferred to said podium assembly 60, [Col. 2 Lines 66-67] note RFID tag 26 contains a user's name and academic information); retrieving a named user audio for the named user; retrieving named user information for the named user (Correa, [Col. 4 Lines 25-32] note RFID tag 26 contains said user's information such as their name and picture. In scanning step 120, said user begins walking to a platform of said name calling ceremony. Before said user walks the platform to have their name called, said user's RFID tag 26 is scanned by an RFID reader 22 in the form of a glove 23 or a stationary reader 21. 
RFID reader 22 then collects the data stored within said RFID tag 26); and outputting the named user audio via audio playback over an electronic audio speaker and the named user information for the named user (Correa, [Col. 4 lines 2-6] note The information is displayed on said display 64 of said podium 62 wherein a reader using said podium 62 may read the name of the student presented on the display 64 for the graduation ceremony, [Col. 4 Lines 37-40] note the reader is provided a headset 68 wherein said headset 68 provides the reader with the pronunciation of the names displayed on display 64 to aid the reader in reading aloud names for a name calling ceremony). Correa does not explicitly teach audio file. However, Selvaggi teaches this (Selvaggi, [Fig. 1] note speech recording, [0046] note FIG. 1 shows a representation of a person information database 10 and a table 11 that represents records stored within the database… The records are keyed to a Person ID field and include a given name, family name, residence address, citizenship, phonetic pronunciation of a preferred name, and a speech recording of a person speaking the preferred name, among other possible information, [Fig. 8], [0076] note FIG. 8 shows an example of a web browser window 70, which may be shown on a client device being used by the person, for recording a person's spoken name. It shows the person a message 81 instructing them to record their name, [0078] note When the person is satisfied with their recording, they may activate a Done button 86). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the name pronunciations of Correa with the speech recordings of Selvaggi according to known methods (i.e. providing a speech recording of a name pronunciation). Motivation for doing so is that this both improves the effectiveness of announcements and the ease of making the announcements (Selvaggi, [0049]). 
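As mapped in the rejection, the claim 1 method reduces to a code-keyed lookup followed by playback: receive a named user code, identify the user, retrieve the stored audio file and user information, and output both. A minimal sketch of that flow (the function names, record fields, and in-memory store are hypothetical illustrations, not anything disclosed in Correa or Selvaggi):

```python
# Hypothetical sketch of the claim 1 steps as the examiner maps them.
# The dict stands in for Selvaggi's person information database [0046];
# playback is stubbed out with a print.

USERS = {
    "QR-0001": {"name": "Jane Doe", "audio_file": "jane_doe.wav",
                "info": "B.S. Computer Science"},
}

def play_audio(path: str) -> None:
    print(f"playing {path}")            # placeholder for real speaker output

def announce(named_user_code: str) -> tuple[str, str]:
    record = USERS[named_user_code]     # identify the named user from the code
    audio = record["audio_file"]        # retrieve the named user audio file
    info = record["info"]               # retrieve the named user information
    play_audio(audio)                   # output via audio playback
    return audio, info

announce("QR-0001")  # plays jane_doe.wav and returns the audio file + info
```

The dispute in this round is not over this flow itself but over whether the playback step replaces or merely assists Correa's human reader.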
Claim 2: Correa and Selvaggi teach the method of claim 1, wherein the named user code is a quick response (QR) code or other bar code, and wherein the named user code is provided to the named user as a part of a named user code generation process that is used to generate the QR code or other bar code (Correa, [Col. 3 Lines 24-28] note clothing accessories 28 may further include a QR code 30).

Claim 3: Correa and Selvaggi teach the method of claim 2, wherein the named user code is received by using a named user code reader to read a physical representation of the QR code or other bar code (Correa, [Col. 3 Lines 24-28] note clothing accessories 28 may further include a QR code 30 which can be scanned by a user's electronic device such as phones, tablets, and the like).

Claim 4: Correa and Selvaggi teach the method of claim 3, wherein the physical representation of the QR code or other bar code is a display of the QR code or other bar code on a display screen of a named user device, and wherein the named user device is a smartphone, tablet, or other handheld mobile computer (Correa, [Col. 3 Lines 24-28] note clothing accessories 28 may further include a QR code 30 which can be scanned by a user's electronic device such as phones, tablets, and the like).

Claim 5: Correa and Selvaggi teach the method of claim 3, wherein the physical representation of the QR code or other bar code is a hard copy or print-out of the QR code or other bar code (Correa, [Col. 3 Lines 24-28] note clothing accessories 28 may further include a QR code).

Claim 7: Correa and Selvaggi teach the method of claim 1, wherein the named user audio file is initially obtained and stored prior to receiving the named user code, and wherein the named user audio file is generated by the named user through a voice recording provided by the named user and/or by a text-to-speech (TTS) technique that takes an input configured by the named user (Selvaggi, [Fig. 8], [0076] note FIG. 
8 shows an example of a web browser window 70, which may be shown on a client device being used by the person, for recording a person's spoken name. It shows the person a message 81 instructing them to record their name, [0078] note When the person is satisfied with their recording, they may activate a Done button 86, [0085] note the system may map the lexical text to a plurality of possible corresponding phoneme sequences).

Claim 8: Correa and Selvaggi teach the method of claim 7, wherein the named user audio file includes automatically-generated speech, wherein the automatically-generated speech is generated based on a language input provided by the user, and wherein the language input is text or spoken words (Selvaggi, [0067] note Any type of pronunciation hint may be used, depending on the embodiment, but non-limiting examples include a geographic identifier such as a country name, an ethnic group, a religious preference, a gender, and a language associated with the person, [0082] note The system may then use pronunciation information generated from the lexical representation of the person's name as its best attempt at generating the proper name pronunciation).

Claim 9: Correa and Selvaggi teach the method of claim 8, wherein the automatically-generated speech is generated based on a language specified by the named user and/or a dialect specified by the named user (Selvaggi, [0067] note Any type of pronunciation hint may be used, depending on the embodiment, but non-limiting examples include a geographic identifier such as a country name, an ethnic group, a religious preference, a gender, and a language associated with the person, [0082] note The system may then use pronunciation information generated from the lexical representation of the person's name as its best attempt at generating the proper name pronunciation). 
Claim 10: Correa and Selvaggi teach the method of claim 1, wherein the named user information is information that is to be presented to at a predetermined social event (Correa, [Col. 2 Lines 65-66] note a name calling ceremony such as a graduation).

Claim 11: Correa and Selvaggi teach the method of claim 10, wherein the named user is one of a plurality of named users and each of the plurality of named users has an associated named user audio file and associated named user information (Selvaggi, [Fig. 1], [0046] note FIG. 1 shows a representation of a person information database 10 and a table 11 that represents records stored within the database, which may be suitable for some embodiments. The records are keyed to a Person ID field and include a given name, family name, residence address, citizenship, phonetic pronunciation of a preferred name, and a speech recording of a person speaking the preferred name, among other possible information), wherein the associated named user audio file for each of the plurality of named users is output via audio playback at the predetermined social event (Correa, [Col. 4 Lines 37-40] note the reader is provided a headset 68 wherein said headset 68 provides the reader with the pronunciation of the names displayed on display 64 to aid the reader in reading aloud names for a name calling ceremony); and wherein the associated named user information for each of the plurality of named users is output via visual playback at the predetermined social event (Correa, [Col. 4 lines 2-6] note The information is displayed on said display 64 of said podium 62 wherein a reader using said podium 62 may read the name of the student presented on the display 64 for the graduation ceremony). 
Claim 12: Correa and Selvaggi teach the method of claim 11, wherein the associated named user audio file and the associated named user information are output for playback at the predetermined social event in a coordinated manner so that the visual playback of the named user information occurs at the same time or within a predetermined amount of time as the audio playback of the named user audio file, and wherein the predetermined amount of time is less than one minute (Correa, [Col. 4 Lines 37-40] note the reader is provided a headset 68 wherein said headset 68 provides the reader with the pronunciation of the names displayed on display 64 to aid the reader in reading aloud names for a name calling ceremony).

Claim 13: Correa and Selvaggi teach the method of claim 10, wherein the named user is one of a plurality of named users and each of the plurality of named users has an associated named user audio file (Selvaggi, [Fig. 1], [0046] note FIG. 1 shows a representation of a person information database 10 and a table 11 that represents records stored within the database, which may be suitable for some embodiments. The records are keyed to a Person ID field and include a given name, family name, residence address, citizenship, phonetic pronunciation of a preferred name, and a speech recording of a person speaking the preferred name, among other possible information), and wherein the predetermined social event is a graduation ceremony in which each of the plurality of named users has their name read aloud via audio playback of the associated named user audio file (Correa, [Col. 4 Lines 37-40] note the reader is provided a headset 68 wherein said headset 68 provides the reader with the pronunciation of the names displayed on display 64 to aid the reader in reading aloud names for a name calling ceremony). 
Claim 14: Correa teaches an automated named user audio presentation system having at least one processor and memory storing computer instructions that, when executed by the at least one processor, configure the audio presentation system to carry out the following steps automatically: receive a named user code (Correa, [Col. 2 Lines 65-66] note a name calling ceremony such as a graduation, [Col. 3 Lines 24-28] note clothing accessories 28 may further include a QR code 30 which can be scanned by a user's electronic device such as phones, tablets, and the like. When said QR code 30 is scanned by said user, the information stored within RFID tag 26 is made available to said user; i.e. an RFID tag and a QR read on a named user codes, [Fig. 5] note 120, [Col. 4 lines 27-31] note In scanning step 120, said user begins walking to a platform of said name calling ceremony. Before said user walks the platform to have their name called, said user's RFID tag 26 is scanned by an RFID reader 22 in the form of a glove 23 or a stationary reader 21; i.e. scanning reads on receiving); identify a named user based on the named user code (Correa, [Col. 3 Line 67]-[Col. 4 Lines 1-2] note Once said RFID tag 26 has been scanned, the information stored within said RFID tag 26 is transferred to said podium assembly 60, [Col. 2 Lines 66-67] note RFID tag 26 contains a user's name and academic information); retrieve a named user audio for the named user; retrieve named user information for the named user (Correa, [Col. 4 Lines 25-32] note RFID tag 26 contains said user's information such as their name and picture. In scanning step 120, said user begins walking to a platform of said name calling ceremony. Before said user walks the platform to have their name called, said user's RFID tag 26 is scanned by an RFID reader 22 in the form of a glove 23 or a stationary reader 21. 
RFID reader 22 then collects the data stored within said RFID tag 26); and output the named user audio via audio playback over an electronic audio speaker and the named user information for the named user (Correa, [Col. 4 lines 2-6] note The information is displayed on said display 64 of said podium 62 wherein a reader using said podium 62 may read the name of the student presented on the display 64 for the graduation ceremony, [Col. 4 Lines 37-40] note the reader is provided a headset 68 wherein said headset 68 provides the reader with the pronunciation of the names displayed on display 64 to aid the reader in reading aloud names for a name calling ceremony). Correa does not explicitly teach audio file. However, Selvaggi teaches this (Selvaggi, [Fig. 1] note speech recording, [0046] note FIG. 1 shows a representation of a person information database 10 and a table 11 that represents records stored within the database… The records are keyed to a Person ID field and include a given name, family name, residence address, citizenship, phonetic pronunciation of a preferred name, and a speech recording of a person speaking the preferred name, among other possible information, [Fig. 8], [0076] note FIG. 8 shows an example of a web browser window 70, which may be shown on a client device being used by the person, for recording a person's spoken name. It shows the person a message 81 instructing them to record their name, [0078] note When the person is satisfied with their recording, they may activate a Done button 86). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the name pronunciations of Correa with the speech recordings of Selvaggi according to known methods (i.e. providing a speech recording of a name pronunciation). Motivation for doing so is that this both improves the effectiveness of announcements and the ease of making the announcements (Selvaggi, [0049]). 
Claim 18: Correa teaches an automated named user audio presentation system, comprising: a user code generation subsystem configured to generate a unique named user code for each of a plurality of named users, wherein the unique named user code is used to individually identify a named user from the plurality of named users (Note, this limitation is interpreted under 35 USC §112(f) as the processor along with the algorithm disclosed in the specification, Correa, [Col. 2 Lines 63-67]-[Col. 3 Lines 1-3] note RFID tag 26 may be configured to hold personal information of an individual such as a person's name and photo. In one embodiment, RFID assembly 20 is used in a name calling ceremony such as a graduation wherein said RFID tag 26 contains a user's name and academic information. Information may be uploaded to said RFID tag 26 by means of a computer 29, [Col. 3 Lines 23-28] note In yet another embodiment, clothing accessories 28 may further include a QR code 30 which can be scanned by a user's electronic device such as phones, tablets, and the like. When said QR code 30 is scanned by said user, the information stored within RFID tag 26 is made available to said user); a named user audio presentation subsystem having an electronic audio speaker and configured to receive the named user code or other information uniquely identifying an individual, to obtain a named user audio based on the named user code, and to audibly output a name of each of the plurality of named users through audio playback of the named user audio using the electronic audio speaker (Note, this limitation is interpreted under 35 USC §112(f) as the processor along with the algorithm disclosed in the specification, Correa, [Col. 
4 Lines 37-40] note the reader is provided a headset 68 wherein said headset 68 provides the reader with the pronunciation of the names displayed on display 64 to aid the reader in reading aloud names for a name calling ceremony); and a named user visual presentation subsystem having an electronic display and configured to receive information identifying a particular one of the plurality of named users, to obtain named user information for the particular named user, and to visually output the named user information for the particular named user using the electronic display (Note, this limitation is interpreted under 35 USC §112(f) as the processor along with the algorithm disclosed in the specification, Correa, [Col. 4 lines 2-6] note The information is displayed on said display 64 of said podium 62 wherein a reader using said podium 62 may read the name of the student presented on the display 64 for the graduation ceremony). Correa does not explicitly teach audio file. However, Selvaggi teaches this (Selvaggi, [Fig. 1] note speech recording, [0046] note FIG. 1 shows a representation of a person information database 10 and a table 11 that represents records stored within the database… The records are keyed to a Person ID field and include a given name, family name, residence address, citizenship, phonetic pronunciation of a preferred name, and a speech recording of a person speaking the preferred name, among other possible information, [Fig. 8], [0076] note FIG. 8 shows an example of a web browser window 70, which may be shown on a client device being used by the person, for recording a person's spoken name. It shows the person a message 81 instructing them to record their name, [0078] note When the person is satisfied with their recording, they may activate a Done button 86). 
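Claim 18, as mapped above, recites three cooperating subsystems: one generating a unique code per named user, one obtaining and playing the named user audio given that code, and one displaying the named user information. A minimal structural sketch of that decomposition follows; the class and method names track the claim language and are invented for illustration, not drawn from any actual implementation in Correa or Selvaggi:

```python
import uuid

class UserCodeGenerationSubsystem:
    """Generates a unique named user code for each of a plurality of named users."""
    def __init__(self):
        self.codes = {}          # code -> named user
    def generate(self, name: str) -> str:
        code = uuid.uuid4().hex  # unique; individually identifies the named user
        self.codes[code] = name
        return code

class NamedUserAudioPresentationSubsystem:
    """Receives the code, obtains the named user audio, and 'plays' it back."""
    def __init__(self, codes: dict, audio_store: dict):
        self.codes = codes
        self.audio_store = audio_store  # named user -> audio (stand-in for files)
    def present(self, code: str) -> str:
        name = self.codes[code]
        audio = self.audio_store.get(name, f"<recording of {name}>")
        return f"playing {audio}"       # stands in for electronic-speaker output

class NamedUserVisualPresentationSubsystem:
    """Receives identifying information and 'displays' the named user information."""
    def __init__(self, codes: dict):
        self.codes = codes
    def present(self, code: str) -> str:
        return f"displaying: {self.codes[code]}"  # stands in for display output
```

Note that in this decomposition the two presentation subsystems share only the code-to-user mapping, which is consistent with claim 20's recitation that the code generation subsystem may be remotely located from the event space.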
It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the name pronunciations of Correa with the speech recordings of Selvaggi according to known methods (i.e. providing a speech recording of a name pronunciation). Motivation for doing so is that this both improves the effectiveness of announcements and the ease of making the announcements (Selvaggi, [0049]). Claim 19: Correa and Selvaggi teach the system of claim 18, further comprising an on-spot interface computer system comprising a named user information entry computer configured for receiving a name from a user and causing registration of the user as one of the plurality of named users (Selvaggi, [Fig. 12] note 121, 123, [0065] note Single systems or separate systems using an agreed database format may enroll people in the database by receiving, from people, a lexical text entry of their name). Claim 20: Correa and Selvaggi teach the system of claim 18, wherein the named user audio presentation subsystem, the named user visual presentation subsystem, and the on-spot interface computer system are co-located at an event space for a social event in which the name of each of the plurality of named users is to be audibly presented (Correa, [Col. 4 lines 2-6] note The information is displayed on said display 64 of said podium 62 wherein a reader using said podium 62 may read the name of the student presented on the display 64 for the graduation ceremony, [Col. 4 Lines 37-40] note the reader is provided a headset 68 wherein said headset 68 provides the reader with the pronunciation of the names displayed on display 64 to aid the reader in reading aloud names for a name calling ceremony), and wherein the user code generation subsystem is remotely-located from the event space (Selvaggi, [0069] note a client/server architecture may be used with multiple computers to enroll the person. 
In some embodiments, a user may interact with a client device, such as a desktop computer, laptop computer, tablet, or smartphone, which communicates over a network with a server computer, [Fig. 12], [0092] note database server 123 responds, through the network, to the terminal 121 by sending a preferred name pronunciation). Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Correa and Selvaggi in further view of Shablygin et al., US 2013/0208893 A1 (hereinafter “Shablygin”). Claim 6: Correa and Selvaggi do not explicitly teach the method of claim 1, wherein the named user audio file is stored with an anonymized filename, and wherein the anonymized filename is generated by encrypting a named user identifier of the named user. However, Shablygin teaches this (Shablygin, [0057] note prior to acceptance of the data for storage, the data is encrypted 33. The data can be encrypted by an algorithm and/or cryptographic key known only to the party using, for example, one or more of the encryption methods described herein, [0376] note service provider agent 3132 updates the file container adding a new entry in the access control list for the second user. The new entry includes the user's public ID and the permissions granted to the user, as well as the encrypted file encryption key). It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the database records of Correa and Selvaggi with the data encryption of Shablygin according to known methods (i.e. encrypting the data records stored in the database). Motivation for doing so is that requiring the user to be authenticated can provide the advantage of allowing the user to feel confident that his/her identity cannot be mimicked and that the data will be safe from both internal and external threats (Shablygin, [0055]). Claims 15-17 are rejected under 35 U.S.C. 

103 as being unpatentable over Correa and Selvaggi in further view of NAKAYAMA et al., US 2024/0275791 A1 (hereinafter “Nakayama”). Claim 15: Correa and Selvaggi teach the method of claim 1, wherein the method comprises a named user registration process, and wherein the named user registration process includes: receiving information pertaining to the named user, wherein the information pertaining to the named user is a named user identifier (Selvaggi, [Fig. 7] note enrollment through a web browser window 70, [Fig. 8], [0076] note FIG. 8 shows an example of a web browser window 70, which may be shown on a client device being used by the person, for recording a person's spoken name, [0215] note receiving a registration request from the person; associating the person with the person ID). Correa and Selvaggi do not explicitly teach generating the named user code for the named user, wherein the named user code is generated based on information pertaining to the named user; and providing the named user code to the named user. However, Nakayama teaches this (Nakayama, [Fig. 7], [0470] note the registration QR code issuer 37 issues (generates) the registration QR codes for the number of persons instructed in the instruction to issue the registration QR code. The number of registration QR codes issued in this way is determined depending on the number of children, which is information belonging to the school 200, and is not a number determined by the household 210 or a number determined by the point management system 11. As described above, the school ID, class ID, and child ID are embedded in the registration QR code, [0245] note when the group as the person 1092 other than the performer is the school, examples of a person who may be the performer 1091 associated with the school may include graduates of the school). 
It would have been obvious to one of ordinary skill in the art at the effective filing date of the application to combine the QR codes of Correa and Selvaggi with the registration QR codes of Nakayama according to known methods (i.e. providing a QR code registration system). Motivation for doing so is that by specifying a group to which printed materials are to be distributed, it may be configured such that the number of printed materials is automatically identified (Nakayama, [0150]). Claim 16: Correa, Selvaggi and Nakayama teach the method of claim 15, wherein the method comprises a named user playback process for a predetermined social event, and wherein the named user playback process includes: receiving the named user code; identifying the named user based on the named user code; retrieving the named user audio file for the named user; retrieving the named user information for the named user; and outputting the named user audio file and the named user information for the named user (Correa, [Col. 4 lines 2-6] note The information is displayed on said display 64 of said podium 62 wherein a reader using said podium 62 may read the name of the student presented on the display 64 for the graduation ceremony, [Col. 4 Lines 25-32] note RFID tag 26 contains said user's information such as their name and picture. In scanning step 120, said user begins walking to a platform of said name calling ceremony. Before said user walks the platform to have their name called, said user's RFID tag 26 is scanned by an RFID reader 22 in the form of a glove 23 or a stationary reader 21. RFID reader 22 then collects the data stored within said RFID tag 26, [Col. 4 Lines 37-40] note the reader is provided a headset 68 wherein said headset 68 provides the reader with the pronunciation of the names displayed on display 64 to aid the reader in reading aloud names for a name calling ceremony). 
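Claim 16 recites the playback process as an ordered sequence: receive the code, identify the named user, retrieve the audio file, retrieve the user information, and output both. A minimal sketch of that sequence follows, assuming simple in-memory mappings; the function and parameter names are invented here and do not come from the application or the cited references:

```python
def playback_process(named_user_code, code_to_user, audio_files, user_info):
    """Walks the recited claim 16 steps in order; returns what would be output."""
    named_user = code_to_user[named_user_code]  # identify the named user from the code
    audio_file = audio_files[named_user]        # retrieve the named user audio file
    info = user_info[named_user]                # retrieve the named user information
    # outputting: audio via the speaker, information via the display (stand-ins)
    return {"play": audio_file, "display": info}

result = playback_process(
    "C-42",
    {"C-42": "Ada Lovelace"},
    {"Ada Lovelace": "ada_lovelace.wav"},
    {"Ada Lovelace": {"degree": "B.S. Mathematics"}},
)
```

In the combination as mapped, the scan of the RFID tag or QR code in Correa supplies the first step's input, while the stored speech recording of Selvaggi supplies the audio file retrieved in the third step.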
Claim 17: Correa, Selvaggi and Nakayama teach the method of claim 15, wherein the named user registration process is performed for each of a plurality of named users including the named user before the named user playback process begins for any of the plurality of named users (Selvaggi, [Fig. 7] note enrollment through a web browser window 70, [Fig. 8], [0076] note FIG. 8 shows an example of a web browser window 70, which may be shown on a client device being used by the person, for recording a person's spoken name, [0215] note receiving a registration request from the person; associating the person with the person ID). Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Giuseppi Giuliani whose telephone number is (571)270-7128. The examiner can normally be reached Monday-Friday. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kavita Stanley can be reached at (571)272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /GIUSEPPI GIULIANI/Primary Examiner, Art Unit 2153

Prosecution Timeline

Nov 29, 2024: Application Filed
Sep 02, 2025: Non-Final Rejection — §103, §112
Dec 22, 2025: Response Filed
Apr 02, 2026: Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602410
MULTIMODAL CONTEXT SELECTION FOR LARGE LANGUAGE MODEL BASED RESOLUTIONS ADDRESSING TECHNICAL ISSUES
2y 5m to grant; granted Apr 14, 2026
Patent 12585649
CONDITIONAL BRANCHING FOR A FEDERATED GRAPH QUERY PLAN
2y 5m to grant; granted Mar 24, 2026
Patent 12561368
METHODS AND SYSTEMS FOR TENSOR NETWORK CONTRACTION BASED ON LOCAL OPTIMIZATION OF CONTRACTION TREE
2y 5m to grant; granted Feb 24, 2026
Patent 12561363
Visual Search Determination for Text-To-Image Replacement
2y 5m to grant; granted Feb 24, 2026
Patent 12536151
ACCURATE AND QUERY-EFFICIENT MODEL AGNOSTIC EXPLANATIONS
2y 5m to grant; granted Jan 27, 2026
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 65% (+7.2%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 279 resolved cases by this examiner. Grant probability derived from career allow rate.
