DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 10/31/25 have been fully considered but they are not persuasive. With regards to the rejection of claims 1-9 under 35 U.S.C. 112, second paragraph, applicant stated: “It is submitted that the reference in the claims to the "evaluation unit" is easily understood by those skilled in the art. Typically such evaluation units are provided by a processor, ASIC or similar device, and may further incorporate a computer employed in the industrial process being monitored.” Examiner respectfully disagrees. Applicant further explained: “This is the equivalent of using a comparator circuit or the equivalent computer processing to perform a comparator function, to provide an image sensor or camera to detect images or to provide an electrical power source to provide power. The inventor using these devices need not know how to construct these devices to use them or even (in the case of a comparator) understand the concept of a Wheatstone bridge.” This explanation is not in the specification and would need to be included in the specification to overcome the rejection. Applicant also has the option of amending the claims to avoid language invoking 35 U.S.C. 112, sixth paragraph. Accordingly, the rejection under 35 U.S.C. 112, second paragraph, stands, as applicant's arguments were not persuasive.
On pages 14-18 the applicant argued against the obviousness combination of the prior art. Examiner recognizes that obviousness can only be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988) and In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992). In this case, the differences between the prior art and the claims at issue have been set forth. The level of ordinary skill in the art is deemed to be that of a person who is presumed to be aware of all pertinent prior art, specifically that relating to control. The rejection is based on what was known prior to the time the applicant created the invention, rests on a factual basis, and is supported by the motivation noted in the above Office action. Under such considerations, the prior art teaches the claims as written.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
As per claims 1-9, the claim limitation “an evaluation unit ….” has been interpreted under 35 U.S.C. 112, sixth paragraph, because it uses a non-structural term “unit” coupled with functional language “determine …” without reciting sufficient structure to achieve the function. However, the written description fails to disclose the corresponding structure, material, or acts for the claimed function. Applicant's specification (paras. [009], [0018]) recites “an evaluation unit for determining information of the code,” but does not describe the details of how it is structured (i.e., the hardware).
Claims 1-9 are rejected under 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention. The claim element “an evaluation unit ….” invokes 35 U.S.C. 112, sixth paragraph, without reciting sufficient structure to achieve the function.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-9 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The claim element “an evaluation unit to determine ….” is a limitation that invokes 35 U.S.C. 112, sixth paragraph, without reciting sufficient structure to achieve the function. However, the written description fails to disclose the corresponding structure (hardware), material, or acts for the claimed function.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-11, and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Simon (US 20050212657 A1) in view of Rongley (US 5758322 A) and further in view of Lee Dong et al. (KR 20210030646 A).
With regards to claim 1, Simon discloses a code reader device (1, 1A) for reading an optical code (FIGS. 1, 5; 3, 140 and associated text; [0043]), with a lighting unit (8) to illuminate a reading area (2, 2A) of the code reader device (1, 1A) ([0049]), a receiver unit (4) for capturing the code from the reading area (2, 2A), an evaluation unit (5) configured to determine information from the code ([0049]: As mentioned earlier, for the sake of illustration the primary biometric sensor 202 is a fingerprint sensor. When the tip of the appropriate finger is applied to the primary biometric sensor 202, the programmable microchip 204 reads the actual biometric data sensed by the primary biometric sensor 202 and compares it with the stored biometric data on the card 200. A match is indicated by the GO indicator 202A, which is a green LED and which, when illuminated, determines that match has been determined. The data lock is then released and the card reader 140 sends the stored data to the data processing unit 120.), and an audio unit (6) for recording a user's (aN, N) voice (aV, V) ([0046]: In a preferred embodiment, indicator signals 202A, 202B are provided on the sensor 202. The auxiliary sensors 208, 210 may include a retinal image scanner, a facial image scanner, a voice print scanner, etc.),
Simon does not explicitly disclose, but Rongley teaches, wherein the evaluation unit (5) is trained to carry out a voice learning process in order to teach and store a voice (aV or V) as an individual voice (aV) of an authorized user (aN), so that the voice (aV) is stored as a personalized voice (aV) of the authorized user (aN) (Col. 5, lines 25-40: The employee may also train the system to recognize new words or retrain the system to better recognize words with which the system has been having difficulty. To accomplish this, the employee says "train words" as shown in box 120 which causes the system to go into a vocabulary acquisition mode in which the employee may add words to her vocabulary or retrain the system for troublesome words. The addition or alteration of the employee's vocabulary is carefully circumscribed according to the employee's security level, i.e., level of access. That is, only words for which that particular employee has authorization will be entered into the employee's vocabulary. The system prompts the employee to repeat three times each word to be entered into the vocabulary. Each is then added to the employee's vocabulary. When the training session is complete, the system returns to screen 110. See also col. 14, line 45 to col. 15, line 5.), and
wherein the evaluation unit (5) is trained to carry out a voice recognition process in which the personalized voice (aV) is recognized as the voice of the authorized user (aN) (Col. 5, lines 15-25: From display screen 110, the Employee Timeclock screen, the employee may perform a number of functions. In response to the entering of a password, the system retrieves the employee's time card file from the file server, clocks the employee in, and displays the time card 112 as shown.) and is further trained to perform defined functions of the code reading device (1, 1A) only if the personalized voice (aV) of the authorized user (aN) has been recognized (Abstract: A method and apparatus for conducting point-of-sale transactions using both speaker dependent and speaker independent voice recognition in which a spoken utterance from a first user is captured with a sound input device. The spoken utterance is compared to a first plurality of stored patterns to find a first match.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Simon's device with the teaching of Rongley in order to conduct point-of-sale transactions using both speaker dependent and speaker independent voice recognition techniques (Rongley, col. 1, lines 5-20).
Simon in view of Rongley does not explicitly disclose, but Lee Dong teaches,
enabling calling a configuration function of the code reader device (1, 1A) with a voice command, notwithstanding installation of the code reader device (1, 1A) (page 6, para. 7: The processor 130 may identify a voice assistant corresponding to the trigger word when a trigger word for activating one voice assistant among a plurality of voice assistants is included in the converted text of the input user's voice. The trigger word corresponds to a kind of promised command or 'Wake Up input' for calling the corresponding voice assistant to receive the service of the voice assistant.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Simon in view of Rongley with the teaching of Lee Dong in order to provide a plurality of voice assistants in a timely manner in consideration of categories and/or users (Lee Dong, Background Art).
With regards to claims 2 and 11, Simon in view of Rongley discloses wherein the evaluation unit is further trained to start the voice teach-in process only if an authorization of the user (aN) has been established by means of a defined speech sequence, a password or a fingerprint; wherein a defined speech sequence, password or fingerprint is captured in order to confirm an authorization of the user (aN) and to initiate the personalization of the voice (Rongley, col. 5, lines 0-10: When beginning operation of the system, an employee of the fast food restaurant encounters the message "Please enter password or insert voice card now" on screen 100 as shown in FIG. 3. Upon the entering of a password by the employee (either manually or vocally), the voice file corresponding to the employee is loaded from the server to the PC's hard disk and, if one exists, to the PC's RAM disk and the RAM disks of any registers on which the employee will be working. Col. 5, lines 15-30: From display screen 110, the Employee Timeclock screen, the employee may perform a number of functions. In response to the entering of a password, the system retrieves the employee's time card file from the file server, clocks the employee in, and displays the time card 112 as shown. … The employee may also train the system to recognize new words or retrain the system to better recognize words with which the system has been having difficulty.). The motivation would be the same as stated for claim 1.
With regards to claim 3, Simon in view of Rongley discloses wherein a microphone (7) or a fingerprint sensor (8) is provided to perform the authorization (Rongley, col. 2, lines 35-45: The user of the system may then conduct sales transactions using a series of vocal commands and responses. The system receives the user's voice by means of a sound input device such as a microphone. Col. 5, lines 0-10: When beginning operation of the system, an employee of the fast food restaurant encounters the message "Please enter password or insert voice card now" on screen 100 as shown in FIG. 3. Upon the entering of a password by the employee (either manually or vocally), …). The motivation would be the same as stated for claim 1.
With regards to claim 4, Simon in view of Rongley discloses wherein the defined speech sequence is stored in the evaluation unit (5) of the code-reading device (1, 1A) and intended to recognize the voice of the authorized user (aN) by the voice recognition process (Rongley, col. 2, lines 30-60: Embodiments of the invention using both speaker dependent and speaker independent techniques are described. One speaker dependent system described herein stores a separate voice file for each user containing a vocabulary specific to that user. When the user logs on to the system, a local controller retrieves the user's voice file from main memory and places it in a local memory at the user's sales location. The user of the system may then conduct sales transactions using a series of vocal commands and responses. The system receives the user's voice by means of a sound input device such as a microphone. The local controller then compares the vocal commands and responses of the user to stored samples of the user's voice from the voice file.). The motivation would be the same as stated for claim 1.
With regards to claims 5 and 13, Simon in view of Rongley discloses wherein the defined function released to the authorized user (aN) comprises a start or stop of a measurement of the code-reading device (1, 1A) or a configuration of the code-reading device (1, 1A); wherein a measurement operation or a configuration of the code reader device (1, 1A) is started or terminated with the personalized voice (aV) (Rongley, col. 5, lines 15-40: From display screen 110, the Employee Timeclock screen, the employee may perform a number of functions. In response to the entering of a password, the system retrieves the employee's time card file from the file server, clocks the employee in, and displays the time card 112 as shown. … The employee may also train the system to recognize new words or retrain the system to better recognize words with which the system has been having difficulty. To accomplish this, the employee says "train words" as shown in box 120 which causes the system to go into a vocabulary acquisition mode in which the employee may add words to her vocabulary or retrain the system for troublesome words. The addition or alteration of the employee's vocabulary is carefully circumscribed according to the employee's security level, i.e., level of access. That is, only words for which that particular employee has authorization will be entered into the employee's vocabulary. The system prompts the employee to repeat three times each word to be entered into the vocabulary. Each is then added to the employee's vocabulary. When the training session is complete, the system returns to screen 110.). The motivation would be the same as stated for claims 1 and 10.
With regards to claim 7, Simon in view of Rongley discloses wherein the voice recognition process is designed to perform a prioritization of the recognized voices (aV) and to assign differently prioritized functions of the code reader device (1, 1A) according to the prioritization of the voices (Rongley, col. 2, lines 35-45: The controller does not compare received sounds to every sample in the voice file, but selects subsets of the voice file, also referred to herein as voice layers, for the comparison based upon the context, e.g., the stage of the transaction and/or the level of access of the user. In other words, the system "listens" for specific words at specific times. This feature facilitates the speed and accuracy of voice recognition. Once a word or phrase is recognized, the system may display or communicate the word or phrase to the user (e.g., for verification), or perform the function requested. Thus, the present invention moves through a plurality of voice layers within a user's vocabulary throughout the user's interaction with the system.). The motivation would be the same as stated for claim 1.
With regards to claim 8, Simon in view of Rongley discloses wherein the taught-in and personalized voice (aV) is storable in a memory of the code-reading device (1, 1A) (Rongley, col. 2, lines 35-45: When the user logs on to the system, a local controller retrieves the user's voice file from main memory and places it in a local memory at the user's sales location. The user of the system may then conduct sales transactions using a series of vocal commands and responses. The system receives the user's voice by means of a sound input device such as a microphone. The local controller then compares the vocal commands and responses of the user to stored samples of the user's voice from the voice file.) and transferable to evaluation units (5) of other code-reading devices (1, 1A) (col. 2, lines 35-45: The local controller then compares the vocal commands and responses of the user to stored samples of the user's voice from the voice file.). The motivation would be the same as stated for claim 1.
With regards to claim 9, Simon in view of Rongley discloses wherein the evaluation unit (5) is designed to indicate acoustically or visually to the authorized user (aN) whether the voice recognition process has been successfully performed or not (Rongley, col. 2, lines 50-60: Once a word or phrase is recognized, the system may display or communicate the word or phrase to the user (e.g., for verification), or perform the function requested.). The motivation would be the same as stated for claim 1.
With regards to claim 10, Simon discloses a computer-implemented method for operating a code reader device (1, 1A) that has an audio unit (6) for capturing acoustic signals from a user's (aN, N) voice (V or aV), comprising the steps:
Capturing a speech sequence of the user (aN, N), teaching-in the voice (V) of an authorized user (aN) as an individual voice (aV), and saving the voice (aV) as a personalized voice (aV) ([0046]: In a preferred embodiment, indicator signals 202A, 202B are provided on the sensor 202. The auxiliary sensors 208, 210 may include a retinal image scanner, a facial image scanner, a voice print scanner, etc.), and
Simon does not explicitly disclose, but Rongley teaches,
Sharing a defined function of the code reader device (1, 1A) with the authorized user (aN) associated with the personalized voice (aV) (Abstract: A method and apparatus for conducting point-of-sale transactions using both speaker dependent and speaker independent voice recognition in which a spoken utterance from a first user is captured with a sound input device. The spoken utterance is compared to a first plurality of stored patterns to find a first match.), and
Executing the defined function when a function call is captured as a voice command with the personalized voice (aV) from the audio unit (5) (Col. 5, lines 15-25: From display screen 110, the Employee Timeclock screen, the employee may perform a number of functions. In response to the entering of a password, the system retrieves the employee's time card file from the file server, clocks the employee in, and displays the time card 112 as shown.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Simon's device with the teaching of Rongley in order to conduct point-of-sale transactions using both speaker dependent and speaker independent voice recognition techniques (Rongley, col. 1, lines 5-20).
Simon in view of Rongley does not explicitly disclose, but Lee Dong teaches,
enabling calling a configuration function of the code reader device (1, 1A) with a voice command, notwithstanding installation of the code reader device (1, 1A) (page 6, para. 7: The processor 130 may identify a voice assistant corresponding to the trigger word when a trigger word for activating one voice assistant among a plurality of voice assistants is included in the converted text of the input user's voice. The trigger word corresponds to a kind of promised command or 'Wake Up input' for calling the corresponding voice assistant to receive the service of the voice assistant.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Simon in view of Rongley with the teaching of Lee Dong in order to provide a plurality of voice assistants in a timely manner in consideration of categories and/or users (Lee Dong, Background Art).
With regards to claim 14, Simon in view of Rongley discloses wherein the personalized voice (aV) of the authorized user (aN) is stored as a record which is transmitted to further code-reading devices (1, 1A), so that the defined function of these additional code-reading devices (1, 1A) is released to the authorized user (aN) (Rongley, col. 5, lines 0-15: When beginning operation of the system, an employee of the fast food restaurant encounters the message "Please enter password or insert voice card now" on screen 100 as shown in FIG. 3. Upon the entering of a password by the employee (either manually or vocally), the voice file corresponding to the employee is loaded from the server to the PC's hard disk and, if one exists, to the PC's RAM disk and the RAM disks of any registers on which the employee will be working. Alternatively, the insertion of a voice card into card reader 30 (FIG. 1) will cause the system to download the employee's voice file from the voice card to the RAM disk, or to locate the employee's already loaded voice file on the RAM disk. Once the employee's voice file is loaded onto the RAM disk, the system checks the time clock to determine whether the employee has already clocked in. If the employee has not clocked in, the system advances to display screen 110 as shown in FIG. 4. Note: the personalized voice is recorded onto a voice card, which is transferable.). The motivation would be the same as stated for claim 10.
With regards to claim 15, Simon in view of Rongley discloses wherein recognized voices (V, aV) are differentiated in order to prioritize the voices (V, aV) differently and to grant the prioritized voices (aV) access rights to the corresponding defined function of the code reader device (1, 1A) (Rongley, claim 15: The method of claim 13 further comprising directing the system to create a new user; assigning at least one level of access rights to the new user. Col. 1, lines 45-60: With such a system, the different ways in which different people pronounce the same words would be, in a sense, "filtered out" as a potential recognition problem. With a speaker dependent system, the theoretical size of the vocabulary may be very large. However, certain drawbacks exist with current speaker dependent technology. For example, with a speaker dependent system, the creation of voice files for each user is time consuming and requires considerable memory resources, thereby limiting both the number of users and the size of the recognizable vocabulary. Such a system would not be practicable, for example, in a fast food drive-through application.). The motivation would be the same as stated for claim 10.
Claims 6 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Simon (US 20050212657 A1) in view of Rongley (US 5758322 A), further in view of Lee Dong et al. (KR 20210030646 A), and further in view of Lee et al. (US 20230282208 A1).
With regards to claims 6 and 12, Simon in view of Rongley and Lee Dong do not disclose, but Lee discloses, wherein the voice (V, aV) is trained by machine learning of an artificial intelligence, so that different voices (V, aV) are distinguished from each other on the basis of learned voice patterns ([0064]: In an embodiment, the electronic apparatus 100 may identify the similarity of the uttered voice by using a trained AI neural network model and the general-purpose text database. That is, the electronic apparatus 100 may identify a word having a high similarity from the uttered voice based on the trained AI neural network, and identify a word having a high match probability of the identified word among the words included in the general-purpose text database as a word of the uttered voice. The electronic apparatus 100 may identify the candidate text with a high similarity through the process described above.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device/method of Simon in view of Rongley and Lee Dong with the teaching of Lee in order to perform an operation based on recognizing a user's command utterance without a call word (Lee, Abstract).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED WALIULLAH whose telephone number is (571) 270-7987. The examiner can normally be reached 8:30 AM to 4:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Yin-Chen Shaw, can be reached at 1-571-272-8878. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMED WALIULLAH/Primary Examiner, Art Unit 2498