Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This is in response to Applicant's Request for Reconsideration filed 2/26/26, which has been entered. Claims 1-2, 4-5, 7, 9, and 19 have been amended. Claims 3, 6, 8, and 10-18 have been cancelled. Claims 20-26 have been added. Claims 1-2, 4-5, 7, 9, and 19-26 are pending in this application, with claims 1 and 19 being independent.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 24 and 26 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Referring to claim 24, claim 24 recites the limitation "the sound level" in line 3. There is insufficient antecedent basis for this limitation in the claim. For purposes of examination, the Examiner interprets the limitation as "a sound level."
Referring to claim 26, claim 26 recites the limitations "the orientation icon," "the sound icon," "the text information," and "the display interface." There is insufficient antecedent basis for these limitations in the claim. For purposes of examination, the Examiner interprets them as "an orientation icon," "a sound icon," "text information," and "a display interface."
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-5, 19, 21, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Ellis et al. (US Publication No. 20130070928) in view of Sakai (US Publication No. 20120162259).
Referring to claim 1, Ellis et al. teaches a sound signal processing method (para 0007: “a method for recognizing audio events”) for a head-mounted display apparatus (para 0050: “a display on a visor of glasses or a helmet visor worn by the driver”) with a plurality of microphone arrays (para 0050: “a microphone can be provided on one or more places on the exterior of a vehicle to capture audio of the environment surrounding the vehicle”), the sound signal processing method comprising:
acquiring a sound signal by the plurality of microphone arrays, wherein the plurality of microphone arrays are arranged at a plurality of positions (para 0050: “a microphone can be provided on one or more places on the exterior of a vehicle to capture audio of the environment surrounding the vehicle”);
determining an ambient sound signal from the sound signal acquired by the plurality of microphone arrays (para 0050: “a microphone can be provided on one or more places on the exterior of a vehicle to capture audio of the environment surrounding the vehicle”);
determining a sound category corresponding to the ambient sound signal based on a pre-created sound database, wherein the pre-created database stores a plurality of categories of sound signal samples (para 0050: “the vehicle can execute process 100 to recognize non-speech audio events outside the vehicle, such as, emergency vehicle sirens, vehicle horn honking, motorcycle engines, etc”; para 0037: “an HMM trained using a training dataset with audio features extracted from the training dataset using mel-frequency cepstral coefficients (MFCCs) can be used to determine whether audio features extracted at 120 belong to a class of audio features contained in the training dataset.”; para 0030: “sounds captured and labeled using a mobile device can be compiled into a database to be used in training a classification model.”); and
displaying corresponding prompt information in the head-mounted display apparatus based on the sound category (para 0050: “an alert can be provided to the driver of the vehicle through…a visual display…a direction where an event originated can be determined based on the relative amplitude of the event at microphones placed at different positions on a vehicle, such as on the front and rear of the vehicle, and the direction where the event originated can be provided with the corresponding alert”; para 0045: “the application can generate an alert based on the identified non-speech audio events. For example, if the classification models identify an audio event as matching a door knock class, an alert can be generated that indicates that a door knock has been identified. In some embodiments, the form of the alert can be based on the class that the event matches most closely.”).
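For illustration of the classification approach relied upon above, the following is a minimal sketch, assuming Python with the librosa and hmmlearn libraries, of how per-category hidden Markov models trained on MFCC features extracted from a pre-created database of labeled sound samples (Ellis, paras 0030 and 0037) could determine a sound category. All names, parameters, and library choices are illustrative assumptions, not Ellis's actual implementation.

```python
# Illustrative sketch only: MFCC features plus per-category HMMs, in the
# spirit of Ellis paras 0030 and 0037. Library choices and all names are
# assumptions for illustration.
import numpy as np
import librosa
from hmmlearn.hmm import GaussianHMM

def mfcc_features(signal: np.ndarray, sr: int) -> np.ndarray:
    # Mel-frequency cepstral coefficients; one row per analysis frame.
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T

def train_models(database: dict, sr: int) -> dict:
    # "database" plays the role of the pre-created sound database: a mapping
    # from category label (e.g., "siren", "horn") to example recordings.
    models = {}
    for label, samples in database.items():
        feats = np.vstack([mfcc_features(s, sr) for s in samples])
        model = GaussianHMM(n_components=4, n_iter=50)
        model.fit(feats)
        models[label] = model
    return models

def classify(signal: np.ndarray, sr: int, models: dict) -> str:
    # Score the ambient signal against every per-category model and return
    # the label whose model assigns the highest log-likelihood.
    feats = mfcc_features(signal, sr)
    return max(models, key=lambda label: models[label].score(feats))
```

Under this sketch, the claimed "pre-created sound database" corresponds to the labeled recordings used to fit the per-category models, and determining the sound category reduces to selecting the best-scoring model.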
However, Ellis et al. does not teach determining a sound source position per se, but Sakai teaches
a plurality of microphone arrays are arranged in the head-mounted display apparatus (Fig. 1: plurality of microphone arrays 101 are arranged on HMD 10),
acquiring a sound signal by the plurality of microphone arrays of the head-mounted display apparatus (para 0051: “collected sound information (sound signals) S1 to S4 of the four microphones 101”), wherein the plurality of microphone arrays are arranged at a plurality of positions of the head-mounted display apparatus (Fig. 1: plurality of microphone arrays 101 are arranged at a plurality of positions on HMD 10);
computing a position of an object that sends out the sound signal based on the plurality of positions corresponding to the plurality of microphone arrays of the head-mounted display apparatus (para 0051: “The sound source position detection unit 113 detects the positional information of a sound source based on the collected sound information (sound signals) S1 to S4 of the four microphones 101.”; para 0054: “The sound source position detection unit 113 estimates the position of the sound source, that is, the position of the sound source over a two-dimensional plane that includes the display surface by combining the arrival angles that are respectively calculated for each pair of microphones.”);
displaying corresponding prompt information in the head-mounted display apparatus based on the position of the object (para 0069: “Overlaid on such an image the sound information of the sound source at a position that corresponds to the sound source within the visual image is displayed.”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to determine and display positioning information about the sound source, as taught in Sakai, in the method of Ellis et al., because, as Sakai explains, “it is possible for the user to intuitively determine the position of the sound source and the information of the sounds that are output from the sound source.” Further, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to place the microphones on the head-mounted display, as taught in Sakai, in the method of Ellis et al., because doing so allows a user to be informed about ambient sound in settings other than a vehicle.
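For illustration of Sakai's pairwise arrival-angle estimation (paras 0051 and 0054), the following is a minimal sketch in Python, assuming free-field propagation and a simple line-intersection step; the geometry, names, and values are assumptions rather than Sakai's actual implementation.

```python
# Hedged sketch: estimate an arrival angle per microphone pair from the
# time difference of arrival, then intersect two bearings to estimate the
# 2-D source position, in the spirit of Sakai paras 0051 and 0054.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, assumed room-temperature value

def arrival_angle(sig_a, sig_b, mic_spacing, sr):
    # Time difference of arrival via cross-correlation, converted to a
    # bearing relative to the pair's broadside direction.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = (np.argmax(corr) - (len(sig_b) - 1)) / sr
    # Clamp to the physically valid range before taking arcsin.
    ratio = np.clip(SPEED_OF_SOUND * lag / mic_spacing, -1.0, 1.0)
    return np.arcsin(ratio)

def intersect_bearings(p1, theta1, p2, theta2):
    # Source position as the intersection of two bearing lines anchored at
    # the midpoints p1 and p2 of two microphone pairs (singular for
    # parallel bearings; a real system would handle that case).
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    t, _ = np.linalg.solve(np.column_stack([d1, -d2]),
                           np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t * d1
```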
Referring to claim 4, Ellis et al. teaches comparing the ambient sound signal with the plurality of categories of sound signal samples in the pre-created database; acquiring a target sound signal sample corresponding to the ambient sound signal; and determining the sound category based on a category of the target sound signal sample (para 0037).
Referring to claim 5, Ellis et al. teaches determining a sound level based on the ambient sound signal and a volume threshold; and displaying the sound level on the head-mounted display apparatus (para 0027: “the program can measure the amplitude of the tracked audio event as the user moves around (which can be detected, for example, using accelerometers, the output of a camera, the output of a position detector, etc.) and inform the user of whether the audio event is getting louder or softer”; para 0062: “the amplitude (e.g., the energy of the audio received at 110) of the audio being stored in the buffer can be calculated, and it can be determined if the amplitude of the audio is over a threshold (e.g., 40 dB, 65 dB, etc.)”).
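For illustration, a minimal sketch in Python of the amplitude-versus-threshold check Ellis describes in para 0062; the reference level and all names are assumptions.

```python
# Hedged sketch: express the buffered audio's RMS energy in decibels and
# compare it against a volume threshold (e.g., the 40 dB or 65 dB figures
# Ellis mentions in para 0062). The reference level is an assumption.
import numpy as np

def sound_level_db(buffer: np.ndarray, reference: float = 1.0) -> float:
    # Root-mean-square amplitude in dB relative to the reference.
    rms = np.sqrt(np.mean(np.square(buffer)))
    return 20.0 * np.log10(max(rms, 1e-12) / reference)

def exceeds_threshold(buffer: np.ndarray, threshold_db: float = 65.0) -> bool:
    # True when the ambient sound is louder than the volume threshold.
    return sound_level_db(buffer) > threshold_db
```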
Referring to claim 19, Ellis et al. teaches an electronic equipment, comprising: a processor; and a memory, configured for storing instructions executable by the processor, wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement operations (para 0007: “a method for recognizing audio events is provided, the method comprising: receiving, using a hardware processor in a mobile device”; para 0079: “mobile device 510 can include a processor 512, a display 514, an input device 516, and memory 518, which can be interconnected. In some embodiments, memory 518 can include a storage device (such as a computer-readable medium) for storing a computer program for controlling processor 512.”) comprising:
acquiring a sound signal by a plurality of microphone arrays, wherein the plurality of microphone arrays are arranged at a plurality of positions (para 0050: “a microphone can be provided on one or more places on the exterior of a vehicle to capture audio of the environment surrounding the vehicle”);
determining an ambient sound signal from the sound signal acquired by the plurality of microphone arrays (para 0050: “a microphone can be provided on one or more places on the exterior of a vehicle to capture audio of the environment surrounding the vehicle”);
determining a sound category corresponding to the ambient sound signal based on a pre-created sound database, wherein the pre-created database stores a plurality of categories of sound signal samples (para 0050: “the vehicle can execute process 100 to recognize non-speech audio events outside the vehicle, such as, emergency vehicle sirens, vehicle horn honking, motorcycle engines, etc”; para 0037: “an HMM trained using a training dataset with audio features extracted from the training dataset using mel-frequency cepstral coefficients (MFCCs) can be used to determine whether audio features extracted at 120 belong to a class of audio features contained in the training dataset.”; para 0030: “sounds captured and labeled using a mobile device can be compiled into a database to be used in training a classification model.”); and
displaying corresponding prompt information in the head-mounted display apparatus based on the sound category of the ambient sound signal (para 0050: “an alert can be provided to the driver of the vehicle through…a visual display…a direction where an event originated can be determined based on the relative amplitude of the event at microphones placed at different positions on a vehicle, such as on the front and rear of the vehicle, and the direction where the event originated can be provided with the corresponding alert”; para 0045: “the application can generate an alert based on the identified non-speech audio events. For example, if the classification models identify an audio event as matching a door knock class, an alert can be generated that indicates that a door knock has been identified. In some embodiments, the form of the alert can be based on the class that the event matches most closely.”).
However, Ellis et al. does not teach determining a sound source position per se, but Sakai teaches
acquiring a sound signal by a plurality of microphone arrays arranged in a head-mounted display apparatus (para 0051: “collected sound information (sound signals) S1 to S4 of the four microphones 101”), wherein the plurality of microphone arrays are arranged at a plurality of positions of the head-mounted display apparatus (Fig. 1: plurality of microphone arrays 101 are arranged at a plurality of positions on HMD 10);
computing a position of an object that sends out the sound signal based on the plurality of positions corresponding to the plurality of microphone arrays of the head-mounted display apparatus (para 0051: “The sound source position detection unit 113 detects the positional information of a sound source based on the collected sound information (sound signals) S1 to S4 of the four microphones 101.”; para 0054: “The sound source position detection unit 113 estimates the position of the sound source, that is, the position of the sound source over a two-dimensional plane that includes the display surface by combining the arrival angles that are respectively calculated for each pair of microphones.”);
displaying corresponding prompt information in the head-mounted display apparatus based on the position of the object (para 0069: “Overlaid on such an image the sound information of the sound source at a position that corresponds to the sound source within the visual image is displayed.”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to determine and display positioning information about the sound source, as taught in Sakai, in the equipment of Ellis et al., because, as Sakai explains, “it is possible for the user to intuitively determine the position of the sound source and the information of the sounds that are output from the sound source.” Further, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to place the microphones on the head-mounted display, as taught in Sakai, in the equipment of Ellis et al., because doing so allows a user to be informed about ambient sound in settings other than a vehicle.
Referring to claim 21, Ellis et al. teaches comparing the ambient sound signal with the plurality of categories of sound signal samples in the pre-created database; acquiring a target sound signal sample corresponding to the ambient sound signal; and determining the sound category based on a category of the target sound signal sample (para 0037).
Referring to claim 23, Ellis et al. teaches determining a sound level based on the ambient sound signal and a volume threshold; and displaying the sound level on the head-mounted display apparatus (para 0027: “the program can measure the amplitude of the tracked audio event as the user moves around (which can be detected, for example, using accelerometers, the output of a camera, the output of a position detector, etc.) and inform the user of whether the audio event is getting louder or softer”; para 0062: “the amplitude (e.g., the energy of the audio received at 110) of the audio being stored in the buffer can be calculated, and it can be determined if the amplitude of the audio is over a threshold (e.g., 40 dB, 65 dB, etc.)”).
Claims 2 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ellis et al. and Sakai, as applied to claims 1 and 19 above, and further in view of Patsiokas et al. (US Publication No. 20170032402).
Referring to claim 2, Ellis et al. teaches acquiring the sound signal for a preset time duration (para 0061: “the audio signal received at 110 can be stored in a buffer that stores a predetermined amount of an audio signal (e.g., ten seconds, a minute, etc.)”). However, Ellis et al. and Sakai do not teach obtaining a timestamp and confidence, but Patsiokas et al. teaches determining the ambient sound signal that has a target timestamp with a confidence greater than a confidence threshold (para 0295: “determine if it matches a gunshot or explosion with sufficient confidence to report the event. When an acoustic signature of interest is identified, the processor can produce (6) a message with the following information: (a) a time stamp of when the sound was detected by the vehicle”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to report an audio event only when confidence is sufficiently high, as taught in Patsiokas et al., in the method of Ellis et al. and Sakai, because doing so ensures that a sound is substantial enough to warrant notification.
Referring to claim 20, Ellis et al. teaches acquiring the sound signal for a preset time duration (para 0061: “the audio signal received at 110 can be stored in a buffer that stores a predetermined amount of an audio signal (e.g., ten seconds, a minute, etc.)”). However, Ellis et al. and Sakai do not teach obtaining a timestamp and confidence, but Patsiokas et al. teaches determining the ambient sound signal that has a target timestamp with a confidence greater than a confidence threshold (para 0295: “determine if it matches a gunshot or explosion with sufficient confidence to report the event. When an acoustic signature of interest is identified, the processor can produce (6) a message with the following information: (a) a time stamp of when the sound was detected by the vehicle”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to report an audio event only when confidence is sufficiently high, as taught in Patsiokas et al., in the equipment of Ellis et al. and Sakai, because doing so ensures that a sound is substantial enough to warrant notification.
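For illustration, a minimal sketch in Python of the confidence-gated, time-stamped event reporting Patsiokas et al. describe in para 0295; the data structure and the threshold value are assumptions.

```python
# Hedged sketch: only emit an event message, stamped with the detection
# time, when classification confidence clears a threshold, in the spirit of
# Patsiokas et al. para 0295. The 0.8 threshold is an assumed placeholder.
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class SoundEvent:
    category: str
    confidence: float
    timestamp: float  # when the sound was detected

def report_if_confident(category: str, confidence: float,
                        threshold: float = 0.8) -> Optional[SoundEvent]:
    # Suppress low-confidence detections so only substantial events are
    # surfaced to the user.
    if confidence > threshold:
        return SoundEvent(category, confidence, time.time())
    return None
```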
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Ellis et al. and Sakai, as applied to claims 1 and 5 above, and further in view of Ushimaru (US Publication No. 20030231871).
Referring to claim 7, Ellis et al. and Sakai do not teach multiple sound level thresholds, but Ushimaru teaches the volume threshold comprises a first volume threshold, a second volume threshold and a third volume threshold, wherein the first volume threshold is greater than the second volume threshold, and the second volume threshold is greater than the third volume threshold, and wherein the sound signal processing method further comprises: determining the sound level as a first sound level in a case that a volume of the ambient sound signal is greater than the first volume threshold; determining the sound level as a second sound level in a case that the volume of the ambient sound signal is smaller than the first volume threshold and greater than the second volume threshold; and determining the sound level as a third sound level in a case that the volume of the ambient sound signal is smaller than the second volume threshold and greater than the third volume threshold (para 0059: “The detection of the sound output level uses a first threshold and a second threshold that is lower than the first threshold, with which the three time periods with "volume level=high", "volume level=medium" and "volume level=low" are detected. A time period of audio data, whose amplitude, i.e. sound volume level, is lower than the second threshold, is detected as the time period with "volume level=low." A time period of audio data, whose amplitude, i.e. sound volume level, is between the first and second thresholds, is detected as the time period with "volume level=middle." A time period of audio data, whose amplitude, i.e. sound volume level, is higher than the first threshold, is detected as the time period with "volume level=high." – Examiner notes that the third and lowest threshold would merely be whatever threshold constitutes the detection of sound). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to determine multiple sound levels, as taught in Ushimaru, in the method of Ellis et al. and Sakai because it gives the driver a better indication of the real qualities of the sound and it can also help to indicate how close the sound source may be.
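For illustration, a minimal sketch in Python of the three-threshold level mapping quoted from Ushimaru para 0059; the concrete dB values are placeholders, since the reference assigns no numbers to the thresholds.

```python
# Hedged sketch: map a measured volume to one of three sound levels using
# three descending thresholds, per Ushimaru para 0059. The dB values are
# assumed placeholders.
def sound_level(volume_db: float,
                first: float = 80.0,
                second: float = 60.0,
                third: float = 40.0) -> str:
    assert first > second > third
    if volume_db > first:
        return "high"    # first sound level
    if volume_db > second:
        return "medium"  # second sound level
    if volume_db > third:
        return "low"     # third sound level
    return "none"        # below the detection floor
```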
Claims 9 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Ellis et al. and Sakai, as applied to claims 1 and 19 above, and further in view of Kim et al. (US Publication No. 20160150338).
Referring to claim 9, Ellis et al. teaches displaying on a display interface of the head-mounted display apparatus (para 0050), and Sakai teaches determining an orientation icon based on the position of the object and displaying the orientation icon on a display interface of the display apparatus (para 0010). However, Ellis et al. and Sakai do not teach all the display specifics, but Kim et al. teaches determining a sound icon and text information based on the sound category, and displaying the sound icon and text information on a display interface of the display apparatus (para 0311: “display an image representing at least one of a type of sound”; para 0194: “a text communicating that the baby 1103 is crying”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to display various elements about the sound events, as taught in Kim et al., in the method of Ellis et al. and Sakai, because it provides the user with more information with which to understand the sound event and act accordingly.
Referring to claim 22, Ellis et al. teaches displaying on a display interface of the head-mounted display apparatus (para 0050), and Sakai teaches determining an orientation icon based on the position of the object and displaying the orientation icon on a display interface of the display apparatus (para 0010). However, Ellis et al. and Sakai do not teach all the display specifics, but Kim et al. teaches determining a sound icon and text information based on the sound category, and displaying the sound icon and text information on a display interface of the display apparatus (para 0311: “display an image representing at least one of a type of sound”; para 0194: “a text communicating that the baby 1103 is crying”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to display various elements about the sound events, as taught in Kim et al., in the equipment of Ellis et al. and Sakai, because it provides the user with more information with which to understand the sound event and act accordingly.
Claims 24 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Ellis et al. and Sakai, as applied to claims 1, 5, 19, and 22 above, and further in view of Kim et al. (US Publication No. 20160150338) and Thompson (US Publication No. 20020101350).
Referring to claim 24, Ellis et al. teaches displaying information on a display interface of the head-mounted display apparatus (para 0050) while controlling the head-mounted display apparatus to vibrate (para 0051); Sakai teaches displaying the orientation icon on the display interface of the head-mounted display apparatus (para 0010); and Kim et al. teaches displaying the sound icon and text information on the display interface of the display apparatus (para 0311: “display an image representing at least one of a type of sound”; para 0194: “a text communicating that the baby 1103 is crying”), determining a display color based on the sound level, and displaying icons on a display interface of the display apparatus according to the display color (para 0318: “express a volume of the sound that is currently generating in at least one of a color”).
However, Ellis et al., Sakai, and Kim et al. do not teach a vibration level based on the sound level, but Thompson teaches determining a vibration amplitude based on the sound level while controlling the apparatus to vibrate according to the vibration amplitude (para 0037). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to provide a vibration level that corresponds to the sound level, as taught in Thompson, in the equipment of Ellis et al., Sakai, and Kim et al. because it provides the user with a better idea of how severe the ambient sound is so that the user can act accordingly.
Referring to claim 26, Ellis et al. teaches displaying information on a display interface of the head-mounted display apparatus (para 0050) while controlling the head-mounted display apparatus to vibrate (para 0051), and Sakai teaches displaying the orientation icon on the display interface of the head-mounted display apparatus (para 0010). However, Ellis et al. and Sakai do not teach all the display specifics, but Kim et al. teaches displaying the sound icon and text information on the display interface of the display apparatus (para 0311: “display an image representing at least one of a type of sound”; para 0194: “a text communicating that the baby 1103 is crying”), determining a display color based on the sound level, and displaying icons on a display interface of the display apparatus according to the display color (para 0318: “express a volume of the sound that is currently generating in at least one of a color”). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to display various elements about the sound events, as taught in Kim et al., in the method of Ellis et al. and Sakai, because it provides the user with more information with which to understand the sound event and act accordingly.
However, Ellis et al., Sakai, and Kim et al. do not teach a vibration level based on the sound level, but Thompson teaches determining a vibration amplitude based on the sound level while controlling the apparatus to vibrate according to the vibration amplitude (para 0037). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to provide a vibration level that corresponds to the sound level, as taught in Thompson, in the method of Ellis et al., Sakai, and Kim et al. because it provides the user with a better idea of how severe the ambient sound is so that the user can act accordingly.
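For illustration, a minimal sketch in Python combining the color mapping of Kim et al. (para 0318) with the vibration-amplitude mapping of Thompson (para 0037), both driven by the determined sound level; the specific colors and amplitude values are assumptions.

```python
# Hedged sketch: look up a display color (Kim et al., para 0318) and a
# vibration amplitude (Thompson, para 0037) from the sound level. All
# concrete values are assumed placeholders.
LEVEL_STYLES = {
    "high":   {"color": "red",    "vibration_amplitude": 1.0},
    "medium": {"color": "yellow", "vibration_amplitude": 0.6},
    "low":    {"color": "green",  "vibration_amplitude": 0.3},
}

def prompt_style(level: str) -> dict:
    # Default to no visual or haptic feedback when no sound is detected.
    return LEVEL_STYLES.get(level, {"color": "gray", "vibration_amplitude": 0.0})
```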
Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Ellis et al., Sakai, and Kim et al., as applied to claims 19 and 22 above, and further in view of Ushimaru.
Referring to claim 25, Ellis et al., Sakai, and Kim et al. do not teach multiple sound level thresholds, but Ushimaru teaches the volume threshold comprises a first volume threshold, a second volume threshold and a third volume threshold, wherein the first volume threshold is greater than the second volume threshold, and the second volume threshold is greater than the third volume threshold, and wherein the operations further comprise: determining the sound level as a first sound level in a case that a volume of the ambient sound signal is greater than the first volume threshold; determining the sound level as a second sound level in a case that the volume of the ambient sound signal is smaller than the first volume threshold and greater than the second volume threshold; and determining the sound level as a third sound level in a case that the volume of the ambient sound signal is smaller than the second volume threshold and greater than the third volume threshold (para 0059: “The detection of the sound output level uses a first threshold and a second threshold that is lower than the first threshold, with which the three time periods with "volume level=high", "volume level=medium" and "volume level=low" are detected. A time period of audio data, whose amplitude, i.e. sound volume level, is lower than the second threshold, is detected as the time period with "volume level=low." A time period of audio data, whose amplitude, i.e. sound volume level, is between the first and second thresholds, is detected as the time period with "volume level=middle." A time period of audio data, whose amplitude, i.e. sound volume level, is higher than the first threshold, is detected as the time period with "volume level=high." – Examiner notes that the third and lowest threshold would merely be whatever threshold constitutes the detection of sound). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to determine multiple sound levels, as taught in Ushimaru, in the equipment of Ellis et al., Sakai, and Kim et al. because it gives the driver a better indication of the real qualities of the sound and it can also help to indicate how close the sound source may be.
Response to Arguments
Most of Applicant’s arguments with respect to claim 1 have been considered but are moot because the new ground of rejection does not rely on the combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant's arguments filed 2/26/26 have been fully considered but they are not persuasive.
Applicant states in para 3 of page 12 of the remarks:
“Furthermore, Ellis discloses using a hidden Markov model (HMM) to compare audio features extracted from audio events to classification models, the HMM returning a probability that a particular audio feature corresponds to a class. Ellis discloses converting extracted audio features from a hertz scale to a mel scale, obtaining mel-frequency cepstral coefficients from the converted audio features, and using the obtained mel-frequency cepstral coefficients in the HMM model for classifying the non-speech audio events. Ellis classifies non-speech audio events by converting audio features from a hertz scale to a mel scale, obtaining mel-frequency cepstral coefficients in the mel scale, and using the mel-frequency cepstral coefficients in the HMM model for classifying. Ellis does not disclose the claimed pre-created sound database that stores a plurality of categories of sound signal samples. Ellis does not disclose determining a sound category based on the pre-created sound database that stores a plurality of categories of sound signal samples as claimed.”
Examiner respectfully disagrees. Para 0030 of Ellis states, “sounds captured and labeled using a mobile device can be compiled into a database to be used in training a classification model.” Para 0037 of Ellis also states, “the application can compare the audio features extracted at 120 with at least one classification model.” These statements indicate there is a pre-created database of sound samples that is used in classifying sound type; therefore, Ellis teaches the noted limitations.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Examiner respectfully requests that, in response to this Office Action, support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting the application.
When responding to this Office Action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHERINE A FALEY, whose telephone number is (571) 272-3453. The examiner can normally be reached Monday through Wednesday, 9 a.m. to 5 p.m.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ahmad Matar, can be reached at (571) 272-7488. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Any response to this action should be mailed to:
Commissioner of Patents and Trademarks
P.O. Box 1450
Alexandria, VA 22313-1450
Or faxed to:
(571) 273-8300 for formal communications intended for entry. For informal or draft communications, please label the submission “PROPOSED” or “DRAFT”.
Hand-delivered responses should be brought to:
Customer Service Window
Randolph Building
401 Dulany Street
Alexandria, VA 22314
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATHERINE A FALEY/Primary Examiner, Art Unit 2693