Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application, 2023-099792, filed in the Japan Patent Office on June 19, 2023. The priority document has been received.
Information Disclosure Statement
The information disclosure statement filed on June 19, 2023 is in compliance with 37 CFR 1.97 and has been considered by the Examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Regarding Claim 1,
Step 1: the claim recites an apparatus, which falls within one of the statutory categories of invention.
Step 2A Prong One: the claim recites the limitations:
a) A parameter definition apparatus for defining a parameter for audio processing of audio data by which sound effects are imparted to the audio data, the parameter definition apparatus comprising:
b) define the parameter for the audio processing of the audio data based on the acquired at least one piece of data;
The limitation of defining a parameter is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting:
1) at least one memory storing a program;
2) at least one processor that executes the program to:
Nothing in the claim precludes the steps from practically being performed in the human mind alone using observation, evaluation, judgment, and opinion, or with the aid of pen and paper. For example, but for the “one memory” and “one processor” language, “definition”, “defining”, and “define” in the context of this claim encompass a human manually evaluating and/or judging the audio data, e.g., the volume of the audio data, which can be done using pen and paper as a physical aid. See MPEP § 2106.04(a)(2)(III).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the human mind alone or with the aid of pen and paper but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. Note that even if most humans would use a physical aid (e.g., pen and paper, a slide rule, or a calculator) to help them complete the recited “definition”, the use of such physical aid does not negate the mental nature of this limitation. Nor does the recitation of a memory and a processor in the claim negate the mental nature of this limitation because the claim merely uses the memory and the processor as a tool to perform the otherwise mental process.
Step 2A Prong Two: This judicial exception is not integrated into a practical application. In particular, the claim recites additional elements:
1) at least one memory storing a program;
2) at least one processor that executes the program to:
The additional elements 1) and 2) are recited at a high level of generality such that they amount to no more than mere instructions to apply the judicial exception using generic computer components. The memory and processor are used as tools to perform the acquiring, defining, and transmitting steps of the claim. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. See MPEP § 2106.05(f).
Also, the claim recites additional elements:
3) acquire from a first data processing device, which is different from a user device configured to use the audio data, at least one piece of data from among a plurality of pieces of data, including:
4) transmit the defined parameter for the audio processing to the second data processing device.
The additional elements 3) and 4) are mere data gathering/transmitting recited at a high level of generality, and thus are insignificant extra-solution activity. See MPEP § 2106.05(g). Furthermore, all uses of the recited judicial exception require such data gathering/transmitting, and, as such, these additional elements do not impose any meaningful limits on the claim. The additional elements amount to necessary data gathering/transmitting. See MPEP § 2106.05.
Therefore, the limitations remain insignificant extra-solution activity even upon reconsideration and do not amount to significantly more, as shown in the court cases.
See OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers to potential customers and gathering statistics generated based on the testing about how potential customers responded to the offers; the statistics are then used to calculate an optimized price);
CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (Obtaining information about transactions using the Internet to verify credit card transactions).
Accordingly, even when viewed in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements when considered both individually and as a combination do not amount to significantly more than the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the claim recites the additional elements:
1) at least one memory storing a program;
2) at least one processor that executes the program to:
The additional elements 1) and 2) amount to no more than mere instructions to apply the judicial exception using generic computer components. Mere instructions to apply a judicial exception using generic computer components cannot provide an inventive concept.
Also, the claim recites the additional elements:
3) acquire from a first data processing device, which is different from a user device configured to use the audio data, at least one piece of data from among a plurality of pieces of data, including:
4) transmit the defined parameter for the audio processing to the second data processing device.
The additional elements 3) and 4) simply append well-understood, routine, and conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is not indicative of an inventive concept. MPEP § 2106.05(d)(II) expressly states that the courts have recognized the computer functions of receiving or transmitting data over a network, e.g., using the Internet to gather data, as well-understood, routine, and conventional computer functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activities. Thus, a person of ordinary skill in the art would readily comprehend that it is well-understood, routine, and conventional in the computing art to receive data from, and transmit data to, another data processing device over a network.
Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the additional elements as a combination adds nothing that is not already present when looking at the additional elements taken individually. Even when considered in combination, the additional elements represent mere instructions to apply a judicial exception using generic computer components, insignificant extra-solution activities, and only the idea of a solution or outcome, and therefore do not provide an inventive concept. The claim is not patent eligible.
Claims 2-8 are dependent on claim 1 and are rejected under 35 U.S.C. 101 as directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more for at least the reasons stated above.
Claim 2 recites additional limitations:
a) the user device is disposed in a vehicle, and
b) the first data processing device is a vehicle management server configured to manage the vehicle via a network.
Claim 3 recites an additional limitation:
a) wherein the environmental data includes at least one of a vehicle operation state of the vehicle or a running state of the vehicle.
Claim 4 recites additional limitations:
a) the first data processing device is a distribution server configured to distribute the audio data via a network, and
b) the user device is configured to use the sound data distributed from the distribution server via the network.
Claim 5 recites an additional limitation:
a) wherein the audio property data includes at least one of a genre of the audio data or a file format of the audio data.
Claim 6 recites additional limitations:
a) acquire the audio data;
b) process the acquired audio data based on the defined parameter for the audio processing; and
c) transmit the processed audio data to the user device.
Claim 7 recites an additional limitation:
a) wherein the second data processing device is also different from the user device.
Claim 8 recites an additional limitation:
a) wherein the user device includes the second data processing device.
These claims are dependent on Claim 1, but do not add any feature or subject matter that would solve the judicial exception deficiencies of Claim 1.
Claims 2-8 further recite additional elements that do not integrate the judicial exception into a practical application and thus are not significantly more than the abstract idea. Specifically, the additional element recited in Claim 2 a) fails to meaningfully limit the claim because it amounts to merely linking the abstract idea to a particular field of use; merely linking an abstract idea to a particular field of use does not integrate the abstract idea into a practical application. See MPEP § 2106.05(h). The additional element recited in Claim 6 b) fails to meaningfully limit the claim because it does not require any particular application of the judicial exception and is, at best, the equivalent of merely adding the words “apply it” (or an equivalent) to the judicial exception. See MPEP § 2106.05(f). The additional elements recited in Claim 6 a) and c) fail to meaningfully limit the claim because they are mere data gathering/outputting recited at a high level of generality, and thus are insignificant extra-solution activities; simply appending well-understood, routine, and conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception is not indicative of an inventive concept. See MPEP § 2106.05(g). Therefore, claims 2-8 do not add any steps or additional elements that, when considered both individually and as a combination, would convert Claim 1 into patent-eligible subject matter.
Claim 9 is a method claim corresponding to apparatus claim 1. The additional element “a method of” does not integrate the judicial exception into a practical application of the judicial exception and thus is not significantly more than the abstract idea. Accordingly, even when viewed in combination, the additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. Therefore, claim 9 is rejected for the same reasons as given in claim 1’s 101 rejection.
Claim 10 recites a non-transitory medium storing a program executable by a computer to conduct the operation steps in the apparatus of claim 1. The additional element “a non-transitory medium” is recited at a high-level of generality such that it amounts to no more than mere instructions to apply the judicial exception using generic computer components. The non-transitory medium is used as a tool to perform the acquiring, defining, and transmitting steps of the claim. See MPEP § 2106.05(f). Accordingly, even when viewed in combination, the additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. Therefore, claim 10 is rejected for the same reasons as given in claim 1’s 101 rejection.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 6-7, and 9-10 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Schmierer et al. (US Patent No. 11537358B1).
Regarding claim 1, Schmierer teaches “A parameter definition apparatus for defining a parameter for audio processing of audio data…” (Module 135 produces/specifies, i.e. defines, sound parameter data, i.e. a sound parameter, for audio processing of audio data performed by a sound synthesizer 140, which uses the parameter to generate/process audio data, see “Sound parameter data is generated from the vehicle parameter data. Audio data is generated using a synthesizer based on the sound parameter data”, Abstract; also see “… sound parameter data produced by the data mapping/normalization module 135 may not be an audio signal, but rather data that specifies parameters that sound synthesizer 140 can use to generate audio data …”, col. 5, ln. 66-67, col. 6, ln. 1-3; further see Element 135, Fig. 3; module 135 is a structure, thereby a parameter definition apparatus, see “Each of logic blocks 115, 120, 135, and/or 140 may be equivalently implemented using circuits, hardware, firmware, software containing instructions executable using a processor or processing device…”, col. 8, ln. 29-33),
“…by which sound effects are imparted to the audio data …” (The sound synthesizer 140 loads/imparts orchestrated scenes comprising effects, i.e. sound effects, to audio data using the parameter data from module 135, see “… sound parameter data produced by the data mapping/normalization module 135 may not be an audio signal, but rather data that specifies parameters that sound synthesizer 140 can use to generate audio data …”, col. 5, ln. 66-67, col. 6, ln. 1-3; also see the underlined element “Sound synthesizer 140 may include an orchestration module, which may include a plugin-type system where different orchestrated sound scenes may be loaded, unloaded, and may be updated remotely via an app update or cloud-based API. Orchestrated scenes may comprise a set of instruments, sounds, effects, and loops”, col. 6, ln. 10-15),
the parameter definition apparatus comprising: at least one memory storing a program; at least one processor that executes the program to (Module 135 in Fig. 1 can contain at least a memory storing a program, i.e. software containing instructions, and at least a processor that executes the program, and is the parameter definition apparatus, see underlined elements “Each of logic blocks 115, 120, 135, and/or 140 may be equivalently implemented using circuits, hardware, firmware, software containing instructions executable using a processor or processing device, and/or combinations of each to perform a function(s) or an action(s), and/or to cause a function or action from another logic, method, and/or system. Logic may include a software-controlled microprocessor, discrete logic (e.g., logic gates), analog circuits, digital circuits, a microcontroller, a programmed logic device (e.g., ASIC, FPGA), a memory device containing computer-executable instructions, and/or the like”, col. 8, ln. 29-40):
acquire from a first data processing device (Vehicle Application 115 in Fig. 1 is a first data processing device, see “The vehicle application 115 may collect and transmit the vehicle sensor data 125 to the user device 110 without processing, or at least without substantially processing the vehicle sensor data 125 other than to merely format the data”, col. 4, ln. 14-17; module 135, i.e. the parameter definition apparatus, acquires data from 115, see Fig. 1),
which is different from a user device configured to use the audio data (Vehicle Application 115 is different from Audio system 130, which is a user device configured to use the audio data and is different from Vehicle Application 115, i.e. the first data processing device, see underlined elements “… cause a corresponding sound be output by the audio system 130 based on audio data generated by the app 120”, also see Fig. 1),
“at least one piece of data from…” (vehicle sensor data is at least one piece of data, see underlined element “The app 120 may perform a data mapping/normalization 135 routine that maps and/or normalizes the received processed vehicle sensor data from the vehicle 105”, col. 5, ln. 28-31).
among a plurality of pieces of data, including: environmental data indicative of information related to an environment where the audio data is to be played (data in Vehicle Application 115 of Fig. 1 is acquired from Sensors 145 of Fig. 1, which include sensors such as a camera and a microphone, to collect environmental data and audio data indicative of information related to an environment where the audio data is to be played, see underlined elements “The vehicle sensor data 125 may also include data obtained from internal and external cameras or other sensors, including facial recognition data, and/or sensed gestures”, col. 4, ln. 7-10; also see “A microphone may also provide vehicle sensor data 125 that may be used to determine whether a passenger is speaking, to infer a passenger's mood, and/or to determine whether a passenger is providing a verbal command”, col. 4, ln. 11-14; data from the camera and/or microphone contains information about a vehicle passenger, and is indicative of information related to an environment where the audio data is to be played, see “using an internal camera and/or microphone of vehicle 105, app 120 may detect that a vehicle passenger is speaking and cause generation of prospective sounds by sound synthesizer 140 to be temporarily disabled”, col. 7, ln. 53-56);
first device data indicative of information related to the user device (data obtained from the first data processing device, Vehicle Application 115 in Fig. 1, can be considered the claimed first device data; these data come from Sensors 145 and contain information from sources such as the camera and the microphone. These data are ultimately played back by the user device 130, i.e. audio output devices such as one or more loudspeakers, see Fig. 1; also see “The vehicle application 115 may, upon receipt of the audio data, pass the audio data to the audio system 130 to playback via one or more speakers of the vehicle 105 or other passenger audio playback devices”, col. 6, ln. 34-37; therefore, the first device data is inherently related to the loudspeakers of the user device 130);
second device data indicative of information related to a second data processing device configured to process the audio data (sound synthesizer 140 is a second data processing device, see “The sound parameter data and associated tracks translated from the vehicle data may be subsequently input to the sound synthesizer 140”, col. 5, ln. 54-56; sound synthesizer 140 is configured to generate, i.e. process, the audio data, see “…parameters that sound synthesizer 140 can use to generate audio data…”, col. 6, ln 2-3; the limitation “second device data indicative of information related to” is so broad that at least some of the data transmitted to the sound synthesizer 140 from module 135 can read on it since at least some of the data being transmitted to the second data processing device (i.e., sound synthesizer 140) is inherently related to the sound synthesizer which synthesizes/processes the data received from module 135);
audio property data indicative of information related to the audio data (audio property data, such as parameter-related data derived from the audio signal picked up by the microphone and other sensors, is indicative of information related to the audio data; any audio property data related to the audio signal acquired from the sensor(s) is related to the audio data);
define the parameter for the audio processing of the audio data based on the acquired at least one piece of data (mapping/normalization teaches the claimed “define”, and mapping/normalization 135 defines the parameter for audio processing of the audio data based on the acquired at least one piece of data, see “a data mapping/normalization 135 routine that maps and/or normalizes the received processed vehicle sensor data from the vehicle 105”, col. 5, ln. 29-31);
transmit the defined parameter for the audio processing to the second data processing device (module 135 routes, i.e. transmits, defined parameter for the audio processing to the second data processing device, i.e. the sound synthesizer 140, see “The data mapping/normalization module 135 may include routines that prepare and route the data received from the vehicle 105 and/or the higher-grade data to the input of the sound synthesizer 140”, col. 5, ln. 31-34).
Regarding claim 6, Schmierer teaches all the limitations previously set forth in claim 1’s 102 rejection.
Schmierer further teaches at least one processor that executes the program to (see underlined elements “The user device 110 may include a processor and memory containing instructions that, when executed, implement the app 120. The app 120 may perform a data mapping/normalization 135 routine”, col. 5, ln. 26-29):
acquire the audio data (the sound synthesizer takes or acquires sound parameter data and associated track, i.e. audio data, see “The sound parameter data and associated tracks translated from the vehicle data may be subsequently input to the sound synthesizer 140”, col. 5, ln. 56-58);
process the acquired audio data based on the defined parameter for the audio processing (sound synthesizer generates or processes the acquired audio data based on the defined parameter, see “Sound parameter data is generated from the vehicle parameter data. Audio data is generated using a synthesizer based on the sound parameter data”, Abstract; also see “The substantive processing of the vehicle sensor data 125 may instead be largely performed by a processor of the user device 110”, col. 4, ln. 23-25);
transmit the processed audio data to the user device (see “The generated audio data may be subsequently transmitted to the vehicle 105 using a transmission component of the user device 110”, col. 6, ln. 30-32).
Regarding claim 7, Schmierer teaches all the limitations previously set forth in claim 6’s 102 rejection.
Schmierer further teaches the second data processing device is also different from the user device (the sound synthesizer, i.e. the second data processing device, is different from the audio system, i.e. the user device, see Elements 140, 130, Fig. 1).
Regarding claim 9, since the claimed method comprises the same operations conducted by the apparatus in claim 1, claim 9 is rejected as being anticipated by Schmierer for the reasons mentioned in claim 1’s 102 rejection.
Claim 10 recites a non-transitory medium storing a program executable by a computer to conduct the operation steps in the apparatus of claim 1. Therefore, claim 10 is rejected as being anticipated by Schmierer for the reasons mentioned in claim 1’s 102 rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-4 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Schmierer et al. (US Patent No. 11537358B1) in view of Pye et al. (US Patent No. 11134353).
Regarding claim 2, Schmierer teaches all the limitations previously set forth in claim 1’s 102 rejection.
Schmierer also teaches the user device is disposed in a vehicle (see Fig.1; also see “…to the audio system 130 of the vehicle 105…” col. 7, ln. 14-15).
Schmierer does not teach the underlined limitation that the first data processing device is a vehicle management server configured to manage the vehicle via a network.
Pye teaches the underlined limitation that the first data processing device is a vehicle management server configured to manage the vehicle via a network (device profile database 130 manages vehicle-related information via a network because it resides in cloud 105, and thereby is a vehicle management server, see Fig. 1; also see “device profile database 130 includes a plurality of device-specific EQ curves 131 that are each associated with a particular audio device, such as …in-vehicle audio system…”, col. 8, ln. 46-49).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have included the vehicle management server configured to manage the vehicle via a network, as taught by Pye, in the parameter definition apparatus taught by Schmierer to yield an improved system. One of ordinary skill in the art would have been motivated to do so to implement a “cloud-based infrastructure” (Pye: col. 4, ln. 31) so that the information can be accessed whenever there is “Internet connectivity” (Pye: col. 4, ln. 34).
Regarding claim 3, Schmierer in view of Pye teaches all the limitations previously set forth in claim 2’s 103 rejection.
Schmierer in view of Pye further teaches the environmental data includes at least one of a vehicle operation state of the vehicle (Schmierer: the environmental data includes an operation state of the vehicle, as underlined: “…vehicle sensor data including one or more of: accelerator pedal angle, vehicle velocity, current road load, engine or motor speed, steering angle, GPS coordinates, microphone data…”, col. 2, ln. 40-43) or a running state of the vehicle (Schmierer: see underlined elements “…vehicle sensor data including one or more of: accelerator pedal angle, vehicle velocity, current road load, engine or motor speed, steering angle, GPS coordinates, microphone data…”, col. 2, ln. 40-43).
Regarding claim 4, Schmierer teaches all the limitations previously set forth in claim 1’s 102 rejection.
Schmierer does not teach the underlined limitation that the first data processing device is a distribution server configured to distribute the audio data via a network.
Pye teaches the underlined limitation that the first data processing device is a distribution server configured to distribute the audio data via a network (a streaming server in a cloud infrastructure, as the first data processing device, is a distribution server configured to distribute the audio data via a network, see “…audio content is provided by a streaming service 104 that is implemented in a cloud infrastructure 105”, col. 3, ln. 54-55; also see Element 104, Fig. 1).
Pye further teaches the user device is configured to use the sound data distributed from the distribution server via the network (audio environment is the user device that is configured to use the audio content, i.e. sound data, from the distribution server 104 via the network, see “audio environments 110 play audio content received from mobile computing device 140, for example via a wireless connection (e.g., Bluetooth® and/or WiFi®)…”, col. 3, ln. 64-67, also see Element 107, Fig. 1).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have included the distribution server configured to distribute the audio data via a network, as taught by Pye, in the parameter definition apparatus taught by Schmierer to yield an improved system. One of ordinary skill in the art would have been motivated to do so to implement a “cloud-based infrastructure” (Pye: col. 4, ln. 31) so that the information can be accessed whenever there is “Internet connectivity” (Pye: col. 4, ln. 34).
Regarding claim 8, Schmierer teaches all the limitations previously set forth in claim 6’s 102 rejection.
Schmierer does not teach the user device includes the second data processing device.
Pye teaches the user device includes the second data processing device (a smart audio device, i.e. the user device, performs the audio signal processing, thereby includes the second data processing device, see “A further advantage is that the personalized audio experience can be implemented in an audio environment that includes smart audio devices that perform some or all of the audio signal processing for producing the personalized audio experience”, col. 2, ln. 38-43).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have included the second data processing device in the user device, as taught by Pye, in the parameter definition apparatus taught by Schmierer to yield an improved system. One of ordinary skill in the art would have been motivated to do so “for producing the personalized audio experience” (Pye: col. 2, ln. 41-42).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Schmierer et al. (US Patent No. 11537358B1) in view of Pye et al. (US Patent No. 11134353), and further in view of Bielby et al. (US Patent No. 12443387B2).
Regarding claim 5, Schmierer in view of Pye teaches all the limitations previously set forth in claim 4’s 103 rejection.
Schmierer in view of Pye does not teach the audio property data includes at least one of a genre of the audio data or a file format of the audio data.
Bielby teaches the audio property data includes a genre of the audio data (audio characterization parameters include a genre of music, wherein the audio characterization parameters are the audio property data and the music is the audio data, see “…audio characterization parameters, such as the type of audio contents, the source of audio contents, the style and genre of music”, col. 25, ln. 44-46).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have included a genre of the audio data in the audio property data, as taught by Bielby, in the parameter definition apparatus taught by Schmierer in view of Pye to yield an improved system. One of ordinary skill in the art would have been motivated to do so in order to “recognize patterns of the audio activities in various conditions” (Bielby: col. 25, ln. 42-43).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIN LEE whose telephone number is (571) 272-1460. The examiner can normally be reached Monday through Friday, 8 am to 5 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIN LEE/Examiner, Art Unit 2695
/VIVIAN C CHIN/Supervisory Patent Examiner, Art Unit 2695