Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of priority Application No. KR10-2023-0181906, filed on 12/14/2023, has been filed.
Status of Application
Claims 1-20 are pending.
Claims 1 and 11 are the independent claims.
Claims 1, 8, 11, and 18 have been amended.
This Final Office Action is in response to the “Amendments and Remarks” received on 01/02/2026.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1
Claim 1 is directed to a method. Therefore, claim 1 is within at least one of the four statutory categories.
Claim 11 is directed to an apparatus. Therefore, claim 11 is within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Claims 1 and 11 include limitations that recite an abstract idea (emphasized below), and claim 11 will be used as a representative claim for the remainder of the 101 rejections.
Claim 11 recites: An apparatus for providing content, the apparatus comprising:
a communication unit, including a wireless transceiver of a vehicle, configured to i) receive conversation data captured by in-vehicle microphones and video data displayed by an in-vehicle display device and ii) receive point-of-interest (POI) information from a server;
and a processor configured to:
generate first information from content of conversations of passengers of the vehicle by analyzing the conversation data,
generate second information from a video displayed in the vehicle by analyzing the video data,
generate interest information based on the first information and the second information,
generate, based on the interest information and the POI information, content to be provided to the passengers of the vehicle, and
output, via one or both of a display or audio output device of the vehicle, the generated content to the passengers of the vehicle.
The examiner submits that the foregoing limitation(s) constitute a “mental process” because, under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. Specifically, the “generating” steps encompass a user gathering information and drawing conclusions from that information. Generating first/second/interest information by analyzing data is something that can be done mentally, as is generating content based on data. Accordingly, the claim recites at least one abstract idea.
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are the communication unit (including the wireless transceiver), the processor, and the receiving and outputting steps.
For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitations of an “apparatus comprising: a processor,” the examiner submits that these limitations are an attempt to generally link additional elements to a technological environment. In particular, the processor is recited at a high level of generality and merely automates the generating steps, thereby acting as a generic computer performing the abstract idea. Additionally, the processor is claimed generically, operates in its ordinary capacity, and does not use the judicial exception in a manner that imposes a meaningful limit on it; the claim is thus no more than a drafting effort designed to monopolize the exception. The additional limitations are no more than mere instructions to apply the exception using a processor.
In addition, the examiner submits that receiving conversation data from a microphone, video data from a display device, and POI information, and providing the generated content to the passengers using a display, are insignificant extra-solution activities that merely use a communication unit and display to perform the process. In particular, the receiving and outputting steps are recited at a high level of generality (i.e., as general means of gathering and transferring data) and amount to mere data gathering and transferring, which is a form of insignificant extra-solution activity.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a processor or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 11 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of the communication unit and processor amount to nothing more than applying the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept. And as discussed above, the examiner submits that the additional limitations of receiving data and outputting data are insignificant extra-solution activities.
Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than what is well-understood, routine, conventional activity in the field. The additional limitations of receiving data and transferring data are well-understood, routine, and conventional activities because the background recites that the processors and units from which the data is acquired/received are all conventional units and processors. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner. Hence, claim 11 is not patent eligible.
Further, claim 1 is not patent eligible for the same reasons.
Dependent claims 2-10 and 12-20, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The additional elements, if any, in the dependent claims are not sufficient to amount to significantly more than the judicial exception, for the same reasons as with claims 1 and 11.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-10, 11, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over US-20220262046-A1 to An et al. (“An”) in view of CN-111833881-B to Jiang et al. (“Jiang”), further in view of US-20200349666-A1 to Hodge et al. (“Hodge”) and US-20180249066-A1 to Katsumata et al. (“Katsumata”).
Regarding claim 11, An teaches an apparatus for providing content, the apparatus comprising (An Abstract, [0061], and Claim 11):
a communication unit, including a wireless transceiver of a vehicle, configured to receive point-of-interest (POI) information from a server (An Abstract, [0061], and Claim 11);
and a processor (An Fig. 3, ref. 130).
An does not teach that the communication unit is configured to receive conversation data and video data, or a processor configured to:
generate first information from content of conversations of passengers of the vehicle by analyzing the conversation data,
generate second information from a video by analyzing the video data,
generate interest information based on the first information and the second information,
generate, based on the interest information and the POI information, content to be provided to the passengers of the vehicle,
and output, via one or both of a display or audio output device of the vehicle, the generated content to the passengers of the vehicle.
However, Jiang teaches a communication unit, including a wireless transceiver of a vehicle (Jiang Description “comprehensive information can be obtained through a car machine and a mobile phone”), configured to receive conversation data and video data (Jiang Claim 1 “video information … voice information, …”), and a processor configured to (Jiang Claim 13):
generate first information from content of conversations of passengers of the vehicle by analyzing the conversation data (Jiang Claim 1 “obtaining comprehensive information, the comprehensive information comprises service information based on … voice information … passenger information and passenger behaviour information”),
generate second information from a video by analyzing the video data (Jiang Claim 1 “obtaining comprehensive information, the comprehensive information comprises service information based on … video information … voice information, … passenger information and passenger behaviour information…”),
generate interest information based on the first information and the second information (Jiang Claim 1 “obtaining comprehensive information, the comprehensive information comprises service information based on … video information … voice information, … passenger information and passenger behaviour information…”),
generate, based on the interest information and the POI information, content to be provided to the passengers of the vehicle (Jiang Claim 1 Steps S2-S4),
and output, via one or both of a display or audio output device of the vehicle, the generated content to the passengers of the vehicle (Jiang Description “The display device 1406 may display the result obtained by the processor 1401 executing the instruction.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention with a reasonable expectation of success to have modified the apparatus of An to incorporate the teachings of Jiang such that the communication unit is configured to receive conversation data and video data; a processor that is configured to: generate first information from content of conversations of passengers of the vehicle by analyzing the conversation data, generate second information from a video by analyzing the video data, generate interest information based on the first information and the second information, generate, based on the interest information and the POI information, content to be provided to the passengers of the vehicle, and output, via one or both of a display or audio output device of the vehicle, the generated content to the passengers of the vehicle. Doing so would allow for a reliable companion assistant system for vehicles (Jiang Description).
An as modified by Jiang does not teach that the communication unit is configured to receive conversation data captured by in-vehicle microphones. However, Hodge discloses that the communication unit is configured to receive conversation data captured by in-vehicle microphones (Hodge [0069] “user input may be received through one or more microphones 212. In one embodiment, microphone 212 is a digital microphone connected to audio module 206 to receive user spoken input, such as user instructions or commands. Microphone 212 may also be used for other functions, such as user communications, audio component of video recordings, or the like.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to further incorporate the teachings of Hodge into An as modified by Jiang such that the communication unit is configured to receive conversation data captured by in-vehicle microphones. Doing so would allow the apparatus to receive user spoken input (Hodge [0069]).
An as modified by Jiang and Hodge does not teach that the communication unit is configured to receive video data displayed by an in-vehicle display device and that the processor is configured to generate second information from a video displayed in the vehicle by analyzing the video data. However, Katsumata teaches that the communication unit is configured to receive video data displayed by an in-vehicle display device (Katsumata Claim 1 “display the display video data containing the second-type range which has been subjected to the information volume reduction operation”) and that the processor is configured to generate second information from a video displayed in the vehicle by analyzing the video data (Katsumata Claim 5 “a recognition processing unit that performs vehicle recognition with respect to the display video data and determines number of recognized vehicles”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention with a reasonable expectation of success to further incorporate the teachings of Katsumata to An as modified by Jiang and Hodge such that the communication unit is configured to receive video data displayed by an in-vehicle display device and that the processor is configured to generate second information from a video displayed in the vehicle by analyzing the video data. Doing so would allow for an appropriate amount of information to be recognized for the driver (Katsumata [0004]).
Regarding claim 14, An as modified by Jiang, Hodge, and Katsumata teaches all of the elements of the current invention in claim 11. Hodge further discloses that the second information includes information on words whose frequency of use in captions, images, and voice exceeds a preset second threshold among words in a list of words, wherein captions, images, and voice included in a video received by the vehicle from a server or an external device are analyzed to generate the list of words (Hodge [0080]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to further incorporate the teachings of Hodge into An as modified by Jiang and Katsumata such that the second information includes information on words whose frequency of use in captions, images, and voice exceeds a preset second threshold among words in a list of words, wherein captions, images, and voice included in a video received by the vehicle from a server or an external device are analyzed to generate the list of words. Doing so would allow the apparatus to collect and analyze video data from live events for passengers/drivers (Hodge [0019] & [0152]).
Regarding claim 15, An as modified by Jiang, Hodge, and Katsumata teaches all of the elements of the current invention in claim 14. Hodge further discloses that the second information includes information in which the words in the list of words are sorted in order of frequency of use during a preset second critical time (Hodge [0078] “For example, certain “trigger” words may be associated with particular events. When the “trigger” word is found present in the audio data, the corresponding event may be determined.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to further incorporate the teachings of Hodge into An as modified by Jiang and Katsumata such that the second information includes information in which the words in the list of words are sorted in order of frequency of use during a preset second critical time. Doing so would allow the apparatus to determine the presence of patterns associated with events (Hodge [0078]).
Regarding claim 16, An as modified by Jiang, Hodge, and Katsumata teaches all of the elements of the current invention in claim 11. An further discloses that the POI information includes information on a store stored in a database in the server (An [0015] & [0025]).
Regarding claim 17, An as modified by Jiang, Hodge, and Katsumata teaches all of the elements of the current invention in claim 16. An further discloses that the information on the store includes at least one of identification information of the store, information on items currently on discount, discount rates of the items currently on discount, or a combination thereof (An [0015] & [0025]).
Regarding claim 18, An as modified by Jiang, Hodge, and Katsumata teaches all of the elements of the current invention in claim 16. An further discloses that the communication unit is further configured to receive location information of the vehicle (An Abstract “controlling a communication module of the user terminal to receive location information of a point of interest (POI) from a server”), and wherein the processor is configured to provide the generated content based on the location information of the vehicle (An Abstract “controlling a display of the user terminal to display a composite image by synthesizing an icon, in which the location information of the POI is displayed,”).
Regarding claim 19, An as modified by Jiang, Hodge, and Katsumata teaches all of the elements of the current invention in claim 18. An further discloses that the location information of the vehicle is received from a global positioning system (GPS) server (An [0073] – [0075]).
Regarding claim 20, An as modified by Jiang, Hodge, and Katsumata teaches all of the elements of the current invention in claim 11. Jiang further discloses that the interest information includes a common word or related words matching each other among words in the first information and the second information (Jiang Claim 1).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to further incorporate the teachings of Jiang into An as modified by Hodge and Katsumata such that the interest information includes a common word or related words matching each other among words in the first information and the second information. Doing so would allow the apparatus to focus on key words that the user is likely to be interested in (Jiang Description).
With respect to claims 1 and 4-10, all limitations have been examined with respect to the apparatus in claims 11 and 14-20. The apparatus taught/disclosed in claims 11 and 14-20 can clearly perform the method of claims 1 and 4-10. Therefore, claims 1 and 4-10 are rejected under the same rationale.
Claims 2-3 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over An in view of Jiang, further in view of Hodge, Katsumata, and JP-2014075008 to Nohata et al. (“Nohata”).
Regarding claim 12, An as modified by Jiang, Hodge, and Katsumata teaches all of the elements of the current invention in claim 11. An as modified by Jiang, Hodge, and Katsumata does not teach that the first information includes information on words whose frequency of use exceeds a preset first threshold among words included in the conversations, obtained by analyzing the conversations of the passengers of the vehicle through voice recognition. However, Nohata teaches that the first information includes information on words whose frequency of use exceeds a preset first threshold among words included in the conversations (Nohata Description “the conversation content includes a predetermined word / phrase”), obtained by analyzing the conversations of the passengers of the vehicle through voice recognition (Nohata Description “In addition, the excitement determination unit 240 performs voice recognition”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to further incorporate the teachings of Nohata into An as modified by Jiang, Hodge, and Katsumata such that the first information includes information on words whose frequency of use exceeds a preset first threshold among words included in the conversations, obtained by analyzing the conversations of the passengers of the vehicle through voice recognition. Doing so would allow the apparatus to determine passenger tension levels and support driving operations based on the tension levels (Nohata Abstract).
Regarding claim 13, An as modified by Jiang, Hodge, Katsumata, and Nohata teaches all of the elements of the current invention in claim 12. Nohata further teaches that the first information includes information in which words included in the conversations are sorted in order of frequency of use during a preset first critical time (Nohata Description “when a word / phrase of utterance content is acquired and matches a predetermined word / phrase uttered at the time of tension, the degree of tension is calculated to be high.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to further incorporate the teachings of Nohata into An as modified by Jiang, Hodge, and Katsumata such that the first information includes information in which words included in the conversations are sorted in order of frequency of use during a preset first critical time. Doing so would allow the apparatus to determine passenger tension levels and support driving operations based on the tension levels (Nohata Abstract).
With respect to claims 2-3, all limitations have been examined with respect to the apparatus in claims 12-13. The apparatus taught/disclosed in claims 12-13 can clearly perform the method of claims 2-3. Therefore, claims 2-3 are rejected under the same rationale.
Response to Arguments/Remarks
With respect to applicant’s remarks filed on 01/02/2026, applicant’s “Amendments and Remarks” have been fully considered. Applicant’s remarks will be addressed in the order in which they were presented.
With respect to the claim rejections under 35 U.S.C. § 112(b), applicant’s “Amendments and Remarks” have been fully considered and are not persuasive. Applicant argues:
Therefore, the Office respectfully disagrees, and the claim rejections under 35 U.S.C. § 112(b) remain.
With respect to the claim interpretations under 35 U.S.C. § 112(f), applicant’s “Amendments and Remarks” have been fully considered.
With respect to the claim rejections under 35 U.S.C. § 101, applicant’s “Amendments and Remarks” have been fully considered.
Applicant remarks:
Prong One Step 2A:
The complex nature of each of the claims 1 and 11 indicates that the subject matter of the claims cannot practically be performed in a human mind or by a human using a pen and paper. For example, a human mind is not equipped to receive conversation data captured by in-vehicle microphones and video data displayed by an in-vehicle display device and receive point-of-interest (POI) information from a server, in the manner now recited in claims 1 and 11. The human mind is also not equipped to generate, based on interest information [generated based on analyzing conversation data captured by in-vehicle microphones and video data displayed by an in-vehicle display device] and POI information [received from a server], content to be provided to the passengers of the vehicle, and output, via one or both of a display or audio output device of the vehicle, the generated content, in the manner now recited in claims 1 and 11. (Emphasis Added)
Prong Two Step 2A:
Applicants respectfully submit that each of the independent claims 1 and 11 is directed to a practical application. The subject matter of the independent claims 1 and 11 pertains to a technical improvement in the functioning of content provision technology for a vehicle. Specifically, the subject matter of the amended claims 1 and 11 may generate content based on i) interest information generated based on analyzing received conversation data captured by in- vehicle microphones and video data displayed by an in-vehicle display device and ii) POI information received from a server, and may output the generated content to the passengers of the vehicle via one or both of a display or audio output device of the vehicle. The subject matter of claims 1 and 11 may thus, for example, provide customized advertising content or other useful information, such as information on restaurants or other stores and discount information related thereto, to passengers of a vehicle based on i) the content of conversations of passengers and video information displayed in the vehicle in combination with ii) point of interest (POI) information received from a server, thereby improving content provision quality in the vehicle. See e.g., paragraphs [0008]-[0010], [0134], and [0135] of the instant application.
Claims 1 and 11 reflect how to achieve the technical improvement in the function of the content provision technology in vehicles. Specifically, the subject matter of independent claims 1 and 11 may: receive conversation data captured by in-vehicle microphones and video data displayed by an in-vehicle display device; generate first information from content of conversations of passengers of a vehicle by analyzing the conversation data; generate second information from a video displayed in the vehicle by analyzing the video data; generate interest information based on the first information and the second information; receive point-of-interest (POI) information from a server; generate, based on the interest information and the POI information, content to be provided to the passengers of the vehicle; and output, via one or both of a display or audio output device of the vehicle, the generated content to the passengers of the vehicle. As discussed above, the subject matter of independent claims 1 and 11 may thus enhance content provision to passengers in a vehicle by outputting customized advertising content or other useful information, such as information on restaurants or other stores and discount information related thereto, based on the content of conversations of passengers and the video displayed in the vehicle.
Office Response:
Prong One Step 2A:
The receiving and outputting steps are not identified as part of the mental process, but rather as mere forms of data gathering/transferring. Content, by definition, is data that is to be expressed through some medium, such as speech, writing, or any other art, while information is data. A human is capable of generating data, with pen and paper, based on analyzing other given data.
Prong Two Step 2A:
Applicant argues that the claims improve the functioning of content provision technology for a vehicle. However, the claims are merely determining information to present and display on a screen. The claims are not reciting a specific improvement to the technology or architecture of content provision. The alleged improvement is simply the selection and presentation of information, which is still an abstract idea and mere data gathering/transferring.
Please see the 101 analyses above for the specific claim limitations. As stated above, the alleged improvement is simply the selection and presentation of information, which remains an abstract idea and mere data gathering/transferring.
Therefore, the Office respectfully disagrees with applicant’s arguments that the above additional elements amount to significantly more than the judicial exception itself, and the rejections remain.
With respect to the claim rejections under 35 U.S.C. § 103, applicant’s “Amendments and Remarks” have been fully considered. Applicant has amended the independent claims; these amendments have changed the scope of the claims, and the Office has supplied new grounds of rejection in this final Office action. The prior arguments are therefore moot. Applicant further argues that the other independent claims, which recite similar features, are allowable and that the dependent claims are also allowable because they depend from allowable subject matter; the Office respectfully disagrees. It is the Office’s position that all of the claimed subject matter has been properly rejected.
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON TOAN NGUYEN, whose telephone number is (571) 272-6163. The examiner can normally be reached M-Th: 8-5:30, F1: 8-12, F2: off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott Browne, can be reached at 571-270-0151. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.N./Examiner, Art Unit 3666
/SCOTT A BROWNE/Supervisory Patent Examiner, Art Unit 3666